id | title | abstract | authors | published_date | link | markdown
---|---|---|---|---|---|---
2304.01080
|
LIPSFUS: A neuromorphic dataset for audio-visual sensory fusion of lip
reading
|
This paper presents a sensory fusion neuromorphic dataset collected with
precise temporal synchronization using a set of Address-Event-Representation
sensors and tools. The target application is the lip reading of several
keywords for different machine learning applications, such as digits, robotic
commands, and auxiliary rich phonetic short words. The dataset is enlarged with
a spiking version of an audio-visual lip reading dataset collected with
frame-based cameras. LIPSFUS is publicly available and it has been validated
with a deep learning architecture for audio and visual classification. It is
intended for sensory fusion architectures based on both artificial and spiking
neural network algorithms.
|
Antonio Rios-Navarro, Enrique Piñero-Fuentes, Salvador Canas-Moreno, Aqib Javed, Jin Harkin, Alejandro Linares-Barranco
|
2023-03-28T12:27:43Z
|
http://arxiv.org/abs/2304.01080v1
|
# LIPSFUS: A neuromorphic dataset for audio-visual sensory fusion of lip reading
###### Abstract
This paper presents a sensory fusion neuromorphic dataset collected with precise temporal synchronization using a set of Address-Event-Representation sensors and tools. The target application is the lip reading of several keywords for different machine learning applications, such as digits, robotic commands, and auxiliary rich phonetic short words. The dataset is enlarged with a spiking version of an audio-visual lip reading dataset collected with frame-based cameras. LIPSFUS is publicly available and it has been validated with a deep learning architecture for audio and visual classification. It is intended for sensory fusion architectures based on both artificial and spiking neural network algorithms.
Neuromorphic dataset, sensory fusion, dynamic vision sensor, neuromorphic auditory sensor
## I Introduction
Sensor fusion is known as the process of combining sensor data derived from several sources of the same reality such that the fused information has less uncertainty than when these sources are used individually. In [1] the authors demonstrated that combining radar and visual information improves the accuracy of vehicle detection systems, where radar information is used to focus on the important part of the visual information. In the healthcare field, sensory fusion improves decision-making when combining data from different sensors, as in [2], where up to eight data sources are combined for the detection of diabetes. In urban search and rescue robots, sensory fusion goes further and combines proprioceptive (inertial measurement unit and track odometry) and exteroceptive sensors (omnidirectional camera and rotating laser rangefinder) to improve accuracy [3]. In [4] a review of sensory fusion for quality control in manufacturing is presented, where different kinds of sensors are combined, including visual, acoustic, laser, vibration, thermal, etc. In general, data fusion is a challenging task because of several difficulties. The majority of these difficulties arise from the data being fused, the imperfection and diversity of the sensor technologies, and the nature of the application environment: data imperfection, outliers and spurious data, conflicting or multi-modal data, correlation, alignment, data association, operational timing, etc. [5].
Living organisms usually have several sensory mechanisms to interact with the real world. Those with neural architectures learn from the experience gathered by their sensory systems and their combination or fusion. Neuromorphic engineering is devoted to studying these and other neural systems in biology through the implementation of engineering systems that mimic those present in biology [6]. One of those neural architectures is in charge of audio-visual sensory fusion, where temporal synchrony is one of the strongest binding cues in multi-sensory perception [7][8] and the consideration of a temporal window is of utmost importance [9]. The length of this temporal window decreases in humans with age [10].
Audio-visual sensory fusion developments with neuromorphic engineering broaden the range of applications, e.g., mobile robotics, Internet of Things (IoT), edge computing, etc., where low latency and low power consumption are important factors. The use of spiking sensors and spiking neural networks can considerably improve these characteristics over conventional sensors and artificial neural networks.
This paper is focused on the collection of a neuromorphic audio-visual dataset for machine learning applications around lip reading. It takes into account the temporal synchronization of the measured data using specific neuromorphic sensors and hardware tools. The dataset is collected from people of different nationalities and ages, speaking a number of Natural Language Processing (NLP) words and one English-language pangram, _"The quick brown fox jumps over the lazy dog"_, for the training of learning architectures. The dataset is publicly available and it has been tested, for validation, with convolutional neural networks. To the best of our knowledge, there is no previous neuromorphic dataset recorded with spiking sensors and specific logic to ensure temporal synchronization.
## II Materials and methods
This section explains the setup for the dataset collection that has been recorded directly from neuromorphic sensors while maintaining the time synchronization of the visual and audio parts. It also explains how the BBC dataset has been converted to the spikes domain.
### _LIPSFUS recording setup_
The Neuromorphic Auditory Sensor (NAS) [11] and the Neuromorphic Dynamic Vision Sensor (DVS) [12] have been used to record the LIPSFUS dataset. Both sensors generate spike information at their outputs, encoded in AER (Address-Event-Representation) format over a digital parallel bus (16-bit maximum in our setup) with a 2-bit asynchronous handshake. The digital words encode the identifier of the emitter neuron in the corresponding sensor, called the address. The length of an address is fixed by the number of neurons in the sensor: pixels for the DVS or channels for the NAS. Our setup uses the DVS developed at IMSE-CNM in Seville, called cnmDV. This DVS has 128x128 pixels, so the output parallel AER bus requires 14 bits for the pixel address plus one polarity bit. For audition, the selected NAS has been obtained from the openNAS tool [13] and synthesized for the Spartan6 FPGA of the AER-Node board [14]. A binaural sensor composed of two identical banks of 64 band-pass filters per ear has been selected, with the cut-off frequencies shown in Figure 2 in the range from 18.91 Hz to 20.81 kHz, with an average error of 0.005% and a standard deviation of 0.001 along the 64 channels. For this audio sensor, the AER bus sends addresses of 8-bit length (6 bits to identify an active channel, 1 bit for polarity and 1 bit for the left or right filter bank). The input of the NAS comes from a stereo set of ear-shaped microphones that mimics a human head: the 3DIO Free Space XLR Binaural setup [15]. These two neuromorphic sensors produce spikes with their own temporal distributions, which intrinsically depend on the sensed activity. An AER-tool [16] is used for merging the two sensors' activity in such a way that the temporal distribution is maintained for both sensors. This requires the AER-merger to add 1 bit to the AER bus to distinguish between a visual spike and an audio spike. The most significant bit is set to '0' for a DVS event, and to '1' for a NAS event. Therefore, the full range of 16 bits allowed by the hardware is used in this setup. The output of the AER-merger is then connected to an AER-monitor board [17], which assigns a timestamp to each received event, regardless of which sensor it belongs to, before queuing the data (address and timestamp) for sending USB packets to a computer running jAER, which stores .AEDAT files for each sample of the dataset.
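To make the event format concrete, the following Python sketch shows how a merged 16-bit address of this setup could be decoded. The MSB convention follows the description above; the exact packing of the remaining DVS and NAS bits is our assumption for illustration, not a specification of the released tools.

```python
def decode_merged_aer_event(address: int) -> dict:
    """Split a 16-bit merged AER address into sensor-specific fields.

    Bit 15 (MSB) selects the sensor: 0 for the cnmDV DVS, 1 for the NAS,
    as described in the text. The layout of the lower bits is an
    illustrative assumption.
    """
    if address >> 15 == 0:              # DVS event: 14-bit pixel address + polarity
        polarity = address & 0x1
        x = (address >> 1) & 0x7F       # 7 bits for the column (128 pixels)
        y = (address >> 8) & 0x7F       # 7 bits for the row
        return {"sensor": "DVS", "x": x, "y": y, "polarity": polarity}
    ear = (address >> 7) & 0x1          # NAS event: left/right filter bank
    polarity = (address >> 6) & 0x1     # positive/negative half-wave neuron
    channel = address & 0x3F            # 6 bits: one of 64 channels
    return {"sensor": "NAS", "ear": ear, "channel": channel, "polarity": polarity}

# Example: an event word with the MSB set is routed to the NAS decoder.
print(decode_merged_aer_event(0b1000000010100101))
```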
Figure 1 (left) shows the distribution of the neuromorphic sensors (cnmDV and NAS), the AER-merger platform and the AER-monitor. In addition, another neuromorphic sensor, a DV-Xplorer-Lite-346-USB, can be seen in the figure, but it has not been included in the dataset presented in this paper; it is incorporated in the setup for future use in applications that require higher visual resolutions. The right side of the figure shows the distances of each neuromorphic sensor to the person being recorded. The cnmDV is placed at 45 cm from the person's face, while the 3DIO microphones are placed at 75 cm and can be oriented at different angles (0°, -45°, -90°, 45° and 90° in our dataset). The microphones are placed in the holes of two silicone ears that simulate the human ear, and both are separated at the same distance as in an average human head. The dataset is recorded in two different environments: noisy and quiet. The noisy environment consisted of a glass meeting room, where the air-conditioning system was running, and behind the window were a car park and a main street in the city. In contrast, the quiet environment consisted of a small room with acoustically insulated walls and a door. The lighting conditions in both environments were kept similar. The dataset, for each of the environments, consists of 22 persons of 5 different nationalities (Indian, Iranian, Irish, Pakistani, Spanish), of both genders and aged between 6 and 61 years.
The dataset consists of a series of words that have been selected from different challenges or are considered to be of interest in the area of language processing. The words in the dataset are:
* Spoken digits: One, two, three, four, five, six, seven, eight, nine, zero and o [18].
* Robotic commands: Yes, no, up, down, left, right, on, off, stop and go [19].
* Bed, bird, cat, dog, happy, house, Marvin, Sheila, tree, wow [20].
* About, border, forward, missing, press, short, threat, young [21].
* The quick brown fox jumps over the lazy dog [22].
Each participant read each word (or sentence) as it was shown in a presentation, with a 2-second delay between words. An AEDAT file was recorded for the whole presentation, from the word "one" to the "fox" sentence. This was repeated five times per participant, placing the ears at the orientations described above. Therefore, a total of 5 different AEDAT recordings are stored for each person. A repository with access to these files is available on GitHub [23].
Fig. 1: LIPSFUS recording dataset setup. Left figure shows the sensors and boards distribution and the right figure shows the setup sensor distances.
Fig. 2: Cut-off frequencies of the binaural NAS 64 channels: ideal versus those actually implemented on the FPGA, with error in %.
### _BBC-LIPSFUS dataset_
The fourth set of words recorded with the previous setup is also part of the "Lip Reading in the Wild" dataset from the BBC [21, 24, 25]. In this work, this set of words has been converted to a neuromorphic format and made available for training spiking learning architectures. This dataset includes 1k sentences with 500 different words taken from news and interviews (MP4 videos) on BBC channels, with 1k different speakers [26]. Each MP4 video has 29 frames (1.16 seconds) and the selected word is at its temporal center. An important difference from our recorded dataset is that our speakers pronounced each word / sentence in an isolated way, whereas in the BBC dataset the words are part of a sentence and are pronounced in natural conversation. In order to properly extract the required word from each MP4, we used the IBM Watson Studio engine [27], as was done in [21] for validation, to extract the precise timestamps where our chosen words start and end in each MP4. Using these timestamps, each MP4 was cut to contain only the right word, in an isolated way, as in our recorded dataset. These cut MP4s were then used for the neuromorphic conversion. The same neuromorphic auditory sensor (NAS) used in the previous recordings was connected to the audio output of a computer and used to capture .AEDAT files in jAER while the computer played each MP4 file of the dataset, for the 8 selected words and 1k speakers per word. For the visual neuromorphic representation of the MP4 videos, the ESIM simulator [28] has been used, which also produces AEDAT files for each of the 8k given inputs. ESIM behaves as the DVS for the presented input video.
### _Dataset validation_
This dataset has been validated by performing a classification task on a subset of the words in the dataset, the spoken digits. This paper proposes an Artificial Neural Network (ANN) based learning model to classify the extracted neuromorphic word dataset. Currently, the authors are also working on the design of a Spiking Neural Network (SNN) to perform the classification task using the spiking information from the sensors.
#### II-C1 Data conversion and augmentation
This work considers a Convolutional Neural Network (CNN) to perform the classification of the spoken digits. The first step is to convert the spiking information into a data type that the CNN can process, i.e., a histogram-based image. Since the two sensors generate temporal information of a different nature, a separate conversion is designed for each sensory modality.
The NAS used for the recording of this dataset consists of 64 stereo channels. The number of channels corresponds to the number of spike-based filters used to decompose the signal into different frequencies. As this sensor is stereo, it has 128 channels in total. At the output of each of these spike-based filters there are two neurons, one encoding the positive part of the signal and one encoding the negative part of the same signal. Therefore, this NAS contains 256 neurons that emit information at its outputs. To convert the pulsed information of the NAS, we generate the sonogram of each spoken-digit sample. The sonogram shows the activity of each sensor channel over time. To calculate the activity of each channel, a time window is used in which the number of spikes emitted by each channel is accumulated, and the channel value is set to that accumulation for that time window. Thus, the information of each channel is encoded in the intensity of spikes produced in each time window. Figure 3 (left) shows the sonogram of a sample, where the x-axis is time, the y-axis lists the neurons of the channels and the colour represents the intensity of the channel.
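As an illustration of this binning procedure, here is a minimal Python sketch, assuming events arrive as parallel arrays of timestamps and NAS output-neuron indices; the window length is an illustrative parameter, not the value used for the dataset.

```python
import numpy as np

def sonogram(timestamps_us, neuron_ids, n_neurons=256, win_us=20_000):
    """Accumulate NAS spikes into (neuron, time-window) bins.

    Returns a 2-D array whose entry [c, w] counts the spikes of output
    neuron c in time window w, i.e. the intensity shown in the sonogram.
    """
    t = np.asarray(timestamps_us)
    windows = ((t - t.min()) // win_us).astype(int)
    sono = np.zeros((n_neurons, windows.max() + 1), dtype=np.int32)
    np.add.at(sono, (np.asarray(neuron_ids), windows), 1)  # count spikes per bin
    return sono
```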
As can be seen in Figure 3, the spoken-word information is at the beginning of the recording (delimited by the red rectangle). A large part of the recording shows some activity in the channels, which corresponds to noise. Part of this noise comes from the environment, and part is generated by the sensor at the output of the filters. This work focuses on the part of the recording that contains the most activity, the part delimited by the red rectangle, and the stereo recording is also divided to obtain two mono samples, as shown in Figure 3 (right).
In order to obtain more samples from the recorded dataset, a data augmentation process has been carried out. This is done by starting with a temporal window that is adjusted to the region where the spoken-word activity is located and enlarging that region by allowing some noise prior to the spoken-word activity. Then, to generate new samples, this region of noise prior to the activity is reduced a little and the window is enlarged by the same amount at the end of the spoken-word activity, again allowing some noise. This process was repeated 10 times per sample. The Number of Samples (NS) in the audio dataset is 26,620 (see Equation 1), for 22 People (P), 11 Words (W), a 2-channel audio Sensor (S), 10 Data Augmentation (DA) repetitions and 5 different microphone Orientations (O).
\[\begin{split} NS&=P\times(W\times S+W\times S\times DA )\times O\\ &=22\times(11\times 2+11\times 2\times 10)\times 5\end{split} \tag{1}\]
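A minimal Python sketch of this window-shifting augmentation, assuming the activity interval \([t_{0},t_{1}]\) has already been located, could look as follows; the noise margin is an illustrative parameter.

```python
import numpy as np

def augment_windows(timestamps, t0, t1, n_aug=10, pad=5_000):
    """Generate n_aug shifted copies of the activity window [t0, t1].

    The window keeps a constant length of (t1 - t0) + pad; each copy
    trades a little of the leading noise margin for the same amount of
    trailing margin, as described in the text.
    """
    timestamps = np.asarray(timestamps)
    samples = []
    for k in range(1, n_aug + 1):
        shift = pad * k // n_aug                  # slide the noise margin
        lo, hi = t0 - pad + shift, t1 + shift
        mask = (timestamps >= lo) & (timestamps < hi)
        samples.append(timestamps[mask])          # events of this augmented sample
    return samples
```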
The DVS used to record this dataset has a resolution of 128x128 neurons that emit positive and negative spikes depending on the change in brightness that the scene represents.
Fig. 3: Stereo sonogram of a spoken digit sample (left). The spoken word information is delimited by the red box. Mono sonogram for right (top-right) and left (bottom right) channels cut from the red box.
For the conversion of the spiking information from the DVS sensor, only the lip area of the original sample was trimmed, discarding the rest of the information from the scene. The entire recording is then divided into equal temporal regions and the spikes are accumulated to generate a histogram equivalent to an image. The activity of each pixel or neuron in the sensor is encoded in the colour intensity of the pixel in the generated image. This approach provides a sequence of images that capture the movement of the lips when saying the corresponding word. Figure 4 shows how this conversion is performed.
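The following sketch, under the same assumptions about the event arrays, accumulates cropped DVS events into a fixed number of histogram frames; the crop box (chosen here to match the network input size given below) and the frame count are illustrative.

```python
import numpy as np

def dvs_histograms(timestamps, xs, ys, crop=(40, 73, 30, 79), n_frames=6):
    """Accumulate cropped DVS events into n_frames equal-duration histograms.

    crop = (x0, x1, y0, y1) bounds the lip region; events outside it are
    discarded. Each frame counts the spikes of every retained pixel in one
    temporal slice, giving an image-like intensity map.
    """
    timestamps, xs, ys = map(np.asarray, (timestamps, xs, ys))
    x0, x1, y0, y1 = crop
    keep = (xs >= x0) & (xs < x1) & (ys >= y0) & (ys < y1)
    t, xs, ys = timestamps[keep], xs[keep] - x0, ys[keep] - y0
    t = t - t.min()
    frames = ((t * n_frames) // (t.max() + 1)).astype(int)  # slice index 0..n_frames-1
    hist = np.zeros((x1 - x0, y1 - y0, n_frames), dtype=np.int32)
    np.add.at(hist, (xs, ys, frames), 1)
    return hist
```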
A data augmentation process similar to that used for the audio dataset has been applied to generate more visual samples. A temporal window is adjusted to the region where the lip-movement activity is found when the word is spoken. This region is enlarged, thus allowing some noise before the activity of the spoken word. To generate new samples, the selected region of noise prior to the activity is reduced marginally and the window is enlarged by the same amount at the end of the speech activity, again allowing some noise. This process was repeated 10 times per sample. The number of samples in the visual dataset is 13,310 (see Equation 2):
\[\begin{split} NS&=P\times(W+W\times DA)\times O\\ &=22\times(11+11\times 10)\times 5\end{split} \tag{2}\]
#### II-C2 Visual and Audio networks
As mentioned at the end of the introduction section, to validate the dataset, two CNNs have been used to classify the spoken digits, one for the audio part and one for the visual part. The aim is to demonstrate with a simple application that the data collected in this dataset are valid for use by the community. The CNN architecture that will be trained to classify the spoken digits using the audio information from the sonograms (via NAS sensor) has the following layout:
* Input(64,32,1) \(\rightarrow\) Conv2D(3,3,100) \(\rightarrow\) MaxPool(2, 2) \(\rightarrow\) Conv2D(2,2,200) \(\rightarrow\) MaxPool(2,2) \(\rightarrow\) Dropout(0.5) \(\rightarrow\) Dense(10).
In contrast, the CNN architecture that will be trained to classify the spoken digits using the visual information from the sequence of histograms (via the DVS sensor) has the following configuration:
* Input(33,49,6) \(\rightarrow\) Conv2D(3,3,16) \(\rightarrow\) MaxPool(2,2) \(\rightarrow\) Conv2D(2,2,32) \(\rightarrow\) MaxPool(2,2) \(\rightarrow\) Conv2D(2,2,64) \(\rightarrow\) MaxPool(2,2) \(\rightarrow\) Dropout(0.5) \(\rightarrow\) Dense(10).
Both networks have been trained using the Keras+Tensorflow framework.
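As an illustration, a minimal Keras sketch of the audio architecture listed above could read as follows; kernel and pool arguments are interpreted as (height, width), and the activations, flattening step, optimizer and loss are our assumptions, as they are not stated above.

```python
from tensorflow.keras import layers, models

def build_audio_cnn(n_classes=10):
    """Audio CNN following the layout above: Input(64,32,1) -> Conv2D(3,3,100)
    -> MaxPool(2,2) -> Conv2D(2,2,200) -> MaxPool(2,2) -> Dropout(0.5) -> Dense(10)."""
    model = models.Sequential([
        layers.Input(shape=(64, 32, 1)),
        layers.Conv2D(100, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(200, (2, 2), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.5),
        layers.Flatten(),                 # assumption: flatten before the classifier
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",       # assumption: not specified in the text
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```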
## III Results
The Keras+Tensorflow framework was used to train the CNNs described in the previous section. For the different training sessions performed, the datasets have been partitioned into 70% for training, 20% for validation, and the remaining 10% for testing the network once it has been trained. The 10% used for testing belongs to a subject whose samples were not used in the training phase, thus ensuring that during the test all samples are totally unknown to the trained network.
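A minimal sketch of such a subject-held-out partition, assuming a per-sample array of subject labels (a hypothetical helper, not part of the released tools):

```python
import numpy as np

def subject_split(subjects, test_subject, val_frac=2/9, seed=0):
    """Hold out every sample of one subject for testing (roughly 10% of the
    data), then split the rest into training and validation; a 2/9 fraction
    of the remainder reproduces the 70/20/10 partition described above."""
    subjects = np.asarray(subjects)
    test_idx = np.flatnonzero(subjects == test_subject)
    rest = np.flatnonzero(subjects != test_subject)
    rng = np.random.default_rng(seed)
    rng.shuffle(rest)
    n_val = int(len(rest) * val_frac)
    return rest[n_val:], rest[:n_val], test_idx   # train, val, test indices
```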
Table I shows the results obtained when testing the audio (A_) and the visual (V_) CNNs across different training scenarios. Each row of the table represents the result of testing the CNN using the specified number of training epochs (Tr_ep) and batch size (B_size). The learning rate has been set to the same value for all training sessions, in this case 0.001.
In the table, the best test loss and accuracy values are highlighted for both cases. These results may not seem promising at first when compared to other applications in the literature; however, the aim of this work is not to design a network model that is optimal for classifying the spoken digits, but rather to validate the dataset in order to make it available to the community.
## IV Conclusion
This paper presents a dataset containing visual and auditory information of a set of spoken words. The visual information consists of the lip movement as the subject articulates a word, while the auditory information is the sound that the subject makes when pronouncing it. This information has been captured by the NAS sensor for the audio and the DVS sensor for the visual part, synchronising the information from both sensors with the same timing source and therefore achieving a precise temporal sequencing of the spikes generated by both sensors. To validate the dataset, a classification task has been performed on a subset of words from the dataset using deep learning algorithms. Although the reported classification results do not match those obtained with deep learning techniques elsewhere in the literature, they are sufficient to validate that the dataset samples are useful and suitable for the research community to implement sensory fusion algorithms.
Fig. 4: Conversion process from spiking information to histograms.
|
2303.05241
|
The fundamental group of Galois covers of surfaces of degree $8$
|
We compute the fundamental group of the Galois cover of a surface of degree
$8$, with singularities of degree $4$ whose degeneration is homeomorphic to a
sphere. The group is shown to be a metabelian group of order $2^{23}$. The
computation amalgamates local groups, classified elsewhere, by an iterative
combination of computational and group theoretic methods. Three simplified
surfaces, for which the fundamental group of the Galois cover is trivial, hint
toward complications that depend on the homotopy of the degenerated surface.
|
Meirav Amram, Cheng Gong, Praveen Kumar Roy, Uriel Sinichkin, Uzi Vishne
|
2023-03-09T13:25:41Z
|
http://arxiv.org/abs/2303.05241v1
|
# The fundamental group of Galois covers of surfaces of degree \(8\)
###### Abstract.
We compute the fundamental group of the Galois cover of a surface of degree \(8\), with singularities of degree \(4\) whose degeneration is homeomorphic to a sphere. The group is shown to be a metabelian group of order \(2^{23}\). The computation amalgamates local groups, classified elsewhere, by an iterative combination of computational and group theoretic methods. Three simplified surfaces, for which the fundamental group of the Galois cover is trivial, hint toward complications that depend on the homotopy of the degenerated surface.
_Key words and phrases._ fundamental group, degeneration, Galois cover, classification of surfaces _MSC2010:_ Primary: 14D06, 14H30, 14J10
\(a\) and \(b\) relatively prime [16], \({\mathbb{C}}{\mathbb{P}}^{1}\times T\) [2, 9] where \(T\) is the complex torus, toric varieties [7], and surfaces with Zappatic singularities of type \(E_{k}\) [4]. In one of the most complex cases, that of the surface \(T\times T\) [8], the Van Kampen presentation had 54 generators and nearly 2000 relations.
To the degeneration of a surface \(X\) we associate the **shadow**, which is a 2-dimensional simplicial complex. There is one 2-cell (i.e., a triangle) for each irreducible component of the degeneration, which are all copies of \({\mathbb{C}}{\mathbb{P}}^{2}\). Two triangles share a 1-cell (an edge), respectively a 0-cell (a vertex), if the corresponding planes intersect in a line, respectively at a point. In particular, the number of triangles in the shadow is the degree of \(X\). Edges that belong to a single triangle are **redundant**. The number of non-redundant edges meeting in a vertex is the multiplicity (or the degree) of the singularity at this point. The shadow is **planar** if it can be embedded in the Euclidean plane \({\mathbb{R}}^{2}\). Experience shows that the computation of the fundamental group is typically easier when the shadow is planar.
Most examples studied so far have planar shadows [4, 7, 16, 17, 18]. Relatively new works on surfaces of degrees 5 and 6, with planar shadows, appear in [5, 3]. Recently, the first named author studied surfaces with non-planar shadows that have degrees 4 and 6, see [1]. The shadow of the surface \({\mathbb{C}}{\mathbb{P}}^{1}\times T\) (where \(T\) is the torus), which was studied in [2], is homeomorphic to a cylinder.
In this work we focus on surfaces of degree 8, whose shadow is homeomorphic to the sphere \(S^{2}\), such that every vertex in the shadow has degree 4. Such a shadow can be described as the union of two triangulated squares glued along all four edges, see Figure 1; for that reason we denote the shadow as \(X^{4}\). As expected, \(\pi_{1}(X^{4}_{\text{Gal}})\) has rather complicated computations. We thus implement the following technique: We introduce intermediate shadows, obtained by "cutting open" one glued pair of edges in each step. There are thus four shadows,
\[X^{1}\to X^{2}\to X^{3}\to X^{4},\]
where each arrow denotes a covering map obtained from identifying two edges, see Figures 2, 3, and 4, respectively. Computing the four fundamental groups in this manner becomes a sequence of problems of increasing complexity, each of which requires additional ideas to solve. For \(i=1,2,3\), we were able to use Magma to simplify the presentation of \(\pi_{1}(X^{i}_{\text{Gal}})\), resulting in each case in a trivial group. Singularities of high multiplicity complicate the computation, so for \(X^{4}\) we needed to incorporate the methods of [6] and [19].
For every graph \(T\) there are a Coxeter group \(C(T)\) and a natural quotient \(C_{Y}(T)\). This is relevant here because in all cases computed so far, the fundamental group of the surface turns out to be a quotient of \(C_{Y}(T)\), where \(T\) is the dual graph of the shadow. Naively, we would want to make all computations in \(C_{Y}(T)\), because this group is fully understood. However, not all the defining relations of \(C_{Y}(T)\) are given _a priori_. This is resolved by a technical innovation of a multi-layered computation. We run Magma on the given presentation. When we cannot show that the fundamental group is indeed a quotient of \(C_{Y}(T)\), we show that a subgroup is a quotient of \(C_{Y}(T^{\prime})\) for a subgraph \(T^{\prime}\), use this to obtain more relations, run Magma again, and repeat. This is best demonstrated in the computation for \(X^{3}\) below.
As we progress from \(X^{1}\) to \(X^{2}\) to \(X^{3}\) and finally to \(X^{4}\), the situation becomes more complex. For \(X^{1}\) the traditional techniques suffice. For \(X^{2}\) we used Magma and methods from [19]. For \(X^{3}\) we used the multi-layered computation described above. For \(X^{4}\) we needed to also combine ideas from [6], which computes the Artin covers of \(C_{Y}(T)\).
The remainder of this paper is organized as follows. In Section 2 we give the constructions of \(X^{i}\), \(i=1,2,3,4\), we recall the Van Kampen Theorem to get presentations of fundamental groups, and give some necessary details on Coxeter groups and dual graphs from the papers [6] and [19]. In Theorems 3.1, 3.2, 3.3, and 3.4 (Section 3) the groups \(\pi_{1}(X_{\text{Gal}})\) related to \(X^{i}\), \(i=1,2,3,4\), are determined.
## 2. Preliminaries and methods
In this section we describe the shadows \(X^{1}\), \(X^{2}\), \(X^{3}\), and \(X^{4}\) in detail, and apply standard techniques to exhibit presentations for their fundamental groups \(\pi_{1}(X^{i}_{\text{Gal}})\). We recommend [10, 11, 13] as sources for relevant information about degenerations and [3] for detailed background about fundamental groups of complements of branch curves and fundamental groups of Galois covers.
### Projective degeneration
We begin by defining a degeneration. Let \(\Delta\) denote the unit disc, and let \(X,Y\) be projective algebraic surfaces. Let \(p:Y\to\mathbb{CP}^{2}\) and \(p^{\prime}:X\to\mathbb{CP}^{2}\) be generic projections. We say that \(p^{\prime}\) is a projective degeneration of \(p\) if there is a flat family \(\pi:V\to\Delta\) and an embedding \(F:V\to\Delta\times\mathbb{CP}^{2}\), such that \(F\) composed with the first projection is \(\pi\), and:
1. \(\pi^{-1}(0)\simeq X\);
2. there is a \(t_{0}\neq 0\) in \(\Delta\) such that \(\pi^{-1}(t_{0})\simeq Y\);
3. the family \(V-\pi^{-1}(0)\to\Delta-0\) is smooth;
4. restricting to \(\pi^{-1}(0)\), \(F=0\times p^{\prime}\) under the identification of \(\pi^{-1}(0)\) with \(X\);
5. restricting to \(\pi^{-1}(t_{0})\), \(F=t_{0}\times p\) under the identification of \(\pi^{-1}(t_{0})\) with \(Y\).
In the above process, we construct a flat degeneration of \(X\) into a union of planes. We consider degenerations with only two planes intersecting at a line, with each plane homeomorphic to \(\mathbb{CP}^{2}\).
### Constructions and figures of degenerations
Here we describe the four shadows \(X^{i}\), \(i=1,2,3,4\), which as explained above are all of degree \(8\) and have singularities of degree at most (and in \(X^{4}\): precisely) \(4\).
**Construction 1** (The shadow \(X^{4}\)).: _A triangulation of degree \(8\) of the sphere is obtained by cutting the Euclidean sphere by three orthogonal planes passing through the origin, see Figure 1. This triangulation has \(6\) vertices which we denote \(V_{1},\dots,V_{6}\) and \(12\) edges; the repeated edges, \(5,6,8,9\), are glued in pairs. We numerate the vertices and edges for future reference._
It is worth noting that following [11, Corollary 12.2], each degree \(4\) degeneration can be deformed to a cone over an elliptic curve \(E_{0}\), where \(E_{0}\) is a curve of degree \(4\) in \(\mathbb{P}^{3}\). The Zappatic singularities of type \(E_{4}\) are obtained as the intersection of \(4\) lines. For more details about such surfaces, see [4].
Now consider the two connected components of Figure 1, without the gluing. We will construct \(X^{1}\), \(X^{2}\), and \(X^{3}\) by gluing one pair, then two pairs, and eventually, three pairs of edges.
**Construction 2** (The shadow \(X^{1}\)).: _The shadow \(X^{1}\) appears in Figure 2. It has \(8\) vertices and \(15\) edges, of which only \(9\) are non-redundant (the redundant edges, those belonging to a single triangle, are not numerated, as they do not appear in the computation). Edge \(5\) appears twice in the figure, and the two copies are of course glued._
Figure 1. _Degree \(4\) degenerations with four identified edges give \(X^{4}\)_
_Notice that the degree of \(V_{1}\) and \(V_{2}\) is \(4\); the degree of \(V_{3}\) and \(V_{4}\) is \(3\); and the other vertices have degree \(1\)._
We now glue in \(X^{1}\) the edge connecting the vertices \(V_{4}\) and \(V_{6}\) with the edge connecting the vertices \(V_{4}\) and \(V_{7}\), to obtain \(X^{2}\) (the enumeration of the vertices and edges changes).
**Construction 3** (Degeneration \(X^{2}\)).: _The degeneration \(X^{2}\) appears in Figure 3. This shadow has \(7\) vertices and \(10\) non-redundant edges. Edges \(5\) and \(6\) appear twice, and the two copies are being glued. The degree of \(V_{1}\), \(V_{2}\), and \(V_{3}\) is \(4\); the degree of \(V_{4}\) and \(V_{5}\) is \(3\); and the degree of \(V_{6}\) and \(V_{7}\) is \(1\)._
We now glue the edge connecting \(V_{5}\) and \(V_{6}\) with the edge connecting \(V_{5}\) and \(V_{7}\) (in the numeration of \(X^{2}\)), to obtain \(X^{3}\).
Figure 2. _Degree \(4\) degenerations with one identified edge give \(X^{1}\)_
Figure 3. _Degree \(4\) degenerations with two identified edges give \(X^{2}\)_
**Construction 4** (The shadow \(X^{3}\)).: _There are now \(6\) vertices and \(11\) non-redundant edges, see Figure 4. The degrees of \(V_{1}\), \(V_{2}\), \(V_{3}\), and \(V_{4}\) are \(4\); and the degrees of \(V_{5}\) and \(V_{6}\) are \(3\)._
One can also depict \(X^{1}\), \(X^{2}\), and \(X^{3}\) as planar shadows, without gluing, as shown in Figure 5. We maintain the numeration of the vertices and edges for each shadow as given above. To obtain \(X^{4}\) from \(X^{3}\) we must glue the two edges connecting \(V_{5}\) and \(V_{6}\) (one from the triangle \(V_{5}V_{6}V_{4}\) and one from \(V_{5}V_{6}V_{1}\)), and the resulting shadow is no longer planar.
### Fundamental groups
We now describe how the fundamental groups for \(X^{i}\), \(i=1,2,3,4\), are computed from the diagrams. Let \(X\) be an algebraic surface of degree \(n\) embedded in some projective space, with a generic projection \(f\!:\!X\to\mathbb{CP}^{2}\) of degree \(n\). Let
\[X_{\rm Gal}=\overline{(X\times_{\mathbb{CP}^{2}}\dots\times_{\mathbb{CP}^{2}} X)-\triangle},\]
be the Galois cover of \(X\), where the product is taken \(n\) times, and \(\triangle\) is the extended diagonal that is defined as
\[\triangle=\{(x_{1},\ldots,x_{k})\in X^{k}\mid x_{i}=x_{j}\quad\text{for some}\quad i\neq j\}.\]
Figure 4. _Degree \(4\) degenerations with three identified edges give \(X^{3}\)_
Figure 5. _Alternative presentations for the shadows \(X^{1}\), \(X^{2}\) and \(X^{3}\)._
The projection has a branch curve \(S\), and we turn to compute \(G=\pi_{1}(\mathbb{CP}^{2}-S)\) and \(\pi_{1}(X_{\text{Gal}})\). The degeneration of \(X\) into a union of planes has a branch curve which is a line arrangement of degree \(q\), the number of edges in the degeneration.
The **vertices** of the line arrangement are the points where more than two lines intersect. The multiplicity of a vertex is the number of lines \(k\) that intersect at the point. In this work, we have singularities with multiplicities 1, 3, and 4. There is a regeneration process (explained, for example, in [3]), which "opens" each of the lines to either a double line or a conic. We obtain a local description for the regenerated curve \(S\), which is a cuspidal curve of degree \(2q\). To compute the presentation we first must choose a global ordering of the edges in the shadow. Then, the relations arising from each vertex depend on the local ordering at this point. Now, the Van Kampen Theorem [20] states that group \(G=\pi_{1}(\mathbb{CP}^{2}-S)\) is generated by \(\gamma_{i},\gamma_{i^{\prime}}\) for \(i=1,\ldots,q\), where each generator represents an edge between two planes in the degeneration, subject to relations of the following five types.
1. for every branch point of a conic, a relation of the form \(\gamma=\gamma^{\prime}\) where \(\gamma,\gamma^{\prime}\) are certain conjugates of the generators \(\gamma_{j}\) and \(\gamma_{j^{\prime}}\),
2. for every node, \([\gamma,\gamma^{\prime}]=1\) where \(\gamma,\gamma^{\prime}\) are certain conjugates of \(\gamma_{i}\) and \(\gamma_{j}\), where \(i,j\) are the lines meeting in this node,
3. for every cusp, \(\langle\gamma,\gamma^{\prime}\rangle=1\) where \(\gamma,\gamma^{\prime}\) are as in (2),
4. the "projective relation" \(\prod\limits_{i=d}^{1}\gamma_{i^{\prime}}\gamma_{i}=1\),
5* a commutator relation for every pair of disjoint edges whose corresponding lines nevertheless meet in the branch curve.
As a consequence, other than the projective relation, the relations can be computed locally. In other words, \(G\) is an amalgamated product of "local groups", one for each vertex, generated by the generators corresponding to the edges meeting in the vertex -- modulo the projective relation of (4) and the relations of type (5*). The amalgamation is done by identifying the generators associated to the same edge, each appearing in the two local groups of the vertices of this edge.
The following lemmas summarize the local groups for vertices of multiplicity 3 or 4 after regeneration, for a particular order of the edges. Our numeration of the edges in each of the shadows \(X^{i}\) was chosen so that locally we only see the orderings appearing in these lemmas.
In Figure 6 we have an intersection of three lines, being globally ordered as \(i<j<k\).
**Lemma 2.1**.: _For each vertex whose numeration is as in Figure 6, the relations induced on \(G\) are:_
\[\langle\gamma_{i^{\prime}},\gamma_{j}\rangle=\langle\gamma_{i^{\prime}},\gamma _{j^{\prime}}\rangle=\langle\gamma_{i^{\prime}},\gamma_{j}^{-1}\gamma_{j^{ \prime}}\gamma_{j}\rangle=e \tag{1}\]
\[\gamma_{i}=\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{\prime}}\gamma_{j}^{-1} \gamma_{j^{\prime}}{}^{-1} \tag{2}\]
\[\langle\gamma_{k},\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{\prime}}\gamma_{j}\gamma_{i^{\prime}}{}^{-1}\gamma_{j}^{-1}\gamma_{j^{\prime}}{}^{-1}\rangle=\langle\gamma_{k},\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{\prime}}\gamma_{j^{\prime}}\gamma_{i^{\prime}}{}^{-1}\gamma_{j}^{-1}\gamma_{j^{\prime}}{}^{-1}\rangle=e \tag{3}\]
\[\gamma_{k^{\prime}}=\gamma_{k}\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{\prime }}\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{\prime}}{}^{-1}\gamma_{j}{}^{-1} \gamma_{j^{\prime}}{}^{-1}\gamma_{k}\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{ \prime}}\gamma_{j}^{-1}\gamma_{j^{\prime}}{}^{-1}\gamma_{i^{\prime}}{}^{-1} \gamma_{j^{\prime}}{}^{-1}\gamma_{k}^{-1} \tag{4}\]
\[[\gamma_{i},\gamma_{k}]=[\gamma_{i},\gamma_{k^{\prime}}]=[\gamma_{i^{\prime} },\gamma_{k}]=[\gamma_{i^{\prime}},\gamma_{k^{\prime}}]=e. \tag{5}\]
Figure 7 depicts an intersection of four lines, being globally ordered as \(i<j<k<l\).
Figure 6. Intersection of three lines
Figure 7. Intersection of four lines
**Lemma 2.2**.: _A point that is an intersection of four lines contributes to \(G\) the following list of relations:_
\[\langle\gamma_{i^{\prime}},\gamma_{j}\rangle=\langle\gamma_{i^{\prime}},\gamma_{ j^{\prime}}\rangle=\langle\gamma_{i^{\prime}},\gamma_{j}^{-1}\gamma_{j^{\prime}} \gamma_{j}\rangle=e \tag{6}\]
\[\langle\gamma_{k},\gamma_{l}\rangle=\langle\gamma_{k^{\prime}},\gamma_{l} \rangle=\langle\gamma_{k}^{-1}\gamma_{k^{\prime}}\gamma_{k},\gamma_{l}\rangle=e \tag{7}\]
\[[\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{\prime}}\gamma_{j}^{-1}{\gamma_{j^{ \prime}}}^{-1},\gamma_{l}]=e \tag{8}\]
\[[\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{\prime}}\gamma_{j}^{-1}{\gamma_{j^{ \prime}}}^{-1},\gamma_{k}^{-1}{\gamma_{k^{\prime}}}^{-1}\gamma_{l^{\prime}} \gamma_{l^{\prime}}\gamma_{k^{\prime}}\gamma_{k}]=e \tag{9}\]
\[\langle\gamma_{i},\gamma_{j}\rangle=\langle\gamma_{i},\gamma_{j^{\prime}} \rangle=\langle\gamma_{i},\gamma_{j}^{-1}\gamma_{j^{\prime}}\gamma_{j}\rangle=e \tag{10}\]
\[\langle\gamma_{k},\gamma_{l}^{-1}\gamma_{l^{\prime}}\gamma_{l}\rangle=\langle \gamma_{k^{\prime}},\gamma_{l}^{-1}\gamma_{l^{\prime}}\gamma_{l}\rangle= \langle\gamma_{k}^{-1}\gamma_{k^{\prime}}\gamma_{k},\gamma_{l}^{-1}\gamma_{l^{ \prime}}\gamma_{l}\rangle=e \tag{11}\]
\[[\gamma_{j^{\prime}}\gamma_{j}\gamma_{i}\gamma_{j}^{-1}\gamma_{j^{\prime}}^{- 1},\gamma_{l}^{-1}\gamma_{l^{\prime}}\gamma_{l}]=e \tag{12}\]
\[[\gamma_{j^{\prime}}\gamma_{j}\gamma_{i}\gamma_{j}^{-1}{\gamma_{j^{\prime}}}^ {-1},\gamma_{k}^{-1}{\gamma_{k^{\prime}}}^{-1}\gamma_{l}^{-1}\gamma_{l^{ \prime}}\gamma_{l}\gamma_{l^{\prime}}\gamma_{k^{\prime}}\gamma_{k}]=e \tag{13}\]
\[\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{\prime}}\gamma_{j}\gamma_{i^{\prime}} ^{-1}\gamma_{j}^{-1}{\gamma_{j^{\prime}}}^{-1}=\gamma_{l}\gamma_{k^{\prime}} \gamma_{l}^{-1} \tag{14}\]
\[\gamma_{j^{\prime}}\gamma_{j}\gamma_{i^{\prime}}\gamma_{j^{\prime}}\gamma_{i^ {\prime}}^{-1}\gamma_{j}^{-1}\gamma_{j^{\prime}}^{-1}=\gamma_{l}\gamma_{k^{ \prime}}\gamma_{k}{\gamma_{k^{\prime}}}^{-1}\gamma_{l}^{-1} \tag{15}\]
\[\gamma_{j^{\prime}}\gamma_{j}\gamma_{i}\gamma_{j}\gamma_{i}^{-1}\gamma_{j}^{- 1}{\gamma_{j^{\prime}}}^{-1}=\gamma_{l}^{-1}\gamma_{l^{\prime}}\gamma_{l} \gamma_{k^{\prime}}\gamma_{l}^{-1}\gamma_{l^{\prime}}^{-1}\gamma_{l} \tag{16}\]
\[\gamma_{j^{\prime}}\gamma_{j}\gamma_{i}\gamma_{j^{\prime}}\gamma_{i}^{-1} \gamma_{j}^{-1}{\gamma_{j^{\prime}}}^{-1}=\gamma_{l}^{-1}\gamma_{l^{\prime}} \gamma_{l}\gamma_{k^{\prime}}\gamma_{k}\gamma_{k^{\prime}}^{-1}\gamma_{l}^{-1 }{\gamma_{l^{\prime}}}^{-1}\gamma_{l}. \tag{17}\]
### The dual graph
Group \(G\) comes with a natural projection to the symmetric group \(S_{n}\), where each generator \(\gamma_{i}\) or \(\gamma_{i^{\prime}}\) is mapped to the transposition switching the planes meeting in edge number \(i\).
From a presentation of \(G\) we can find a presentation of \(\pi_{1}(X_{\text{Gal}})\), using the exact sequence
\[0\rightarrow\pi_{1}(X_{\text{Gal}})\to G/\langle\langle\gamma^{2} \rangle\rangle\to S_{n}\to 0; \tag{18}\]
this can be done using the Reidemeister-Schreier method; however, the number of generators gets multiplied by \(n!\), so one must find ways to make the computation manageable.
Consider a shadow, such as \(X^{i}\) for \(i=1,2,3,4\). We now define a map from the fundamental group \(G/\langle\langle\gamma^{2}\rangle\rangle\) to the symmetric group \(S_{n}\), where \(n\) is the number of 2-cells in the shadow. Recall that \(G/\langle\langle\gamma^{2}\rangle\rangle\) is generated by \(\gamma_{i}\) and \(\gamma_{i^{\prime}}\), one pair for each nonredundant edge of the shadow. The map to \(S_{n}\) is then defined by mapping both \(\gamma_{i}\) and \(\gamma_{i^{\prime}}\) to the transposition \((ab)\), where \(a,b\) are the triangles meeting in the common edge \(i\).
A set of transpositions in \(S_{n}\) can be encoded as the edges of a graph \(T\) on \(n\) vertices. The transpositions generate the whole group if and only if \(T\) is connected. Furthermore, \(S_{n}\) can be presented on the given transpositions with respect to a certain set of relations. Group \(C_{Y}(T)\), which we define below, is defined by imposing the "local" relations, and so there is a projection \(C_{Y}(T)\to S_{n}\). In all cases studied so far, it was possible to prove that the generators of \(G/\langle\langle\gamma^{2}\rangle\rangle\) satisfy the defining relations of \(C_{Y}(T)\).
The **dual graph** of the shadow is graph \(T\), whose vertices are the 2-cells, with an edge connecting every intersecting pair of 2-cells. For example, the dual graph of \(X^{1}\) (from Figure 2), is given in Figure 10.
Because the product of two transpositions has order 2 or 3, depending on whether the corresponding edges are disjoint or share a vertex, we are led to associate to the dual graph \(T\) an abstract group \(C_{Y}(T)\), defined as follows.
**Definition 2.3**.: _Let \(T\) be a connected graph, not necessarily simple, but without loops. Let \(C(T)\) be the Coxeter group whose generators are the edges of \(T\), subject to three types of relations: \(u^{2}=1\) for every generator; \((uv)^{2}=1\) if the edges corresponding to \(u,v\) are disjoint; and \((uv)^{3}=1\) if \(u,v\) intersect in one vertex. (No relation is assumed if \(u,v\) connect the same two vertices)._
_Furthermore, we define the group \(C_{Y}(T)\) as the quotient of \(C(T)\) with respect to four types of relations:_
\[[wuw,v]=e\qquad\text{if $u,v,w$ are as in Figure 8 (left)}, \tag{19}\]
\[\langle vuw,v\rangle=e\qquad\text{if $u,v,w$ are as in Figure 8 (right)}, \tag{20}\]
\[[wuw,vxv]=e\qquad\text{if $x,u,v,w$ are as in Figure 9 (left)}, \tag{21}\]
\[\langle vuw,vxv\rangle=e\qquad\text{if $x,u,v,w$ are as in Figure 9 (right)}. \tag{22}\]
It is easy to see that if we map each generator to the transposition, switching its two vertices, all the relations defining \(C_{Y}(T)\) hold in \(S_{n}\). We thus have a well-defined projection \(C_{Y}(T)\to S_{n}\), where \(n\) is the number of vertices.
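As a quick illustration of this projection, the following Python sketch (a toy example of ours, not taken from the paper) checks that transpositions assigned to the edges of a small graph satisfy the Coxeter relations of Definition 2.3 in \(S_{5}\).

```python
def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def transposition(n, a, b):
    """The transposition (a b) in S_n, encoded as a tuple."""
    p = list(range(n))
    p[a], p[b] = p[b], p[a]
    return tuple(p)

def power(p, k, e):
    r = e
    for _ in range(k):
        r = compose(r, p)
    return r

n = 5
e = tuple(range(n))
u = transposition(n, 0, 1)   # edge u with vertices {0, 1}
v = transposition(n, 1, 2)   # edge v shares vertex 1 with u
w = transposition(n, 3, 4)   # edge w is disjoint from u

assert power(u, 2, e) == e               # u^2 = 1
assert power(compose(u, v), 3, e) == e   # (uv)^3 = 1: edges share a vertex
assert power(compose(u, w), 2, e) == e   # (uw)^2 = 1: edges are disjoint
print("Coxeter relations hold for these transpositions in S_5")
```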
It is not hard to see that if \(T\) is a tree, then \(C_{Y}(T)\) is isomorphic to \(S_{n}\). In general, the transpositions arranged in a cycle satisfy another relation, the "cyclic relation", which does not hold in \(C_{Y}(T)\). This was used to compute the kernel of the map \(C_{Y}(T)\to S_{n}\), as we now explain.
### The group \(S_{n}\ltimes A_{t,n}\)
The kernel of the map \(C_{Y}(T)\to S_{n}\) was computed in [19], when \(T\) is a simple graph. For a planar graph \(T\), this was extended in [6] to a similarly defined group \(A_{Y}(T)\), which naturally covers Artin's braid group \(B_{n}\); the kernel of the map \(A_{Y}(T)\to B_{n}\) was computed there. From this result one can deduce the structure of the kernel of \(C_{Y}(T)\to S_{n}\) for any planar graph \(T\), simple or not. We believe that the same description holds for any graph (see [12] for details), but this is not needed for the current paper.
Figure 8.
Figure 9.
Let \(t\geq 0\) and \(n\) be natural numbers. Recall the definition of group \(A_{t,n}\) from [19]. Let \(U:=\{u,u^{\prime},\dots\}\) be a set consisting of \(t\) elements.
**Definition 2.4**.: _Group \(A_{t,n}\) is generated by the \(n^{2}|U|\) elements \(u_{x,y}\), for \(u\in U\) and \(x,y\in\{1,\dots,n\}\), satisfying the following relations for any \(u,u^{\prime}\in U\) and any \(x,y,z\):_
1. \(u_{x,x}=e\)__
2. \(u_{x,y}u_{y,z}=u_{x,z}=u_{y,z}u_{x,y}\)__
3. \([u_{x,y},u^{\prime}_{w,z}]=e\) _(for distinct_ \(x,y,w\) _and_ \(z\)_)_
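For later use (the computation for \(X^{3}\) below invokes \(u_{y,x}=u_{x,y}^{-1}\)), note that this identity follows directly from the defining relations: setting \(z=x\) in relation (2) and applying relation (1) gives
\[u_{x,y}u_{y,x}=u_{x,x}=e,\qquad\text{hence}\qquad u_{y,x}=u_{x,y}^{-1}.\]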
Another description of the same group is provided by the following.
**Definition 2.5**.: _For \(x\in\{1,\dots,n\}\), let \(F^{(x)}\) denote the free group on \(t\) letters \(u_{x}\) \((u\in U)\). Put \(F^{*}_{t,n}:=\prod\limits_{x\in\{1,\dots,n\}}F^{(x)}\). The map \(\text{ab}:F^{(x)}\to\mathbb{Z}^{t}\), defined by \(\text{ab}(u_{x})=u\) (where \(\mathbb{Z}^{t}\) is thought of as the free Abelian group generated by \(U\)), extends naturally to a map \(F^{*}_{t,n}\to\mathbb{Z}^{t}\). We define \(F_{t,n}\) to be the kernel of this map [19, Definition 5.6]._
The main result of [19] is that \(C_{Y}(T)\cong S_{n}\ltimes A_{t,n}\) where \(n\) is the number of vertices in \(T\) and \(t\) is the rank of \(\pi_{1}(T)\). The symmetric group acts on \(A_{t,n}\), by its action on the indices. It is also verified in [19, Theorem 5.7] for \(n\geq 5\), that \(A_{t,n}\cong F_{t,n}\). In particular, there is a short exact sequence
\[1\longrightarrow F_{t,n}\longrightarrow C_{Y}(T)\longrightarrow S_{n} \longrightarrow 1.\]
## 3. Results
In this section we describe the groups \(\pi_{1}(X^{i}_{\text{Gal}})\) for each \(i=1,2,3,4\). We apply the algorithm and methods described in Section 2.
We have the following notation that will be used in commutation relations.
**Notation 1**.: _A formula involving \(\gamma_{i,i^{\prime}}\) is a shorthand for all the formulas that are obtained by replacing this term with \(\gamma_{i}\) or \(\gamma_{i^{\prime}}\)._
### The surface \(X^{1}\)
**Theorem 3.1**.: _The fundamental group of the Galois cover of \(X^{1}\) is trivial._
Proof.: Let us describe group \(G/\langle\langle\gamma^{2}\rangle\rangle\) for the shadow \(X^{1}\) depicted in Figure 2. The group is generated by \(\{\gamma_{i}\}\) for \(i=1,1^{\prime},2,2^{\prime},\dots,9,9^{\prime}\). Vertices \(V_{1}\) and \(V_{2}\) give relations as in Lemma 2.2,
where \((i,j,k,l)\) are equal to \((1,2,3,4)\) and \((6,7,8,9)\), respectively. For the vertices \(V_{3}\) and \(V_{4}\) we have the relations from Lemma 2.1, with \((i,j,k)\) equal to \((2,5,7)\) and \((4,5,6)\), respectively. The vertices \(V_{5}\), \(V_{6}\), \(V_{7}\), and \(V_{8}\) give the relations \(\gamma_{i}=\gamma_{i^{\prime}}\) for \(i=1,3,8\), and \(9\) respectively. Then we have the relations of type (5*):
\[[\gamma_{i,i^{\prime}},\gamma_{j,j^{\prime}}]=e\,\,\,\mbox{for}\,\,\,(i,j)\in( \{1,3\}\times\{5,6,7,8,9\})\cup(\{2\}\times\{6,8,9\})\cup(\{4\}\times\{7,8,9\}) \cup(\{5\}\times\{8,9\}).\]
The last relation is the projective one \(\prod\limits_{i=9}^{1}\gamma_{i^{\prime}}\gamma_{i}=e\).
Next we compare (14) and (15) with \((i,j,k,l)\) equal to \((1,2,3,4)\), and together with \(\gamma_{1}=\gamma_{1^{\prime}}\) and \(\gamma_{3}=\gamma_{3^{\prime}}\), we get \(\gamma_{2}=\gamma_{2^{\prime}}\). Then we can get from (14) and (16) that \(\gamma_{4}=\gamma_{4^{\prime}}\). Similarly, we use \((i,j,k,l)\) equal to \((6,7,8,9)\), along with \(\gamma_{8}=\gamma_{8^{\prime}}\) and \(\gamma_{9}=\gamma_{9^{\prime}}\) to get \(\gamma_{7}=\gamma_{7^{\prime}}\) and \(\gamma_{6}=\gamma_{6^{\prime}}\) as well.
Now we turn to relation (2) and substitute \(i=2\) and \(j=5\) along with \(\gamma_{2}=\gamma_{2^{\prime}}\), which was obtained before, to get \(\gamma_{5}=\gamma_{5^{\prime}}\). In a similar way, we substitute \(i=4\) and \(j=5\) and \(\gamma_{5}=\gamma_{5^{\prime}}\), and get \(\gamma_{4}=\gamma_{4^{\prime}}\).
It follows that group \(G/\langle\langle\gamma^{2}\rangle\rangle\) is generated by \(\{\gamma_{i}\}\), \(1\leq i\leq 9\). We now present the dual graph of the edges in \(X^{1}\), see Figure 10, from which one can easily read the map \(G/\langle\langle\gamma^{2}\rangle\rangle\to S_{8}\) (where \(8\) is the number of vertices in the dual graph).
The relations that we have in \(G/\langle\langle\gamma^{2}\rangle\rangle\), from the above simplification, are of type
\[[\gamma_{i},\gamma_{j}]=e\ \ \text{if $i,j$ are disjoint lines}; \tag{23}\]
\[\langle\gamma_{i},\gamma_{j}\rangle=e\ \ \text{otherwise}.\]
We also have two relations \(\gamma_{1}\gamma_{2}\gamma_{1}=\gamma_{3}\gamma_{4}\gamma_{3}\) and \(\gamma_{6}\gamma_{7}\gamma_{6}=\gamma_{8}\gamma_{9}\gamma_{8}\) that relate to the two cycles in the dual graph. Moreover, relations \([\gamma_{4},\gamma_{2}\gamma_{5}\gamma_{2}]=e\) and \([\gamma_{5},\gamma_{6}\gamma_{7}\gamma_{6}]=e\) appear in our presentation, and they are associated to the two sets of three lines each meeting at a point in the graph. This means that \(G/\langle\langle\gamma^{2}\rangle\rangle\to S_{8}\) is an isomorphism, and since \(\pi_{1}(X^{1}_{\text{Gal}})\) is the kernel of this map, the fundamental group of the Galois cover of \(X^{1}\) is trivial.
### The surface \(X^{2}\)
**Theorem 3.2**.: _The fundamental group of the Galois cover related to \(X^{2}\) is trivial._
Proof.: The group \(G/\langle\langle\gamma^{2}\rangle\rangle\) is generated by \(\{\gamma_{i},\gamma_{i^{\prime}}\}\) for \(i=1,\ldots,10\). Following Figure 3, vertices \(V_{1}\), \(V_{2}\) and \(V_{3}\) give the relations from Lemma 2.2, where \((i,j,k,l)\) is equal to \((1,2,3,4)\) for \(V_{1}\), to \((4,5,6,7)\) for \(V_{2}\), and to \((7,8,9,10)\) for \(V_{3}\). Vertices \(V_{4}\) and \(V_{5}\) give the relations from Lemma 2.1, with \((i,j,k)\) equal to \((2,5,8)\) and \((3,6,9)\), respectively. The vertices \(V_{6}\) and \(V_{7}\) give the relations \(\gamma_{i}=\gamma_{i^{\prime}}\) for \(i=1\) and \(i=10\), respectively. The relations of type (5*) are:
\[[\gamma_{i,i^{\prime}},\gamma_{j,j^{\prime}}]=e\ \ \text{for}\ \ (i,j)\in \begin{array}{l}(\{1\}\times\{5,6,7,8,9,10\})\cup(\{2\}\times\{6,7,9,10\}) \cup(\{3\}\times\{5,7,8,10\})\\ \cup(\{4\}\times\{8,9,10\})\cup(\{5\}\times\{9,10\})\cup(\{6\}\times\{8,10\}). \end{array}\]
And as always, the projective relation is \(\prod\limits_{i=10}^{1}\gamma_{i^{\prime}}\gamma_{i}=e\).
Now we explain the simplification of the presentation of \(G/\langle\langle\gamma^{2}\rangle\rangle\). As written above, we have 20 generators and quite a few relations. We run the presentation through Magma and get as output a simplified presentation of \(G/\langle\langle\gamma^{2}\rangle\rangle\) with eight generators \(\gamma_{1},\gamma_{3^{\prime}},\gamma_{5},\gamma_{5^{\prime}},\gamma_{6^{\prime}},\gamma_{7},\gamma_{9},\gamma_{10}\), still with a long list of relations.
We construct the dual graph according to the list of generators, see Figure 11.
Figure 11. _The dual graph after the first run in Magma_
We see in the output that the relation \([\gamma_{7},\gamma_{10}]=e\) is missing, but it can be derived from another relation in the output, \([\gamma_{1}\gamma_{7}\gamma_{6^{\prime}}\gamma_{3^{\prime}}\gamma_{6^{\prime}}\gamma_{1}\gamma_{7},\gamma_{10}]=e\). We then add it to the presentation and run Magma a second time.
The output of the second run on Magma consists of \(\gamma_{1},\gamma_{2},\gamma_{3^{\prime}},\gamma_{5^{\prime}},\gamma_{6^{ \prime}},\gamma_{7},\gamma_{9},\gamma_{10}\), as generators, again with a long list of relations. We pick up all relations of type (23) from the simplified presentation, and construct the dual graph related to these generators, see Figure 12.
In the output we could not find the relation \([\gamma_{5^{\prime}},\gamma_{6^{\prime}}]=e\), but the relation \([\gamma_{6^{\prime}}\gamma_{3^{\prime}}\gamma_{6^{\prime}},\gamma_{5^{\prime}}]=e\), which does appear in the output, simplifies to it. At this point, a cyclic relation for the cycle appearing in the graph is still missing.
For \(T\), the graph depicted in Figure 12, we get a well-defined surjection \(\varphi:C_{Y}(T)\to G/\langle\langle\gamma^{2}\rangle\rangle\), see Definition 2.3. This map sends the generator corresponding to an edge \(E\) to the generator \(\gamma_{i}\), where \(i\) is the label on edge \(E\) in Figure 12. It is well-defined by the definition of \(C_{Y}(T)\). By [19], the map \(\psi:C_{Y}(T)\to S_{8}\ltimes A_{1,8}\), defined by
\[\gamma_{1}\mapsto(e\ d),\ \ \gamma_{2}\mapsto(e\ f),\ \ \gamma_{3^{\prime}} \mapsto(c\ d),\ \ \gamma_{5^{\prime}}\mapsto(a\ f)u_{a,f},\ \ \gamma_{6^{\prime}}\mapsto(b\ c),\]
and
\[\gamma_{7}\mapsto(a\ b),\ \ \gamma_{9}\mapsto(b\ g),\ \ \gamma_{10}\mapsto(g\ h)\]
is an isomorphism. Here, \(S_{8}\) acts on the label set \(L=\{a,b,c,d,e,f,g,h\}\), and \(A_{1,8}\) is generated by the elements of the form \(u_{x,y}\) (see Subsection 2.5), for \(x,y\in L\). So, we are left to compute \(\ker\varphi\circ\psi^{-1}\). This is done by pulling back the defining relations of \(G\) to \(S_{8}\ltimes A_{1,8}\), which gives us a unique non-vanishing relation: \(u_{h,f}u_{g,d}=e\). However, we can act with \(S_{8}\) on this relation, and obtain \(u_{x,y}=u_{x^{\prime},y^{\prime}}^{-1}\) for any distinct \(x,y,x^{\prime},y^{\prime}\in L\). In particular, \(u_{a,b}^{2}=u_{c,d}u_{d,e}=u_{c,e}=u_{a,b}\), so \(u_{a,b}=1\) and all the generators of \(A_{1,8}\) are contained in \(\ker\varphi\circ\psi^{-1}\).
We thus proved that \(G/\langle\langle\gamma^{2}\rangle\rangle\cong S_{8}\). In particular, the fundamental group of \(X_{\rm Gal}^{2}\) is trivial.
Figure 12. _The dual graph after the second run in Magma_
### The surface \(X^{3}\)
**Theorem 3.3**.: _The fundamental group of the Galois cover related to \(X^{3}\) is trivial._
Proof.: Group \(G/\langle\langle\gamma^{2}\rangle\rangle\) related to \(X^{3}\) is generated by \(\{\gamma_{i}\}\) for \(i=1,1^{\prime},2,2^{\prime},\ldots,11,11^{\prime}\). Vertices \(V_{1}\), \(V_{2}\), \(V_{3}\), and \(V_{4}\) give the relations from Lemma 2.2, where \((i,j,k,l)\) is equal to \((1,2,3,4)\) for \(V_{1}\), \((3,6,8,10)\) for \(V_{2}\), \((4,5,6,7)\) for \(V_{3}\), and \((7,9,10,11)\) for \(V_{4}\). Vertices \(V_{5}\) and \(V_{6}\) give the relations from Lemma 2.1, with \((i,j,k)\) equal to \((2,5,9)\) and \((1,8,11)\), respectively. The relations of type (5*) are:
\[[\gamma_{i,i^{\prime}},\gamma_{j,j^{\prime}}]=e\]
for
\[(i,j)\in\begin{array}{l}(\{1\}\times\{5,6,7,9,10\})\cup(\{2\}\times\{6,7,8,1 0,11\})\cup(\{3\}\times\{5,7,9,11\})\cup(\{4\}\times\{9\})\\ \cup(\{4,5\}\times\{8,10,11\})\cup(\{6\}\times\{9,11\})\cup(\{8\}\times\{7,9\}), \end{array}\]
and the projective relation is \(\prod\limits_{i=11}^{1}\gamma_{i^{\prime}}\gamma_{i}=e\).
Group \(G/\langle\langle\gamma^{2}\rangle\rangle\) now has 22 generators and a long list of relations. This input was fed into Magma, and the output is now the group with the generators \(\gamma_{1^{\prime}},\gamma_{2^{\prime}},\gamma_{5},\gamma_{5^{\prime}},\gamma_ {6^{\prime}},\gamma_{7},\gamma_{8^{\prime}},\gamma_{9}\), and \(\gamma_{10}\), and a long list of relations. Let us label the triangles of \(X^{3}\), as in Figure 13.
The dual graph \(T\) with these generators is now given in Figure 14.
Figure 13. _The shadow \(X^{3}\) with triangles labelled_
The list of simplified relations we got from Magma contains all the defining relations of \(C_{Y}(T)\) of type (23) corresponding to \(T\), except \(\langle\gamma_{5},\gamma_{7}\rangle=e\), \(\langle\gamma_{5},\gamma_{9}\rangle=e\), \(\langle\gamma_{5^{\prime}},\gamma_{7}\rangle=e\), \(\langle\gamma_{6^{\prime}},\gamma_{10}\rangle=e\), and \(\langle\gamma_{7},\gamma_{10}\rangle=e\). It also contains the relations \(\langle\gamma_{6^{\prime}}\gamma_{7}\gamma_{6^{\prime}},\gamma_{5^{\prime}}\rangle=e\), \(\langle\gamma_{6^{\prime}}\gamma_{5^{\prime}}\gamma_{8^{\prime}}\gamma_{6^{\prime}},\gamma_{7}\gamma_{10}\gamma_{8^{\prime}}\gamma_{7}\rangle=e\), and \(\langle\gamma_{6^{\prime}}\gamma_{8^{\prime}}\gamma_{7}\gamma_{6^{\prime}}\gamma_{8^{\prime}},\gamma_{1^{\prime}}\gamma_{5}\gamma_{2^{\prime}}\gamma_{5}\gamma_{1^{\prime}}\rangle=e\), which can be simplified to \(\langle\gamma_{5^{\prime}},\gamma_{7}\rangle=e\), \(\langle\gamma_{7},\gamma_{10}\rangle=e\), and \(\langle\gamma_{5},\gamma_{7}\rangle=e\), respectively. We have thus obtained all the relations of type (23) corresponding to \(T\), except \(\langle\gamma_{5},\gamma_{9}\rangle=e\) and \(\langle\gamma_{6^{\prime}},\gamma_{10}\rangle=e\).
Denote by \(T_{0}^{\prime}\) the subgraph of \(T\) obtained by removing the edges labeled \(6^{\prime}\), \(9\), and \(5^{\prime}\), and by \(T_{0}\) the subgraph of \(T\) obtained by removing the edges labeled \(6^{\prime}\), \(9\), and \(5\). Since we have derived the relations corresponding to \(T_{0}\) and \(T_{0}^{\prime}\), we conclude that there are natural maps \(C_{Y}(T_{0})\to G/\langle\langle\gamma^{2}\rangle\rangle\) and \(C_{Y}(T_{0}^{\prime})\to G/\langle\langle\gamma^{2}\rangle\rangle\). Those maps agree on the generators that appear in both groups, i.e., \(\gamma_{1^{\prime}},\gamma_{2^{\prime}},\gamma_{7},\gamma_{8^{\prime}},\gamma_{10}\), and so we get a map from their amalgamated product. Each of the graphs \(T_{0}\) and \(T_{0}^{\prime}\) has one cycle, so by [19], \(C_{Y}(T_{0})\) and \(C_{Y}(T_{0}^{\prime})\) are both isomorphic to \(S_{6}\ltimes A_{1,6}\) by
\[\gamma_{1^{\prime}}\mapsto(b\ c),\gamma_{2^{\prime}}\mapsto(a\ b),\gamma_{5} \mapsto(a\ f)u_{a,f},\gamma_{5^{\prime}}\mapsto(a\ f)u^{\prime}_{a,f},\gamma_ {7}\mapsto(e\ f),\gamma_{8^{\prime}}\mapsto(c\ d),\gamma_{10}\mapsto(d\ e).\]
Here, \(S_{6}\) acts on the label set \(L=\{a,b,c,d,e,f\}\), the copy of \(A_{1,6}\) corresponding to \(C_{Y}(T_{0})\) is generated by the elements of the form \(u_{x,y}\) for \(x,y\in L\), and the copy of \(A_{1,6}\) corresponding to \(C_{Y}(T_{0}^{\prime})\) is generated by elements of the form \(u^{\prime}_{x,y}\) for \(x,y\in L\) (see Subsection 2.5). Thus, their amalgamated product is isomorphic to \(S_{6}\ltimes(A_{1,6}*A_{1,6})\), which we denote by \(H\). The resulting map to \(G/\langle\langle\gamma^{2}\rangle\rangle\) is of course not surjective, so we take the free product with the missing generators to get the map
\[\varphi_{1}:H*\langle\gamma_{6^{\prime}},\gamma_{9}\rangle\twoheadrightarrow G /\langle\langle\gamma^{2}\rangle\rangle,\]
of which we need to find the kernel after adding the remaining relations.
Figure 14. _The dual graph for the remaining generators_
One of the relations in the output pulls back by \(\varphi_{1}\) to
\[(a\ b\ c)\gamma_{6^{\prime}}(a\ b)(c\ d\ e)\gamma_{6^{\prime}}(a\ b\ c)(d\ e) \gamma_{6^{\prime}}(a\ b)(c\ d\ e)\gamma_{6^{\prime}}(a\ b\ d\ c)\gamma_{6^{ \prime}}(a\ b)(c\ e\ d)=e.\]
Using the fact that \(\gamma_{6^{\prime}}\) commutes with all the permutations that fix \(e\), this relation simplifies to \(\langle\gamma_{6^{\prime}},(d\ e)\rangle=e\) which is the pullback (by \(\varphi_{1}\)) of \(\langle\gamma_{6^{\prime}},\gamma_{10}\rangle=e\), one of the missing relations of type (23).
Adding edge \(6^{\prime}\) to \(T_{0}\) and \(T_{0}^{\prime}\) and performing the procedure described above, now taking into account the newly derived relation \(\langle\gamma_{6^{\prime}},\gamma_{10}\rangle=e\), we get a map
\[\varphi_{2}:(S_{7}\ltimes(A_{1,7}*A_{1,7}))*\langle\gamma_{9}\rangle\twoheadrightarrow G/\langle\langle\gamma^{2}\rangle\rangle.\]
Here, \(S_{7}\) acts on the label set \(L=\{a,b,c,d,e,f,h\}\); one of the copies of \(A_{1,7}\) is generated by the elements of the form \(u_{x,y}\) and the other by elements of the form \(u^{\prime}_{x,y}\) for \(x,y\in L\).
Another relation in the output pulls back by \(\varphi_{2}\) to \(u_{f,a}u^{\prime}_{c,d}u_{a,f}u^{\prime}_{d,c}=e\) which, after conjugating with transpositions, implies that \(u_{x,y}\) commutes with \(u^{\prime}_{w,z}\) for all distinct \(x,y\) and \(w,z\) (recall that \(u_{y,x}=u_{x,y}^{-1}\)).
Next we consider the relation from the output that pulls back by \(\varphi_{2}\) to
\[u^{\prime}_{e,a}u_{h,e}u^{\prime}_{a,h}u_{c,a}u^{\prime}_{h,c}u_{a,h}u^{\prime }_{c,e}=e,\]
and simplify it. We use \(\sim\) to denote conjugacy in the group.
\[\begin{aligned}u_{c,e}&\sim u^{\prime}_{c,h}u_{c,e}u^{\prime}_{h,c}\\ &=u^{\prime}_{c,e}u^{\prime}_{e,h}u_{c,a}u_{a,e}u^{\prime}_{h,c}\\ &\sim u^{\prime}_{e,h}u_{c,a}u^{\prime}_{h,c}u_{a,e}u^{\prime}_{c,e}\\ &=u^{\prime}_{e,h}u_{c,a}u^{\prime}_{h,c}u_{a,h}u_{d,e}u_{h,d}u^{\prime}_{c,e}\\ &\sim u_{h,d}u^{\prime}_{e,h}u_{c,a}u^{\prime}_{h,c}u_{a,h}u_{d,e}u^{\prime}_{c,e}\\ &=u^{\prime}_{e,a}u_{h,d}u_{d,e}u^{\prime}_{a,h}u_{c,a}u^{\prime}_{h,c}u_{a,h}u^{\prime}_{c,e}=e.\end{aligned}\]
This implies that \(u_{x,y}=e\) for all distinct \(x\) and \(y\). In particular, \(\gamma_{5}\) pulls back by \(\varphi_{2}\) to \((a\ f)\), i.e., \(\gamma_{5}\) is the conjugate of \(\gamma_{7}\) by \(\gamma_{2^{\prime}}\gamma_{1^{\prime}}\gamma_{8^{\prime}}\gamma_{10}\). Since \(\gamma_{1^{\prime}},\gamma_{2^{\prime}},\gamma_{8^{\prime}}\), and \(\gamma_{10}\) commute with \(\gamma_{9}\), and since \(\langle\gamma_{7},\gamma_{9}\rangle=e\) holds, we conclude that \(\langle\gamma_{5},\gamma_{9}\rangle=e\) holds as well.
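Under the identification above this is a direct check (a sketch; products are read as composition, rightmost factor first): \(\gamma_{2^{\prime}}\gamma_{1^{\prime}}\gamma_{8^{\prime}}\gamma_{10}\) maps to \((a\ b)(b\ c)(c\ d)(d\ e)\), a permutation sending \(e\) to \(a\) and fixing \(f\), so

\[\big((a\ b)(b\ c)(c\ d)(d\ e)\big)\,(e\ f)\,\big((a\ b)(b\ c)(c\ d)(d\ e)\big)^{-1}=(a\ f),\]

which is the image of \(\gamma_{5}\) once all \(u_{x,y}\) vanish.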
Moreover, \(\gamma_{5}\) can be expressed as a product of the other generators. This means that we have a surjection \(C_{Y}(\hat{T})\twoheadrightarrow G/\langle\langle\gamma^{2}\rangle\rangle\) where \(\hat{T}\) is the subgraph of \(T\) that we get by removing edge \(5\). We want to find the kernel of this surjection.
As before, \(C_{Y}(\hat{T})\) is isomorphic to \(S_{8}\ltimes A_{1,8}\) by
\[\gamma_{1^{\prime}}\mapsto(b\ c),\ \ \gamma_{2^{\prime}}\mapsto(a\ b),\ \ \gamma_{5^{\prime}}\mapsto(a\ f)u^{\prime}_{a,f},\ \ \gamma_{6^{\prime}}\mapsto(e\ h),\ \ \gamma_{7}\mapsto(e\ f),\]
and
\[\gamma_{8^{\prime}}\mapsto(c\ d),\ \ \gamma_{9}\mapsto(f\ g),\ \ \gamma_{10}\mapsto(d\ e).\]
Pulling back some of the defining relations of \(G/\langle\langle\gamma^{2}\rangle\rangle\) to \(C_{Y}(\hat{T})\) and using the above isomorphism with \(S_{8}\ltimes A_{1,8}\), we get the relations \(u^{\prime 2}_{f,d}=e\) and \(u^{\prime}_{c,g}u^{\prime}_{c,d}u^{\prime}_{f,d}u^{\prime}_{e,a}u^{\prime}_{h,b}=e\). We can use them to find that \(u^{\prime}_{w,z}=e\) for all \(w,z\). To this end, we first conjugate \(u^{\prime 2}_{f,d}=e\) by a transposition \((f\ c)\) to get \(u^{\prime 2}_{c,d}=e\), and then simplify the other relation:
\[e=u^{\prime}_{c,g}u^{\prime}_{c,d}u^{\prime}_{f,d}u^{\prime}_{e,a}u^{\prime}_{ h,b}=u^{\prime}_{d,g}u^{\prime 2}_{c,d}u^{\prime}_{f,d}u^{\prime}_{e,a}u^{ \prime}_{h,b}=u^{\prime}_{f,g}u^{\prime}_{e,a}u^{\prime}_{h,b}. \tag{24}\]
Conjugating by the transposition \((h\ c)\) we get
\[u^{\prime}_{f,g}u^{\prime}_{e,a}u^{\prime}_{c,b}=e. \tag{25}\]
Comparing (24) with (25) gives \(u^{\prime}_{h,b}=u^{\prime}_{c,b}\), hence \(u^{\prime}_{h,c}=e\), and after conjugating by elements of \(S_{8}\) we get \(u^{\prime}_{w,z}=e\) for all \(w,z\). This implies that \(A_{1,8}\) is contained in the kernel we are computing, and therefore \(G/\langle\langle\gamma^{2}\rangle\rangle\cong S_{8}\). This means that the fundamental group of the Galois cover related to \(X^{3}\) is trivial.
### The surface \(X^{4}\)
Finally, we consider a surface \(X\) whose degeneration is the shadow \(X^{4}\). Recall that \(G=\pi_{1}(\mathbb{CP}^{2}-S)\) is the fundamental group of the complement of the branch curve. We let \(\bar{G}\) denote \(\pi_{1}(X_{\rm Gal})=G/\langle\langle\gamma^{2}\rangle\rangle\).
**Theorem 3.4**.: _The fundamental group \(\bar{G}\) of the Galois cover related to \(X^{4}\) is an abelian-by-abelian group of order \(2^{23}\). (In fact, a central extension \(1\to\mathbb{Z}_{2}^{3}\to\bar{G}\to\mathbb{Z}_{2}^{20}\to 1\).)_
Proof.: Group \(G/\langle\langle\gamma^{2}\rangle\rangle\), in this case, is generated by \(\{\gamma_{i}\}\) for \(i=1,1^{\prime},2,2^{\prime},\ldots,12,12^{\prime}\). This time the degree of each of the six vertices \(V_{1},\ldots,V_{6}\) is \(4\), and the relations they induce are as in Lemma 2.2. For \(V_{1}\) we have \((i,j,k,l)=(1,2,3,4)\), for \(V_{2}\) we have \((i,j,k,l)=(3,6,8,11)\), for \(V_{3}\) we have \((i,j,k,l)=(4,5,6,7)\), for \(V_{4}\) we have \((i,j,k,l)=(7,10,11,12)\), for \(V_{5}\) the indices are
\((i,j,k,l)=(2,5,9,10)\), and finally, for \(V_{6}\), we have \((i,j,k,l)=(1,8,9,12)\). The relations of type (5*) are:
\[[\gamma_{i,i^{\prime}},\gamma_{j,j^{\prime}}]=e\]
for
\[(i,j)\in\begin{array}{l}(\{1\}\times\{5,6,7,10,11\})\cup(\{2\}\times\{6,7,8,11,12\})\cup(\{3\}\times\{5,7,9,10,12\})\\ \cup(\{4\}\times\{8,9,10,11,12\})\cup(\{5\}\times\{8,11,12\})\cup(\{6\}\times\{9,10,12\})\\ \cup(\{7\}\times\{8,9\})\cup(\{8\}\times\{10\})\cup(\{9\}\times\{11\}).\end{array}\]
The projective relation is, as before, \(\prod\limits_{i=12}^{1}\gamma_{i^{\prime}}\gamma_{i}=e\).
The group \(G/\langle\langle\gamma^{2}\rangle\rangle\) now has 24 generators and a long list of relations. We used Magma to simplify the relations. Again, some generators can be expressed in terms of others, and we are left with the generators \(\gamma_{2^{\prime}},\gamma_{3^{\prime}},\gamma_{4},\gamma_{5},\gamma_{5^{\prime}},\gamma_{6^{\prime}},\gamma_{7},\gamma_{8^{\prime}},\gamma_{10},\gamma_{10^{\prime}}\). We label the triangles as in Figure 15, and consider the dual graph on these generators, see Figure 16.
Figure 15. \(X^{4}\) _with labelled triangles_

Figure 16. _The dual graph after the first run in Magma_

We collected all relations that are of form (23), noticing that the relations \(\langle\gamma_{5},\gamma_{10^{\prime}}\rangle=e,\langle\gamma_{5},\gamma_{10}\rangle=e\), and \(\langle\gamma_{5},\gamma_{7}\rangle=e\) are missing. The relation \(\langle\gamma_{10^{\prime}},\gamma_{5^{\prime}}\gamma_{5}\gamma_{2^{\prime}}\gamma_{5}\gamma_{5^{\prime}}\gamma_{5}\gamma_{2^{\prime}}\gamma_{5}\gamma_{5^{\prime}}\rangle=e\) from the output can be simplified to \(\langle\gamma_{5},\gamma_{10^{\prime}}\rangle=e\), and this is enough for us to run Magma on the original presentation, with this additional relation, which is now proven. We now get another simplified presentation for \(G/\langle\langle\gamma^{2}\rangle\rangle\), this time with the generators \(\gamma_{1^{\prime}},\gamma_{2^{\prime}},\gamma_{3^{\prime}},\gamma_{5},\gamma_{5^{\prime}},\gamma_{7},\gamma_{8^{\prime}},\gamma_{9},\gamma_{11},\gamma_{11^{\prime}}\), whose graph, denoted by \(T\), is depicted in Figure 17.
We want to verify that our group is a quotient of \(C_{Y}(T)\), where \(T\) is given in Figure 17. First we notice that the relations \(\langle\gamma_{2^{\prime}},\gamma_{9}\rangle=e,\langle\gamma_{5},\gamma_{7} \rangle=e\), \([\gamma_{5},\gamma_{9}]=e\) and \([\gamma_{5^{\prime}},\gamma_{9}]=e\) are missing; but these can be deduced using other relations from the output; for example, we simplified \(\langle\gamma_{2^{\prime}}\gamma_{5^{\prime}}\gamma_{2^{\prime}},\gamma_{9} \rangle=e\) to get \(\langle\gamma_{2^{\prime}},\gamma_{9}\rangle=e\). Thereafter, we proved the following relations:
\[[\gamma_{1^{\prime}}\gamma_{2^{\prime}}\gamma_{1^{\prime}}, \gamma_{9}] = e\] \[[\gamma_{1^{\prime}}\gamma_{3^{\prime}}\gamma_{1^{\prime}}, \gamma_{8^{\prime}}] = e\] \[\langle\gamma_{5^{\prime}},\gamma_{2^{\prime}}\gamma_{5}\gamma_{ 2^{\prime}}\rangle = e\] \[\langle\gamma_{5^{\prime}},\gamma_{7}\gamma_{5}\gamma_{7}\rangle = e\] \[\langle\gamma_{11^{\prime}},\gamma_{7}\gamma_{11}\gamma_{7}\rangle = e\] \[\langle\gamma_{11^{\prime}},\gamma_{8^{\prime}}\gamma_{11}\gamma_{ 8^{\prime}}\rangle = e\] \[[\gamma_{2^{\prime}}\gamma_{5^{\prime}}\gamma_{2^{\prime}}, \gamma_{7}\gamma_{5}\gamma_{7}] = e\] \[[\gamma_{8^{\prime}}\gamma_{11^{\prime}}\gamma_{8^{\prime}}, \gamma_{7}\gamma_{11}\gamma_{7}] = e.\]
By that, we verified all the defining relations (19)-(22), so our group is indeed a quotient of \(C_{Y}(T)\).
Let \(F_{t,n}\) be the group defined in Subsection 2.5. As explained in Subsection 2.4, \(C_{Y}(T)\cong S_{8}\ltimes F_{3,8}\). To compute \(G/\langle\langle\gamma^{2}\rangle\rangle\), it remains to cast all the relations in the language of the generators of \(F_{3,8}\). Recall that \(F_{3,8}\) is a subgroup of the direct product \((\mathbb{F}_{3})^{8}\); let \(u_{i},v_{i},w_{i}\) be the generators of the \(i\)th component, so the free groups \(\langle u_{i},v_{i},w_{i}\rangle\) commute elementwise. In this language, \(F_{3,8}\) is generated by all the elements \(u_{j}u_{i}^{-1}\), \(v_{j}v_{i}^{-1}\) and \(w_{j}w_{i}^{-1}\).

Figure 17. _The dual graph after the second run in Magma_
Based on the spanning subgraph \(T_{0}\) that was obtained from \(T\) by removing edges \(5\), \(5^{\prime}\) and \(11^{\prime}\), the final stage is to reinterpret the defining relations of \(\bar{G}=G/\langle\langle\gamma^{2}\rangle\rangle\), which we do by mapping \(\bar{G}\) to a quotient of \(S_{8}\ltimes F_{3,8}\) by
\[\gamma_{1^{\prime}}\mapsto(bg),\ \ \gamma_{2^{\prime}}\mapsto(bc),\ \ \gamma_{3^{\prime}}\mapsto(gh),\ \ \gamma_{7}\mapsto(de),\ \ \gamma_{8^{\prime}}\mapsto(fg),\ \ \gamma_{9}\mapsto(ab)\]
and
\[\gamma_{5}\mapsto(cd)u_{c}u_{d}^{-1},\ \ \gamma_{5^{\prime}}\mapsto(cd)v_{c}v_{ d}^{-1},\ \ \gamma_{11^{\prime}}\mapsto(ef)w_{e}w_{f}^{-1}.\]
As may be expected, most of the defining relations of \(\bar{G}\) now become trivial. One exception is the relation \(\omega=e\), where
\[\omega=[u_{d},v_{d}][v_{c},w_{c}]u_{a}^{-1}w_{a}v_{b}^{-1}u_{b}w_{b}^{-1}v_{b} u_{c}w_{c}w_{d}^{-1}u_{d}^{-1}w_{e}^{-1}u_{e}^{-1}w_{f}u_{f}^{-1}u_{g}w_{g}u_{h}w _{h}^{-1}. \tag{26}\]
The other nontrivial relations are much nicer, and by acting with \(S_{8}\), we obtain the following rules:
\[[u_{i},v_{i}] = [u_{j},v_{j}], \tag{27}\]
\[[u_{i},w_{i}] = [u_{j},w_{j}], \tag{28}\]
\[[v_{i},w_{i}] = [v_{j},w_{j}], \tag{29}\]
\[(u_{i})^{2} = (u_{j})^{2}, \tag{30}\]
\[(v_{i})^{2} = (v_{j})^{2}, \tag{31}\]
\[(w_{i})^{2} = (w_{j})^{2}. \tag{32}\]
Let \(C^{*}\) and \(C\), respectively, denote the quotient of \(F_{3,8}^{*}\) and \(F_{3,8}\) by the relations (27)-(32). It is easier to compute in \(C^{*}\): Write \([u,v]\) for all \([u_{i},v_{i}]\), and likewise for \([u,w]\), \([v,w]\), and the squares \(u^{2},v^{2},w^{2}\). We have that \(v_{i}^{2}=v_{j}^{2}=u_{i}v_{j}^{2}u_{i}^{-1}=u_{i}v_{i}^{2}u_{i}^{-1}=[u,v]^{2 }v_{i}^{2}\), so \([u,v]^{2}=1\), and similarly, \([u,w]^{2}=[v,w]^{2}=1\). Modulo the commutators \([u,v]\), \([v,w]\), and \([w,u]\), which are central, the group is abelian.
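For clarity, the centrality used here can be spelled out: for \(i\neq j\),

\[[u_{i},v_{i}]=[u_{j},v_{j}],\qquad\text{and}\qquad[u_{j},v_{j}]\ \text{commutes with}\ u_{i}\ \text{and}\ v_{i},\]

since distinct components commute elementwise; hence the common value \([u,v]\) is central in \(C^{*}\), and likewise for \([u,w]\) and \([v,w]\).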
**Corollary 3.5**.: _Groups \(C\) and \(C^{*}\) are central extensions, as described in the following commutative diagram:_
_Here the group at the upper-left corner is generated by the commutator; the arrows so marked are the abelianization maps, and the middle row is induced by the short exact sequence_
\[1\to F_{3,8}\to F_{3,8}^{*}\to\mathbb{Z}^{3}\to 1\]
_described in Definition 2.5._
Notice that (for distinct \(i,j,k\)) \(u_{i}u_{j}^{-1}\) commutes with \(v_{i}v_{j}^{-1}\) in \(C\), but \([u_{i}u_{j}^{-1},v_{i}v_{k}^{-1}]=[u,v]\). In light of Corollary 3.5 we can simplify the element \(\omega\) defined in (26). The \(b\)-entry of this relation cancels the commutators, and we get \(\omega=u_{a}^{-1}w_{a}u_{b}w_{b}^{-1}u_{c}w_{c}w_{d}^{-1}u_{d}^{-1}w_{e}^{-1}u_{e}^{-1}w_{f}u_{f}^{-1}u_{g}w_{g}u_{h}w_{h}^{-1}\). Using \(u_{i}^{2}=u_{j}^{2}\) and \(w_{i}^{2}=w_{j}^{2}\), as well as \([u,w]^{2}=1\), we then get a further simplification to
\[\omega=[u,w](u_{a}u_{b}u_{c}u_{d}u_{e}^{-1}u_{f}^{-1}u_{g}^{-1}u_{h}^{-1})(w_ {a}w_{b}w_{c}w_{d}w_{e}^{-1}w_{f}^{-1}w_{g}^{-1}w_{h}^{-1}). \tag{33}\]
Again, by (30) and (32), this element is symmetric under the action of \(S_{8}\) on the indices, and has order \(2\), so \(C/\langle\omega\rangle\) has order \(2^{23}\). Recall that this is the group \(\pi_{1}(X^{4}_{\rm Gal})\), whose computation is the main goal of this paper.
Recalling that the fundamental group of the complement of the branch curve is the semidirect product of this group with \(S_{8}\), it is desirable to describe the \(S_{8}\)-module structure of the abelianization.
**Remark 3.6**.: _Considering all groups as \(S_{8}\)-modules, the abelianization \(C^{*}/[C^{*},C^{*}]\) is a direct sum of three copies of the natural module \(V=\mathbb{Z}_{2}^{8}\), namely \(\langle u_{i}\rangle\), \(\langle v_{i}\rangle\), and \(\langle w_{i}\rangle\)._
_For the sake of comparison, note that for \(p>2\), \(\mathbb{Z}_{p}^{8}\) is a direct sum of two irreducible submodules, \(\mathbb{Z}_{p}^{8}=(\mathbb{Z}_{p}^{8})_{0}\oplus D\), where the left component is the subspace of zero-sum vectors and \(D\) is the "diagonal" subspace, spanned by \((1,\dots,1)\). On the other hand, for \(p=2\) the submodules form the chain \(0\subset D\subset V_{0}\subset V\), of dimensions \(0<1<7<8\) over \(\mathbb{Z}_{2}\), respectively. Now, while \(C^{*}/[C^{*},C^{*}]=V\oplus V\oplus V\), it follows from Corollary 3.5 that the abelianization of \(C\) is \(C/[C,C]\cong V_{0}\oplus V_{0}\oplus V_{0}\)._
_Finally, \(\omega\) glues the \(u\) and \(w\) copies of \(D\) in this sum, so that the abelianization of \(\pi_{1}(X^{4}_{\mathrm{Gal}})\), is_
\[(C/\langle\omega\rangle)/[C/\langle\omega\rangle,C/\langle\omega \rangle]\cong V_{0}\oplus(V_{0}\oplus V_{0})/D(1,1),\]
_where \(D(1,1)\) is the diagonal copy of \(D\) in the direct sum \(D\oplus D\subset V_{0}\oplus V_{0}\)._
|
2308.07724
|
New constructions of non-regular cospectral graphs
|
We consider two types of joins of graphs $G_{1}$ and $G_{2}$, $G_{1}\veebar
G_{2}$ - the Neighbors Splitting Join and $G_{1}\underset{=}{\lor}G_{2}$ - the
Non Neighbors Splitting Join, and compute the adjacency characteristic
polynomial, the Laplacian characteristic polynomial and the signless Laplacian
characteristic polynomial of these joins. When $G_{1}$ and $G_{2}$ are regular,
we compute the adjacency spectrum, the Laplacian spectrum, the signless
Laplacian spectrum of $G_{1}\underset{=}{\lor}G_{2}$ and the normalized
Laplacian spectrum of $G_{1}\veebar G_{2}$ and $G_{1}\underset{=}{\lor}G_{2}$.
We use these results to construct non regular, non isomorphic graphs that are
cospectral with respect to the four matrices: adjacency, Laplacian, signless
Laplacian and normalized Laplacian.
|
Suliman Hamud, Abraham Berman
|
2023-08-15T12:00:07Z
|
http://arxiv.org/abs/2308.07724v1
|
# New constructions of non-regular cospectral graphs
###### Abstract
We consider two types of joins of graphs \(G_{1}\) and \(G_{2}\), \(G_{1}\veebar G_{2}\) - the Neighbors Splitting Join and \(G_{1}\underset{=}{\vee}G_{2}\) - the Non Neighbors Splitting Join, and compute the adjacency characteristic polynomial, the Laplacian characteristic polynomial and the signless Laplacian characteristic polynomial of these joins. When \(G_{1}\) and \(G_{2}\) are regular, we compute the adjacency spectrum, the Laplacian spectrum, the signless Laplacian spectrum of \(G_{1}\underset{=}{\vee}G_{2}\) and the normalized Laplacian spectrum of \(G_{1}\veebar G_{2}\) and \(G_{1}\underset{=}{\vee}G_{2}\). We use these results to construct non regular, non isomorphic graphs that are cospectral with respect to the four matrices: adjacency, Laplacian, signless Laplacian and normalized Laplacian.
## 2 Introduction
Spectral graph theory is the study of graphs via the spectrum of matrices associated with them [3, 6, 8, 22, 27]. The graphs in this paper are undirected and simple. There are several matrices associated with a graph and we consider four of them; the adjacency matrix, the Laplacian matrix, the signless Laplacian matrix and the normalized Laplacian matrix.
Let \(G=\left(V\left(G\right),E\left(G\right)\right)\) be a graph with vertex set \(V\left(G\right)=\left\{v_{1},v_{2},...,v_{n}\right\}\) and edge set \(E\left(G\right)\).
**Definition 2.1**.: The adjacency matrix of \(G\), \(A\left(G\right)\), is defined by
\[\left(A(G)\right)_{ij}=\begin{cases}1,&\text{if $v_{i}$ and $v_{j}$ are adjacent;}\\ 0,&\text{otherwise.}\end{cases}\]
Let \(d_{i}=d_{G}\left(v_{i}\right)\) be the degree of vertex \(v_{i}\) in \(G\), and let \(D\left(G\right)\) be the diagonal matrix with diagonal entries \(d_{1},d_{2},...,d_{n}\).
**Definition 2.2**.: The Laplacian matrix, \(L\left(G\right),\) and the signless Laplacian matrix, \(Q\left(G\right),\) of \(G\) are defined as \(L\left(G\right)=D\left(G\right)-A\left(G\right)\) and \(Q\left(G\right)=D\left(G\right)+A\left(G\right)\).
**Definition 2.3**.: ([6]) The normalized Laplacian matrix, \(\mathcal{L}\left(G\right)\), is defined to be \(\mathcal{L}\left(G\right)=I_{n}-D\left(G\right)^{-\frac{1}{2}}A\left(G\right)D\left(G\right)^{-\frac{1}{2}}\) (with the convention that if the degree of vertex \(v_{i}\) in \(G\) is \(0\), then \(\left(d_{i}\right)^{-\frac{1}{2}}=0\)). In other words,
\[\left(\mathcal{L}(G)\right)_{ij}=\begin{cases}1,&\text{if }i=j\text{ and }d_{i}\neq 0;\\ -\frac{1}{\sqrt{d_{i}d_{j}}},&\text{if }i\neq j\text{ and }v_{i}\text{ is adjacent to }v_{j};\\ 0,&\text{otherwise.}\end{cases}\]
_Notation 2.4_.: For an \(n\times n\) matrix \(M\), we denote the characteristic polynomial \(det\left(xI_{n}-M\right)\) of \(M\) by \(f_{M}\left(x\right)\), where \(I_{n}\) is the identity matrix of order \(n\). In particular, for a graph \(G\), \(f_{X\left(G\right)}\left(x\right)\) is the \(X\)-characteristic polynomial of \(G\), for \(X\in\left\{A,L,Q,\mathcal{L}\right\}\). The roots of the \(X\)-characteristic polynomial of \(G\) are the \(X\)-eigenvalues of \(G\) and the collection of the \(X\)-eigenvalues, including multiplicities, is called the \(X\)-spectrum of \(G\).
_Notation 2.5_.: The multiplicity of an eigenvalue \(\lambda\) is denoted by a superscript above \(\lambda\).
**Example 2.6**.: The \(A\)-spectrum of the complete graph \(K_{n}\) is \(\left\{n-1,(-1)^{[n-1]}\right\}\).
_Remark 2.7_.: Let
\[\lambda_{1}\left(G\right)\geq\lambda_{2}\left(G\right)\geq\cdots\geq\lambda_{ n}\left(G\right),\]
\[\mu_{1}\left(G\right)\leq\mu_{2}\left(G\right)\leq\cdots\leq\mu_{n}\left(G \right),\]
\[\nu_{1}\left(G\right)\geq\nu_{2}\left(G\right)\geq\cdots\geq\nu_{n}\left(G \right),\]
\[\delta_{1}\left(G\right)\leq\delta_{2}\left(G\right)\leq\cdots\leq\delta_{n} \left(G\right),\]
be the eigenvalues of \(A\left(G\right)\), \(L\left(G\right)\), \(Q\left(G\right)\) and \(\mathcal{L}\left(G\right)\), respectively. Then \(\sum_{i=1}^{n}\lambda_{i}=0\), \(\mu_{1}\left(G\right)=0\), \(\nu_{n}\left(G\right)\geq 0\), \(\delta_{1}\left(G\right)=0\) and \(\delta_{n}\left(G\right)\leq 2\) (with equality iff \(G\) is bipartite).
_Remark 2.8_.: If \(G\) is an \(r\)-regular graph, then \(\mu_{i}\left(G\right)=r-\lambda_{i}\left(G\right)\), \(\nu_{i}\left(G\right)=r+\lambda_{i}\left(G\right)\) and \(\delta_{i}\left(G\right)=1-\frac{1}{r}\lambda_{i}\left(G\right)\).
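These matrices and the relations of Remark 2.8 can be checked on a small example; a minimal sketch, assuming numpy is available, with the \(2\)-regular \(4\)-cycle \(C_{4}\) as the test graph:

```python
import numpy as np

# C4, the 4-cycle: a 2-regular graph on 4 vertices
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)                      # degrees d_1, ..., d_n
D = np.diag(d)
L = D - A                              # Laplacian (Definition 2.2)
Q = D + A                              # signless Laplacian (Definition 2.2)
Dm = np.diag(d ** -0.5)
NL = np.eye(4) - Dm @ A @ Dm           # normalized Laplacian (Definition 2.3)

lam = np.sort(np.linalg.eigvalsh(A))[::-1]   # λ_i, descending
mu = np.sort(np.linalg.eigvalsh(L))          # μ_i, ascending
nu = np.sort(np.linalg.eigvalsh(Q))[::-1]    # ν_i, descending
dlt = np.sort(np.linalg.eigvalsh(NL))        # δ_i, ascending

r = 2  # C4 is 2-regular; verify Remark 2.8
print(np.allclose(mu, r - lam),       # μ_i = r − λ_i
      np.allclose(nu, r + lam),       # ν_i = r + λ_i
      np.allclose(dlt, 1 - lam / r))  # δ_i = 1 − λ_i / r
```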
**Definition 2.9**.: Two graphs \(G\) and \(H\) are \(X\)-cospectral if they have the same \(X\)-spectrum. If \(X\)-cospectral graphs are not isomorphic we say that they are XNICS.
**Definition 2.10**.: Let \(S\) be a subset of \(\left\{A,L,Q,\mathcal{L}\right\}.\) The graphs \(G\) and \(H\) are SNICS if they are XNICS for all \(X\in S.\)
**Definition 2.11**.: A graph \(G\) is determined by its \(X\)-spectrum if every graph \(H\) that is \(X\)-cospectral with \(G\) is isomorphic to \(G\).
A basic problem in spectral graph theory, [28, 29], is determining which graphs are determined by their spectrum or finding non isomorphic \(X\)-cospectral graphs.
**Theorem 2.12**.: ([28]) _If \(G\) is regular, then the following are equivalent;_
* \(G\) _is determined by its_ \(A\)_-spectrum,_
* \(G\) _is determined by its_ \(L\)_-spectrum,_
* \(G\) _is determined by its_ \(Q\)_-spectrum,_
* \(G\) _is determined by its_ \(\mathcal{L}\)_-spectrum._
Thus, for regular graphs \(G\) and \(H\), we say that \(G\) and \(H\) are cospectral if they are \(X\)-cospectral with respect to any \(X\in\left\{A,L,Q,\mathcal{L}\right\}.\)
**Proposition 2.13**.: ([28]) _Every regular graph with less than ten vertices is determined by its spectrum._
**Example 2.14**.: The following graphs are regular and cospectral. They are non isomorphic since in \(G\) there is an edge that lies in three triangles but there is no such edge in \(H\).
In recent years, several researchers studied the spectral properties of graphs which are constructed by graph operations. These operations include disjoint union, the Cartesian product, the Kronecker product, the strong product, the lexicographic product, the rooted product, the corona, the edge corona, the neighbourhood corona etc. We refer the reader to [1, 2, 8, 13, 16, 21, 9, 12, 24, 23, 25, 26] and the references therein for more graph operations and the results on the spectra of these graphs.
Many operations are based on the join of graphs.
**Definition 2.15**.: **([14])** The join of two graphs is their disjoint union together with all the edges that connect all the vertices of the first graph with all the vertices of the second graph.
Recently, many researchers provided several variants of join operations of graphs and investigated their spectral properties. Some examples are Cardoso [5], Indulal [17], Liu and Zhang [18], Varghese and Susha [30] and Das and Panigrahi [11].
Butler [4] constructed non regular bipartite graphs which are cospectral with respect to both the adjacency and the normalized Laplacian matrices. He asked for examples of non-regular \(\left\{A,L,\mathcal{L}\right\}\)NICS graphs. A slightly more general question is
**Question 2.16**.: _Construct non regular \(\left\{A,L,Q,\mathcal{L}\right\}\)NICS graphs._
Such examples can be constructed using a special join operation defined by Lu, Ma and Zhang [19] and a variant of this operation, suggested in this paper.
**Definition 2.17**.: **([19])** Let \(G_{1}\) and \(G_{2}\) be two vertex disjoint graphs with \(V(G_{1})=\left\{u_{1},u_{2},...,u_{n}\right\}\). The splitting \(V\)-vertex join of \(G_{1}\) and \(G_{2}\), denoted by \(G_{1}\veebar G_{2}\), is obtained by adding vertices \(u_{1}^{\prime},u_{2}^{\prime},...,u_{n}^{\prime}\) to \(G_{1}\lor G_{2}\) and connecting \(u_{i}^{\prime}\) to \(u_{j}\) if and only if \(\left(u_{i},u_{j}\right)\in E\left(G_{1}\right)\).
We refer to the splitting \(V\)-vertex join as NS (Neighbors Splitting) join and define a new type of join, NNS (Non Neighbors Splitting) join.
**Definition 2.18**.: Let \(G_{1}\) and \(G_{2}\) be two vertex disjoint graphs with \(V\left(G_{1}\right)=\left\{u_{1},u_{2},...,u_{n}\right\}\). The NNS join of \(G_{1}\) and \(G_{2}\), denoted by \(G_{1}\underset{=}{\vee}G_{2}\), is obtained by adding vertices \(u_{1}^{\prime},u_{2}^{\prime},...,u_{n}^{\prime}\) to \(G_{1}\lor G_{2}\) and connecting \(u_{i}^{\prime}\) to \(u_{j}\) iff \(\left(u_{i},u_{j}\right)\notin E\left(G_{1}\right)\).
Figure 2.1: Two regular non isomorphic cospectral graphs.
**Example 2.19**.: Let \(G_{1}\) and \(G_{2}\) be the path \(P_{4}\) and the path \(P_{2}\), respectively. The graphs \(P_{4}\underset{=}{\vee}P_{2}\) and \(P_{4}\veebar P_{2}\) are given in Figure 2.2.
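Both joins can be built directly from the block forms of their adjacency matrices (cf. the matrices displayed in Sections 4-6). A minimal sketch, assuming numpy is available; the helper names `ns_join` and `nns_join` are illustrative, not from the paper:

```python
import numpy as np

def ns_join(A1, A2):
    """NS join: each new vertex u_i' is joined to the G1-neighbors of u_i."""
    n1, n2 = len(A1), len(A2)
    J, O = np.ones((n1, n2)), np.zeros
    return np.block([[A1, A1, J],
                     [A1, O((n1, n1)), O((n1, n2))],
                     [J.T, O((n2, n1)), A2]])

def nns_join(A1, A2):
    """NNS join: each new vertex u_i' is joined to the G1-non-neighbors of u_i."""
    n1, n2 = len(A1), len(A2)
    A1c = np.ones((n1, n1)) - np.eye(n1) - A1   # adjacency of the complement of G1
    J, O = np.ones((n1, n2)), np.zeros
    return np.block([[A1, A1c, J],
                     [A1c, O((n1, n1)), O((n1, n2))],
                     [J.T, O((n2, n1)), A2]])

# P4 and P2, as in Example 2.19
P4 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], dtype=float)
P2 = np.array([[0,1],[1,0]], dtype=float)
print(ns_join(P4, P2).shape, nns_join(P4, P2).shape)   # (10, 10) (10, 10)
```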
The structure of the paper is as follows: after preliminaries, we compute the adjacency, Laplacian and signless Laplacian characteristic polynomials of \(G_{1}\underset{=}{\vee}G_{2}\) and \(G_{1}\veebar G_{2}\), and use the results to construct \(\{A,L,Q\}\)NICS graphs; finally, under regularity assumptions, we compute the \(A\)-spectrum, the \(L\)-spectrum, the \(Q\)-spectrum and the \(\mathcal{L}\)-spectrum of the NS and NNS joins and use the results to construct \(\{A,L,Q,\mathcal{L}\}\)NICS graphs.
## 3 Preliminaries
_Notation 3.1_.:
* \(\mathbf{1}_{n}\) _denotes the_ \(n\times 1\) _column vector all of whose entries are_ \(1\)_,_
* \(J_{s{\times}t}\)_=_ \(\mathbf{1}_{s}\mathbf{1}_{t}^{T}\)_,_ \(J_{s}=J_{s{\times}s}\)_,_
* \(O_{s{\times}t}\) _denotes the zero matrix of order_ \(s\times t\)_,_
* \(adj\left(A\right)\) _denotes the adjugate of_ \(A\)_._
* \(\overline{G}\) _denotes the complement of graph_ \(G\)_._
**Definition 3.2**.: ([7, 20]) The coronal \(\Gamma_{M}(x)\) of an \(n\times n\) matrix \(M\) is the sum of the entries of the inverse of the characteristic matrix of \(M\), that is,
\[\Gamma_{M}(x)=\mathbf{1}_{n}^{T}(xI_{n}-M)^{-1}\mathbf{1}_{n}. \tag{3.1}\]
**Lemma 3.3**.: ([7, 20]) _Let \(M\) be an \(n\times n\) matrix with all row sums equal to \(r\) (for example, the adjacency matrix of an \(r\)-regular graph). Then_
\[\Gamma_{M}(x)=\frac{n}{x-r}.\]
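Definition 3.2 and Lemma 3.3 are easy to check symbolically. A minimal sketch, assuming sympy is available:

```python
import sympy as sp

def coronal(M, x):
    """Coronal Γ_M(x) = 1ᵀ (x·I − M)⁻¹ 1 (Definition 3.2)."""
    n = M.rows
    ones = sp.ones(n, 1)
    return sp.simplify((ones.T * (x * sp.eye(n) - M).inv() * ones)[0, 0])

x = sp.symbols('x')
A_C4 = sp.Matrix([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]])  # 2-regular, n = 4
print(coronal(A_C4, x))   # 4/(x - 2), as Lemma 3.3 predicts with n = 4, r = 2
```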
**Definition 3.4**.: Let \(M\) be a block matrix
\[M=\left(\begin{array}{cc}A&B\\ C&D\end{array}\right)\]
such that its blocks \(A\) and \(D\) are square. If \(A\) is invertible, the Schur complement of \(A\) in \(M\) is
\[M/A=D-CA^{-1}B\]
and if \(D\) is invertible, the Schur complement of \(D\) in \(M\) is
\[M/D=A-BD^{-1}C.\]
Issai Schur proved the following lemma.
**Lemma 3.5**.: ([15]). _If \(D\) is invertible then,_
\[detM=det(M/D)detD\]
_and if \(A\) is invertible then,_
\[detM=det(M/A)detA.\]
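A quick numerical sanity check of Lemma 3.5, assuming numpy (the random matrix and the block sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A, B = M[:3, :3], M[:3, 3:]
C, D = M[3:, :3], M[3:, 3:]

M_over_D = A - B @ np.linalg.inv(D) @ C          # the Schur complement M/D
print(np.isclose(np.linalg.det(M),
                 np.linalg.det(M_over_D) * np.linalg.det(D)))   # True
```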
**Lemma 3.6**.: _Let \(M\) be a block matrix_
\[M=\left(\begin{array}{ccc}A&B&J_{n_{1}\times n_{2}}\\ B&C&O_{n_{1}\times n_{2}}\\ J_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}&D\end{array}\right)\]
_where \(A\), \(B\), and \(C\) are square matrices of order \(n_{1}\) and \(D\) is a square matrix of order \(n_{2}\). Then the Schur complement of \(xI_{n_{2}}-D\) in the characteristic matrix of \(M\) is_
\[\left(\begin{array}{ccc}xI_{n_{1}}-A-\Gamma_{D}(x)J_{n_{1}}&-B\\ \\ -B&xI_{n_{1}}-C\end{array}\right).\]
Proof.: The characteristic matrix of \(M\) is
\[xI_{2n_{1}+n_{2}}-M=\left(\begin{array}{ccc}xI_{n_{1}}-A&-B&-J_{n_{1}\times n _{2}}\\ \\ -B&xI_{n_{1}}-C&O_{n_{1}\times n_{2}}\\ \\ -J_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}&xI_{n_{2}}-D\end{array}\right).\]
The Schur complement of \((xI_{n_{2}}-D)\) is
\[\begin{aligned}\left(xI_{2n_{1}+n_{2}}-M\right)/(xI_{n_{2}}-D)&=\left(\begin{array}{cc}xI_{n_{1}}-A&-B\\ -B&xI_{n_{1}}-C\end{array}\right)-\left(\begin{array}{c}-J_{n_{1}\times n_{2}}\\ O_{n_{1}\times n_{2}}\end{array}\right)\left(xI_{n_{2}}-D\right)^{-1}\left(\begin{array}{cc}-J_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}\end{array}\right)\\ &=\left(\begin{array}{cc}xI_{n_{1}}-A&-B\\ -B&xI_{n_{1}}-C\end{array}\right)-\left(\begin{array}{c}\mathbf{1}_{n_{1}}\mathbf{1}_{n_{2}}^{T}\\ O_{n_{1}\times n_{2}}\end{array}\right)\left(xI_{n_{2}}-D\right)^{-1}\left(\begin{array}{cc}\mathbf{1}_{n_{2}}\mathbf{1}_{n_{1}}^{T}&O_{n_{2}\times n_{1}}\end{array}\right)\\ &=\left(\begin{array}{cc}xI_{n_{1}}-A-\Gamma_{D}(x)J_{n_{1}}&-B\\ -B&xI_{n_{1}}-C\end{array}\right).\end{aligned}\]
**Lemma 3.7**.: **([8])**_. If \(A\) is an \(n\times n\) real matrix and \(\alpha\) is a real number, then_
\[det(A+\alpha J_{n})=det(A)+\alpha\mathbf{1}_{n}^{T}adj(A)\mathbf{1}_{n}. \tag{3.2}\]
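Identity (3.2) can likewise be verified numerically for an invertible \(A\), using \(adj(A)=det(A)\,A^{-1}\); a sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 5, 0.7
A = rng.standard_normal((n, n))
ones = np.ones((n, 1))
adjA = np.linalg.det(A) * np.linalg.inv(A)       # adjugate of an invertible matrix
lhs = np.linalg.det(A + alpha * np.ones((n, n)))
rhs = np.linalg.det(A) + alpha * (ones.T @ adjA @ ones).item()
print(np.isclose(lhs, rhs))                      # True
```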
## 4 The characteristic polynomials of the NNS join and the NS join
In **[**19**]** the authors compute the adjacency, Laplacian and signless Laplacian characteristic polynomials of \(G_{1}\veebar G_{2}\) where \(G_{1}\) and \(G_{2}\) are regular.
Here we compute the characteristic polynomials of \(G_{1}\veebar G_{2}\) and \(G_{1}\underset{=}{\vee}G_{2}\) where \(G_{1}\) and \(G_{2}\) are arbitrary graphs. The proofs for the two joins (NS and NNS) are quite similar and use Lemma **3.5** (twice), Lemma **3.6** and Lemma **3.7**. The results are used to construct non regular \(\{A,L,Q\}\)NICS graphs.
### Adjacency characteristic polynomial
**Theorem 4.1**.: _Let \(G_{i}\) be a graph on \(n_{i}\) vertices for \(i=1,2\). Then_
_(a)_ \[f_{A(G_{1}\underset{=}{\vee}G_{2})}(x) =x^{n_{1}}f_{{}_{A(G_{2})}}(x)det\left(xI_{n_{1}}-A(G_{1})-\tfrac{1}{x}A^{2}(\overline{G_{1}})\right)\left[1-\Gamma_{A(G_{2})}(x)\Gamma_{A(G_{1})+\frac{1}{x}A^{2}(\overline{G_{1}})}(x)\right].\] _(b)_ \[f_{A(G_{1}\veebar G_{2})}(x) =x^{n_{1}}f_{{}_{A(G_{2})}}(x)det\left(xI_{n_{1}}-A(G_{1})-\tfrac{1}{x}A^{2}(G_{1})\right)\left[1-\Gamma_{A(G_{2})}(x)\Gamma_{A(G_{1})+\frac{1}{x}A^{2}(G_{1})}(x)\right].\]
Proof.: We prove (a). The proof of (b) is similar.
With a suitable ordering of the vertices of \(G_{1}\underset{=}{\vee}G_{2}\), we get
\[A\left(G_{1}\underset{=}{\vee}G_{2}\right)=\left(\begin{array}{ccc}A\left(G_{1}\right)&A\left(\overline{G_{1}}\right)&J_{n_{1}\times n_{2}}\\ A\left(\overline{G_{1}}\right)&O_{n_{1}\times n_{1}}&O_{n_{1}\times n_{2}}\\ J_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}&A\left(G_{2}\right)\end{array}\right).\]
Thus
\[f_{A\left(G_{1}\underset{=}{\vee}G_{2}\right)}(x)= det\left(xI_{2n_{1}+n_{2}}-A\left(G_{1}\underset{=}{\vee}G_{2}\right)\right)\] \[= det\left(\begin{array}{ccc}xI_{n_{1}}-A(G_{1})&-A(\overline{G_{1}})&-J_{n_{1}\times n_{2}}\\ -A(\overline{G_{1}})&xI_{n_{1}}&O_{n_{1}\times n_{2}}\\ -J_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}&xI_{n_{2}}-A(G_{2})\end{array}\right)\] \[= det\left(xI_{n_{2}}-A(G_{2})\right)det\left(\left(xI_{2n_{1}+n_{2}}-A\left(G_{1}\underset{=}{\vee}G_{2}\right)\right)/(xI_{n_{2}}-A(G_{2}))\right),\]
by the Lemma of Schur (Lemma **3.5**).
By Lemma **3.6**,
\[\left(xI_{2n_{1}+n_{2}}-A\left(G_{1}\underset{=}{\vee}G_{2}\right)\right)/(xI _{n_{2}}-A(G_{2}))=\begin{pmatrix}xI_{n_{1}}-A(G_{1})-\varGamma_{A(G_{2})}(x) \,J_{n_{1}\times n_{1}}&-A(\overline{G_{1}})\\ \\ -A(\overline{G_{1}})&xI_{n_{1}}\end{pmatrix}.\]
Using again Lemma **3.5**, we get
\[det\left(\left(xI_{2n_{1}+n_{2}}-A\left(G_{1}\underset{=}{\vee}G_{2}\right) \right)/(xI_{n_{2}}-A(G_{2}))\right)= det\left(xI_{n_{1}}\right)det\left(xI_{n_{1}}-A(G_{1})-\varGamma_{A(G_{2})}(x) \,J_{n_{1}\times n_{1}}-\frac{1}{x}A^{2}(\overline{G_{1}})\right).\]
By Lemma **3.7**, we get
\[det\left(\left(xI_{2n_{1}+n_{2}}-A\left(G_{1}\underset{=}{\lor}G_{2}\right) \right)/(xI_{n_{2}}-A(G_{2}))\right)=\]
\[=x^{n_{1}}\left(det(xI_{n_{1}}-A(G_{1})-\frac{1}{x}A^{2}(\overline{G_{1}}))- \varGamma_{A(G_{2})}(x)1_{n_{1}}^{T}adj\left(xI_{n_{1}}-A(G_{1})-\frac{1}{x}A^ {2}(\overline{G_{1}})\right)1_{n_{1}}\right)\]
\[=x^{n_{1}}det\left(xI_{n_{1}}-A(G_{1})-\frac{1}{x}A^{2}(\overline{G_{1}})\right) \left[1-\varGamma_{A(G_{2})}(x)1_{n_{1}}^{T}(xI_{n_{1}}-A(G_{1})-\frac{1}{x}A^ {2}(\overline{G_{1}}))^{-1}1_{n_{1}}\right]\]
\[=x^{n_{1}}det\left(xI_{n_{1}}-A(G_{1})-\frac{1}{x}A^{2}(\overline{G_{1}}) \right)\left[1-\varGamma_{A(G_{2})}(x)\varGamma_{A(G_{1})+\frac{1}{x}A^{2}( \overline{G_{1}})}(x)\right].\]
Thus
\[f_{A(G_{1}\underset{=}{\vee}G_{2})}(x)=x^{n_{1}}f_{A(G_{2})}(x)det\left(xI_{n_{1}}-A(G_{1})-\frac{1}{x}A^{2}(\overline{G_{1}})\right)\left[1-\varGamma_{A(G_{2})}(x)\varGamma_{A(G_{1})+\frac{1}{x}A^{2}(\overline{G_{1}})}(x)\right].\]
### Laplacian characteristic polynomial
In this section, we derive the Laplacian characteristic polynomials of \(G_{1}\underset{=}{\vee}G_{2}\) and \(G_{1}\veebar G_{2}\) when \(G_{1}\) and \(G_{2}\) are arbitrary graphs.
**Theorem 4.2**.: _Let \(G_{i}\) be a graph on \(n_{i}\) vertices for \(i=1,2\). Then_
_(a)_
\[f_{{}_{L(G_{1}\underset{=}{\vee}G_{2})}}\left(x\right) =det\left(\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\right) det\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)\] \[\cdot det\left(\left(x-n_{1}-n_{2}+1\right)I_{n_{1}}-L\left(G_{1}\right)+D\left(G_{1}\right)-A\left(\overline{G_{1}}\right)\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)^{-1}A\left(\overline{G_{1}}\right)\right)\] \[\cdot\left[1-\varGamma_{{}_{L(G_{2})}}\left(x-n_{1}\right)\varGamma_{{}_{L(G_{1})-D(G_{1})+A(\overline{G_{1}})(\left(x-n_{1}+1\right)I_{n_{1}}+D(G_{1}))^{-1}A(\overline{G_{1}})}}\left(x-n_{1}-n_{2}+1\right)\right].\]
_(b)_
\[f_{{}_{L(G_{1}\veebar G_{2})}}\left(x\right) =det\left(\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\right)det\left(xI_{n_{1}}-D\left(G_{1}\right)\right)\] \[\cdot det\left(\left(x-n_{2}\right)I_{n_{1}}-L\left(G_{1}\right)-D\left(G_{1}\right)-A\left(G_{1}\right)\left(xI_{n_{1}}-D\left(G_{1}\right)\right)^{-1}A\left(G_{1}\right)\right)\] \[\cdot\left[1-\varGamma_{{}_{L(G_{2})}}\left(x-n_{1}\right)\varGamma_{{}_{L(G_{1})+D(G_{1})+A(G_{1})(xI_{n_{1}}-D(G_{1}))^{-1}A(G_{1})}}\left(x-n_{2}\right)\right].\]
Proof.: (a) With a suitable ordering of the vertices of \(G_{1}\underset{=}{\lor}G_{2}\), we get
\[L\left(G_{1}\underset{=}{\vee}G_{2}\right)= \left(\begin{array}{ccc}\left(n_{1}+n_{2}-1\right)I_{n_{1}}-A \left(G_{1}\right)&-A\left(\overline{G_{1}}\right)&-J_{n_{1}\times\,n_{2}}\\ -A\left(\overline{G_{1}}\right)&\left(n_{1}-1\right)I_{n_{1}}-D\left(G_{1} \right)&O_{n_{1}\times\,n_{2}}\\ -J_{n_{2}\times\,n_{1}}&O_{n_{2}\times\,n_{1}}&D\left(G_{2}\right)+n_{1}I_{n_{ 2}}-A\left(G_{2}\right)\end{array}\right)\] \[= \left(\begin{array}{ccc}\left(n_{1}+n_{2}-1\right)I_{n_{1}}+L \left(G_{1}\right)-D\left(G_{1}\right)&-A\left(\overline{G_{1}}\right)&-J_{n_{ 1}\times\,n_{2}}\\ -A\left(\overline{G_{1}}\right)&\left(n_{1}-1\right)I_{n_{1}}-D\left(G_{1} \right)&O_{n_{1}\times\,n_{2}}\\ -J_{n_{2}\times\,n_{1}}&O_{n_{2}\times\,n_{1}}&n_{1}I_{n_{2}}+L\left(G_{2} \right)\end{array}\right).\]
The Laplacian characteristic polynomial is
\[f_{{}_{L\left(G_{1}\underset{=}{\vee}G_{2}\right)}}\left(x\right)= det\left(xI_{2n_{1}+n_{2}}-L\left(G_{1}\underset{=}{\vee}G_{2}\right)\right)\] \[= det\left(\begin{array}{ccc}\left(x-n_{1}-n_{2}+1\right)I_{n_{1}}-L\left(G_{1}\right)+D\left(G_{1}\right)&A\left(\overline{G_{1}}\right)&J_{n_{1}\times n_{2}}\\ A\left(\overline{G_{1}}\right)&\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)&O_{n_{1}\times n_{2}}\\ J_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}&\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\end{array}\right)\] \[= det\left(\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\right)det\left(\left(xI_{2n_{1}+n_{2}}-L\left(G_{1}\underset{=}{\vee}G_{2}\right)\right)/\left(\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\right)\right),\]
by the Lemma of Schur (Lemma **3.5**). By Lemma **3.6**,
\[\left(xI_{2n_{1}+n_{2}}-L\left(G_{1}\underset{=}{\vee}G_{2}\right)\right)/\left(\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\right)=\left(\begin{array}{cc}\left(x-n_{1}-n_{2}+1\right)I_{n_{1}}-L\left(G_{1}\right)+D\left(G_{1}\right)-\varGamma_{L\left(G_{2}\right)}\left(x-n_{1}\right)J_{n_{1}}&A\left(\overline{G_{1}}\right)\\ A\left(\overline{G_{1}}\right)&\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\end{array}\right).\]
Using Lemma **3.5** again, we get
\[det\left(\left(xI_{2n_{1}+n_{2}}-L\left(G_{1}\underset{=}{\vee}G_{2}\right)\right)/\left(\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\right)\right)=det\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)det\left(B-\varGamma_{L\left(G_{2}\right)}\left(x-n_{1}\right)J_{n_{1}}\right),\]
where
\[B=\left(x-n_{1}-n_{2}+1\right)I_{n_{1}}-L\left(G_{1}\right)+D\left(G_{1}\right)-A \left(\overline{G_{1}}\right)\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1 }\right)\right)^{-1}A\left(\overline{G_{1}}\right).\]
By Lemma 3.7 we get
\[det\left(\left(xI_{2n_{1}+n_{2}}-L\left(G_{1}\mathop{\vee}\limits_{=}G_{2} \right)\right)/((x-n_{1})I_{n_{2}}-L(G_{2}))\right)=\] \[=det\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right) \right)det\left(B\right)\left[1-\varGamma_{L\left(G_{2}\right)}\left(x-n_{1} \right)\mathbf{1}_{n_{1}}^{T}B^{-1}\mathbf{1}_{n_{1}}\right]\] \[\quad=det\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)\] \[\quad\cdot det\left(\left(x-n_{1}-n_{2}+1\right)I_{n_{1}}-L\left( G_{1}\right)+D\left(G_{1}\right)-A\left(\overline{G_{1}}\right)\left(\left(x-n_{1}+1 \right)I_{n_{1}}+D\left(G_{1}\right)\right)^{-1}A\left(\overline{G_{1}}\right)\right)\] \[\cdot\left[1-\varGamma_{{}_{L\left(G_{2}\right)}}\left(x-n_{1} \right)\varGamma_{{}_{L\left(G_{1}\right)-D\left(G_{1}\right)+A\left( \overline{G_{1}}\right)\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1} \right)\right)^{-1}A\left(\overline{G_{1}}\right)}}\left(x-n_{1}-n_{2}+1 \right)\right].\]
Thus
\[f_{{}_{L\left(G_{1}\underset{=}{\vee}G_{2}\right)}}\left(x\right) =det\left(\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\right)det\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)\] \[\cdot det\left(\left(x-n_{1}-n_{2}+1\right)I_{n_{1}}-L\left(G_{1}\right)+D\left(G_{1}\right)-A\left(\overline{G_{1}}\right)\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)^{-1}A\left(\overline{G_{1}}\right)\right)\] \[\cdot\left[1-\varGamma_{{}_{L\left(G_{2}\right)}}\left(x-n_{1}\right)\varGamma_{{}_{L\left(G_{1}\right)-D\left(G_{1}\right)+A\left(\overline{G_{1}}\right)\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)^{-1}A\left(\overline{G_{1}}\right)}}\left(x-n_{1}-n_{2}+1\right)\right].\]
The proof of (b) is similar.
### Signless Laplacian characteristic polynomial
**Theorem 4.3**.: _Let \(G_{i}\) be a graph on \(n_{i}\) vertices for \(i=1,2\). Then_
_(a)_
\[f_{{}_{Q\left(G_{1}\underset{=}{\vee}G_{2}\right)}}\left(x\right) =det\left(\left(x-n_{1}\right)I_{n_{2}}-Q\left(G_{2}\right)\right) det\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)\] \[\cdot det\left(\left(x-n_{1}-n_{2}+1\right)I_{n_{1}}-Q\left(G_{1}\right)+D\left(G_{1}\right)-A\left(\overline{G_{1}}\right)\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)^{-1}A\left(\overline{G_{1}}\right)\right)\] \[\cdot\left[1-\varGamma_{{}_{Q\left(G_{2}\right)}}\left(x-n_{1}\right)\varGamma_{{}_{Q\left(G_{1}\right)-D\left(G_{1}\right)+A\left(\overline{G_{1}}\right)\left(\left(x-n_{1}+1\right)I_{n_{1}}+D\left(G_{1}\right)\right)^{-1}A\left(\overline{G_{1}}\right)}}\left(x-n_{1}-n_{2}+1\right)\right].\]
_(b)_

\[f_{{}_{Q(G_{1}\veebar G_{2})}}\left(x\right) =det\left(\left(x-n_{1}\right)I_{n_{2}}-Q\left(G_{2}\right)\right)det\left(xI_{n_{1}}-D\left(G_{1}\right)\right)\] \[\cdot det\left(\left(x-n_{2}\right)I_{n_{1}}-Q\left(G_{1}\right)-D\left(G_{1}\right)-A\left(G_{1}\right)\left(xI_{n_{1}}-D\left(G_{1}\right)\right)^{-1}A\left(G_{1}\right)\right)\] \[\cdot\left[1-\varGamma_{{}_{Q(G_{2})}}\left(x-n_{1}\right)\varGamma_{{}_{Q(G_{1})+D(G_{1})+A(G_{1})(xI_{n_{1}}-D(G_{1}))^{-1}A(G_{1})}}\left(x-n_{2}\right)\right].\]
Proof.: The proof is similar to the proof of Theorem 4.2.
**Corollary 4.4**.: _Let \(F\) and \(H\) be r-regular non isomorphic cospectral graphs. Then for every \(G\),_
_a)_ \(G\veebar F\) _and_ \(G\veebar H\) _are_ \(\{A,Q,L\}\)_NICS._
_b)_ \(G\underset{=}{\vee}F\) _and_ \(G\underset{=}{\vee}H\) _are_ \(\{A,Q,L\}\)_NICS._
Proof.: (a) \(G\veebar F\) and \(G\veebar H\) are non isomorphic since \(F\) and \(H\) are non isomorphic. By Theorems **4.1**, **4.2** and **4.3**, \(f_{A(G\veebar F)}\left(x\right)=f_{A(G\veebar H)}\left(x\right)\), \(f_{{}_{L(G\veebar F)}}\left(x\right)=f_{{}_{L(G\veebar H)}}\left(x\right)\) and \(f_{{}_{Q(G\veebar F)}}\left(x\right)=f_{{}_{Q(G\veebar H)}}\left(x\right)\), since the matrices \(A(F)\) and \(A(H)\) have the same coronal (Lemma **3.3**) and the same characteristic polynomial. This completes the proof of (a).
The proof of (b) is similar.
**Corollary 4.5**.: _Let \(F\)and \(H\) be r-regular non isomorphic cospectral graphs. Then for every \(G\),_
_a)_ \(F\veebar G\) _and_ \(H\veebar G\) _are_ \(\{A,Q,L\}\)_NICS._
_b)_ \(F\underset{=}{\vee}G\) _and_ \(H\underset{=}{\vee}G\) _are_ \(\{A,Q,L\}\)_NICS._
Proof.: The proof is similar to the proof of Corollary **4.4**.
_Remark 4.6_.: _The following examples demonstrate the importance of the regularity of the graphs \(F\) and \(H\)._
**Example 4.7**.: The graphs \(F\) and \(H\) in Figure **4.1** are non regular and \(A\)-cospectral **[**10**]**. The joins \(K_{2}\veebar F\) and \(K_{2}\veebar H\) in Figure **4.2** are not \(A\)-cospectral since the \(A\)-spectrum of \(K_{2}\veebar F\) is \(\{-2.2332\), \(-2\), \(-1.618\), \(0^{[3]}\), \(0.577\), \(0.618\), \(4.6562\}\) and the \(A\)-spectrum of \(K_{2}\veebar H\) is \(\{-2.7039,-1.618,-1.2467,0^{[3]},0.2526,0.618,4.698\}\). The joins \(K_{2}\underset{=}{\vee}F\) and \(K_{2}\underset{=}{\vee}H\) in Figure **4.3** are not \(A\)-cospectral since the \(A\)-spectrum of \(K_{2}\underset{=}{\vee}F\) is \(\{(-2)^{[2]}\), \(-1\), \(0^{[4]}\), \(0.4384\), \(4.5616\}\) and the \(A\)-spectrum of \(K_{2}\underset{=}{\vee}H\) is \(\{-2.6056,(-1)^{[2]},0^{[5]},4.6056\}\). The joins \(F\veebar K_{2}\) and \(H\veebar K_{2}\) in Figure **4.4** are not \(A\)-cospectral since the \(A\)-spectrum of \(F\veebar K_{2}\) is \(\{-3.2361\), \(-2.5205\), \(-1\), \(-0.5812\), \(0^{[5]}\), \(1.0895\), \(1.2361\), \(5.0122\}\) and the \(A\)-spectrum of \(H\veebar K_{2}\) is \(\{-3.5337\), \(-2.1915\), \(-1\), \(0^{[6]}\), \(0.3034\), \(1.3403\), \(5.0815\}\), and the joins \(F\underset{=}{\vee}K_{2}\) and \(H\underset{=}{\vee}K_{2}\) in Figure **4.5** are not \(A\)-cospectral since the \(A\)-spectrum of \(F\underset{=}{\vee}K_{2}\) is \(\{-3.1903\), \(-2.4142\), \(-1.2946\), \((-1)^{[3]}\), \(0.4046\), \(0.4142\), \(1^{[2]}\), \(1.8201\), \(5.2602\}\) and the \(A\)-spectrum of \(H\underset{=}{\vee}K_{2}\) is \(\{-4.1337,(-1)^{[5]},0,0.8194,1^{[3]},5.3143\}\).
Figure 4.1: Two non regular \(A\)-cospectral graphs \(F\) and \(H\).
Figure 4.2: Two non A-cospectral graphs \(K_{2}\veebar F\) and \(K_{2}\veebar H\).
Figure 4.5: Two non A-cospectral graphs \(F\underset{=}{\vee}K_{2}\) and \(H\underset{=}{\vee}K_{2}\).
_Remark 4.8_.: _Numerical computations suggest that in Corollaries 4.4 and 4.5, \(\{A,Q,L\}\) can be replaced by \(\{A,Q,L,\mathcal{L}\}\)._
**Conjecture 4.9**.:
1. _Let_ \(H_{1}\) _and_ \(H_{2}\) _be regular non isomorphic cospectral graphs. Then for every_ \(G\)_,_ \(G\veebar H_{1}\) _and_ \(G\veebar H_{2}\) _are_ \(\{A,Q,L,\mathcal{L}\}\)_NICS and_ \(G\underset{=}{\vee}H_{1}\) _and_ \(G\underset{=}{\vee}H_{2}\) _are_ \(\{A,Q,L,\mathcal{L}\}\)_NICS._
2. _Let_ \(G_{1}\) _and_ \(G_{2}\) _be regular non isomorphic cospectral graphs. Then for every_ \(H\)_,_ \(G_{1}\veebar H\) _and_ \(G_{2}\veebar H\) _are_ \(\{A,Q,L,\mathcal{L}\}\)_NICS and_ \(G_{1}\underset{=}{\vee}H\) _and_ \(G_{2}\underset{=}{\vee}H\) _are_ \(\{A,Q,L,\mathcal{L}\}\)_NICS._
## 5 \(\mathcal{L}\)-Spectra of NS Joins
Let \(G_{i},H_{i}\) be \(r_{i}\)-regular graphs, \(i=1,2\). Lu, Ma and Zhang showed that if \(G_{1}\) and \(H_{1}\) are cospectral and \(G_{2}\) and \(H_{2}\) are cospectral and non isomorphic, then \(G_{1}\veebar G_{2}\) and \(H_{1}\veebar H_{2}\) are \(\{A,L,Q\}\)NICS. In this section, we extend this result by showing that \(G_{1}\veebar G_{2}\) and \(H_{1}\veebar H_{2}\) are \(\{A,L,Q,\mathcal{L}\}\)NICS. To do this, we determine the spectrum of the normalized Laplacian of the graph \(G_{1}\veebar G_{2}\).
**Theorem 5.1**.: _Let \(G_{1}\) be an \(r_{1}\)-regular graph with \(n_{1}\) vertices and \(G_{2}\) be an \(r_{2}\)-regular graph with \(n_{2}\) vertices. Then the normalized Laplacian spectrum of \(G_{1}\veebar G_{2}\) consists of:_
* \(1+\frac{r_{2}(\delta_{i}(G_{2})-1)}{n_{1}+r_{2}}\) _for_ \(i=2,3,...,n_{2};\)__
* \(1+\frac{(\delta_{i}(G_{1})-1)\left(\sqrt{9r_{1}^{2}+4r_{1}n_{2}}+r_{1}\right)}{2(2r_{1}+n_{2})}\) _for_ \(i=2,3,...,n_{1}\);
* \(1+\frac{(1-\delta_{i}(G_{1}))\left(\sqrt{9r_{1}^{2}+4r_{1}n_{2}}-r_{1}\right)}{2(2r_{1}+n_{2})}\) _for_ \(i=2,3,...,n_{1}\);
* _the three roots of the equation_ \[(2r_{1}r_{2}+2r_{1}n_{1}+n_{2}r_{2}+n_{1}n_{2})\,x^{3}-(3r_{1}r_{2}+5r_{1}n_{1 }+2r_{2}n_{2}+3n_{1}n_{2})\,x^{2}+(3r_{1}n_{1}+n_{2}r_{2}+2n_{1}n_{2})\,x=0.\]
Proof.: Let \(u_{1},u_{2},\ldots,u_{n_{1}}\) be the vertices of \(G_{1}\), \(u_{1}^{\prime},u_{2}^{\prime},\ldots,u_{n_{1}}^{\prime}\) be the vertices added by the splitting and \(v_{1},v_{2},...,v_{n_{2}}\) be the vertices of \(G_{2}\). Under this vertex partitioning the adjacency matrix of \(G_{1}\veebar G_{2}\) is
\[A\left(G_{1}\veebar G_{2}\right)=\left(\begin{array}{ccc}A\left(G_{1}\right)&A\left(G_{1}\right)&J_{n_{1}\times n_{2}}\\ A\left(G_{1}\right)&O_{n_{1}\times n_{1}}&O_{n_{1}\times n_{2}}\\ J_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}&A\left(G_{2}\right)\end{array}\right).\]
The corresponding degree matrix of \(G_{1}\veebar G_{2}\) is
\[D\left(G_{1}\not\subseteq G_{2}\right)=\left(\begin{array}{ccc}\left(2r_{1 }+n_{2}\right)I_{n_{1}}&O_{n_{1}\times n_{1}}&O_{n_{1}\times n_{2}}\\ O_{n_{1}\times n_{1}}&r_{1}I_{n_{1}}&O_{n_{1}\times n_{2}}\\ O_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}&\left(r_{2}+n_{1}\right)I_{n_{2}} \end{array}\right).\]
By simple calculation we get
\[\mathcal{L}\left(G_{1}\veebar G_{2}\right)=\left(\begin{array}{ccc}I_{n_{1}}-\frac{A\left(G_{1}\right)}{2r_{1}+n_{2}}&\frac{-A\left(G_{1}\right)}{\sqrt{r_{1}\left(2r_{1}+n_{2}\right)}}&\frac{-J_{n_{1}\times n_{2}}}{\sqrt{\left(2r_{1}+n_{2}\right)\left(r_{2}+n_{1}\right)}}\\ \frac{-A\left(G_{1}\right)}{\sqrt{r_{1}\left(2r_{1}+n_{2}\right)}}&I_{n_{1}}&O_{n_{1}\times n_{2}}\\ \frac{-J_{n_{2}\times n_{1}}}{\sqrt{\left(2r_{1}+n_{2}\right)\left(r_{2}+n_{1}\right)}}&O_{n_{2}\times n_{1}}&I_{n_{2}}-\frac{A\left(G_{2}\right)}{r_{2}+n_{1}}\end{array}\right).\]
We prove the theorem by constructing an orthogonal basis of eigenvectors of \(\mathcal{L}\left(G_{1}\veebar G_{2}\right)\). Since \(G_{2}\) is \(r_{2}\)-regular, the vector \(\mathbf{1}_{n_{2}}\) is an eigenvector of \(A\left(G_{2}\right)\) that corresponds to \(\lambda_{1}\left(G_{2}\right)=r_{2}\). For \(i=2,3,\ldots,n_{2}\) let \(Z_{i}\) be an eigenvector of \(A\left(G_{2}\right)\) that corresponds to \(\lambda_{i}\left(G_{2}\right)\). Then \(\mathbf{1}_{n_{2}}^{T}Z_{i}=0\) and \(\left(0_{1\times n_{1}},0_{1\times n_{1}},Z_{i}^{T}\right)^{T}\) is an eigenvector of \(\mathcal{L}\left(G_{1}\veebar G_{2}\right)\) corresponding to the eigenvalue \(1-\frac{\lambda_{i}\left(G_{2}\right)}{r_{2}+n_{1}}\).
By Remark **2.8**, \(1+\frac{r_{2}\left(\delta_{i}\left(G_{2}\right)-1\right)}{n_{1}+r_{2}}\) are eigenvalues of \(\mathcal{L}\left(G_{1}\veebar G_{2}\right)\) for \(i=2,...,n_{2}\).
For \(i=2,...,n_{1}\), let \(X_{i}\) be an eigenvector of \(A(G_{1})\) corresponding to the eigenvalue \(\lambda_{i}\left(G_{1}\right)\). We now look for a non zero real number \(\alpha\) such that \(\left(\begin{array}{ccc}X_{i}^{T}&\alpha X_{i}^{T}&0_{1\times n_{2}}\end{array}\right)^{T}\) is an eigenvector of \(\mathcal{L}(G_{1}\veebar G_{2})\).
So \(\alpha\) must be a root of the equation
\[1-\frac{\lambda_{i}\left(G_{1}\right)}{2r_{1}+n_{2}}-\frac{\lambda_{i}\left( G_{1}\right)\alpha}{\sqrt{r_{1}\left(2r_{1}+n_{2}\right)}}=-\frac{\lambda_{i} \left(G_{1}\right)}{\alpha\sqrt{r_{1}\left(2r_{1}+n_{2}\right)}}+1, \tag{5.1}\]
that is,

\[\sqrt{2r_{1}+n_{2}}\,\alpha^{2}+\sqrt{r_{1}}\,\alpha-\sqrt{2r_{1}+n_{2}}=0.\]
Thus, \(\alpha=\frac{2\sqrt{2r_{1}+n_{2}}}{\sqrt{9r_{1}+4n_{2}}+\sqrt{r_{1}}}\) or \(\alpha=\frac{-2\sqrt{2r_{1}+n_{2}}}{\sqrt{9r_{1}+4n_{2}}-\sqrt{r_{1}}}\). Substituting the values of \(\alpha\) in the right side of
**(5.1)**, we get by Remark **2.8** that
\[1+\frac{\left(\delta_{i}\left(G_{1}\right)-1\right)\left(\sqrt{9r_{1}^{2}+4r_{1}n_{2}}+r_{1}\right)}{2\left(2r_{1}+n_{2}\right)}\text{, }1+\frac{\left(1-\delta_{i}\left(G_{1}\right)\right)\left(\sqrt{9r_{1}^{2}+4r_{1}n_{2}}-r_{1}\right)}{2\left(2r_{1}+n_{2}\right)}\text{ are eigenvalues of }\mathcal{L}\left(G_{1}\veebar G_{2}\right)\text{ for }i=2,3,...,n_{1}.\]
So far, we obtained \(n_{2}-1+2\left(n_{1}-1\right)=2n_{1}+n_{2}-3\) eigenvalues of \(\mathcal{L}\left(G_{1}\veebar G_{2}\right)\). Their eigenvectors are orthogonal to \(\left(\mathbf{1}_{n_{1}}^{T},0_{1\times n_{1}},0_{1\times n_{2}}\right)^{T},\left(0_{1\times n_{1}},\mathbf{1}_{n_{1}}^{T},0_{1\times n_{2}}\right)^{T}\) and \(\left(0_{1\times n_{1}},0_{1\times n_{1}},\mathbf{1}_{n_{2}}^{T}\right)^{T}\).
To find three additional eigenvalues, we look for eigenvectors of \(\mathcal{L}\left(G_{1}\veebar G_{2}\right)\) of the form \(Y=\left(\alpha\mathbf{1}_{n_{1}}^{T},\beta\mathbf{1}_{n_{1}}^{T},\gamma\mathbf{1}_{n_{2}}^{T}\right)^{T}\) for \(\left(\alpha,\beta,\gamma\right)\neq\left(0,0,0\right)\). Let \(x\) be an eigenvalue of \(\mathcal{L}\left(G_{1}\veebar G_{2}\right)\) corresponding to the eigenvector \(Y\). From \(\mathcal{L}Y=xY\) we get
\[\begin{cases}\alpha-\frac{r_{1}}{2r_{1}+n_{2}}\alpha\,-\frac{r_{1}}{\sqrt{r_{1} \left(2r_{1}+n_{2}\right)}}\beta\,-\,\frac{n_{2}}{\sqrt{\left(2r_{1}+n_{2} \right)\left(r_{2}+n_{1}\right)}}\gamma=\alpha x\\ \\ \frac{-r_{1}}{\sqrt{r_{1}\left(2r_{1}+n_{2}\right)}}\alpha+\beta=\beta x\\ \\ \frac{-n_{1}}{\sqrt{\left(2r_{1}+n_{2}\right)\left(r_{2}+n_{1}\right)}}\alpha+ \gamma-\frac{r_{2}}{r_{2}+n_{1}}\gamma=\gamma x.\end{cases}\]
Thus
\[\alpha-\frac{r_{1}}{2r_{1}+n_{2}}\alpha+\frac{r_{1}^{2}\alpha}{r_{1}(2r_{1}+n_{2})( x-1)}+\frac{n_{1}n_{2}(r_{2}+n_{1})\alpha}{(2r_{1}+n_{2})(r_{2}+n_{1})\left((x-1)(r_{2} +n_{1})+r_{2}\right)}=\alpha x.\]
Notice that \(\alpha\neq 0\), since if \(\alpha=0\) then \(\alpha=\beta=\gamma=0\) and also \(x\neq 1\) since \(x=1\) implies that \(\alpha=0\).
Dividing by \(\alpha\), we get the following cubic equation
\[\left(2r_{1}r_{2}+2r_{1}n_{1}+n_{2}r_{2}+n_{1}n_{2}\right)x^{3}-\left(3r_{1}r_ {2}+5r_{1}n_{1}+2r_{2}n_{2}+3n_{1}n_{2}\right)x^{2}+\left(3r_{1}n_{1}+n_{2}r_{ 2}+2n_{1}n_{2}\right)x=0\]
and this completes the proof.
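Before moving on, a quick numerical sanity check can be reassuring. The sketch below (Python with NumPy; the concrete inputs \(G_{1}=C_{4}\) and \(G_{2}=K_{3}\) and the helper name `cycle_adj` are our own choices, not from the text) builds the block matrix displayed above verbatim and confirms that the eigenvalues \(1-\frac{\lambda_{i}(G_{2})}{r_{2}+n_{1}}\) produced by the first step of the proof occur in its spectrum.

```python
import numpy as np

# Sanity check of the first step of Theorem 5.1's proof, assuming only
# the block form of the normalized Laplacian displayed above.
# Hypothetical inputs: G1 = C4 (r1 = 2, n1 = 4), G2 = K3 (r2 = 2, n2 = 3).
def cycle_adj(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

n1, r1, n2, r2 = 4, 2, 3, 2
A1, A2 = cycle_adj(n1), np.ones((n2, n2)) - np.eye(n2)
J = np.ones((n1, n2))

B11 = np.eye(n1) - A1 / (2 * r1 + n2)
B12 = -A1 / np.sqrt(r1 * (2 * r1 + n2))
B13 = -J / np.sqrt((2 * r1 + n2) * (r2 + n1))
L = np.block([[B11, B12, B13],
              [B12.T, np.eye(n1), np.zeros((n1, n2))],
              [B13.T, np.zeros((n2, n1)), np.eye(n2) - A2 / (r2 + n1)]])

spec = np.linalg.eigvalsh(L)
# eigenvalues contributed by G2: 1 - lambda_i(G2)/(r2 + n1), i = 2,...,n2
targets = 1 - np.sort(np.linalg.eigvalsh(A2))[:-1] / (r2 + n1)
print(all(np.min(np.abs(spec - t)) < 1e-9 for t in targets))  # True
```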
Now we can answer Question **2.16** by constructing pairs of non regular \(\{A,L,Q,\mathcal{L}\}\)NICS graphs.
**Corollary 5.2**.: _Let \(G_{i},H_{i}\) be \(r_{i}\)-regular graphs, \(i\) = 1, 2. If \(G_{1}\) and \(H_{1}\) are cospectral and \(G_{2}\) and \(H_{2}\) are cospectral and non isomorphic then \(G_{1}\not\subset G_{2}\) and \(H_{1}\not\subset H_{2}\) are \(\{A,L,Q,\mathcal{L}\}\)NICS._
Proof.: \(G_{1}\not\subset G_{2}\) and \(H_{1}\not\subset H_{2}\) are non isomorphic since \(G_{2}\) and \(H_{2}\) are non isomorphic. By Theorem **5.1** and Theorems **3.1**, **3.2** and **3.3** in [19], the graphs \(G_{1}\not\subset G_{2}\) and \(H_{1}\not\subset H_{2}\) are \(\{A,L,Q,\mathcal{L}\}\)NICS.
**Example 5.3**.: Let \(G_{1}=H_{1}=C_{4}\), and choose \(G_{2}=G\) and \(H_{2}=H\) where \(G\) and \(H\) are graphs in Figure **2.1**, then the graphs in Figure **5.1** are \(\{A,L,Q,\mathcal{L}\}\)NICS.
Figure 5.1: Non regular \(\{A,L,Q,\mathcal{L}\}\)NICS graphs.
## 6 Spectra of NNS Joins
In this section we compute the \(A\)-spectrum, \(L\)-spectrum, \(Q\)-spectrum and \(\mathcal{L}\)-spectrum of \(G_{1}\underset{=}{\vee}G_{2}\), where \(G_{1}\) and \(G_{2}\) are regular.

We use these spectra to give another answer to Question **2.16** by constructing pairs of non regular \(\{A,L,Q,\mathcal{L}\}\)NICS graphs.
### A-spectra of NNS join
The adjacency matrix of \(G_{1}\underset{=}{\vee}G_{2}\) can be written in block form as
\[A\left(G_{1}\underset{=}{\vee}G_{2}\right)=\left(\begin{array}{ccc}A\left( G_{1}\right)&A\left(\overline{G_{1}}\right)&J_{n_{1}\times n_{2}}\\ \\ A\left(\overline{G_{1}}\right)&O_{n_{1}\times n_{1}}&O_{n_{1}\times n_{2}} \\ \\ J_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}&A\left(G_{2}\right)\end{array}\right) \tag{6.1}\]
**Theorem 6.1**.: _Let \(G_{1}\) be a \(r_{1}\)-regular graph with \(n_{1}\) vertices and \(G_{2}\) be a \(r_{2}\)-regular graph with \(n_{2}\) vertices. Then the adjacency spectrum of \(G_{1}\underset{=}{\vee}G_{2}\) consists of:_
* _(i)_ \(\lambda_{j}\left(G_{2}\right)\) _for each_ \(j\) _= 2, 3,...,_ \(n_{2}\)_;_
* _(ii) two roots of the equation_ \[x^{2}-\left(\lambda_{i}\left(G_{1}\right)\right)x-\left(\lambda_{i}^{2}\left(G _{1}\right)+2\lambda_{i}\left(G_{1}\right)+1\right)=0\text{ for each }i =\text{ 2, 3,..., }n_{1}\text{;}\]
* _(iii) the three roots of the equation_ \[x^{3}-(r_{1}+r_{2})x^{2}+\left(r_{1}r_{2}-(n_{1}-r_{1}-1)^{2}-n_{1}n_{2}\right) x+r_{2}(n_{1}-r_{1}-1)^{2}=0\]
Proof.: By Theorem **4.1**, the adjacency characteristic polynomial of \(G_{1}\underset{=}{\vee}G_{2}\) is
\[f_{A\left(G_{1}\underset{=}{\vee}G_{2}\right)}(x) =x^{n_{1}}f_{A\left(G_{2}\right)}(x)det\left(xI_{n_{1}}-A(G_{1})- \frac{1}{x}A^{2}(\overline{G_{1}})\right)\left[1-\varGamma_{A\left(G_{2} \right)}(x)\varGamma_{A\left(G_{1}\right)+\frac{1}{x}A^{2}(\overline{G_{1}})}( x)\right]\] \[=x^{n_{1}}\prod_{j=1}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right)det \left(xI_{n_{1}}-A(G_{1})-\frac{1}{x}A^{2}(\overline{G_{1}})\right)\left[1- \varGamma_{A\left(G_{2}\right)}(x)\varGamma_{A\left(G_{1}\right)+\frac{1}{x}A ^{2}(\overline{G_{1}})}(x)\right]\]
Since \(G_{1}\) and \(G_{2}\) are regular, we can use Lemma **3.3** to get
\[f_{A(G_{1}\lor G_{2})}(x) =x^{n_{1}}\prod_{j=1}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right)det \left(xI_{n_{1}}-A(G_{1})-\frac{1}{x}A^{2}(\overline{G_{1}})\right)\left[1- \frac{n_{2}}{x-r_{2}}\,\frac{n_{1}}{x-r_{1}-\frac{1}{x}(n_{1}-r_{1}-1)^{2}}\right]\] \[=x^{n_{1}}\prod_{j=1}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right)det \left(xI_{n_{1}}-A(G_{1})-\frac{1}{x}(J-I-A(G_{1}))^{2}\right)\left[1-\frac{n_ {1}n_{2}}{(x-r_{2})(x-r_{1}-\frac{1}{x}(n_{1}-r_{1}-1)^{2})}\right]\] \[=x^{n_{1}}\prod_{j=1}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right)det \left(xI_{n_{1}}-A(G_{1})-\frac{1}{x}(J^{2}-2J+I-2r_{1}J+2A(G_{1})+A^{2}(G_{1} ))\right)\] \[\cdot\left[1-\frac{n_{1}n_{2}}{(x-r_{2})(x-r_{1}-\frac{1}{x}(n_{1 }-r_{1}-1)^{2})}\right]\] \[=x^{n_{1}}\prod_{j=1}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right) \left[det\left(B\right)-\frac{1}{x}(n_{1}-2-2r_{1})1_{n_{1}}^{T}adj\left(B \right)1_{n_{1}}\right]\left[1-\frac{n_{1}n_{2}}{(x-r_{2})(x-r_{1}-\frac{1}{x} (n_{1}-r_{1}-1)^{2})}\right],\]
where \(B=(x-\frac{1}{x})I_{n_{1}}-(1+\frac{2}{x})A(G_{1})-\frac{1}{x}A^{2}(G_{1})\).
Thus, based on Definition 3.2 and Lemma 3.3, we have
\[f_{A\left(G_{1}\underset{=}{\vee}G_{2}\right)}(x)= x^{n_{1}}\prod_{j=1}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right)\prod_{i=1}^{n_{1}}\left[\left(x-\frac{1}{x}\right)-\left(1+\frac{2}{x}\right)\lambda_{i}(G_{1})-\frac{1}{x}\lambda_{i}^{2}(G_{1})\right]\] \[\cdot\left[1-\frac{1}{x}(n_{1}-2-2r_{1})\Gamma_{\frac{1}{x}I_{n_{1}}+(1+\frac{2}{x})A(G_{1})+\frac{1}{x}A^{2}(G_{1})}(x)\right]\left[1-\frac{n_{1}n_{2}}{(x-r_{2})(x-r_{1}-\frac{1}{x}(n_{1}-r_{1}-1)^{2})}\right]\] \[= x^{n_{1}}\prod_{j=1}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right)\prod_{i=1}^{n_{1}}\left[\left(x-\frac{1}{x}\right)-\left(1+\frac{2}{x}\right)\lambda_{i}(G_{1})-\frac{1}{x}\lambda_{i}^{2}(G_{1})\right]\left[1-\frac{n_{1}-2-2r_{1}}{x}\cdot\frac{n_{1}}{x-\frac{1}{x}-r_{1}-\frac{2r_{1}}{x}-\frac{r_{1}^{2}}{x}}\right]\] \[\cdot\left[1-\frac{n_{1}n_{2}}{(x-r_{2})(x-r_{1}-\frac{1}{x}(n_{1}-r_{1}-1)^{2})}\right]\] \[= \prod_{j=1}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right)x^{n_{1}}\prod_{i=1}^{n_{1}}\left(\left(x-\frac{1}{x}\right)-\left(1+\frac{2}{x}\right)\lambda_{i}(G_{1})-\frac{1}{x}\lambda_{i}^{2}(G_{1})\right)\] \[\cdot\left(1-\frac{n_{1}(n_{1}-2-2r_{1})}{x^{2}-r_{1}x-r_{1}^{2}-2r_{1}-1}\right)\left(\frac{(x-r_{2})\left(x(x-r_{1})-(n_{1}-r_{1}-1)^{2}\right)-n_{1}n_{2}x}{(x-r_{2})\left(x(x-r_{1})-(n_{1}-r_{1}-1)^{2}\right)}\right)\] \[= \prod_{j=2}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right)\prod_{i=2}^{n_{1}}\left(x^{2}-1-(x+2)\lambda_{i}(G_{1})-\lambda_{i}^{2}(G_{1})\right)\] \[\cdot\left((x-r_{2})\left(x(x-r_{1})-(n_{1}-r_{1}-1)^{2}\right)-n_{1}n_{2}x\right)\] \[= \prod_{j=2}^{n_{2}}\left(x-\lambda_{j}(G_{2})\right)\prod_{i=2}^{n_{1}}\left(x^{2}-\lambda_{i}(G_{1})x-\left(\lambda_{i}^{2}(G_{1})+2\lambda_{i}(G_{1})+1\right)\right)\] \[\cdot\left(x^{3}-(r_{1}+r_{2})x^{2}+(r_{1}r_{2}-(n_{1}-r_{1}-1)^{2}-n_{1}n_{2})x+r_{2}(n_{1}-r_{1}-1)^{2}\right).\]
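As a hedged numerical check of Theorem 6.1 (the concrete inputs \(G_{1}=C_{4}\) and \(G_{2}=K_{3}\) and the helper `cycle_adj` are our own, not the paper's), one can assemble the adjacency matrix directly from the block form (6.1) and compare its full spectrum against parts (i)-(iii):

```python
import numpy as np

def cycle_adj(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

n1, r1, n2, r2 = 4, 2, 3, 2
A1, A2 = cycle_adj(n1), np.ones((n2, n2)) - np.eye(n2)
A1c = np.ones((n1, n1)) - np.eye(n1) - A1       # adjacency of the complement

# Adjacency matrix of the NNS join, assembled from the block form (6.1).
A = np.block([[A1, A1c, np.ones((n1, n2))],
              [A1c, np.zeros((n1, n1)), np.zeros((n1, n2))],
              [np.ones((n2, n1)), np.zeros((n2, n1)), A2]])

pred = list(np.sort(np.linalg.eigvalsh(A2))[:-1])             # part (i)
for lam in np.sort(np.linalg.eigvalsh(A1))[:-1]:              # part (ii)
    pred += list(np.roots([1, -lam, -(lam**2 + 2 * lam + 1)]))
pred += list(np.roots([1, -(r1 + r2),                         # part (iii)
                       r1 * r2 - (n1 - r1 - 1)**2 - n1 * n2,
                       r2 * (n1 - r1 - 1)**2]))

spec = np.sort(np.linalg.eigvalsh(A))
print(np.allclose(spec, np.sort(np.real(pred)), atol=1e-8))   # True
```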
### L-spectra of NNS join
The degrees of the vertices of \(G_{1}\underset{=}{\vee}G_{2}\) are:
\[\begin{array}{c}d_{G_{1}\underset{=}{\vee}G_{2}}\left(u_{i}\right)=n_{1}+n_{2}-1,\ i=1,\ldots,n_{1},\\ d_{G_{1}\underset{=}{\vee}G_{2}}(u_{i}^{\prime})=n_{1}-r_{1}-1,\ i=1,\ldots,n_{1},\\ d_{G_{1}\underset{=}{\vee}G_{2}}\left(v_{j}\right)=r_{2}+n_{1},\ j=1,\ldots,n_{2},\end{array}\]
so the degree matrix of \(G_{1}\underset{=}{\vee}G_{2}\) that corresponds to (**6.1**) is
\[D\left(G_{1}\underset{=}{\vee}G_{2}\right)=\left(\begin{array}{ccc}(n_{1}+n_{2}-1)I_{n_{1}}&O_{n_{1}\times n_{1}}&O_{n_{1}\times n_{2}}\\ O_{n_{1}\times n_{1}}&(n_{1}-r_{1}-1)I_{n_{1}}&O_{n_{1}\times n_{2}}\\ O_{n_{2}\times n_{1}}&O_{n_{2}\times n_{1}}&(r_{2}+n_{1})I_{n_{2}}\end{array}\right) \tag{6.2}\]
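A minimal consistency sketch (our own, with the same hypothetical inputs \(G_{1}=C_{4}\), \(G_{2}=K_{3}\) as before): the degree matrix (6.2) should reproduce the row sums of the adjacency block form (6.1), so the Laplacian \(L=D-A\) must annihilate the all-ones vector.

```python
import numpy as np

def cycle_adj(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

n1, r1, n2, r2 = 4, 2, 3, 2
A1, A2 = cycle_adj(n1), np.ones((n2, n2)) - np.eye(n2)
A1c = np.ones((n1, n1)) - np.eye(n1) - A1
A = np.block([[A1, A1c, np.ones((n1, n2))],
              [A1c, np.zeros((n1, n1)), np.zeros((n1, n2))],
              [np.ones((n2, n1)), np.zeros((n2, n1)), A2]])
# Degree matrix (6.2): three constant blocks along the diagonal.
D = np.diag([n1 + n2 - 1] * n1 + [n1 - r1 - 1] * n1 + [r2 + n1] * n2)
print(np.allclose((D - A).sum(axis=1), 0))   # Laplacian row sums vanish: True
```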
**Theorem 6.2**.: _Let \(G_{1}\) be a \(r_{1}\)-regular graph with \(n_{1}\) vertices and \(G_{2}\) be a \(r_{2}\)-regular graph with \(n_{2}\) vertices. Then the Laplacian spectrum of \(G_{1}\underset{=}{\vee}G_{2}\) consists of:_
* \(n_{1}+\mu_{j}(G_{2})\) _for each_ \(j=2,3,...,n_{2}\)_;_
* _two roots of the equation_ \[x^{2}+(2r_{1}-2n_{1}-n_{2}-\mu_{i}(G_{1})+2)x+n_{1}^{2}-2r_{1}n_{1}-2n_{1}+n_{ 1}n_{2}-r_{1}n_{2}-n_{2}+\mu_{i}(G_{1})(n_{1}+r_{1}+1-\mu_{i}(G_{1}))=0\] _for each_ \(i=2,3,...,n_{1}\)_;_
* _the three roots of the equation_ \[x^{3}+(2r_{1}-3n_{1}-n_{2}+2)\,x^{2}+\left(n_{1}n_{2}-n_{2}r_{1}-n_{2}+2n_{1}^ {2}-2r_{1}n_{1}-2n_{1}\right)x=0.\]
Proof.: By substituting \(D\left(G_{1}\right)=r_{1}I_{n_{1}}\) in Theorem **4.2**, the Laplacian characteristic polynomial of \(G_{1}\underset{=}{\vee}G_{2}\) is
\[f_{L\left(G_{1}\underset{=}{\vee}G_{2}\right)}\left(x\right)=det\left(\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\right)det\left(\left(x-n_{1}+1+r_{1}\right)I_{n_{1}}\right)\]
\[\cdot det\left(\left(x-n_{1}-n_{2}+r_{1}+1\right)I_{n_{1}}-L\left(G_{1}\right) -\frac{1}{x-n_{1}+r_{1}+1}A^{2}\left(\overline{G_{1}}\right)\right)\]
\[\cdot\left[1-\Gamma_{{}_{L\left(G_{2}\right)}}\left(x-n_{1}\right)\Gamma_{{}_{ L\left(G_{1}\right)+\frac{1}{x-n_{1}+1+r_{1}}A^{2}\left(\overline{G_{1}} \right)}}\left(x-n_{1}-n_{2}+1+r_{1}\right)\right].\]
Using Lemma **3.3**, we obtain
\(f_{L\left(G_{1}\underset{=}{\vee}G_{2}\right)}\left(x\right)=det\left(\left(x-n_{1}\right)I_{n_{2}}-L\left(G_{2}\right)\right)\left(x-n_{1}+r_{1}+1\right)^{n_{1}}\)
\[\cdot det\left(\left(x-n_{1}-n_{2}+r_{1}+1\right)I_{n_{1}}-L\left(G_{1}\right) -\frac{1}{x-n_{1}+r_{1}+1}A^{2}\left(\overline{G_{1}}\right)\right)\]
\[\cdot\left[1-\frac{n_{2}n_{1}}{\left(x-n_{1}\right)\left(x-n_{1}-n_{2}+1+r_{1} -\frac{\left(n_{1}-r_{1}-1\right)^{2}}{x-n_{1}+r_{1}+1}\right)}\right]\]
\[\cdot\prod_{i=2}^{n_{1}}\left(x-n_{1}-n_{2}+1+r_{1}-\mu_{i}\left(G_{1}\right) -\frac{\left(\mu_{i}\left(G_{1}\right)-r_{1}-1\right)^{2}}{x-n_{1}+r_{1}+1}\right)\]
[MISSING_PAGE_POST]
* _the three roots of the equation_ \[x^{3}+ \left(2-3n_{1}-n_{2}-2r_{2}\right)x^{2}+ \left(n_{1}n_{2}-n_{2}r_{1}-n_{2}+2n_{1}^{2}+2r_{1}n_{1}-2n_{1}-2r_{1}-2r_{1}^{2 }+4r_{2}n_{1}-4r_{2}+2r_{2}n_{2}\right)x\] \[+2n_{1}r_{1}^{2}+2r_{1}n_{1}-2r_{1}n_{1}^{2}-2r_{2}n_{1}n_{2}+2r_{ 1}r_{2}n_{2}+2r_{2}n_{2}+4r_{2}r_{1}^{2}+4r_{1}r_{2}-4r_{1}r_{2}n_{1}=0\]
Proof.: The proof is similar to the proof of Theorem **6.2**.
### \(\mathcal{L}\)-spectra of NNS join
Let \(G_{1}\) be a \(r_{1}\)-regular graph of order \(n_{1}\). Let \(S\) be a subset of \(\{2,3,\ldots,n_{1}\}\) such that \(\delta_{i}\left(G_{1}\right)=1+\frac{1}{r_{1}}\) for \(i\in S\), and denote the cardinality of \(S\) by \(n(S)\). Let \(G_{2}\) be a \(r_{2}\)-regular graph of order \(n_{2}\). In the following theorem we determine the normalized Laplacian spectrum of \(G_{1}\underset{=}{\vee}G_{2}\) in terms of the normalized Laplacian eigenvalues of \(G_{1}\) and \(G_{2}\). The proof is slightly more complicated than the proof of Theorem **5.1** and we consider three cases.
**Theorem 6.4**.:
_a)_ _If_ \(S=\varPhi\)_, then the normalized Laplacian spectrum of_ \(G_{1}\underset{=}{\vee}G_{2}\) _consists of:_

_i)_ \(1+\frac{r_{2}(\delta_{i}(G_{2})-1)}{n_{1}+r_{2}}\)_, for each_ \(i=2,3,...,n_{2}\)_;_

_ii)_ \(1+\frac{2(1+r_{1}-r_{1}\delta_{i}(G_{1}))^{2}}{r_{1}(1-\delta_{i}(G_{1}))(n_{ 1}-r_{1}-1)\mp\sqrt{(n_{1}-r_{1}-1)\left[r_{1}^{2}(1-\delta_{i}(G_{1}))^{2}(n_ {1}-r_{1}-1)+4(1+r_{1}-r_{1}\delta_{i}(G_{1}))^{2}(n_{1}+n_{2}-1)\right]}}\)_, for each_ \(i=2,3,\ldots,n_{1}\)_;_

_iii) the three roots of the equation_ \[\left(n_{1}^{2}+n_{1}n_{2}-n_{1}+r_{2}n_{1}+r_{2}n_{2}-r_{2} \right)x^{3}-\left(3n_{1}^{2}+3n_{1}n_{2}-3n_{1}-r_{1}n_{1}+2r_{2}n_{1}+2r_{2} n_{2}-2r_{2}-r_{1}r_{2}\right)x^{2}\] \[+\left(2n_{1}^{2}+2n_{1}n_{2}-2n_{1}-r_{1}n_{1}+r_{2}n_{2}\right)x=0.\]
_b)_ _If_ \(S=\{2,3,\ldots,n_{1}\}\)_, then the normalized Laplacian spectrum of_ \(G_{1}\underset{=}{\vee}G_{2}\) _consists of:_
_i)_ \(1+\frac{r_{2}(\delta_{i}(G_{2})-1)}{n_{1}+r_{2}}\) _for each_ \(i=2,3,...,n_{2};\)__
_ii)_ \(1+\frac{2(1+r_{1}-r_{1}\delta_{i}(G_{1}))^{2}}{r_{1}(1-\delta_{i}(G_{1}))(n_{ 1}-r_{1}-1)\mp\sqrt{(n_{1}-r_{1}-1)\left[r_{1}^{2}(1-\delta_{i}(G_{1}))^{2}(n_ {1}-r_{1}-1)+4(1+r_{1}-r_{1}\delta_{i}(G_{1}))^{2}(n_{1}+n_{2}-1)\right]}}\) _for each_ \(i\) \(\in\{2,3,\ldots,n_{1}\}\setminus S\)_;_ _iii)_ \(1^{[n(S)]}\) _and_ \((1+\frac{1}{n_{1}+n_{2}-1})^{[n(S)]}\)_._
_iv) the three roots of the equation_
\[\left(n_{1}^{2}+n_{1}n_{2}-n_{1}+r_{2}n_{1}+r_{2}n_{2}-r_{2}\right)x ^{3}-\left(3n_{1}^{2}+3n_{1}n_{2}-3n_{1}-r_{1}n_{1}+2r_{2}n_{1}+2r_{2}n_{2}-2r_{ 2}-r_{1}r_{2}\right)x^{2}\] \[+\left(2n_{1}^{2}+2n_{1}n_{2}-2n_{1}-r_{1}n_{1}+r_{2}n_{2}\right)x =0.\]
Proof.: **(a)** If \(S=\varPhi\), then \(\delta_{i}\left(G_{1}\right)\neq 1+\frac{1}{r_{1}}\) for each \(i=2,3,\ldots,n_{1}\), so \(\lambda_{i}\left(G_{1}\right)\neq-1\) for each \(i=2,3,\ldots,n_{1}\). The normalized Laplacian matrix of \(G_{1}\underset{=}{\vee}G_{2}\) is:
\[\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)=\left(\begin{array}{ ccc}I_{n_{1}}-\frac{A\left(G_{1}\right)}{n_{1}+n_{2}-1}&\frac{-A\left(\overline{G_{1}} \right)}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(n_{1}-r_{1}-1\right)}}&\frac{- J_{n_{1}\times n_{2}}}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(r_{2}+n_{1} \right)}}\\ \\ \frac{-A\left(\overline{G_{1}}\right)}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(n_{1}-r_{1} -1\right)}}&I_{n_{1}}&O_{n_{1}\times n_{2}}\\ \\ \frac{-J_{n_{2}\times n_{1}}}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(r_{2}+n_{1 }\right)}}&O_{n_{2}\times n_{1}}&I_{n_{2}}-\frac{A\left(G_{2}\right)}{r_{2}+n _{1}}\end{array}\right).\]
Since \(G_{2}\) is \(r_{2}\)-regular, the vector \(\mathbf{1}_{n_{2}}\) is an eigenvector of \(A(G_{2})\) that corresponds to \(\lambda_{1}\left(G_{2}\right)=r_{2}\). For \(i=2,3,\ldots,n_{2}\) let \(Z_{i}\) be an eigenvector of \(A\left(G_{2}\right)\) that corresponds to \(\lambda_{i}\left(G_{2}\right)\). Then \(\mathbf{1}_{n_{2}}^{T}Z_{i}=0\) and \(\left(0_{1\times n_{1}},0_{1\times n_{1}},Z_{i}^{T}\right)^{T}\) is an eigenvector of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\) corresponding to the eigenvalue \(1-\frac{\lambda_{i}\left(G_{2}\right)}{r_{2}+n_{1}}\). By Remark **2.8**, \(1+\frac{r_{2}\left(\delta_{i}\left(G_{2}\right)-1\right)}{n_{1}+r_{2}}\) are eigenvalues of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\) for \(i=2,...,n_{2}\).
For \(i=2,...,n_{1}\), let \(X_{i}\) be an eigenvector of \(A(G_{1})\) corresponding to the eigenvalue \(\lambda_{i}\left(G_{1}\right)\). We now look for a non zero real number \(\alpha\) such that \(\left(\begin{array}{cc}X_{i}^{T}&\alpha X_{i}^{T}&0_{1\times n_{2}}\end{array} \right)^{T}\) is an eigenvector of \(\mathcal{L}(G_{1}\underset{=}{\vee}G_{2})\). Notice that \(\alpha\neq 0\), since if \(\alpha=0\) then \(\lambda_{i}\left(G_{1}\right)=-1\).
\[\mathcal{L}\left(\begin{array}{c}X_{i}\\ \alpha X_{i}\\ 0_{n_{2}\times 1}\end{array}\right)=\left(\begin{array}{c}X_{i}-\frac{\lambda_{i} \left(G_{1}\right)}{n_{1}+n_{2}-1}X_{i}+\frac{1+\lambda_{i}\left(G_{1}\right) }{\sqrt{\left(n_{1}+n_{2}-1\right)\left(n_{1}-r_{1}-1\right)}}\alpha X_{i}\\ \\ \frac{1+\lambda_{i}\left(G_{1}\right)}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(n_ {1}-r_{1}-1\right)}}X_{i}+\alpha X_{i}\\ \\ 0_{n_{2}\times 1}\end{array}\right)\]

\[=\left(\begin{array}{c}\left(1-\frac{\lambda_{i}\left(G_{1}\right)}{n_{1}+n_{2}-1}+\frac{1+\lambda_{i}\left(G_{1}\right)}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(n_{1}-r_{1}-1\right)}}\alpha\right)X_{i}\\ \\ \left(\frac{1+\lambda_{i}\left(G_{1}\right)}{\alpha\sqrt{\left(n_{1}+n_{2}-1\right)\left(n_{1}-r_{1}-1\right)}}+1\right)\alpha X_{i}\\ \\ 0_{n_{2}\times 1}\end{array}\right),\]

so both block coefficients must equal \(x\).
Thus
\[1-\frac{\lambda_{i}\left(G_{1}\right)}{n_{1}+n_{2}-1}+\frac{1+\lambda_{i} \left(G_{1}\right)}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(n_{1}-r_{1}-1 \right)}}\alpha=\frac{1+\lambda_{i}\left(G_{1}\right)}{\alpha\sqrt{\left(n_{1}+n _{2}-1\right)\left(n_{1}-r_{1}-1\right)}}+1 \tag{6.3}\]
\[-\frac{\lambda_{i}\left(G_{1}\right)}{n_{1}+n_{2}-1}+\frac{1+\lambda_{i}\left(G_{ 1}\right)}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(n_{1}-r_{1}-1\right)}}\alpha= \frac{1+\lambda_{i}\left(G_{1}\right)}{\alpha\sqrt{\left(n_{1}+n_{2}-1\right) \left(n_{1}-r_{1}-1\right)}}\]
\[\frac{\alpha^{2}\left(1+\lambda_{i}\left(G_{1}\right)\right)-\left(1+\lambda_{i} \left(G_{1}\right)\right)}{\alpha\sqrt{\left(n_{1}+n_{2}-1\right)\left(n_{1}- r_{1}-1\right)}}=\frac{\lambda_{i}\left(G_{1}\right)}{n_{1}+n_{2}-1}\]
\[\left(1+\lambda_{i}\left(G_{1}\right)\right)\sqrt{n_{1}+n_{2}-1}\alpha^{2}- \lambda_{i}\left(G_{1}\right)\sqrt{n_{1}-r_{1}-1}\alpha-\left(1+\lambda_{i} \left(G_{1}\right)\right)\sqrt{n_{1}+n_{2}-1}=0,\]
so

\[\alpha=\frac{\lambda_{i}\left(G_{1}\right)\sqrt{n_{1}-r_{1}-1}\pm\sqrt{\lambda_{i}^{2}\left(G_{1}\right)\left(n_{1}-r_{1}-1\right)+4\left(1+\lambda_{i}\left(G_{1}\right)\right)^{2}\left(n_{1}+n_{2}-1\right)}}{2\left(1+\lambda_{i}\left(G_{1}\right)\right)\sqrt{n_{1}+n_{2}-1}}.\]
Substituting the values of \(\alpha\) in the right side of (**6.3**), we get by Remark **2.8** that
\(1+\frac{2\left(1+r_{1}-r_{1}\delta_{i}\left(G_{1}\right)\right)^{2}}{r_{1} \left(1-\delta_{i}\left(G_{1}\right)\right)\left(n_{1}-r_{1}-1\right)\mp \sqrt{\left(n_{1}-r_{1}-1\right)\left[r_{1}^{2}\left(1-\delta_{i}\left(G_{1}\right) \right)^{2}\left(n_{1}-r_{1}-1\right)+4\left(1+r_{1}-r_{1}\delta_{i}\left(G_{ 1}\right)\right)^{2}\left(n_{1}+n_{2}-1\right)\right]}}\) are eigenvalues of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\) for each \(i=2,3,...,n_{1}\).
So far we have obtained \(n_{2}-1+2\left(n_{1}-1\right)=2n_{1}+n_{2}-3\) eigenvalues of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\). The corresponding eigenvectors are orthogonal to \(\left(\mathbf{1}_{n_{1}}^{T},0_{1\times n_{1}},0_{1\times n_{2}}\right)^{T}, \left(0_{1\times n_{1}},\mathbf{1}_{n_{1}}^{T},0_{1\times n_{2}}\right)^{T}\) and \(\left(0_{1\times n_{1}},0_{1\times n_{1}},\mathbf{1}_{n_{2}}^{T}\right)^{T}.\) To find three additional eigenvalues, we look for eigenvectors of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\) of the form \(Y=\left(\alpha\mathbf{1}_{n_{1}}^{T},\beta\mathbf{1}_{n_{1}}^{T},\gamma \mathbf{1}_{n_{2}}^{T}\right)^{T}\) for \(\left(\alpha,\beta,\gamma\right)\neq\left(0,0,0\right)\). Let \(x\) be an eigenvalue of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\) corresponding to the eigenvector \(Y\). From \(\mathcal{L}Y=xY\) we get,
\[\alpha-\frac{\alpha r_{1}}{n_{1}+n_{2}-1}+\frac{\left(1+r_{1}-n_{1}\right) \beta}{\sqrt{n_{1}+n_{2}-1}\sqrt{n_{1}-r_{1}-1}}-\frac{n_{2}\gamma}{\sqrt{ \left(n_{1}+n_{2}-1\right)\left(r_{2}+n_{1}\right)}}=\alpha x \tag{6.4}\]
\[\frac{\left(1+r_{1}-n_{1}\right)\alpha}{\sqrt{n_{1}-r_{1}-1}\sqrt{n_{1}+n_{2} -1}}+\beta=\beta x \tag{6.5}\]
\[\frac{-n_{1}\alpha}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(r_{2}+n_{1}\right)}}+\gamma-\frac{r_{2 }\gamma}{r_{2}+n_{1}}=\gamma x \tag{6.6}\]
Thus
\[\alpha-\frac{\alpha r_{1}}{n_{1}+n_{2}-1}+\frac{\alpha(n_{1}-1-r_{1})}{\left(n _{1}+n_{2}-1\right)\left(x-1\right)}+\frac{\alpha(n_{1}n_{2})}{\left(n_{1}+n_{2 }-1\right)\left(xr_{2}+n_{1}(x-1)\right)}=\alpha x.\]
Notice that \(\alpha\neq 0\), since if \(\alpha=0\) then \(\alpha=\beta=\gamma=0\) and also \(x\neq 1\) since \(x=1\) implies that \(\alpha=0\).
Dividing by \(\alpha\), we get
\[1-\frac{r_{1}}{n_{1}+n_{2}-1}+\frac{n_{1}-1-r_{1}}{\left(n_{1}+n_{2}-1\right) \left(x-1\right)}+\frac{n_{1}n_{2}}{\left(n_{1}+n_{2}-1\right)\left(xr_{2}+n_{1 }(x-1)\right)}=x.\]
Then,
\[\left(n_{1}^{2}+n_{1}n_{2}-n_{1}\right)\left(x-1\right)^{3}+\left( r_{2}n_{1}x+r_{2}n_{2}x-r_{2}x+r_{1}n_{1}\right)\left(x-1\right)^{2}\] \[+\left(r_{1}r_{2}x-n_{1}^{2}+n_{1}+r_{1}n_{1}-n_{1}n_{2}\right) \left(x-1\right)-r_{2}\left(n_{1}-1-r_{1}\right)x=0,\]
so, by simple calculation we see that x is a root of the cubic equation
\[\left(n_{1}^{2}+n_{1}n_{2}-n_{1}+r_{2}n_{1}+r_{2}n_{2}-r_{2}\right)x^{3}- \left(3n_{1}^{2}+3n_{1}n_{2}-3n_{1}-r_{1}n_{1}+2r_{2}n_{1}+2r_{2}n_{2}-2r_{2}- r_{1}r_{2}\right)x^{2}\] \[+\left(2n_{1}^{2}+2n_{1}n_{2}-2n_{1}-r_{1}n_{1}+r_{2}n_{2} \right)x=0\]
and this completes the proof of (a).
**(b)** The proof of (i) is similar to the proof of (i) in (a). Now we prove (ii). If \(S=\left\{2,3,\ldots,n_{1}\right\}\), then \(\delta_{i}\left(G_{1}\right)=1+\frac{1}{r_{1}}\) for each \(i=2,3,\ldots,n_{1}\), so \(\lambda_{i}\left(G_{1}\right)=-1\) for each \(i=2,3,\ldots,n_{1}\), i.e. \(G_{1}=K_{n_{1}}\) and \(r_{1}=n_{1}-1\). So the normalized Laplacian matrix of \(G_{1}\underset{=}{\vee}G_{2}\) is as follows:
\[\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)=\left(\begin{array}{ ccc}I_{n_{1}}-\frac{A(G_{1})}{n_{1}+n_{2}-1}&O_{n_{1}\times n_{1}}&\frac{-J_{n_{1} \times n_{2}}}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(r_{2}+n_{1}\right)}}\\ \\ O_{n_{1}\times n_{1}}&O_{n_{1}\times n_{1}}&O_{n_{1}\times n_{2}}\\ \\ \frac{-J_{n_{2}\times n_{1}}}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(r_{2}+n_{ 1}\right)}}&O_{n_{2}\times n_{1}}&I_{n_{2}}-\frac{A(G_{2})}{r_{2}+n_{1}}\\ \end{array}\right).\]
For \(i=2,...,n_{1}\), let \(X_{i}\) be an eigenvector of \(A(G_{1})\) corresponding to the eigenvalue \(\lambda_{i}\left(G_{1}\right)=-1\). Then \(\left(X_{i}^{T},0_{1\times n_{1}},0_{1\times n_{2}}\right)^{T}\) and \(\left(0_{1\times n_{1}},X_{i}^{T},0_{1\times n_{2}}\right)^{T}\) are eigenvectors of \(\mathcal{L}(G_{1}\underset{=}{\vee}G_{2})\) corresponding to the eigenvalues \(1+\frac{1}{n_{1}+n_{2}-1}\) and \(0\), respectively, because
\[\mathcal{L}\left(\begin{array}{c}X_{i}\\ 0_{n_{1}\times 1}\\ 0_{n_{2}\times 1}\\ \end{array}\right)=\left(\begin{array}{c}X_{i}-\frac{\lambda_{i}\left(G_{1} \right)X_{i}}{n_{1}+n_{2}-1}\\ 0_{n_{1}\times 1}\\ 0_{n_{2}\times 1}\\ \end{array}\right)=\left(1+\frac{1}{n_{1}+n_{2}-1}\right)\left(\begin{array} []{c}X_{i}\\ 0_{n_{1}\times 1}\\ 0_{n_{2}\times 1}\\ \end{array}\right)\text{ and }\mathcal{L}\left(\begin{array}{c}0_{n_{1}\times 1}\\ X_{i}\\ 0_{n_{2}\times 1}\\ \end{array}\right)=\left(\begin{array}{c}0_{n_{1}\times 1}\\ 0_{n_{1}\times 1}\\ 0_{n_{2}\times 1}\\ \end{array}\right)\text{.}\]
Therefore, \(\left(1+\frac{1}{n_{1}+n_{2}-1}\right)^{\left[n_{1}-1\right]}\) and \(0^{\left[n_{1}-1\right]}\) are eigenvalues of \(\mathcal{L}(G_{1}\underset{=}{\vee}G_{2})\).
So far, we have obtained \(n_{2}-1+2\left(n_{1}-1\right)=2n_{1}+n_{2}-3\) eigenvalues of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\). Their eigenvectors are orthogonal to \(\left(\mathbf{1}_{n_{1}}^{T},0_{1\times n_{1}},0_{1\times n_{2}}\right)^{T}, \left(0_{1\times n_{1}},\mathbf{1}_{n_{1}}^{T},0_{1\times n_{2}}\right)^{T}\) and \(\left(0_{1\times n_{1}},0_{1\times n_{1}},\mathbf{1}_{n_{2}}^{T}\right)^{T}\). To find three additional eigenvalues, we look for eigenvectors of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\) of the form \(Y=\left(\alpha\mathbf{1}_{n_{1}}^{T},\beta\mathbf{1}_{n_{1}}^{T},\gamma \mathbf{1}_{n_{2}}^{T}\right)^{T}\) for \(\left(\alpha,\beta,\gamma\right)\neq\left(0,0,0\right)\). Let \(x\) be an eigenvalue of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\) corresponding to the eigenvector \(Y\). Then from \(\mathcal{L}Y=xY\) we get
\[\alpha-\frac{r_{1}\alpha}{n_{1}+n_{2}-1}-\frac{\gamma n_{2}}{\sqrt{\left(n_{1 }+n_{2}-1\right)\left(r_{2}+n_{1}\right)}}=\alpha x \tag{6.7}\]
\[\beta x=0 \tag{6.8}\]
\[\frac{-\alpha n_{1}}{\sqrt{\left(n_{1}+n_{2}-1\right)\left(r_{2}+n_{1}\right)} }+\gamma-\frac{r_{2}\gamma}{r_{2}+n_{1}}=\gamma x \tag{6.9}\]
If \(\beta\neq 0\), then \(\left(0,\beta,0\right)\) is one of the solutions of the three equations above, so \(\left(0_{1\times n_{1}},\beta\mathbf{1}_{n_{1}}^{T},0_{1\times n_{2}}\right)^{T}\) is an eigenvector corresponding to the eigenvalue \(0\). On the other hand, if \(\beta=0\) we get
\[\alpha\left(1-x-\frac{r_{1}}{n_{1}+n_{2}-1}\right)=\frac{\gamma n_{2}}{\sqrt{ \left(n_{1}+n_{2}-1\right)\left(r_{2}+n_{1}\right)}} \tag{6.10}\]
\[\gamma\left(1-x-\frac{r_{2}}{r_{2}+n_{1}}\right)=\frac{\alpha n_{1}}{\sqrt{ \left(n_{1}+n_{2}-1\right)\left(r_{2}+n_{1}\right)}} \tag{6.11}\]
By solving the above two equations, we get the equation
\[\left(r_{2}+n_{1}\right)\left(n_{1}+n_{2}-1\right)x^{2}+\left(n_{1}-r_{2}n_{2} -2n_{1}n_{2}-n_{1}^{2}\right)x=0,\]
whose roots are \(0\) and \(\frac{n_{1}^{2}+2n_{1}n_{2}+r_{2}n_{2}-n_{1}}{\left(r_{2}+n_{1}\right)\left(n_{ 1}+n_{2}-1\right)}\).
This completes the proof of (b).
**(c)** The proofs of (i), (ii) and (iv) are similar to the proofs of (i), (ii) and (iii) of (a), respectively. Now we prove (iii). Let \(S\neq\varPhi\) and \(S\neq\{2,3,\ldots,n_{1}\}\). If \(i\in S\), then \(1+\frac{1}{n_{1}+n_{2}-1}\) and \(1\) are eigenvalues of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\) because if \(X_{i}\) is an eigenvector corresponding to the eigenvalue \(\delta_{i}(G_{1})\), then
\[\mathcal{L}\left(\begin{array}{c}X_{i}\\ 0_{n_{1}\times 1}\\ 0_{n_{2}\times 1}\end{array}\right)=\left(\begin{array}{c}X_{i}+\frac{X_{i}}{ n_{1}+n_{2}-1}\\ 0_{n_{1}\times 1}\\ 0_{n_{2}\times 1}\end{array}\right)=\left(1+\frac{1}{n_{1}+n_{2}-1}\right) \left(\begin{array}{c}X_{i}\\ 0_{n_{1}\times 1}\\ 0_{n_{2}\times 1}\end{array}\right)\text{ and }\mathcal{L}\left(\begin{array}{c}0_{n_ {1}\times 1}\\ X_{i}\\ 0_{n_{2}\times 1}\end{array}\right)=\left(\begin{array}{c}0_{n_{1}\times 1}\\ X_{i}\\ 0_{n_{2}\times 1}\end{array}\right)\text{.}\]
So, \(1^{[n(S)]}\) and \((1+\frac{1}{n_{1}+n_{2}-1})^{[n(S)]}\) are eigenvalues of \(\mathcal{L}\left(G_{1}\underset{=}{\vee}G_{2}\right)\) and this completes the proof of (c).
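A hedged numerical check of case a) (the inputs are our own choice: \(G_{1}=C_{4}\), whose adjacency spectrum avoids \(-1\), so \(S=\varPhi\); and \(G_{2}=K_{3}\)): form the normalized Laplacian \(I-D^{-1/2}AD^{-1/2}\) of the NNS join and verify that the three roots of the cubic in part iii) occur among its eigenvalues.

```python
import numpy as np

def cycle_adj(n):
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

n1, r1, n2, r2 = 4, 2, 3, 2
A1, A2 = cycle_adj(n1), np.ones((n2, n2)) - np.eye(n2)
A1c = np.ones((n1, n1)) - np.eye(n1) - A1
A = np.block([[A1, A1c, np.ones((n1, n2))],
              [A1c, np.zeros((n1, n1)), np.zeros((n1, n2))],
              [np.ones((n2, n1)), np.zeros((n2, n1)), A2]])
d = A.sum(axis=1)                                   # all degrees positive here
NL = np.eye(len(d)) - A / np.sqrt(np.outer(d, d))   # normalized Laplacian

# Cubic from part iii) of Theorem 6.4(a): c3*x^3 - c2*x^2 + c1*x = 0.
c3 = n1**2 + n1*n2 - n1 + r2*n1 + r2*n2 - r2
c2 = 3*n1**2 + 3*n1*n2 - 3*n1 - r1*n1 + 2*r2*n1 + 2*r2*n2 - 2*r2 - r1*r2
c1 = 2*n1**2 + 2*n1*n2 - 2*n1 - r1*n1 + r2*n2
roots = np.real(np.roots([c3, -c2, c1, 0]))

spec = np.linalg.eigvalsh(NL)
print(all(np.min(np.abs(spec - r)) < 1e-8 for r in roots))   # True
```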
Now we can give another answer to Question **2.16** by constructing several pairs of non regular \(\{A,L,Q,\mathcal{L}\}\)NICS graphs.
**Corollary 6.5**.: _Let \(G_{1}\), \(H_{1}\) be cospectral regular graphs and \(G_{2}\), \(H_{2}\) be non isomorphic, regular and cospectral graphs. Then \(G_{1}\underset{=}{\vee}G_{2}\) and \(H_{1}\underset{=}{\vee}H_{2}\) are non regular \(\{A,L,Q,\mathcal{L}\}\)NICS._
Proof.: \(G_{1}\underset{=}{\vee}G_{2}\) and \(H_{1}\underset{=}{\vee}H_{2}\) are non isomorphic since \(G_{2}\) and \(H_{2}\) are non isomorphic. By Theorems **6.1**, **6.2**, **6.3** and **6.4**, we get that \(G_{1}\underset{=}{\vee}G_{2}\) and \(H_{1}\underset{=}{\vee}H_{2}\) are non regular \(\{A,L,Q,\mathcal{L}\}\)NICS.
**Example 6.6**.: Let \(G_{1}=H_{1}=C_{4}\), and choose \(G_{2}=G\) and \(H_{2}=H\) where \(G\) and \(H\) are graphs in Figure **2.1**, then the graphs in Figure **6.1** are \(\{A,L,Q,\mathcal{L}\}\)NICS.
|
2306.16385
|
The Skolem property in rings of integer-valued rational functions
|
$\DeclareMathOperator{\Int}{Int} \DeclareMathOperator{\IntR}{Int{}^\text{R}}
\newcommand{\Z}{{\mathbb Z}}$Let $D$ be a domain and let $\Int(D)$ and
$\IntR(D)$ be the ring of integer-valued polynomials and the ring of
integer-valued rational functions, respectively. Skolem proved that if $I$ is a
finitely-generated ideal of $\Int(\Z)$ with all the value ideals of $I$ not
being proper, then $I = \Int(\Z)$. This is known as the Skolem property, which
does not hold in $\Z[x]$. One obstruction to $\Int(D)$ having the Skolem
property is the existence of unit-valued polynomials. This is no longer an
obstruction when we consider the Skolem property on $\IntR(D)$. We determine
that the Skolem property on $\IntR(D)$ is equivalent to the maximal spectrum
being contained in the ultrafilter closure of the set of maximal pointed
ideals. We generalize the Skolem property using star operations and determine
an analogous equivalence under this generalized notion.
|
Baian Liu
|
2023-06-28T17:22:49Z
|
http://arxiv.org/abs/2306.16385v2
|
# The Skolem property in rings of integer-valued rational functions
###### Abstract
Let \(D\) be a domain and let \(\mathrm{Int}(D)\) and \(\mathrm{Int}^{\mathrm{R}}(D)\) be the ring of integer-valued polynomials and the ring of integer-valued rational functions, respectively. Skolem proved that if \(I\) is a finitely-generated ideal of \(\mathrm{Int}(\mathbb{Z})\) with all the value ideals of \(I\) not being proper, then \(I=\mathrm{Int}(\mathbb{Z})\). This is known as the Skolem property, which does not hold in \(\mathbb{Z}[x]\). One obstruction to \(\mathrm{Int}(D)\) having the Skolem property is the existence of unit-valued polynomials. This is no longer an obstruction when we consider the Skolem property on \(\mathrm{Int}^{\mathrm{R}}(D)\). We determine that the Skolem property on \(\mathrm{Int}^{\mathrm{R}}(D)\) is equivalent to the maximal spectrum being contained in the ultrafilter closure of the set of maximal pointed ideals. We generalize the Skolem property using star operations and determine an analogous equivalence under this generalized notion.
## 1 Introduction
Given a domain \(D\), the ring of integer-valued polynomials over \(D\) has been studied extensively. A collection of results on integer-valued polynomials can be found in [1]. However, not much is known about the ring of integer-valued rational functions. Despite having similar definitions, the ring of integer-valued polynomials and the ring of integer-valued rational functions can behave very differently. We start by giving the definitions of these two ring extensions of \(D\) with a variation that is slightly more general.
**Definition 1.1**.: Let \(D\) be a domain with \(K\) as the field of fractions. Also let \(E\subseteq K\) be a non-empty subset. We denote by
\[\mathrm{Int}^{\mathrm{R}}(D)\coloneqq\{\varphi\in K(x)\mid\varphi(a)\in D,\, \forall a\in D\}\quad\text{and}\quad\mathrm{Int}^{\mathrm{R}}(E,D)\coloneqq\{ \varphi\in K(x)\mid\varphi(a)\in D,\,\forall a\in E\}\]
the **ring of integer-valued rational functions over \(D\)** and the **ring of integer-valued rational functions on \(E\) over \(D\)**, respectively. Note that \(\mathrm{Int}^{\mathrm{R}}(D,D)=\mathrm{Int}^{\mathrm{R}}(D)\).
Compare these definitions to
\[\mathrm{Int}(D)\coloneqq\{f\in K[x]\mid f(a)\in D,\,\forall a\in D\}\quad \text{and}\quad\mathrm{Int}(E,D)\coloneqq\{f\in K[x]\mid f(a)\in D,\,\forall a \in E\}\]
the **ring of integer-valued polynomials over \(D\)** and **ring of integer-valued polynomials on \(E\) over \(D\)**, respectively.
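To make Definition 1.1 concrete, here is a toy check (a Python sketch; the example \(f(x)=\frac{x(x-1)}{2}\) is our own choice, not taken from the text): this polynomial has a non-integer coefficient but takes integer values on all of \(\mathbb{Z}\), so it lies in \(\mathrm{Int}(\mathbb{Z})\) and hence in \(\mathrm{Int}^{\mathrm{R}}(\mathbb{Z})\).

```python
from fractions import Fraction

# f(x) = x(x-1)/2: of two consecutive integers one is even, so f(a) is
# always an integer even though the coefficient 1/2 is not in Z.
def f(a):
    return Fraction(a * (a - 1), 2)

print(all(f(a).denominator == 1 for a in range(-1000, 1000)))   # True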
One way the behavior of rings of integer-valued rational functions differs from that of integer-valued polynomials is the Skolem property. Having the Skolem property allows us to determine if a finitely-generated ideal of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is proper through evaluation.
**Definition 1.2**.: Let \(D\) be a domain with field of fractions \(K\). Take \(E\subseteq K\) to be some subset. Let \(A\) be a ring such that \(D\subseteq A\subseteq\mathrm{Int}^{\mathrm{R}}(E,D)\). For \(I\) an ideal of \(A\) and \(a\in E\), we denote by
\[I(a)\coloneqq\{\varphi(a)\mid\varphi\in I\}\]
the **value ideal of \(I\) at \(a\)**. Note that \(I(a)\) is an ideal of \(D\). We say that \(A\) has the **Skolem property** if whenever \(I\) is a finitely-generated ideal of \(A\) such that \(I(a)=D\) for all \(a\in E\), we have \(I=A\).
In order for \(\operatorname{Int}(D)\) to have the Skolem property, it is necessary that there does not exist a non-constant \(f\in D[x]\) such that \(f(d)\in D^{\times}\) for all \(d\in D\). Such a polynomial \(f\) is called a **unit-valued polynomial**. This is because all of the value ideals for \(f\operatorname{Int}(D)\) equal \(D\), but \(f\) is not a unit of \(\operatorname{Int}(D)\). We know that \(f\) is not a unit of \(\operatorname{Int}(D)\) because \(\frac{1}{f}\) is not a polynomial. However, \(\frac{1}{f}\) is a rational function, so \(f\operatorname{Int}^{\mathrm{R}}(D)=\operatorname{Int}^{\mathrm{R}}(D)\). Therefore, the existence of a unit-valued polynomial prevents \(\operatorname{Int}(D)\) from having the Skolem property, but it does not necessarily prevent \(\operatorname{Int}^{\mathrm{R}}(D)\) from having the Skolem property.
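For a concrete unit-valued polynomial (a standard example of our own choosing, not asserted by the text above): with \(D=\mathbb{Z}_{(p)}\), Fermat's little theorem gives \(1+a^{p}-a\equiv 1\pmod{p}\) for every integer \(a\), and the same congruence holds for every \(a\in\mathbb{Z}_{(p)}\) since the residue field is \(\mathbb{F}_{p}\); thus \(f(x)=1+x^{p}-x\) is unit-valued. A quick spot check:

```python
# Spot check (our own example): over D = Z localized at p, the polynomial
# f(x) = 1 + x^p - x satisfies f(a) = 1 mod p by Fermat's little theorem,
# so f(a) is a unit of D. Verified on integer arguments for p = 5.
p = 5
print(all((1 + a**p - a) % p == 1 for a in range(-500, 500)))   # True
```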
Furthermore, if \(D\) is a noetherian domain that is not a field, then \(\operatorname{Int}(D)\) having the Skolem property implies that for every maximal ideal \(\mathfrak{m}\) of \(D\), we have that \(D/\mathfrak{m}\) is algebraically closed, or \(\mathfrak{m}\) has height one and a finite residue field [1, 2.1 Proposition]. For \(\operatorname{Int}^{\mathrm{R}}(D)\), the Skolem property is not as restrictive. For example, if \(D\) is any local domain with a finite residue field, then \(\operatorname{Int}^{\mathrm{R}}(D)\) has the Skolem property [1, Proposition X.3.7]. Another example that the Skolem property is less restrictive for rings of integer-valued rational functions is that if \(D\) is a domain with \(E\) a non-empty subset of the field of fractions of \(D\) such that \(\operatorname{Int}^{\mathrm{R}}(E,D)\) is a Prufer domain, then \(\operatorname{Int}^{\mathrm{R}}(E,D)\) has the Skolem property [1, Lemma 4.1]. One condition that makes the ring \(\operatorname{Int}^{\mathrm{R}}(D)\) Prufer is if \(D\) is a Prufer domain such that there exists a monic unit-valued polynomial in \(D[x]\)[1, Corollary 3.4], which can be achieved with no restrictions on the height of maximal ideals of \(D\) using [1, Corollary 2.6].
The Skolem property is also related to the form of the maximal ideals. Given a noetherian domain \(D\) of dimension strictly greater than one, we have that \(\operatorname{Int}(D)\) has the Skolem property if and only if all of the maximal ideals of \(\operatorname{Int}(D)\) are of the form \(\{f(x)\in\operatorname{Int}(D)\mid f(a)\in\mathfrak{m}\hat{D}_{\mathfrak{m}}\}\) for some maximal ideal \(\mathfrak{m}\) of \(D\) and \(a\in\hat{D}_{\mathfrak{m}}\), where \(\hat{D}_{\mathfrak{m}}\) is the \(\mathfrak{m}\)-adic completion of \(D\)[1, 4.6 Theoreme]. A result of the same nature concerning rings of integer-valued rational functions is that if \(V\) is a valuation domain with \(E\) a non-empty subset of the field of fractions such that \(\operatorname{Int}^{\mathrm{R}}(E,V)\) is Prufer, then all of the maximal ideals of \(\operatorname{Int}^{\mathrm{R}}(E,V)\) are ultrafilter limits of the set of maximal pointed ideals [1, Proposition 6.1]. We will define ultrafilter limits and maximal pointed ideals in the next section.
In this work, we generalize the relationship between the Skolem property and the maximal ideals of rings of integer-valued rational functions. Section 2 gives definitions and previous results necessary to state our results. Section 3 connects the Skolem property and the maximal spectrum of a ring of integer-valued rational functions. Section 4 extends this connection using star operations. Section 5 provides examples of when the strong Skolem property, a property for distinguishing finitely-generated ideals through evaluation, fails. Section 6 gives a result about rational functions between fields needed for an example in Section 5.
## 2 Background
A notion related to the Skolem property is the strong Skolem property. The strong Skolem property can be thought of as the ability to distinguish finitely-generated ideals of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) using evaluation.
**Definition 2.1**.: Suppose \(D\) is a domain with field of fractions \(K\) and \(E\) is a subset of \(K\). Then we say that \(\operatorname{Int}^{\mathrm{R}}(E,D)\) has the **strong Skolem property** if whenever \(I\) and \(J\) are finitely-generated ideals of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) such that \(I(a)=J(a)\) for all \(a\in E\), then \(I=J\).
**Remark 2.2**.: _The strong Skolem property implies the Skolem property by taking \(J=\operatorname{Int}^{R}(E,D)\)._
The strong Skolem property is closely related to the property of being a Prufer domain. If \(\operatorname{Int}^{\mathrm{R}}(D)\) is a Prufer domain, then \(\operatorname{Int}^{\mathrm{R}}(D)\) has the strong Skolem property [1, Theorem 4.2].
In fact, if \(D\) is assumed to be a Prufer domain, then \(\operatorname{Int}^{\mathrm{R}}(D)\) having the strong Skolem property implies that \(\operatorname{Int}^{\mathrm{R}}(D)\) is a Prufer domain [10, Theorem 4.7].
We may reformulate the definitions of the Skolem property and the strong Skolem property using a closure operation.
**Definition 2.3**.: Let \(D\) be a domain and let \(E\) be a subset of the field of fractions of \(D\). For an ideal \(I\) of \(\operatorname{Int}^{\mathrm{R}}(E,D)\), the **Skolem closure of \(I\)** is
\[I^{\mathrm{Sk}}\coloneqq\{\varphi\in\operatorname{Int}^{\mathrm{R}}(E,D)\mid \varphi(a)\in I(a)\text{ for all }a\in E\}.\]
We say that \(I\) is **Skolem closed** if \(I=I^{\mathrm{Sk}}\).
**Remark 2.4**.: _Take \(D\) to be a domain with field of fractions \(K\). Let \(E\) be a subset of \(K\). Then_
1. \(\operatorname{Int}^{R}(E,D)\) _has the Skolem property if and only if the Skolem closure of a finitely-generated proper ideal of_ \(\operatorname{Int}^{R}(E,D)\) _is a proper ideal, and_
2. \(\operatorname{Int}^{R}(E,D)\) _has the strong Skolem property if and only if every finitely-generated ideal of_ \(\operatorname{Int}^{R}(E,D)\) _is Skolem closed._
In Section 5, we give an example of a domain \(D\) such that \(\operatorname{Int}^{\mathrm{R}}(D)\) has the Skolem property but not the strong Skolem property.
In this work, we relate the Skolem property and related properties to the prime spectrum of \(\operatorname{Int}^{\mathrm{R}}(E,D)\). The following are some definitions concerning the prime spectrum of \(\operatorname{Int}^{\mathrm{R}}(E,D)\).
**Definition 2.5**.: Let \(D\) be a domain with field of fractions \(K\). Take \(E\) to be some subset of \(K\). For any \(a\in E\) and \(\mathfrak{p}\in\operatorname{Spec}(D)\), we can define
\[\mathfrak{P}_{\mathfrak{p},a}\coloneqq\{\varphi\in\operatorname{Int}^{\mathrm{ R}}(E,D)\mid\varphi(a)\in\mathfrak{p}\}.\]
The set \(\mathfrak{P}_{\mathfrak{p},a}\) is a prime ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) since \(\operatorname{Int}^{\mathrm{R}}(E,D)/\mathfrak{P}_{\mathfrak{p},a}\cong D/ \mathfrak{p}\). Prime ideals of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) of the form \(\mathfrak{P}_{\mathfrak{p},a}\) for some \(a\in E\) and \(\mathfrak{p}\in\operatorname{Spec}(D)\) are called **pointed prime ideals**. If \(\mathfrak{m}\) is a maximal ideal of \(D\), then we write \(\mathfrak{M}_{\mathfrak{m},a}\) for \(\mathfrak{P}_{\mathfrak{m},a}\). Ideals of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) of the form \(\mathfrak{M}_{\mathfrak{m},a}\) for some \(a\in E\) and \(\mathfrak{m}\in\operatorname{Max}(D)\), where \(\operatorname{Max}(D)\) is the set of all maximal ideals of \(D\), are called **maximal pointed ideals**. The maximal pointed ideals are indeed maximal ideals of \(\operatorname{Int}^{\mathrm{R}}(E,D)\).
Pointed prime ideals and maximal pointed ideals in general do not describe all of the prime ideals or all of the maximal ideals of a ring in the form \(\operatorname{Int}^{\mathrm{R}}(E,D)\). Some prime ideals of \(\operatorname{Int}^{\mathrm{R}}(E,D)\) are described through ultrafilters.
**Definition 2.6**.: Let \(S\) be a set. A **filter**\(\mathcal{F}\) on \(S\) is a collection of subsets of \(S\) such that
1. \(\emptyset\notin\mathcal{F}\);
2. if \(A,B\in\mathcal{F}\), then \(A\cap B\in\mathcal{F}\); and
3. if \(A\in\mathcal{F}\) and \(B\subseteq S\) is such that \(A\subseteq B\), then \(B\in\mathcal{F}\).
If \(\mathcal{U}\) is a filter on \(S\) such that for every \(A\subseteq S\), we have \(A\in\mathcal{U}\) or \(S\setminus A\in\mathcal{U}\), then we call \(\mathcal{U}\) an **ultrafilter**. Every filter of \(S\) is contained in some ultrafilter of \(S\) by the Ultrafilter Lemma.
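Since every ultrafilter on a finite set is principal (generated by a single point), ultrafilters on a small set can be enumerated by brute force. The sketch below (our own illustration; all names are ours) checks Definition 2.6 exhaustively on a three-element set and finds exactly the three principal ultrafilters.

```python
from itertools import chain, combinations

# Brute-force check of Definition 2.6 on S = {0, 1, 2}: enumerate every
# family of subsets and keep those satisfying the filter axioms together
# with the ultrafilter dichotomy.
S = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(4) for c in combinations(S, r)]

def is_ultrafilter(F):
    if not F or frozenset() in F:
        return False                      # nonempty, and axiom 1 (no empty set)
    if any(A & B not in F for A in F for B in F):
        return False                      # axiom 2: closed under intersections
    if any(A <= B and B not in F for A in F for B in subsets):
        return False                      # axiom 3: closed upward
    return all((A in F) != (S - A in F) for A in subsets)   # dichotomy

families = chain.from_iterable(combinations(subsets, r)
                               for r in range(len(subsets) + 1))
ultras = [F for F in families if is_ultrafilter(set(F))]
print(len(ultras))   # prints 3: one principal ultrafilter per point of S
```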
Through the use of filters and ultrafilters, we can obtain a notion of a limit of a family of ideals.
**Definition 2.7**.: Let \(R\) be a commutative ring. Take \(\{I_{\lambda}\}_{\lambda\in\Lambda}\) to be a family of ideals of \(R\). For each \(r\in R\), we define the **characteristic set of \(r\) on \(\{I_{\lambda}\}\)** to be
\[\chi_{r}=\{I_{\lambda}\mid r\in I_{\lambda}\}.\]
For a filter \(\mathcal{F}\) on \(\{I_{\lambda}\}\), we define the **filter limit of \(\{I_{\lambda}\}\) with respect to \(\mathcal{F}\)** as
\[\lim_{\mathcal{F}}I_{\lambda}=\{r\in R\mid\chi_{r}\in\mathcal{F}\}.\]
If \(\mathcal{F}\) is an ultrafilter, we call \(\lim_{\mathcal{F}}I_{\lambda}\) the **ultrafilter limit of \(\{I_{\lambda}\}\) with respect to \(\mathcal{F}\)**.
**Remark 2.8**.: _The filter and ultrafilter limits of a family of ideals are also themselves ideals. If \(\{\mathfrak{p}_{\lambda}\}_{\lambda\in\Lambda}\) is a family of prime ideals and \(\mathcal{U}\) is an ultrafilter of \(\{\mathfrak{p}_{\lambda}\}\), then the ultrafilter limit \(\lim_{\mathcal{U}}\mathfrak{p}_{\lambda}\) is also a prime ideal. This gives rise to the **ultrafilter topology** on \(\operatorname{Spec}(R)\), which is identical to the patch topology and the constructible topology on \(\operatorname{Spec}(R)\)[10]._
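On a finite family of ideals every ultrafilter is principal, so the ultrafilter limit simply returns the member at which the ultrafilter concentrates. The following toy sketch (our own encoding; an ideal \(n\mathbb{Z}\) is represented by its generator \(n\)) makes Definition 2.7 concrete.

```python
# Toy illustration of Definition 2.7 with R = Z and the finite family
# {2Z, 3Z, 5Z}. We take the principal ultrafilter concentrated at 3Z and
# recover lim_U I_lambda = 3Z.
gens = [2, 3, 5]

def chi(r):
    # characteristic set of r: the ideals of the family containing r
    return {n for n in gens if r % n == 0}

# chi(r) belongs to the principal ultrafilter at 3Z iff 3Z is a member.
limit = [r for r in range(-30, 31) if 3 in chi(r)]
print(limit == [r for r in range(-30, 31) if r % 3 == 0])   # True
```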
We now define star operations and give some examples and properties related to them. This will later assist us in extending the Skolem property. Furthermore, under certain conditions, we can derive a star operation from the Skolem closure.
**Definition 2.9**.: Let \(D\) be a domain with field of fractions \(K\). Denote by \(F(D)\) the set of nonzero fractional ideals of \(D\). A map \(F(D)\to F(D)\) given by \(I\mapsto I^{\star}\) is a **star operation on**\(D\) if for every \(a\in K\) and all \(I,J\in F(D)\), we have
1. \((a)^{\star}=(a)\), \((aI)^{\star}=aI^{\star}\);
2. \(I\subseteq I^{\star}\), if \(I\subseteq J\), then \(I^{\star}\subseteq J^{\star}\); and
3. \((I^{\star})^{\star}=I^{\star}\).
We say that \(I\) is a \(\star\)**-ideal** if \(I^{\star}=I\). We say two star operations \(\star_{1}\) and \(\star_{2}\) on \(D\) are equal, denoted as \(\star_{1}=\star_{2}\) if for all \(I\in F(D)\), we have \(I^{\star_{1}}=I^{\star_{2}}\). We say that \(\star_{1}\leq\star_{2}\) if for all \(I\in F(D)\), we have \(I^{\star_{1}}\subseteq I^{\star_{2}}\).
**Example 2.10**.: Let \(D\) be a domain. The \(d\)-operation given by \(I_{d}=I\) for all nonzero fractional ideals of \(D\) is a star operation.
**Example 2.11**.: [11] Let \(D\) be a domain with field of fractions \(K\). A nonzero fractional ideal \(J\) of \(D\) is called a GV-ideal if \(J\) is finitely-generated and \(J^{-1}=D\). The \(w\)-operation is a star operation given by
\[I_{w}=\{a\in K\mid\text{there exists a GV-ideal }J\text{ such that }aJ\subseteq I\}\]
for each \(I\in F(D)\).
**Example 2.12**.: Let \(D\) be a domain. Take a nonzero fractional ideal \(I\) of \(D\). The divisorial closure operation \(v\) given by \(I_{v}=(I^{-1})^{-1}\) is a star operation. Alternatively, \(I_{v}\) is the intersection of the nonzero fractional ideals of \(D\) containing \(I\).
**Example 2.13**.: Let \(D\) be a domain and \(I\) be a nonzero fractional ideal of \(D\). The \(t\)-operation given by
\[I_{t}=\bigcup_{J\in\Lambda_{I}}J_{v},\]
where \(\Lambda_{I}\) is the family of all nonzero finitely-generated fractional ideals of \(D\) contained in \(I\), is a star operation.
**Remark 2.14**.: _For any domain \(D\), we have \(d\leq w\leq t\leq v\)[10]. Moreover, for any star operation \(\star\) on \(D\), we have \(\star\leq v\)._
In general, the same procedure used to obtain the \(t\)-operation from the \(v\)-operation can be applied to any star operation to obtain another star operation.
**Definition 2.15**.: Let \(D\) be a domain and \(\star\) be a star operation on \(D\). We define the star operation \(\star_{f}\) on \(D\) by taking a nonzero fractional ideal \(I\) of \(D\) and assigning
\[I^{\star_{f}}=\bigcup_{J\in\Lambda_{I}}J^{\star},\]
where \(\Lambda_{I}\) is the family of all nonzero finitely-generated fractional ideals of \(D\) contained in \(I\). Note that \(\star_{f}\leq\star\).
If \(\star=\star_{f}\), we say that \(\star\) is a **finite type** star operation.
**Example 2.16**.: The \(t\)-operation is a finite type star operation. In fact, for any finite type star operation \(\star\) on a domain \(D\), we have \(\star\leq t\).
Finite type star operations are important as they behave nicely with filter limits.
**Proposition 2.17**.: _Let \(\star\) be a finite type star operation on \(D\). Suppose that \(\mathfrak{a}_{\mathcal{F}}=\lim_{\mathcal{F}}\mathfrak{a}_{\lambda}\) is the limit of a family \(\{\mathfrak{a}_{\lambda}\}\) of \(\star\)-ideals of \(D\) with respect to a filter \(\mathcal{F}\). If \(\mathfrak{a}_{\mathcal{F}}\) is a nonzero ideal, then \(\mathfrak{a}_{\mathcal{F}}\) is a \(\star\)-ideal._
Proof.: Let \(J\subseteq\mathfrak{a}_{\mathcal{F}}\) be a nonzero finitely-generated ideal, generated by \(a_{1},\ldots,a_{n}\). Let \(\mathfrak{a}_{\lambda}\in\chi_{a_{1}}\cap\cdots\cap\chi_{a_{n}}\). This means that \(J\subseteq\mathfrak{a}_{\lambda}\) and since \(\mathfrak{a}_{\lambda}\) is a \(\star\)-ideal, we have that \(J^{\star}\subseteq\mathfrak{a}_{\lambda}\). Thus, \(\chi_{a_{1}}\cap\cdots\cap\chi_{a_{n}}\subseteq\chi_{a}\) for all \(a\in J^{\star}\), which implies that \(\chi_{a}\in\mathcal{F}\) for all \(a\in J^{\star}\). This shows that \(J^{\star}\subseteq\mathfrak{a}_{\mathcal{F}}\). Since \(\star\) is of finite type, we can conclude that \(\mathfrak{a}_{\mathcal{F}}^{\star}=\mathfrak{a}_{\mathcal{F}}\).
Given a domain \(D\), it is not true in general that the Skolem closure \({}^{\mathrm{Sk}}\) on \(\mathrm{Int}^{\mathrm{R}}(D)\) is a star operation. However, if we apply the finite type construction on \({}^{\mathrm{Sk}}\), we do get a star operation.
**Definition 2.18**.: Let \(D\) be a domain with \(E\) a subset of the field of fractions. For a nonzero integral ideal \(I\) of \(\mathrm{Int}^{\mathrm{R}}(E,D)\), we define
\[I^{\mathrm{Sk}_{f}}=\bigcup_{J\in\Lambda_{I}}J^{\mathrm{Sk}},\]
where \(\Lambda_{I}\) is the family of all nonzero finitely-generated ideals of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) contained in \(I\). The operation \(\mathrm{Sk}_{f}\) extends uniquely to the nonzero fractional ideals of \(\mathrm{Int}^{\mathrm{R}}(E,D)\).
For a domain \(D\) that is not a field with field of fractions \(K\), a rational function \(\varphi\in K(x)\) such that \(\varphi(d)\in D\) for all but finitely many \(d\in D\) is in \(\mathrm{Int}^{\mathrm{R}}(D)\)[12, p. 260]. We also know that \(\mathrm{Int}^{\mathrm{R}}(D)^{\mathrm{Sk}}=\mathrm{Int}^{\mathrm{R}}(D)\). Putting these two facts together along with [10, Theorem 4.2] gives that \(\mathrm{Sk}_{f}\) is a finite type star operation on \(\mathrm{Int}^{\mathrm{R}}(D)\). Using the notion of a strongly coherent set, defined in [12], we can determine that if \(E\) is a strongly coherent set for \(\mathrm{Int}^{\mathrm{R}}(E,D)\), then \(\mathrm{Sk}_{f}\) is a finite-type star operation for \(\mathrm{Int}^{\mathrm{R}}(E,D)\).
**Definition 2.19**.: Let \(D\) be a domain with field of fractions \(K\). A subset \(E\) of \(K\) is a **strongly coherent set** for \(\mathrm{Int}^{\mathrm{R}}(E,D)\) if for every finitely-generated ideal \(I\) of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) and every rational function \(\varphi\in K(x)\) such that \(\varphi(a)\in I(a)\) for all but finitely many \(a\in E\), we have \(\varphi\in I^{\mathrm{Sk}}\).
A proof similar to that of [10, Theorem 4.2] allows us to make the following conclusion.
**Proposition 2.20**.: _Let \(D\) be a domain with field of fractions \(K\). Let \(E\subseteq K\) such that \(E\) is a strongly coherent set for \(\mathrm{Int}^{R}(E,D)\). Then \(\mathrm{Sk}_{f}\) is a finite-type star operation on \(\mathrm{Int}^{R}(E,D)\)._
Proof.: It suffices to check the properties of a star operation on integral ideals [11, p. 393]. We begin by letting \(\varphi\in K(x)\) be a nonzero element and \(I\) be a nonzero finitely-generated integral ideal of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) and showing that \((\varphi I)^{\mathrm{Sk}}=\varphi I^{\mathrm{Sk}}\). We immediately have \(\varphi I^{\mathrm{Sk}}\subseteq(\varphi I)^{\mathrm{Sk}}\). Now let \(\psi\in(\varphi I)^{\mathrm{Sk}}\). Then for all \(a\in E\) except the finitely many zeroes of \(\varphi\), we have \(\psi(a)\in\varphi(a)I(a)\) and consequently, \(\frac{\psi(a)}{\varphi(a)}\in I(a)\). Since \(E\) is a strongly coherent set of \(\mathrm{Int}^{\mathrm{R}}(E,D)\), we obtain \(\frac{\psi}{\varphi}\in I^{\mathrm{Sk}}\)
Therefore, \(\psi\in\varphi I^{\rm Sk}\), so we have \((\varphi I)^{\rm Sk}=\varphi I^{\rm Sk}\) for a nonzero \(\varphi\in K(x)\) and a nonzero finitely-generated integral ideal \(I\) of \({\rm Int}^{\rm R}(E,D)\).
Let \(\varphi\in{\rm Int}^{\rm R}(E,D)\) be a nonzero element and let \(I\) be a nonzero integral ideal of \({\rm Int}^{\rm R}(E,D)\). Any finitely-generated ideal contained in \(\varphi I\) is of the form \(\varphi J\) for some finitely-generated ideal \(J\) of \({\rm Int}^{\rm R}(E,D)\) contained in \(I\). Therefore,
\[(\varphi I)^{\rm Sk_{f}}=\bigcup_{J\in\Lambda_{I}}(\varphi J)^{\rm Sk}=\bigcup _{J\in\Lambda_{I}}\varphi J^{\rm Sk}=\varphi I^{\rm Sk_{f}},\]
where \(\Lambda_{I}\) is the family of all nonzero finitely-generated ideals of \({\rm Int}^{\rm R}(E,D)\) contained in \(I\).
Next, let \(I\) and \(J\) be nonzero integral ideals of \({\rm Int}^{\rm R}(E,D)\). We can verify that \(I\subseteq I^{\rm Sk_{f}}\) and if \(I\subseteq J\), then \(I^{\rm Sk_{f}}\subseteq J^{\rm Sk_{f}}\). Lastly, \((I^{\rm Sk_{f}})^{\rm Sk_{f}}=I^{\rm Sk_{f}}\). Therefore, \({\rm Sk}_{f}\) is indeed a star operation on \({\rm Int}^{\rm R}(E,D)\). Moreover, due to the definition of \({\rm Sk}_{f}\), we have \(({\rm Sk}_{f})_{f}={\rm Sk}_{f}\) and thus \({\rm Sk}_{f}\) is finite type.
A use of star operations is to get a finer look at the ideals of a ring. We can get a finer look at the prime ideals of a ring as well.
**Definition 2.21**.: Let \(D\) be a domain with a star operation \(\star\). If \(I\) is a \(\star\)-ideal and a prime ideal of \(D\), then we say that \(I\) is a \(\star\)**-prime**. If \(I\) is maximal among all \(\star\)-ideals with respect to inclusion, then we say that \(I\) is a \(\star\)**-maximal** ideal. We denote by \(\star-{\rm Max}(D)\) the set of all \(\star\)-maximal ideals of \(D\).
**Remark 2.22**.: _[_GR04_, Jaf60]_ _A \(\star\)-maximal ideal, if it exists, is a \(\star\)-prime ideal. If \(\star\) is of finite type, then \(\star-{\rm Max}(D)\) is not empty._
Lastly, we also need the tools of minimum valuation functions and local polynomials from [11]. These allow us to consider many of the monomial valuations on a field of rational functions at once.
**Definition 2.23**.: Let \(V\) be a valuation domain with value group \(\Gamma\), valuation \(v\), and field of fractions \(K\). Take a nonzero polynomial \(f\in K[x]\) and write it as \(f(x)=a_{n}x^{n}+\cdots+a_{1}x+a_{0}\) for \(a_{0},a_{1},\ldots,a_{n}\in K\). We define the **minimum valuation function of \(f\)** as \({\rm minval}_{f,v}:\Gamma\to\Gamma\) by
\[\gamma\mapsto\min\{v(a_{0}),v(a_{1})+\gamma,v(a_{2})+2\gamma,\ldots,v(a_{n})+ n\gamma\}\]
for each \(\gamma\in\Gamma\). We will denote \({\rm minval}_{f,v}\) as \({\rm minval}_{f}\) if the valuation \(v\) is clear from context. Let \(\mathbb{Q}\Gamma\coloneqq\Gamma\otimes_{\mathbb{Z}}\mathbb{Q}\). It is oftentimes helpful to think of \({\rm minval}_{f}\) as a function from \(\mathbb{Q}\Gamma\) to \(\mathbb{Q}\Gamma\) defined \(\gamma\mapsto\min\{v(a_{0}),v(a_{1})+\gamma,v(a_{2})+2\gamma,\ldots,v(a_{n})+ n\gamma\}\) for each \(\gamma\in\mathbb{Q}\Gamma\). For a nonzero rational function \(\varphi\in K(x)\), we may write \(\varphi=\frac{f}{g}\) for some \(f,g\in K[x]\) and define \({\rm minval}_{\varphi}={\rm minval}_{f}-{\rm minval}_{g}\).
In the same setup, taking \(t\in K\), we can define the **local polynomial of \(f\) at \(t\)** to be
\[{\rm loc}_{f,v,t}(x)=\frac{f(tx)}{a_{d}t^{d}}\mod\mathfrak{m},\]
where \(\mathfrak{m}\) is the maximal ideal of \(V\) and \(d=\max\{i\in\{0,1,\ldots,n\}\mid v(a_{i})+iv(t)={\rm minval}_{f}(v(t))\}\). Again, we may omit the valuation \(v\) in \({\rm loc}_{f,v,t}(x)\) and write \({\rm loc}_{f,t}(x)\) if the valuation is clear from the context.
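For the \(p\)-adic valuation on \(\mathbb{Q}\), the minimum valuation function is immediate to compute. The sketch below (our own helper names `vp` and `minval`; the example \(f(x)=4x^{2}+2x+1\) with \(p=2\) is hypothetical) evaluates \(\mathrm{minval}_{f}\) at integer points of the value group \(\mathbb{Z}\).

```python
from fractions import Fraction

def vp(q, p):
    """p-adic valuation of a nonzero rational number q."""
    q = Fraction(q)
    num, den, v = q.numerator, q.denominator, 0
    while num % p == 0:
        num, v = num // p, v + 1
    while den % p == 0:
        den, v = den // p, v - 1
    return v

def minval(coeffs, p, gamma):
    """minval_f(gamma) = min_i (v(a_i) + i*gamma) for f = sum_i a_i x^i,
    skipping zero coefficients (their valuation is +infinity)."""
    return min(vp(a, p) + i * gamma
               for i, a in enumerate(coeffs) if a != 0)

# f(x) = 4x^2 + 2x + 1 with p = 2: minval_f(g) = min(0, 1 + g, 2 + 2g)
print([minval([1, 2, 4], 2, g) for g in range(-3, 4)])
# expected: [-4, -2, 0, 0, 0, 0, 0]
```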
## 3 The Skolem property and the maximal spectrum
In this section, we establish a connection between the Skolem property and the description of the maximal spectrum through ultrafilters. We then generalize a result about when a domain has a maximal spectrum that admits such a description. This will determine that the Skolem property holds for a family of rings of integer-valued rational functions larger than the family of Prufer domains of integer-valued rational functions.
**Theorem 3.1**.: _Let \(D\) be a domain with field of fractions \(K\) and \(E\) be a nonempty subset of \(K\). Let \(\Lambda\subseteq\mathrm{Max}(D)\) such that \(\mathrm{Max}(D)\) is contained in the ultrafilter closure of \(\Lambda\). Also define the family \(\Pi=\{\mathfrak{M}_{\mathfrak{m},a}\mid\mathfrak{m}\in\Lambda,a\in E\}\). For each \(\varphi\in\mathrm{Int}^{R}(E,D)\), we define \(\chi_{\varphi}=\{\mathfrak{M}_{\mathfrak{m},a}\in\Pi\mid\varphi\in\mathfrak{ M}_{\mathfrak{m},a}\}\), and for an ideal \(I\subseteq\mathrm{Int}^{R}(E,D)\), we define \(\chi_{I}=\{\chi_{\varphi}\mid\varphi\in I\}\). Then the following are equivalent:_
1. \(\chi_{I}\) _has the finite intersection property for all proper ideals_ \(I\subsetneq\mathrm{Int}^{R}(E,D)\)_._
2. \(\chi_{I}\) _has the finite intersection property for all finitely-generated proper ideals_ \(I\subsetneq\mathrm{Int}^{R}(E,D)\)_._
3. \(\mathrm{Max}(\mathrm{Int}^{R}(E,D))\) _is contained in the ultrafilter closure of_ \(\Pi\)_._
4. \(\mathrm{Int}^{R}(E,D)\) _has the Skolem property._
Proof.: We will show that 1, 2, and 3 are equivalent and then 2 and 4 are equivalent.
* 1 \(\Longrightarrow\) 2 : Finitely-generated ideals are ideals.
* 2 \(\Longrightarrow\) 1 : Let \(I\subsetneq\mathrm{Int}^{R}(E,D)\) be a proper ideal. Take \(\varphi_{1},\ldots,\varphi_{n}\in I\). Then \(\chi_{(\varphi_{1},\ldots,\varphi_{n})}\) has the finite intersection property. In particular, \(\chi_{\varphi_{1}}\cap\cdots\cap\chi_{\varphi_{n}}\) is nonempty.
* 1 \(\Longrightarrow\) 3 : Let \(\mathfrak{P}\) be a maximal ideal of \(\mathrm{Int}^{R}(E,D)\). Then \(\chi_{\mathfrak{P}}\) has the finite intersection property so there exists some ultrafilter \(\mathcal{U}\) of \(\Pi\) such that \(\chi_{\mathfrak{P}}\subseteq\mathcal{U}\). This implies that \(\mathfrak{P}\subseteq\lim\limits_{\mathcal{U}}\mathfrak{M}_{\mathfrak{m},a}\), the ultrafilter limit of \(\Pi\) with respect to \(\mathcal{U}\). Since \(\lim\limits_{\mathcal{U}}\mathfrak{M}_{\mathfrak{m},a}\) is a (proper) prime ideal, we conclude that \(\mathfrak{P}=\lim\limits_{\mathcal{U}}\mathfrak{M}_{\mathfrak{m},a}\).
* 3 \(\Longrightarrow\) 1 : Let \(I\subsetneq\mathrm{Int}^{R}(E,D)\) be a proper ideal. This means that \(I\subseteq\lim\limits_{\mathcal{U}}\mathfrak{M}_{\mathfrak{m},a}\), the ultrafilter limit of \(\Pi\) with respect to \(\mathcal{U}\), for some ultrafilter \(\mathcal{U}\) of \(\Pi\). This shows that \(\chi_{I}\subseteq\mathcal{U}\) and therefore \(\chi_{I}\) has the finite intersection property.
* 2 \(\Longrightarrow\) 4 : Let \((\varphi_{1},\ldots,\varphi_{n})\subsetneq\mathrm{Int}^{R}(E,D)\) be a finitely-generated proper ideal. Then \(\chi_{\varphi_{1}}\cap\cdots\cap\chi_{\varphi_{n}}\) is not empty so the intersection contains some \(\mathfrak{M}_{\mathfrak{m},a}\in\Pi\) for some \(a\in E\) and \(\mathfrak{m}\in\Lambda\). This implies that \(\varphi_{1}(a),\ldots,\varphi_{n}(a)\in\mathfrak{m}\). Thus, the value ideal \((\varphi_{1},\ldots,\varphi_{n})(a)\) is a proper ideal of \(D\).
* 4 \(\Longrightarrow\) 2 : Let \((\varphi_{1},\ldots,\varphi_{n})\) be a proper ideal of \(\mathrm{Int}^{R}(E,D)\). Due to \(\mathrm{Int}^{R}(E,D)\) having the Skolem property, there exists \(a\in E\) such that the value ideal \((\varphi_{1},\ldots,\varphi_{n})(a)\) is a proper ideal of \(D\). This means that \((\varphi_{1},\ldots,\varphi_{n})(a)\subseteq\mathfrak{q}\) for some \(\mathfrak{q}\in\mathrm{Max}(D)\). Since the ultrafilter closure of \(\Lambda\) contains all maximal ideals and \((\varphi_{1}(a),\ldots,\varphi_{n}(a))\) is finitely-generated, there exists some \(\mathfrak{m}\in\Lambda\) such that \((\varphi_{1}(a),\ldots,\varphi_{n}(a))\subseteq\mathfrak{m}\). Therefore, \(\mathfrak{M}_{\mathfrak{m},a}\in\chi_{\varphi_{1}}\cap\cdots\cap\chi_{\varphi_ {n}}\). We conclude that \(\chi_{(\varphi_{1},\ldots,\varphi_{n})}\) has the finite intersection property.
We utilize this theorem to establish the Skolem property for a domain of the form \(\mathrm{Int}^{R}(E,D)\) by determining if all of the maximal ideals of \(\mathrm{Int}^{R}(E,D)\) are ultrafilter limits of some family of pointed maximal ideals. The following proposition offers an instance of when the maximal ideals admit this description.
**Proposition 3.2**.: _[_2_, Proposition 2.8]_ _Let \(D\) be the intersection \(D=\bigcap\limits_{\lambda\in\Lambda}V_{\lambda}\) of a family of valuation domains. For each \(\lambda\in\Lambda\), denote by \(\mathfrak{p}_{\lambda}\), the center in \(D\) of the maximal ideal of \(V_{\lambda}\)._
1. _If_ \(I\) _is a_ \(t\)_-ideal of_ \(D\)_, then_ \(I\) _is contained in the limit_ \(\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\) _of the family_ \(\{\mathfrak{p}_{\lambda}\}\) _with respect to some filter_ \(\mathcal{F}\)_._
2. _If moreover_ \(I\) _is maximal, or if_ \(I\) _is_ \(t\)_-maximal and every_ \(V_{\lambda}\) _is essential, i.e._ \(V_{\lambda}=D_{\mathfrak{p}_{\lambda}}\)_, then_ \(I=\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\)_._
**Remark 3.3**.: _If \(D\) is a domain and \(E\) is a subset of the field of fractions such that \(\operatorname{Int}^{R}(E,D)\) is a Prufer domain, then the previous proposition implies that \(\operatorname{Int}^{R}(E,D)\) has the Skolem property. We write_
\[\operatorname{Int}^{R}(E,D)=\bigcap\limits_{\mathfrak{m}\in\operatorname{Max} (D),a\in E}\operatorname{Int}^{R}(E,D)_{\mathfrak{M}_{\mathfrak{m},a}}\]
_with each \(\operatorname{Int}^{R}(E,D)_{\mathfrak{M}_{\mathfrak{m},a}}\) being a valuation domain. Furthermore, every maximal ideal of \(\operatorname{Int}^{R}(E,D)\) is a \(t\)-ideal and therefore an ultrafilter limit of \(\{\mathfrak{M}_{\mathfrak{m},a}\mid\mathfrak{m}\in\operatorname{Max}(D),a\in E\}\) by Proposition 3.2(2). Then Theorem 3.1 implies that \(\operatorname{Int}^{R}(E,D)\) has the Skolem property. This is essentially the same proof as [10, Lemma 4.1] but in the language of ultrafilters._
We now generalize Proposition 3.2 in order to show the Skolem property for a larger family of rings of integer-valued rational functions.
**Proposition 3.4**.: _Let \(D\) be the intersection \(D=\bigcap\limits_{\lambda}D_{\lambda}\) of a family of local overrings. For each \(\lambda\), denote by \(\mathfrak{p}_{\lambda}\) the center in \(D\) of the maximal ideal \(\mathfrak{m}_{\lambda}\) of \(D_{\lambda}\)._
1. _If_ \(I\) _is a_ \(t\)_-ideal of_ \(D\)_, then_ \(I\) _is contained in_ \(\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\) _with respect to some filter_ \(\mathcal{F}\) _of_ \(\{\mathfrak{p}_{\lambda}\}\)_._
2. _If moreover_ \(I\) _is maximal, or if_ \(I\) _is_ \(t\)_-maximal and every_ \(\mathfrak{p}_{\lambda}\) _is a_ \(t\)_-ideal, then_ \(I=\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\)_._
Proof.: Let \(I\) be a \(t\)-ideal of \(D\) and take \(J\subseteq I\) to be a finitely-generated ideal. We claim that \(J\subseteq\mathfrak{p}_{\lambda}\) for some \(\lambda\). If not, then for each \(\lambda\), there exists some \(a\in J\) such that \(a\notin\mathfrak{p}_{\lambda}\). We can more specifically say that \(a\in D_{\lambda}\setminus\mathfrak{m}_{\lambda}\), so \(a\) is a unit of \(D_{\lambda}\). Now let \(b\in J^{-1}\). Then \(bJ\subseteq D\). In particular, we have \(ba\in D\subseteq D_{\lambda}\). This implies that \(b=baa^{-1}\in D_{\lambda}\). Thus, \(J^{-1}\subseteq D_{\lambda}\) for each \(\lambda\), which means \(J^{-1}=D\). This is a contradiction, as this implies \(J_{v}=D\), but \(J_{v}\subseteq I\subsetneq D\). We see now that \(\chi_{I}\coloneqq\{\chi_{d}\mid d\in I\}\), where \(\chi_{d}=\{\mathfrak{p}_{\lambda}\mid d\in\mathfrak{p}_{\lambda}\}\), has the finite intersection property. This is because for \(d_{1},\ldots,d_{n}\in I\), we have \((d_{1},\ldots,d_{n})\subseteq\mathfrak{p}_{\lambda}\) for some \(\lambda\) and thus \(\mathfrak{p}_{\lambda}\in\chi_{d_{1}}\cap\cdots\cap\chi_{d_{n}}\). Then \(\chi_{I}\) can be extended to a filter \(\mathcal{F}\) of \(\{\mathfrak{p}_{\lambda}\}\), and this means that \(I\subseteq\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\).
If we furthermore assume that \(I\) is a maximal ideal, then \(I=\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\). If we assume that \(I\) is \(t\)-maximal and every \(\mathfrak{p}_{\lambda}\) is a \(t\)-ideal, then \(\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\) is a \(t\)-ideal, so by the \(t\)-maximality of \(I\), we must have \(I=\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\).
We can always write a domain \(D\) as \(D=\bigcap\limits_{\mathfrak{m}\in\Lambda}D_{\mathfrak{m}}\), where \(\Lambda\) is some subset of \(\operatorname{Max}(D)\). If all of the maximal ideals of \(D\) are \(t\)-ideals, the previous proposition applies and shows that \(\operatorname{Max}(D)\) is contained in the ultrafilter closure of \(\Lambda\). A ring with such a property is a DW-domain.
**Definition 3.5**.: A domain in which \(d=w\) as star operations is called a **DW-domain**. Equivalently, a domain \(D\) is a DW-domain if and only if every maximal ideal of \(D\) is a \(t\)-ideal [14, Proposition 2.2].
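For a quick illustration: every Prüfer domain is a DW-domain (as noted after Theorem 3.6 below), while \(\mathbb{Z}[x]\) is not. Indeed, \(\mathfrak{M}=(2,x)\) is a maximal ideal of \(\mathbb{Z}[x]\), and a direct computation gives \((2,x)^{-1}=\mathbb{Z}[x]\), so \[(2,x)_{t}=(2,x)_{v}=((2,x)^{-1})^{-1}=\mathbb{Z}[x]\neq(2,x),\] and hence \(\mathfrak{M}\) is a maximal ideal that is not a \(t\)-ideal.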
We can then use the form of the maximal ideals of a DW-domain to show the Skolem property.
**Theorem 3.6**.: _Let \(D\) be a domain with field of fractions \(K\) and \(E\) a subset of \(K\). Suppose that \(\operatorname{Int}^{R}(E,D)\) is a DW-domain. Then \(\operatorname{Int}^{R}(E,D)\) has the Skolem property._
Proof.: For each \(\mathfrak{m}\in\operatorname{Max}(D)\) and \(a\in E\), set \(D_{\mathfrak{m},a}=\{\varphi\in K(x)\mid\varphi(a)\in D_{\mathfrak{m}}\}\). We have that
\[\operatorname{Int}^{R}(E,D)=\bigcap\limits_{\mathfrak{m}\in\operatorname{Max} (D),a\in E}D_{\mathfrak{m},a}.\]
Notice that each \(D_{\mathfrak{m},a}\) is a local overring whose maximal ideal is centered on \(\mathfrak{M}_{\mathfrak{m},a}\). Because every maximal ideal of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is a \(t\)-ideal, we use Proposition 3.4 to conclude that all of the maximal ideals of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) are ultrafilter limits of \(\{\mathfrak{M}_{\mathfrak{m},a}\mid\mathfrak{m}\in\mathrm{Max}(D),a\in E\}\). We satisfy Theorem 3.1(3) and thus \(\mathrm{Int}^{\mathrm{R}}(E,D)\) has the Skolem property.
If \(D\) is a Prüfer domain, then \(d=t\) since every nonzero finitely-generated ideal of \(D\) is invertible. Therefore, \(d=w\) and thus every Prüfer domain is a DW-domain. The previous result is therefore more general than the previously known result that \(\mathrm{Int}^{\mathrm{R}}(E,D)\) being Prüfer implies the Skolem property [12, Lemma 4.1].
## 4 The \((\star_{1},\star_{2})\)-Skolem property
In Theorem 3.1, the Skolem property is linked to a description of the maximal spectrum through ultrafilters. We notice that for a star operation \(\star\) on a domain \(D\), the \(\star\)-maximal ideals of \(D\) are prime ideals of \(D\). By Proposition 2.17, if \(\star\) is also a finite type star operation, then ultrafilter limits of a set of \(\star\)-prime ideals are again \(\star\)-prime, as long as the limit is not the zero ideal. This leads to a generalization of Theorem 3.1 and gives rise to a more general, star-operation version of the Skolem property.
**Definition 4.1**.: Let \(D\) be a domain and \(E\) a subset of the field of fractions of \(D\). Also let \(\star_{1}\) and \(\star_{2}\) be star operations on \(D\) and \(\mathrm{Int}^{\mathrm{R}}(E,D)\), respectively. We say that \(\mathrm{Int}^{\mathrm{R}}(E,D)\) has the \((\star_{1},\star_{2})\)**-Skolem property** if, whenever \(I\subseteq\mathrm{Int}^{\mathrm{R}}(E,D)\) is a finitely-generated ideal such that \(I(a)^{\star_{1}}=D\) for each \(a\in E\), we have \(I^{\star_{2}}=\mathrm{Int}^{\mathrm{R}}(E,D)\). Note that the \((d,d)\)-Skolem property and the Skolem property are equivalent.
The \((\star_{1},\star_{2})\)-Skolem property is equivalent to the property that if \(\varphi_{1},\ldots,\varphi_{n}\) are elements of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) such that \((\varphi_{1},\ldots,\varphi_{n})^{\star_{2}}\) is a proper ideal of \(\mathrm{Int}^{\mathrm{R}}(E,D)\), then there exists some \(a\in E\) such that \((\varphi_{1}(a),\ldots,\varphi_{n}(a))^{\star_{1}}\) is a proper ideal of \(D\).
**Remark 4.2**.: _Let \(D\) be a domain and \(E\) some subset of the field of fractions. If \(I\) is a finitely-generated ideal of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) such that \(I^{\star_{2}}=\mathrm{Int}^{R}(E,D)\), it is not necessarily true that \(I(a)^{\star_{1}}=D\) for all \(a\in E\)._
_For example, if \(D=\mathbb{C}[s]\), then \(\mathrm{Int}^{R}(D)=\mathbb{C}[x,s]\) and \((x,s)\subseteq\mathrm{Int}^{R}(D)\) is a finitely-generated ideal such that \((x,s)_{t}=\mathrm{Int}^{R}(D)\), yet \((x,s)(a)_{t}=(s)_{t}=(s)\) for \(a\in s\mathbb{C}[s]\)._
If we vary the star operations on \(D\) and \(\mathrm{Int}^{\mathrm{R}}(E,D)\), we can compare the star operation Skolem properties provided that the star operations are comparable in a certain way.
**Proposition 4.3**.: _Let \(D\) be a domain with field of fractions \(K\) and \(E\) be a subset of \(K\). Suppose that \(\star_{1},\star_{1}^{\prime}\) are two star operations on \(D\) such that \(\star_{1}^{\prime}\leq\star_{1}\), and \(\star_{2},\star_{2}^{\prime}\) are two star operations on \(\mathrm{Int}^{R}(E,D)\) such that \(\star_{2}\leq\star_{2}^{\prime}\). If \(\mathrm{Int}^{R}(E,D)\) has the \((\star_{1},\star_{2})\)-Skolem property, then \(\mathrm{Int}^{R}(E,D)\) has the \((\star_{1}^{\prime},\star_{2}^{\prime})\)-Skolem property._
Proof.: Let \(I\) be a nonzero finitely-generated ideal of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) such that \(I^{\star_{2}^{\prime}}\) is proper. Since \(I^{\star_{2}}\subseteq I^{\star_{2}^{\prime}}\subsetneq\mathrm{Int}^{\mathrm{R}}(E,D)\) and \(\mathrm{Int}^{\mathrm{R}}(E,D)\) has the \((\star_{1},\star_{2})\)-Skolem property, there exists \(a\in E\) such that \(I(a)^{\star_{1}}\) is a proper ideal of \(D\). By assumption, \(I(a)^{\star_{1}^{\prime}}\subseteq I(a)^{\star_{1}}\), so we have that \(I(a)^{\star_{1}^{\prime}}\) is proper. Thus, \(\mathrm{Int}^{\mathrm{R}}(E,D)\) has the \((\star_{1}^{\prime},\star_{2}^{\prime})\)-Skolem property.
We now give the analogue of Theorem 3.1 in terms of the star operations.
**Theorem 4.4**.: _Let \(D\) be a domain with field of fractions \(K\) and \(E\) be a nonempty subset of \(K\). Take \(\star_{1}\) to be a finite type star operation on \(D\) and \(\star_{2}\) to be a finite type star operation on \(\mathrm{Int}^{R}(E,D)\). Let \(\Lambda\subseteq\mathrm{Spec}(D)\) such that each \(\mathfrak{p}\in\Lambda\) has the property that \(\mathfrak{p}^{\star_{1}}\subsetneq D\) and \(\star_{1}-\mathrm{Max}(D)\) is contained
_in the ultrafilter closure of \(\Lambda\). Define the family \(\Pi=\{\mathfrak{P}_{\mathfrak{p},a}\mid\mathfrak{p}\in\Lambda,a\in E\}\). We assume every ideal in \(\Pi\) is a \(\star_{2}\)-prime ideal._
_For an element \(\varphi\in\operatorname{Int}^{R}(E,D)\), we define \(\chi_{\varphi}=\{\mathfrak{P}_{\mathfrak{p},a}\in\Pi\mid\varphi\in\mathfrak{P }_{\mathfrak{p},a}\}\), and for an ideal \(I\subseteq\operatorname{Int}^{R}(E,D)\), we define \(\chi_{I}=\{\chi_{\varphi}\mid\varphi\in I\}\). Then the following are equivalent:_
1. \(\chi_{I}\) _has the finite intersection property for all ideals_ \(I\subseteq\operatorname{Int}^{R}(E,D)\) _such that_ \(I^{\star_{2}}\subsetneq\operatorname{Int}^{R}(E,D)\)__
2. \(\chi_{I}\) _has the finite intersection property for every finitely-generated ideal_ \(I\) _of_ \(\operatorname{Int}^{R}(E,D)\) _such that_ \(I^{\star_{2}}\subsetneq\operatorname{Int}^{R}(E,D)\)_
3. \(\star_{2}-\operatorname{Max}(\operatorname{Int}^{R}(E,D))\) _is contained in the ultrafilter closure of_ \(\Pi\)__
4. \(\operatorname{Int}^{R}(E,D)\) _has the_ \((\star_{1},\star_{2})\)_-Skolem property_
Proof.: Again, we first establish the equivalence of 1, 2, and 3. Then, we show that 2 and 4 are equivalent.
\(1\Longrightarrow 2\) : Finitely-generated ideals are ideals.
\(2\Longrightarrow 1\) : Let \(I\subseteq\operatorname{Int}^{\mathrm{R}}(E,D)\) such that \(I^{\star_{2}}\) is proper. Take \(\varphi_{1},\ldots,\varphi_{n}\in I\). Then \((\varphi_{1},\ldots,\varphi_{n})^{\star_{2}}\subseteq I^{\star_{2}}\) is proper, so \(\chi_{(\varphi_{1},\ldots,\varphi_{n})}\) has the finite intersection property. In particular, \(\chi_{\varphi_{1}}\cap\cdots\cap\chi_{\varphi_{n}}\) is nonempty.
\(1\Longrightarrow 3\) : Let \(\mathfrak{P}\) be a \(\star_{2}\)-maximal ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\). By 1, \(\chi_{\mathfrak{P}}\) has the finite intersection property, so there exists some ultrafilter \(\mathcal{U}\) of \(\Pi\) such that \(\chi_{\mathfrak{P}}\subseteq\mathcal{U}\). This implies that \(\mathfrak{P}\subseteq\lim\limits_{\mathcal{U}}\mathfrak{P}_{\mathfrak{p},a}\), the ultrafilter limit of \(\Pi\) with respect to \(\mathcal{U}\). Since each \(\mathfrak{P}_{\mathfrak{p},a}\in\Pi\) is a \(\star_{2}\)-prime ideal, by Proposition 2.17, we have that \(\lim\limits_{\mathcal{U}}\mathfrak{P}_{\mathfrak{p},a}\) is a \(\star_{2}\)-ideal. Together with the fact that \(\mathfrak{P}\) is \(\star_{2}\)-maximal, we conclude that \(\mathfrak{P}=\lim\limits_{\mathcal{U}}\mathfrak{P}_{\mathfrak{p},a}\).
\(3\Longrightarrow 1\) : Let \(I\subseteq\operatorname{Int}^{\mathrm{R}}(E,D)\) such that \(I^{\star_{2}}\) is a proper ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\). Then \(I\) is contained in some \(\star_{2}\)-maximal ideal, which by 3 equals \(\lim\limits_{\mathcal{U}}\mathfrak{P}_{\mathfrak{p},a}\), the ultrafilter limit of \(\Pi\) with respect to \(\mathcal{U}\), for some ultrafilter \(\mathcal{U}\) of \(\Pi\). This shows that \(\chi_{I}\subseteq\mathcal{U}\) and therefore \(\chi_{I}\) has the finite intersection property.
\(2\Longrightarrow 4\) : Let \((\varphi_{1},\ldots,\varphi_{n})\subseteq\operatorname{Int}^{\mathrm{R}}(E,D)\) such that \((\varphi_{1},\ldots,\varphi_{n})^{\star_{2}}\) is a proper ideal. By 2, \(\chi_{\varphi_{1}}\cap\cdots\cap\chi_{\varphi_{n}}\) is not empty, so it contains some \(\mathfrak{P}_{\mathfrak{p},a}\in\Pi\). This implies that \(\varphi_{1}(a),\ldots,\varphi_{n}(a)\in\mathfrak{p}\). Thus, \((\varphi_{1}(a),\ldots,\varphi_{n}(a))^{\star_{1}}\subseteq\mathfrak{p}^{\star_{1}}\), a proper ideal of \(D\).
\(4\Longrightarrow 2\) : Let \((\varphi_{1},\ldots,\varphi_{n})\subseteq\operatorname{Int}^{\mathrm{R}}(E,D)\) such that \((\varphi_{1},\ldots,\varphi_{n})^{\star_{2}}\) is a proper ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\). Because \(\operatorname{Int}^{\mathrm{R}}(E,D)\) has the \((\star_{1},\star_{2})\)-Skolem property, there exists \(a\in E\) such that \((\varphi_{1}(a),\ldots,\varphi_{n}(a))^{\star_{1}}\) is proper. This means that \((\varphi_{1}(a),\ldots,\varphi_{n}(a))\subseteq\mathfrak{q}\) for some \(\mathfrak{q}\in\star_{1}-\operatorname{Max}(D)\). Since the ultrafilter closure of \(\Lambda\) contains all \(\star_{1}\)-maximal ideals and \((\varphi_{1}(a),\ldots,\varphi_{n}(a))\) is finitely-generated, there exists \(\mathfrak{p}\in\Lambda\) so that \((\varphi_{1}(a),\ldots,\varphi_{n}(a))\subseteq\mathfrak{p}\). Thus, \(\mathfrak{P}_{\mathfrak{p},a}\in\chi_{\varphi_{1}}\cap\cdots\cap\chi_{\varphi_{n}}\). We conclude that \(\chi_{(\varphi_{1},\ldots,\varphi_{n})}\) has the finite intersection property.
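Taking \(\star_{1}=\star_{2}=d\) recovers Theorem 3.1: the \(d\)-operation is of finite type, every nonzero prime ideal is a \(d\)-prime, \(d-\operatorname{Max}(D)=\operatorname{Max}(D)\), and the \((d,d)\)-Skolem property is the Skolem property by Definition 4.1.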
The extra assumption that every ideal in \(\Pi\) is a \(\star_{2}\)-prime ideal is achieved if \(\star_{1}\) and \(\star_{2}\) are integrally compatible.
**Definition 4.5**.: Let \(D\) be a domain and take \(E\) to be a subset of the field of fractions of \(D\). Let \(\star_{1}\) be a star operation on \(D\) and \(\star_{2}\) be a star operation on \(\operatorname{Int}^{\mathrm{R}}(E,D)\). We say that \(\star_{1}\) and \(\star_{2}\) are **integrally compatible** if for all ideals \(I\) of \(\operatorname{Int}^{\mathrm{R}}(E,D)\), we have \(I^{\star_{2}}(a)^{\star_{1}}=I(a)^{\star_{1}}\) for all \(a\in E\).
Compare this to the definition of compatibility of star operations on rings in a ring extension given in [1]. For domains \(A\) and \(B\) with \(A\subseteq B\) and star operations \(\star_{1}\) on \(A\) and \(\star_{2}\) on \(B\), we say that \(\star_{1}\) and \(\star_{2}\) are compatible if for any fractional ideal \(I\) of \(A\), we have \((IB)^{\star_{2}}=(I^{\star_{1}}B)^{\star_{2}}\). The comparison of ideals for compatibility happens in the larger ring, whereas the comparison of ideals for integral compatibility happens in the smaller ring.
**Proposition 4.6**.: _Let \(D\) be a domain and take \(E\) to be a subset of the field of fractions of \(D\). Also let \(\star_{1}\) be a star operation on \(D\) and \(\star_{2}\) be a star operation on \(\operatorname{Int}^{R}(E,D)\). Suppose that \(\star_{1}\) and \(\star_{2}\) are integrally compatible. If \(\mathfrak{p}\) is a \(\star_{1}\)-prime of \(D\), then \(\mathfrak{P}_{\mathfrak{p},a}\) is a \(\star_{2}\)-prime of \(\operatorname{Int}^{R}(E,D)\) for any \(a\in E\)._
Proof.: Let \(\mathfrak{p}\) be a \(\star_{1}\)-prime of \(D\) and let \(a\in E\). We want to show that \(\mathfrak{P}_{\mathfrak{p},a}^{\star_{2}}\subseteq\mathfrak{P}_{\mathfrak{p},a}\). To this end, we calculate that
\[\mathfrak{P}_{\mathfrak{p},a}^{\star_{2}}(a)^{\star_{1}}=\mathfrak{P}_{ \mathfrak{p},a}(a)^{\star_{1}}=\mathfrak{p}^{\star_{1}}=\mathfrak{p}.\]
Thus, \(\mathfrak{P}_{\mathfrak{p},a}^{\star_{2}}(a)\subseteq\mathfrak{p}\), which means that \(\mathfrak{P}_{\mathfrak{p},a}^{\star_{2}}\subseteq\mathfrak{P}_{\mathfrak{p},a}\). This implies that \(\mathfrak{P}_{\mathfrak{p},a}\) is a \(\star_{2}\)-prime of \(\operatorname{Int}^{R}(E,D)\).
We want to extend Proposition 3.2 using star operations in order to use Theorem 4.4 to show the \((\star_{1},\star_{2})\)-Skolem property. The following results discuss star operations that are \(w\)-operations. This is another generalization of Proposition 3.2: instead of generalizing the family of rings of which \(D\) can be written as the intersection, we generalize the star operation.
**Proposition 4.7**.: _Let \(D=\bigcap\limits_{\lambda}V_{\lambda}\) be an intersection of a family of valuation domains. Let \(\star\) denote the star operation on \(D\) given by \(I^{\star}=\bigcap\limits_{\lambda}IV_{\lambda}\) for every nonzero fractional ideal of \(D\), and denote by \(\mathfrak{p}_{\lambda}\) the center of \(V_{\lambda}\) in \(D\). Then the following statements hold._
1. _If_ \(I\subsetneq D\) _is a_ \(\star_{f}\)_-ideal of_ \(D\)_, then_ \(I\subseteq\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\) _for some filter_ \(\mathcal{F}\) _of_ \(\{\mathfrak{p}_{\lambda}\}\)_._
2. _If moreover_ \(I\) _is maximal, or if_ \(I\) _is_ \(\star_{f}\)_-maximal and every_ \(\mathfrak{p}_{\lambda}\) _is a_ \(\star_{f}\)_-ideal, then_ \(I=\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\)_._
Proof.:
1. Let \(J\subseteq I\) be a finitely-generated ideal. If \(J\not\subseteq\mathfrak{p}_{\lambda}\), then \(JV_{\lambda}=V_{\lambda}\). This is because \(JV_{\lambda}\subseteq\mathfrak{m}_{\lambda}\), the maximal ideal of \(V_{\lambda}\), implies \(J\subseteq JV_{\lambda}\cap D\subseteq\mathfrak{p}_{\lambda}\). Thus, if \(J\not\subseteq\mathfrak{p}_{\lambda}\) for all \(\lambda\), then \(J^{\star}=D\). This is a contradiction since \(D=J^{\star}\subseteq I^{\star_{f}}\), making \(I\) not a \(\star_{f}\)-ideal. This implies for all finitely-generated ideals \(J\subseteq I\), there exists some \(\lambda\) such that \(J\subseteq\mathfrak{p}_{\lambda}\). In particular, take \(a_{1},\ldots,a_{n}\in I\). Then there exists a \(\lambda\) such that \((a_{1},\ldots,a_{n})\subseteq\mathfrak{p}_{\lambda}\). This means that \(\chi_{a_{1}}\cap\cdots\cap\chi_{a_{n}}\) is nonempty, so the collection \(\chi_{I}\coloneqq\{\chi_{d}\mid d\in I\}\) can be extended to some filter \(\mathcal{F}\) of \(\{\mathfrak{p}_{\lambda}\}\). This implies that \(I\subseteq\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\).
2. If moreover \(I\) is a maximal ideal, then \(I\subseteq\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\) implies \(I=\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\). If each \(\mathfrak{p}_{\lambda}\) is a \(\star_{f}\)-ideal, then \(\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\) is a \(\star_{f}\)-ideal by Proposition 2.17 since \(\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\) is nonzero. Therefore, if \(I\) is \(\star_{f}\)-maximal, then \(I=\lim\limits_{\mathcal{F}}\mathfrak{p}_{\lambda}\).
**Corollary 4.8**.: _Let \(D=\bigcap\limits_{\lambda}V_{\lambda}\) be an intersection of a family of essential valuation domains. Let \(\star\) denote the star operation on \(D\) given by \(I^{\star}=\bigcap\limits_{\lambda}IV_{\lambda}\) for every nonzero fractional ideal of \(D\), and denote by \(\mathfrak{p}_{\lambda}\) the center of \(V_{\lambda}\) in \(D\). If \(I\) is a \(\star_{f}\)-maximal ideal of \(D\), then \(I=\lim\limits_{\mathcal{U}}\mathfrak{p}_{\lambda}\) for some ultrafilter \(\mathcal{U}\) of \(\{\mathfrak{p}_{\lambda}\}\)._
Proof.: It is sufficient to show that each \(\mathfrak{p}_{\lambda}\) is a \(\star_{f}\)-ideal. This follows from [15, Lemma 3.17(1)] and the fact that \(\star_{f}\leq t\).
We then use this result to show that for certain star operations \(\star_{1}\) on \(D\) and \(\star_{2}\) on \(\operatorname{Int}^{\mathrm{R}}(E,D)\) built using valuation overrings, we have the \((\star_{1},\star_{2})\)-Skolem property for \(\operatorname{Int}^{\mathrm{R}}(E,D)\).
**Theorem 4.9**.: _Let \(D\) be a domain with field of fractions \(K\). Take \(E\) to be some subset of \(K\). Let \(\star_{1}\) be a finite type star operation on \(D\) and \(\Lambda\subseteq\operatorname{Spec}(D)\) such that_
* \(\mathfrak{p}\in\Lambda\) _implies that_ \(\mathfrak{p}^{\star_{1}}\) _is a proper ideal,_
* \(D=\bigcap\limits_{\mathfrak{p}\in\Lambda}D_{\mathfrak{p}}\)_, and_
* \(\star_{1}-\operatorname{Max}(D)\) _is contained in the ultrafilter closure of_ \(\Lambda\)_._
_Let \(\star_{2}\) be a star operation on \(\operatorname{Int}^{R}(E,D)\) defined by \(I^{\star_{2}}=\bigcap\limits_{\mu}IV_{\mu}\) for every nonzero fractional ideal \(I\) of \(\operatorname{Int}^{R}(E,D)\), where \(\{V_{\mu}\}\) is a family of valuation overrings such that_
* \(\operatorname{Int}^{R}(E,D)=\bigcap\limits_{\mu}V_{\mu}\)_,_
* _each_ \(V_{\mu}\) _is centered on_ \(\mathfrak{P}_{\mathfrak{p},a}\) _for some_ \(\mathfrak{p}\in\Lambda\) _and_ \(a\in E\)_, and_
* _for each_ \(\mathfrak{p}\in\Lambda\) _and_ \(a\in E\)_, the ideal_ \(\mathfrak{P}_{\mathfrak{p},a}\) _is a_ \((\star_{2})_{f}\)_-ideal._
_Then \(\operatorname{Int}^{R}(E,D)\) has the \((\star_{1},(\star_{2})_{f})\)-Skolem property._
Proof.: Let \(I\) be a \((\star_{2})_{f}\)-maximal ideal of \(\operatorname{Int}^{\mathrm{R}}(E,D)\). Then, arguing as in Corollary 4.8 (the third assumption on \(\{V_{\mu}\}\) replaces essentiality in guaranteeing that each center \(\mathfrak{M}_{\mu}\cap\operatorname{Int}^{\mathrm{R}}(E,D)\) is a \((\star_{2})_{f}\)-ideal), we have that \(I\) is an ultrafilter limit of \(\{\mathfrak{M}_{\mu}\cap\operatorname{Int}^{\mathrm{R}}(E,D)\}\), where \(\mathfrak{M}_{\mu}\) is the maximal ideal of \(V_{\mu}\). Since \(\{\mathfrak{M}_{\mu}\cap\operatorname{Int}^{\mathrm{R}}(E,D)\}\subseteq\{\mathfrak{P}_{\mathfrak{p},a}\mid\mathfrak{p}\in\Lambda,a\in E\}\), we know that \(I\) is also an ultrafilter limit of \(\{\mathfrak{P}_{\mathfrak{p},a}\mid\mathfrak{p}\in\Lambda,a\in E\}\). By Theorem 4.4, we get that \(\operatorname{Int}^{\mathrm{R}}(E,D)\) has the \((\star_{1},(\star_{2})_{f})\)-Skolem property.
## 5 Failure of the strong Skolem property
In this section, we give examples of rings of integer-valued rational functions which do not have the strong Skolem property. First, we show that if \(D\) is a pseudovaluation domain whose associated valuation domain has a principal maximal ideal, then \(\operatorname{Int}^{\mathrm{R}}(D)\) has the Skolem property, and that under suitable hypotheses on the residue fields it fails the strong Skolem property. Then we analyze the case when \(\operatorname{Int}^{\mathrm{R}}(V)\) is not Prüfer, for \(V\) a valuation domain. In this case, \(\operatorname{Int}^{\mathrm{R}}(V)\) does not have the strong Skolem property and we give a description of the finitely-generated ideals of \(\operatorname{Int}^{\mathrm{R}}(V)\) that are not Skolem closed.
If \(D\) is a domain such that \(\operatorname{Int}^{\mathrm{R}}(D)\) is Prüfer and has the Skolem property, then \(\operatorname{Int}^{\mathrm{R}}(D)\) automatically has the strong Skolem property. However, it is not necessarily the case that the Skolem property implies the strong Skolem property if \(\operatorname{Int}^{\mathrm{R}}(D)\) is not Prüfer. We begin constructing such an example using a PVD \(D\) whose associated valuation domain has principal maximal ideal. We show that with some conditions on residue fields, we can find an ideal that is not Skolem closed in \(\operatorname{Int}^{\mathrm{R}}(D)\). Further assumptions will make this ideal finitely-generated, showing that under some conditions, \(\operatorname{Int}^{\mathrm{R}}(D)\) can have the Skolem property but not the strong Skolem property.
First, we define a PVD. These are local rings that behave almost like valuation domains. For references on these rings, see [11, 12].
**Definition 5.1**.: A domain \(D\) is a **pseudovaluation domain** (PVD) if \(D\) has a valuation overring \(V\) such that \(\operatorname{Spec}(D)=\operatorname{Spec}(V)\) as sets. The valuation domain is uniquely determined and is called the **associated valuation domain** of \(D\).
**Remark 5.2**.: _In particular, a pseudovaluation domain and the associated valuation domain have the same (unique) maximal ideal._
_One way to construct a pseudovaluation domain is to start with a valuation domain \(V\). Let \(\mathfrak{m}\) be the maximal ideal of \(V\). Consider the canonical projection \(\pi:V\to V/\mathfrak{m}\). Take a subfield \(F\subseteq V/\mathfrak{m}\). Then \(D\coloneqq\pi^{-1}(F)\) is a pseudovaluation domain with associated valuation domain \(V\)._
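_For instance, take \(V=\mathbb{C}[[t]]\) with maximal ideal \(\mathfrak{m}=t\mathbb{C}[[t]]\) and residue field \(V/\mathfrak{m}=\mathbb{C}\), and take \(F=\mathbb{R}\). Then \[D=\pi^{-1}(\mathbb{R})=\mathbb{R}+t\mathbb{C}[[t]]\] is a pseudovaluation domain with associated valuation domain \(V\): here \(\operatorname{Spec}(D)=\{(0),\mathfrak{m}\}=\operatorname{Spec}(V)\), while \(D\subsetneq V\) since \(i\notin D\). This example satisfies the hypotheses used below: \(\mathfrak{m}\) is principal in \(V\) and \((V/\mathfrak{m})/(D/\mathfrak{m})=\mathbb{C}/\mathbb{R}\) is a finite separable extension._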
We will focus on PVDs \(D\) whose associated valuation domain has a principal maximal ideal. In this case, we can make use of Theorem 3.1 to show that \(\mathrm{Int}^{\mathrm{R}}(E,D)\) has the Skolem property for any nonempty subset \(E\) of the field of fractions of \(D\). We do this by showing that all of the maximal ideals of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) are ultrafilter limits of maximal pointed ideals. This proof makes use of the rational function \(\theta\) found in [12, Theorem 3.5] setting \(n=2\).
**Proposition 5.3**.: _Let \(D\) be a PVD whose associated valuation domain \(V\) has a principal maximal ideal. Also let \(E\) be a nonempty subset of the field of fractions \(K\) of \(D\). Then the ring \(\mathrm{Int}^{R}(E,D)\) has the Skolem property._
Proof.: We will take our characteristic sets with respect to \(\{\mathfrak{M}_{\mathfrak{m},a}\mid a\in E\}\), where \(\mathfrak{m}\) is the maximal ideal of \(D\). Also let \(t\in\mathfrak{m}\) be a generator of \(\mathfrak{m}\) in \(V\).
Let \(A\subseteq\mathrm{Int}^{\mathrm{R}}(E,D)\) be a proper ideal. We first want to show that \(\chi_{A}\coloneqq\{\chi_{\varphi}\mid\varphi\in A\}\) is closed under finite intersections. Take \(\varphi_{1},\varphi_{2}\in A\). If \(\varphi_{2}=0\), then \(\chi_{\varphi_{2}}=\{\mathfrak{M}_{\mathfrak{m},a}\mid a\in E\}\), so \(\chi_{\varphi_{1}}\cap\chi_{\varphi_{2}}=\chi_{\varphi_{1}}\). Now suppose that \(\varphi_{2}\neq 0\). Set
\[\theta(x)=\frac{t(1+x^{4})}{(1+tx^{2})(t+x^{2})}.\]
We claim that \(\theta\in\mathrm{Int}^{\mathrm{R}}(K,D)\). Denote by \(v\) the valuation associated with \(V\). Take \(a\in K\). If \(v(a)=0\), then \(v(\theta(a))=v(t)+v(1+a^{4})-0-0>0\), so \(\theta(a)\in\mathfrak{m}\subseteq D\). If \(v(a)>0\), then
\[\theta(a)=\frac{t(1+a^{4})}{(1+ta^{2})(t+a^{2})}=\frac{1+a^{4}}{(1+ta^{2})(1+ \frac{a^{2}}{t})}.\]
We have \(1+a^{4}\equiv 1\ (\mathrm{mod}\ \mathfrak{m})\) and \((1+ta^{2})(1+\frac{a^{2}}{t})\equiv 1\cdot 1\equiv 1\ (\mathrm{mod}\ \mathfrak{m})\). This means that \(\theta(a)\in 1+\mathfrak{m}\subseteq D\). Lastly, suppose that \(v(a)<0\). We calculate that
\[\theta(a)=\frac{t(1+a^{4})}{(1+ta^{2})(t+a^{2})}=\frac{\frac{1}{a^{4}}+1}{(\frac{1}{ta^{2}}+1)(\frac{t}{a^{2}}+1)}.\]
We see that \(\frac{1}{a^{4}}+1\equiv 1\ (\mathrm{mod}\ \mathfrak{m})\) and also \((\frac{1}{ta^{2}}+1)(\frac{t}{a^{2}}+1)\equiv 1\cdot 1\equiv 1\ (\mathrm{mod}\ \mathfrak{m})\). Thus, \(\theta(a)\in 1+\mathfrak{m}\subseteq D\). Since \(\theta(a)\in D\) for all \(a\in K\), we have that \(\theta\in\mathrm{Int}^{\mathrm{R}}(K,D)\).
Now we consider \(\rho(x)=\varphi_{1}(x)+\theta\Big{(}\frac{\varphi_{1}(x)}{\varphi_{2}(x)} \Big{)}\varphi_{2}(x)\). We know that \(\varphi_{2}\neq 0\) so \(\varphi_{2}\) has only finitely many roots. Thus, \(\theta\Big{(}\frac{\varphi_{1}(x)}{\varphi_{2}(x)}\Big{)}\in\mathrm{Int}^{ \mathrm{R}}(K,D)\) as this rational function maps almost every element of \(K\) to \(D\)[14, Proposition 1.4]. This implies that \(\theta\Big{(}\frac{\varphi_{1}(x)}{\varphi_{2}(x)}\Big{)}\in\mathrm{Int}^{ \mathrm{R}}(E,D)\) and therefore \(\rho(x)\in(\varphi_{1},\varphi_{2})\subseteq A\).
Let \(a\in E\). Suppose that \(v(\varphi_{1}(a))=v(\varphi_{2}(a))\). Then \(v(\rho(a))=v(\varphi_{1}(a))\). If \(v(\varphi_{1}(a))<v(\varphi_{2}(a))\), we have \(v(\rho(a))=v(\varphi_{1}(a))\). If \(v(\varphi_{1}(a))>v(\varphi_{2}(a))\), we have \(v(\rho(a))=v(\varphi_{2}(a))\). In summary, \(v(\rho(a))=\min\{v(\varphi_{1}(a)),v(\varphi_{2}(a))\}\). This implies that \(\chi_{\varphi_{1}}\cap\chi_{\varphi_{2}}=\chi_{\rho}\). Thus, \(\chi_{A}\) is closed under finite intersections.
Since \(A\) is a proper ideal, \(\chi_{A}\) does not contain the empty set. If \(\chi_{A}\) contained the empty set, then there would be some \(\varphi\in A\) such that \(\chi_{\varphi}=\emptyset\), which means that \(\varphi(a)\) is a unit for all \(a\in E\), making \(\varphi\) a unit of \(\mathrm{Int}^{\mathrm{R}}(E,D)\) and thus \(A\) not a proper ideal. We have also just shown that \(\chi_{A}\) is closed under finite intersections, so \(\chi_{A}\) has the finite intersection property. By Theorem 3.1, we know that \(\mathrm{Int}^{\mathrm{R}}(E,D)\) has the Skolem property.
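As a quick sanity check of both \(\theta\) and \(\rho\) in the simplest setting (a valuation domain is itself a PVD), take \(D=V=\mathbb{Z}_{(p)}\) for an odd prime \(p\), \(E=D\), and \(t=p\), so that \(\theta(x)=\frac{p(1+x^{4})}{(1+px^{2})(p+x^{2})}\). One can check that \[\theta(1)=\frac{2p}{(1+p)^{2}}\in\mathfrak{m},\qquad\theta(p)=\frac{1+p^{4}}{(1+p^{3})(1+p)}\in 1+\mathfrak{m},\qquad\theta(1/p)=\frac{p^{4}+1}{p^{4}+p^{3}+p+1}\in 1+\mathfrak{m},\] matching the three cases \(v(a)=0\), \(v(a)>0\), and \(v(a)<0\) above. Taking \(\varphi_{1}=x-1\) and \(\varphi_{2}=x\), no \(a\in\mathbb{Z}_{(p)}\) satisfies both \(v(a-1)>0\) and \(v(a)>0\), so \(\chi_{\varphi_{1}}\cap\chi_{\varphi_{2}}=\emptyset\); correspondingly, \(\rho=(x-1)+\theta\big{(}\frac{x-1}{x}\big{)}x\) satisfies \(v(\rho(a))=\min\{v(a-1),v(a)\}=0\) for every \(a\), so \(\rho\) is unit-valued, \(1/\rho\in\mathrm{Int}^{\mathrm{R}}(\mathbb{Z}_{(p)})\), and \((x-1,x)=\mathrm{Int}^{\mathrm{R}}(\mathbb{Z}_{(p)})\).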
For a PVD \(D\) whose associated valuation domain has a principal maximal ideal, even though \(\mathrm{Int}^{\mathrm{R}}(D)\) has the Skolem property, we can find some ideals of \(\mathrm{Int}^{\mathrm{R}}(D)\) that are not Skolem closed. This uses a result about rational functions as mappings between fields that will be proven in Section 6.
**Proposition 5.4**.: _Let \(D\) be a PVD whose associated valuation domain \(V\) has principal maximal ideal \(\mathfrak{m}\). Assume that \(V/\mathfrak{m}\) is infinite and \((V/\mathfrak{m})/(D/\mathfrak{m})\) is not a purely inseparable field extension of finite exponent. Then the ideal \((x^{2},\mathfrak{m})\) of \(\mathrm{Int}^{\mathrm{R}}(D)\) is not Skolem closed._
Proof.: We claim that \(x\) is in the Skolem closure of \((x^{2},\mathfrak{m})\). If \(a\in D\) is such that \(v(a)=0\), then the value ideal of \((x^{2},\mathfrak{m})\) at \(a\) is \(D\). If \(a\in D\) is such that \(v(a)>0\), then the value ideal of \((x^{2},\mathfrak{m})\) at \(a\) is \(\mathfrak{m}\). Thus, \(x\in(x^{2},\mathfrak{m})^{\mathrm{Sk}}\).
However, we claim that \(x\notin(x^{2},\mathfrak{m})\). If \(x\in(x^{2},\mathfrak{m})\), then
\[x=\varphi(x)x^{2}+\sum\limits_{i=1}^{n}\psi_{i}(x)t_{i},\]
for some \(t_{1},\ldots,t_{n}\in\mathfrak{m}\) and \(\varphi,\psi_{1},\ldots,\psi_{n}\in\mathrm{Int}^{\mathrm{R}}(D)\). Let \(t\in V\) be such that \(t\) generates \(\mathfrak{m}\) in \(V\). We can permute the indices so that for some \(m\in\mathbb{N}\), we have \(v(t_{1})=\cdots=v(t_{m})=v(t)\) and \(v(t)<v(t_{i})\) for any \(i>m\). Then we can rewrite
\[\psi_{m+1}(x)t_{m+1}+\cdots+\psi_{n}(x)t_{n}=t^{2}\bigg{(}\psi_{m+1}(x)\frac{t _{m+1}}{t^{2}}+\cdots+\psi_{n}(x)\frac{t_{n}}{t^{2}}\bigg{)}=t^{2}\chi(x),\]
where \(\chi(x)=\psi_{m+1}(x)\frac{t_{m+1}}{t^{2}}+\cdots+\psi_{n}(x)\frac{t_{n}}{t^{ 2}}\in\mathrm{Int}^{\mathrm{R}}(D,V)\). Therefore, we now are considering
\[x=\varphi(x)x^{2}+\chi(x)t^{2}+\sum\limits_{i=1}^{m}\psi_{i}(x)t_{i}.\]
We now assume that \(\frac{t_{1}}{t},\ldots,\frac{t_{m}}{t}\mod\mathfrak{m}\) are linearly independent over \(D/\mathfrak{m}\). If this is not the case, then there is some relation \(d_{1}\frac{t_{1}}{t}+\cdots+d_{m}\frac{t_{m}}{t}\in\mathfrak{m}\) for some \(d_{1},\ldots,d_{m}\in D\) and not all \(d_{1},\ldots,d_{m}\) are in \(\mathfrak{m}\). Without loss of generality, we can assume that \(d_{1}\notin\mathfrak{m}\), so \(d_{1}\) is a unit of \(D\). Then \(t_{1}\equiv-\frac{d_{2}}{d_{1}}t_{2}-\cdots-\frac{d_{m}}{d_{1}}t_{m}\pmod{t\mathfrak{m}}\), and the discrepancy, lying in \(t\mathfrak{m}\subseteq t^{2}V\), can be absorbed into the term \(\chi(x)t^{2}\). This allows us to write \(\sum\limits_{i=1}^{m}\psi_{i}(x)t_{i}\) as an \(\mathrm{Int}^{\mathrm{R}}(D)\)-linear combination of just \(t_{2},\ldots,t_{m}\). We may keep going until we have linear independence.
Let \(a_{1},\ldots,a_{m}\in D\). We evaluate at \(a_{1}t_{1}+\cdots+a_{m}t_{m}\) to obtain
\[a_{1}t_{1}+\cdots+a_{m}t_{m} =\varphi(a_{1}t_{1}+\cdots+a_{m}t_{m})(a_{1}t_{1}+\cdots+a_{m}t_{ m})^{2}\] \[\quad+\chi(a_{1}t_{1}+\cdots+a_{m}t_{m})t^{2}+\sum\limits_{i=1}^{ m}\psi_{i}(a_{1}t_{1}+\cdots+a_{m}t_{m})t_{i}.\]
Divide both sides by \(t\) to get
\[a_{1}\frac{t_{1}}{t}+\cdots+a_{m}\frac{t_{m}}{t} =\varphi(a_{1}t_{1}+\cdots+a_{m}t_{m})(a_{1}t_{1}+\cdots+a_{m}t_{m })^{2}/t\] \[\quad+\chi(a_{1}t_{1}+\cdots+a_{m}t_{m})t+\sum\limits_{i=1}^{m} \psi_{i}(a_{1}t_{1}+\cdots+a_{m}t_{m})\frac{t_{i}}{t}.\]
We treat both sides as elements of \(V\) and reducing modulo \(\mathfrak{m}\), we now have
\[a_{1}\frac{t_{1}}{t}+\cdots+a_{m}\frac{t_{m}}{t}=\sum\limits_{i=1}^{m}\psi_{i}( a_{1}t_{1}+\cdots+a_{m}t_{m})\frac{t_{i}}{t}\mod\mathfrak{m}.\]
Due to linear independence of \(\frac{t_{1}}{t},\ldots,\frac{t_{m}}{t}\mod\mathfrak{m}\), we have
\[a_{i}=\psi_{i}(a_{1}t_{1}+\cdots+a_{m}t_{m})\mod\mathfrak{m}\]
for each \(i\in\{1,\ldots,m\}\).
Fix an \(i\in\{1,\ldots,m\}\). Writing \(\psi_{i}=\frac{f_{i}}{g_{i}}\) for some \(f_{i},g_{i}\in K[x]\), where \(K\) is the field of fractions of \(D\), we see that \(\frac{\operatorname{loc}_{f_{i},t}(x)}{\operatorname{loc}_{g_{i},t}(x)}\) maps all but finitely many elements of \(D/\mathfrak{m}(\frac{t_{1}}{t}+\mathfrak{m},\ldots,\frac{t_{m}}{t}+\mathfrak{m})\) to \(D/\mathfrak{m}\). This implies that \(\frac{\operatorname{loc}_{f_{i},t}(x)}{\operatorname{loc}_{g_{i},t}(x)}\) is constant by Lemma 6.4, which is not the case since \(a_{i}\equiv\psi_{i}(a_{1}t_{1}+\cdots+a_{m}t_{m})\mod\mathfrak{m}\) with \(a_{i}\) ranging over all of \(D\). Thus, \(x\notin(x^{2},\mathfrak{m})\), which means \((x^{2},\mathfrak{m})\) is not Skolem closed.
If \(\mathfrak{m}\) is a finitely-generated ideal of \(D\), then \((x^{2},\mathfrak{m})\) is a finitely-generated ideal of \(\operatorname{Int}^{\mathbb{R}}(D)\), meaning \(\operatorname{Int}^{\mathbb{R}}(D)\) has a finitely-generated ideal that is not Skolem closed, so \(\operatorname{Int}^{\mathbb{R}}(D)\) does not have the strong Skolem property. The next proposition shows that the condition that \(\mathfrak{m}\) is finitely-generated in \(D\) can be easily met.
**Proposition 5.5**.: _Let \(D\) be a PVD with maximal ideal \(\mathfrak{m}\) and associated valuation domain \(V\). Then \(\mathfrak{m}\) is a finitely-generated ideal of \(D\) if and only if \([V/\mathfrak{m}:D/\mathfrak{m}]<\infty\) and \(\mathfrak{m}\) is principal in \(V\)._
Proof.: We know that \(\mathfrak{m}\) is a finitely-generated ideal of \(D\) if and only if \(V\) is a finitely-generated \(D\)-module and \(\mathfrak{m}\) is principal in \(V\)[11, Proposition 1.5]. What is left to show is that under the assumption that \(\mathfrak{m}\) is principal in \(V\), we have \([V/\mathfrak{m}:D/\mathfrak{m}]<\infty\) if and only if \(V\) is a finitely-generated \(D\)-module.
Suppose that \(V=Dt_{1}+\cdots+Dt_{n}\) for some \(t_{1},\ldots,t_{n}\in V\). Then viewing this modulo \(\mathfrak{m}\), we get \(V/\mathfrak{m}=(D/\mathfrak{m})(t_{1}+\mathfrak{m})+\cdots+(D/\mathfrak{m})(t _{n}+\mathfrak{m})\). This implies that \([V/\mathfrak{m}:D/\mathfrak{m}]<\infty\).
Now suppose that \([V/\mathfrak{m}:D/\mathfrak{m}]<\infty\). Then there exist some \(t_{1},\ldots,t_{n}\in V^{\times}\) such that \(V/\mathfrak{m}=(D/\mathfrak{m})(t_{1}+\mathfrak{m})+\cdots+(D/\mathfrak{m})(t_{n}+\mathfrak{m})\). Since \(\mathfrak{m}\) is principal in \(V\), we know \(\mathfrak{m}=tV\) for some \(t\in V\). We claim that \(\mathfrak{m}=(tt_{1},\ldots,tt_{n})D\). Because \(t_{1},\ldots,t_{n}\in V\), we know that \(tt_{1},\ldots,tt_{n}\in\mathfrak{m}\). Therefore, \((tt_{1},\ldots,tt_{n})D\subseteq\mathfrak{m}\). Now take \(d\in\mathfrak{m}\). Then \(\frac{d}{t}\in V\), so \(\frac{d}{t}=a+c_{1}t_{1}+\cdots+c_{n}t_{n}\) for some \(c_{1},\ldots,c_{n}\in D\) and \(a\in\mathfrak{m}\). This implies that \(d=at+c_{1}tt_{1}+\cdots+c_{n}tt_{n}\). Notice that \(\frac{at}{tt_{1}}=\frac{a}{t_{1}}\in\mathfrak{m}\) as \(a\in\mathfrak{m}\) and \(t_{1}\in V^{\times}\). Thus, \(at\in tt_{1}D\subseteq(tt_{1},\ldots,tt_{n})D\), meaning that \(d\in(tt_{1},\ldots,tt_{n})D\). This shows that \(\mathfrak{m}=(tt_{1},\ldots,tt_{n})D\). Now, \(V=Dt_{1}+\cdots+Dt_{n}+\mathfrak{m}=Dt_{1}+\cdots+Dt_{n}\), so \(V\) is a finitely-generated \(D\)-module.
Note that \(\operatorname{Int}^{\mathbb{R}}(D)\) in the following corollary has the Skolem property by Proposition 5.3, but \(\operatorname{Int}^{\mathbb{R}}(D)\) does not have the strong Skolem property.
**Corollary 5.6**.: _Let \(D\) be a PVD with associated valuation domain \(V\) that has a principal maximal ideal \(\mathfrak{m}\). If \((V/\mathfrak{m})/(D/\mathfrak{m})\) is a separable finite extension of fields, then \(\operatorname{Int}^{\mathbb{R}}(D)\) does not have the strong Skolem property._
Proof.: By Proposition 5.5, we know that \((x^{2},\mathfrak{m})\) is a finitely-generated ideal of \(\operatorname{Int}^{\mathrm{R}}(D)\). We also know that \((x^{2},\mathfrak{m})\) is not Skolem closed by Proposition 5.4. Thus, \((x^{2},\mathfrak{m})\) is a finitely-generated ideal of \(\operatorname{Int}^{\mathrm{R}}(D)\) that is not Skolem closed, implying that \(\operatorname{Int}^{\mathrm{R}}(D)\) does not have the strong Skolem property.
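Concretely, for the pseudovaluation domain \(D=\mathbb{R}+t\mathbb{C}[[t]]\) with \(V=\mathbb{C}[[t]]\) from the example following Remark 5.2: the extension \(\mathbb{C}/\mathbb{R}\) is finite and separable, so \(\mathrm{Int}^{\mathrm{R}}(D)\) has the Skolem property by Proposition 5.3 but not the strong Skolem property by Corollary 5.6. Following the proof of Proposition 5.5 with \(t_{1}=1\) and \(t_{2}=i\), one checks that \(\mathfrak{m}=t\mathbb{C}[[t]]=(t,it)D\), so the offending ideal can be written explicitly as the finitely-generated ideal \((x^{2},t,it)\) of \(\mathrm{Int}^{\mathrm{R}}(D)\).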
Let \(V\) be a valuation domain with field of fractions \(K\). Take \(E\) to be a nonempty subset of \(K\) that is strongly coherent for \(\operatorname{Int}^{\mathrm{R}}(E,V)\). The following lemma shows that \(\operatorname{Int}^{\mathrm{R}}(E,V)\) is integrally closed, so we have that \(\operatorname{Int}^{\mathrm{R}}(E,V)\) is not Prüfer if and only if there exists a nonzero finitely-generated ideal of \(\operatorname{Int}^{\mathrm{R}}(E,V)\) that is not divisorial [11, Proposition 34.12]. It turns out this is a perspective that links Prüfer domains and the strong Skolem property.
**Lemma 5.7**.: _Let \(D\) be a domain with field of fractions \(K\). Suppose that \(E\) is a nonempty subset of \(K\). Then \(\operatorname{Int}^{\mathbb{R}}(E,D)\) is integrally closed if and only if \(D\) is integrally closed._
Proof.: If \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is integrally closed, then \(D=\mathrm{Int}^{\mathrm{R}}(E,D)\cap K\) is also integrally closed.
Now suppose that \(D\) is integrally closed. Also suppose that \(\varphi\in K(x)\) is integral over \(\mathrm{Int}^{\mathrm{R}}(E,D)\). Then there exists \(\psi_{0},\ldots,\psi_{n-1}\in\mathrm{Int}^{\mathrm{R}}(E,D)\) such that
\[\varphi^{n}+\psi_{n-1}\varphi^{n-1}+\cdots+\psi_{1}\varphi+\psi_{0}=0.\]
Let \(a\in E\). Since \(\mathrm{Int}^{\mathrm{R}}(E,D)\subseteq K[x]_{(x-a)}\), which is integrally closed, \(\varphi\) being integral over \(\mathrm{Int}^{\mathrm{R}}(E,D)\) implies that \(\varphi\) is integral over \(K[x]_{(x-a)}\). This shows that \(\varphi\in K[x]_{(x-a)}\). Then we have
\[\varphi(a)^{n}+\psi_{n-1}(a)\varphi(a)^{n-1}+\cdots+\psi_{1}(a)\varphi(a)+ \psi_{0}(a)=0.\]
Note that \(\varphi(a)\) is defined and in \(K\). The above equation shows that \(\varphi(a)\) is integral over \(D\) as each \(\psi_{i}(a)\in D\). Thus, \(\varphi(a)\in D\). This holds for all \(a\in E\), meaning \(\varphi\in\mathrm{Int}^{\mathrm{R}}(E,D)\), which shows that \(\mathrm{Int}^{\mathrm{R}}(E,D)\) is integrally closed.
We need another lemma to aid us. This lemma constructs an integer-valued rational function valued in a valuation domain. The constructed rational function has the flavor of a function guaranteed by continuity, such as by Proposition 2.1 of [14], but there are additional properties to give more control.
**Lemma 5.8**.: _Let \(V\) be a valuation domain with value group \(\Gamma\), maximal ideal \(\mathfrak{m}\), associated valuation \(v\), and field of fractions \(K\). Suppose that \(\Gamma\) is not divisible or \(V/\mathfrak{m}\) is not algebraically closed. Then for all \(\varepsilon,\delta\in\Gamma\) with \(\varepsilon,\delta>0\) and \(c\in K\), there exist \(\varphi\in\mathrm{Int}^{R}(K,V)\) and \(\gamma\in\mathbb{Q}\Gamma\) with \(\gamma>\delta\) such that \(v(\varphi(d))\leq\varepsilon\) for all \(d\in K\), and \(v(\varphi(d))>0\) if and only if \(v(d-c)>\gamma\). Additionally, \(v(\varphi(c))=\varepsilon\)._
Proof.: We split into two cases.
1. Suppose that \(\Gamma\) is not divisible. Let \(\varepsilon,\delta\in\Gamma\) with \(\varepsilon,\delta>0\) and \(c\in K\). Since \(\Gamma\) is not divisible, there exists \(\alpha\in\Gamma\) and \(m\in\mathbb{N}\) with \(m>0\) such that \(\frac{\alpha}{m}\notin\Gamma\). If \(\frac{\varepsilon}{m}\in\Gamma\), then \(\frac{\alpha}{m}+\frac{\varepsilon}{m}=\frac{\alpha+\varepsilon}{m}\notin\Gamma\) and take \(a\in K\) such that \(v(a)=\alpha+m\delta+\varepsilon\), \(b\in K\) such that \(v(b)=\alpha+m\delta\), and \(n=m\). If \(\frac{\varepsilon}{m}\notin\Gamma\), then \(\frac{\varepsilon}{2m}\notin\Gamma\) and take \(a\in K\) such that \(v(a)=2\varepsilon+2m\delta\), \(b\in K\) such that \(v(b)=\varepsilon+2m\delta\), and \(n=2m\). In either case, \(\frac{v(a)}{n}\) and \(\frac{v(b)}{n}\) are both not in \(\Gamma\). Now set \[\varphi(x)=\frac{(x-c)^{n}+a}{(x-c)^{n}+b}.\] Let \(d\in K\). We check that \[v(\varphi(d))=\begin{cases}0,&\text{if }nv(d-c)<v(b),\\ nv(d-c)-v(b),&\text{if }v(b)<nv(d-c)<v(a),\\ v(a)-v(b),&\text{if }nv(d-c)>v(a).\end{cases}\] This shows that \(\varphi\in\mathrm{Int}^{\mathrm{R}}(K,V)\) since \(\frac{v(a)}{n},\frac{v(b)}{n}\notin\Gamma\). Furthermore, we see that \(nv(d-c)<v(a)\) implies \(nv(d-c)-v(b)<v(a)-v(b)=\varepsilon\), so \(v(\varphi(d))\leq\varepsilon\) for all \(d\in K\). This also shows that \(v(\varphi(d))>0\) if and only if \(v(d-c)\geq\frac{v(b)}{n}\). Note that \(\frac{v(b)}{n}>\delta\). We also verify that \(v(\varphi(c))=\varepsilon\).
2. Suppose that \(\Gamma\) is divisible and \(V/\mathfrak{m}\) is not algebraically closed. Because \(V/\mathfrak{m}\) is not algebraically closed, there exists a monic, nonconstant, unit-valued polynomial \(f\in V[x]\); let \(n\) be its degree. Pick \(b\in K\) such that \(v(b)>\delta\). Then let \(a\in K\) such that \(v(a)=v(b)+\frac{\varepsilon}{n}\), which exists since \(\Gamma\) is divisible. Now set \[\varphi(x)=\frac{a^{n}f(\frac{x-c}{a})}{b^{n}f(\frac{x-c}{b})}.\]
Now let \(d\in K\). We use the fact that \(v(a_{1}^{n}f(\frac{a_{2}}{a_{1}}))=\min\{nv(a_{1}),nv(a_{2})\}\) for each \(a_{1},a_{2}\in K\) with \(a_{1}\neq 0\) [11, Corollary 2.3] to calculate that \[v(\varphi(d))=\begin{cases}0,&\text{if }v(d-c)<v(b)\\ nv(d-c)-nv(b),&\text{if }v(b)\leq v(d-c)\leq v(a)\\ nv(a)-nv(b),&\text{if }v(a)<v(d-c).\end{cases}\] This shows that \(\varphi\in\operatorname{Int}^{\mathrm{R}}(K,V)\) and \(v(\varphi(d))\leq nv(a)-nv(b)=\varepsilon\) for all \(d\in K\), and \(v(\varphi(d))>0\) if and only if \(v(d-c)>v(b)\), so we may take \(\gamma=v(b)\). Note that \(v(b)>\delta\). Lastly, we verify that \(v(\varphi(c))=nv(a)-nv(b)=\varepsilon\).
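For a concrete instance of case 1, take \(V=\mathbb{Z}_{(p)}\) (so \(\Gamma=\mathbb{Z}\), which is not divisible), \(c=0\), \(\delta=1\), and \(\varepsilon=2\). With \(\alpha=1\), \(m=2\), \(a=p^{5}\), \(b=p^{3}\), and \(n=2\), the construction gives \[\varphi(x)=\frac{x^{2}+p^{5}}{x^{2}+p^{3}},\qquad v(\varphi(d))=\begin{cases}0,&v(d)\leq 1,\\ 1,&v(d)=2,\\ 2,&v(d)\geq 3,\end{cases}\] so \(v(\varphi(d))\leq 2=\varepsilon\) for all \(d\), \(v(\varphi(0))=2=\varepsilon\), and \(v(\varphi(d))>0\) exactly when \(v(d)>\gamma=\frac{3}{2}\in\mathbb{Q}\Gamma\), with \(\gamma>\delta\).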
Now we are ready to link the property of being a Prüfer domain with the strong Skolem property in the context of rings of integer-valued rational functions.
**Proposition 5.9**.: _Let \(V\) be a valuation domain with field of fractions \(K\). Suppose that \(E\) is some subset of \(K\) that is strongly coherent for \(\operatorname{Int}^{R}(E,V)\). Then for all nonzero fractional ideals \(I\) of \(\operatorname{Int}^{R}(E,V)\), we have \(I_{t}=I^{\mathrm{Sk}_{f}}\)._
Proof.: It suffices to show that \(I_{v}=I^{\mathrm{Sk}}\) for all nonzero finitely-generated integral ideals \(I\) of \(\operatorname{Int}^{\mathrm{R}}(E,V)\).
Suppose that \(I\) is an integral ideal of \(\operatorname{Int}^{\mathrm{R}}(E,V)\). Let \(\varphi\in I^{\mathrm{Sk}}\) and \(\psi\in I^{-1}\) be nonzero. Let \(a\in E\). Then \(\varphi(a)\in I(a)\) so there exists \(\rho\in I\) such that \(\rho(a)=\varphi(a)\). Since \(\psi\rho\in\operatorname{Int}^{\mathrm{R}}(E,V)\), we have that \(\psi(a)\rho(a)\in V\), except for possibly the finitely many values of \(a\in E\) for which \(\psi(a)\) is not defined. If \(\psi(a)\) is defined, we have that \(\psi(a)\varphi(a)=\psi(a)\rho(a)\in V\). Since \(E\) is a strongly coherent set for \(\operatorname{Int}^{\mathrm{R}}(E,V)\) and \((\psi\varphi)(a)\in V\) for all but finitely many \(a\in E\), we get \(\varphi\psi\in\operatorname{Int}^{\mathrm{R}}(E,V)^{\mathrm{Sk}}= \operatorname{Int}^{\mathrm{R}}(E,V)\), implying that \(\varphi\in(I^{-1})^{-1}=I_{v}\). We then have \(I^{\mathrm{Sk}}\subseteq I_{v}\).
Now suppose that \(I\) is finitely-generated, so there exist \(\psi_{1},\ldots,\psi_{n}\in\operatorname{Int}^{\mathrm{R}}(E,V)\) such that \(I=(\psi_{1},\ldots,\psi_{n})\). Let \(v\) be the valuation associated with \(V\) and \(\Gamma\) the value group. Suppose for a contradiction that there exists \(\varphi\in I_{v}\) such that \(\varphi\notin I^{\mathrm{Sk}}\). Then there exists \(a\in E\) such that \(v(\varphi(a))<v(\psi_{j}(a))\) for all \(j\). We know that \(I(a)\) is generated by \(\psi_{i}(a)\) for some \(i\). We can assume that there does not exist a smallest strictly positive element in \(\Gamma\) since otherwise \(V\) has principal maximal ideal, implying that \(\operatorname{Int}^{\mathrm{R}}(E,V)\) is Prüfer and therefore \(I=I_{v}=I^{\mathrm{Sk}}\). Then there exists \(\varepsilon\in\Gamma\) such that \(0<\varepsilon<v(\psi_{i}(a))-v(\varphi(a))\). By the continuity of \(\psi_{1},\ldots,\psi_{n}\) [12, Proposition 2.1], there exists \(\delta\in\Gamma\) such that for all \(d\in K\) with \(v(d-a)>\delta\), we have \(v(\psi_{j}(d))>v(\psi_{i}(a))-\varepsilon\) for all \(j\). By Lemma 5.8, there exists \(\rho\in\operatorname{Int}^{\mathrm{R}}(E,V)\) such that \(v(\rho(d))\leq v(\psi_{i}(a))-\varepsilon\) for all \(d\in K\) and there exists \(\gamma\in\mathbb{Q}\Gamma\) with \(\gamma>\delta\) such that \(v(\rho(d))>0\) if and only if \(v(d-a)>\gamma\); moreover, \(v(\rho(a))=v(\psi_{i}(a))-\varepsilon\).
For each \(j\), consider \(\frac{\psi_{j}}{\rho}\). Let \(d\in E\). Then we have that
\[v\bigg{(}\frac{\psi_{j}(d)}{\rho(d)}\bigg{)}=\begin{cases}v(\psi_{j}(d)),&\text{if }v(d-a)\leq\gamma,\\ v(\psi_{j}(d))-v(\rho(d))\geq v(\psi_{j}(d))-(v(\psi_{i}(a))-\varepsilon)>0,&\text{if }v(d-a)>\gamma,\end{cases}\]
which shows that \((\psi_{1},\ldots,\psi_{n})\subseteq(\rho)\). We then deduce that \(\varphi\in I_{v}\subseteq(\rho)_{v}=(\rho)\) so we must have \(v(\varphi(a))\geq v(\rho(a))\). However, \(v(\rho(a))=v(\psi_{i}(a))-\varepsilon>v(\varphi(a))\), a contradiction.
**Remark 5.10**.: _If \(\operatorname{Int}^{\mathrm{R}}(E,V)\) does not have the strong Skolem property and \(E\) is a strongly coherent set for \(\operatorname{Int}^{\mathrm{R}}(E,V)\), then the finitely-generated ideals of \(\operatorname{Int}^{\mathrm{R}}(E,V)\) that are not Skolem closed are exactly the ones that are not divisorial._
Suppose \(V\) is a valuation domain with algebraically closed residue field and maximal ideal that is not principal. Assume further that the value group of \(V\) is not divisible. We show in the following
proposition that \(\mathrm{Int}^{\mathrm{R}}(V)\) does not have the strong Skolem property by explicitly finding a finitely-generated ideal that is not Skolem closed. This also shows that \(\mathrm{Int}^{\mathrm{R}}(V)\) is not Prüfer in this case, since this finitely-generated ideal that is not Skolem closed is also not divisorial by Proposition 5.9. Additionally, if \(V\) is a valuation domain with algebraically closed residue field and divisible value group, then Proposition 2.38 of [10] implies that \(\mathrm{Int}^{\mathrm{R}}(V)\) is not Prüfer. This gives an alternative proof of Theorem 2.29 of [10], which classifies exactly when \(\mathrm{Int}^{\mathrm{R}}(V)\) is a Prüfer domain for a valuation domain \(V\).
**Proposition 5.11**.: _Let \(V\) be a valuation domain with algebraically closed residue field and maximal ideal that is not principal. Suppose the value group \(\Gamma\) of \(V\) is not divisible. Let \(\mathfrak{m}\) denote the maximal ideal of \(V\). Then for any nonzero \(t\in\mathfrak{m}\), the finitely-generated ideal \((x^{2},t^{2})\) of \(\mathrm{Int}^{\mathrm{R}}(V)\) is not Skolem closed, meaning \(\mathrm{Int}^{\mathrm{R}}(V)\) does not have the strong Skolem property._
Proof.: We will show that the finitely-generated ideal \((x^{2},t^{2})\) of \(\mathrm{Int}^{\mathrm{R}}(V)\) is not Skolem closed by showing that \(tx\in(x^{2},t^{2})^{\mathrm{Sk}}\setminus(x^{2},t^{2})\). Let \(d\in V\). Then the value ideal \((x^{2},t^{2})(d)\) is equal to \((d^{2},t^{2})\), so
\[(x^{2},t^{2})(d)=\begin{cases}(d^{2}),&\text{if }v(d)\leq v(t)\\ (t^{2}),&\text{if }v(d)>v(t).\end{cases}\]
On the other hand, we have that if \(v(d)\leq v(t)\), then \(v(td)\geq v(d^{2})\), so \(td\in(d^{2})\). If \(v(d)>v(t)\), then \(v(td)>v(t^{2})\), so \(td\in(t^{2})\). Thus, \(tx\in(x^{2},t^{2})^{\mathrm{Sk}}\).
Now we want to show that \(tx\notin(x^{2},t^{2})\). Suppose on the contrary that there exist \(\varphi,\psi\in\mathrm{Int}^{\mathrm{R}}(V)\) such that \(tx=\varphi(x)x^{2}+\psi(x)t^{2}\). Let \(d\in V\) such that \(0\leq v(d)<v(t)\). Then \(td=\varphi(d)d^{2}+\psi(d)t^{2}\) implies \(v(td)=v(\varphi(d)d^{2})\) since \(v(td)<v(t^{2})\leq v(\psi(d)t^{2})\). It follows that \(v(\varphi(d))=v(t)-v(d)\). Thus, \(\mathrm{minval}_{\varphi}(\gamma)=v(t)-\gamma\) for \(\gamma\in\Gamma\) such that \(0\leq\gamma<v(t)\). This also shows that \(\mathrm{minval}_{\varphi}(v(t))=0\), and therefore there exists some \(\varepsilon\in\mathbb{Q}\Gamma\) with \(\varepsilon>0\) such that there exist \(c_{1},c_{2}\in\mathbb{Z}\) and \(\beta_{1},\beta_{2}\in\Gamma\) so that
\[\mathrm{minval}_{\varphi}(\gamma)=\begin{cases}c_{1}\gamma+\beta_{1},&\text{ if }v(t)-\varepsilon<\gamma\leq v(t)\\ c_{2}\gamma+\beta_{2},&\text{if }v(t)\leq\gamma<v(t)+\varepsilon,\end{cases}\]
since \(\mathrm{minval}_{\varphi}\) is piecewise linear [10, Proposition 2.16]. Note that we found that \(c_{1}=-1\) and we must have \(c_{2}\geq 0\) since \(\mathrm{minval}_{\varphi}(v(t))=0\). Therefore, by [10, Lemma 2.25], we have that \(\varphi\notin\mathrm{Int}^{\mathrm{R}}(V)\), a contradiction. This means that \(tx\notin(x^{2},t^{2})\).
Since \((x^{2},t^{2})\) is a finitely-generated ideal of \(\mathrm{Int}^{\mathrm{R}}(V)\) that is not Skolem closed, we know that the ring \(\mathrm{Int}^{\mathrm{R}}(V)\) does not have the strong Skolem property.
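Valuation domains satisfying these hypotheses exist: assuming the standard generalized power series (Hahn series) construction, let \(\Gamma=\mathbb{Z}\times\mathbb{Q}\) ordered lexicographically and let \(V\) be the valuation ring of \(\mathbb{C}((t^{\Gamma}))\). Then the residue field is \(\mathbb{C}\), which is algebraically closed; \(\Gamma\) is not divisible since \((1,0)/2\notin\Gamma\); and \(\Gamma\) has no smallest strictly positive element (the elements \((0,q)\) with \(q>0\) have no minimum), so the maximal ideal of \(V\) is not principal.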
## 6 Rational functions as maps between fields
Let \(L/M\) be an extension of fields. This section leads up to a lemma showing the nonexistence of a nonconstant rational function that maps \(L\) into \(M\), unless \(L\) is finite or \(L/M\) is a particular type of purely inseparable extension. This lemma is used for Proposition 5.4.
**Lemma 6.1**.: _Let \(L/M\) be a field extension that is not purely inseparable of finite exponent. Additionally, suppose that \(L\) is an infinite field. Then there does not exist a nonconstant polynomial \(f\in M[x]\) such that \(f(d)\in M\) for all but finitely many \(d\in L\)._
Proof.: Suppose first that \(M=\{d_{1},\ldots,d_{k}\}\) is finite. Also assume that \(f\in M[x]\) satisfies \(f(d)\in M\) for all but finitely many \(d\in L\). Then \((f(x)-d_{1})\cdots(f(x)-d_{k})\) evaluates to \(0\) at all but finitely many elements of \(L\), which is infinite, so \((f(x)-d_{1})\cdots(f(x)-d_{k})=0\). This implies that \(f\) is a constant.
Now we suppose that \(M\) is infinite. Suppose for a contradiction that there is a nonconstant polynomial \(f\in M[x]\) such that \(f(d)\in M\) for all but finitely many \(d\in L\). Write
\(f(x)=a_{0}+a_{1}x+a_{2}x^{2}+\cdots+a_{m}x^{m}\) where each \(a_{i}\in M\) with \(a_{m}\neq 0\). Additionally, \(m\geq 2\) since if \(m=1\), then \(L=M\), but then \(L/M\) would be purely inseparable of finite exponent.
Due to the existence of \(f\), there exist elements of \(L\setminus M\) algebraic over \(M\) (any \(d\in L\) with \(f(d)\in M\) is algebraic over \(M\), and \(L\neq M\) by hypothesis). Let \(\alpha\in L\setminus M\) be an algebraic element of degree \(n\geq 2\) over \(M\). For \(i\in\mathbb{N}\), we can uniquely represent \(\alpha^{i}=\sum\limits_{j=0}^{n-1}c_{ij}\alpha^{j}\), where each \(c_{ij}\) is in \(M\). Then
\[f(\alpha x)=\sum\limits_{i=0}^{m}a_{i}(\alpha x)^{i}=\sum\limits_{i=0}^{m}a_{i }\sum\limits_{j=0}^{n-1}c_{ij}\alpha^{j}x^{i}=\sum\limits_{j=0}^{n-1}\Biggl{(} \sum\limits_{i=0}^{m}a_{i}c_{ij}x^{i}\Biggr{)}\alpha^{j}.\]
Evaluating \(f(\alpha x)\) at \(x=d\) for all but finitely many \(d\in M\) gives an \(M\)-linear combination of \(1,\alpha,\alpha^{2},\ldots,\alpha^{n-1}\), which by assumption is in \(M\), so the coefficients of \(\alpha,\alpha^{2},\ldots,\alpha^{n-1}\) must be \(0\). Since the polynomials \(\sum\limits_{i=0}^{m}a_{i}c_{ij}x^{i}\), for \(j=1,\ldots,n-1\), evaluate to \(0\) at all but finitely many elements of the infinite field \(M\), we deduce that \(\sum\limits_{i=0}^{m}a_{i}c_{ij}x^{i}=0\) for \(j=1,\ldots,n-1\). This implies that \(a_{i}c_{ij}=0\) for \(i=0,\ldots,m\) and \(j=1,\ldots,n-1\). Since \(a_{m}\neq 0\), we have \(c_{mj}=0\) for all \(j=1,\ldots,n-1\). This means that \(\alpha^{m}\in M\). This holds for all \(\alpha\in L\setminus M\), so the polynomial \(x^{m}\) maps \(L\) to \(M\).
Take \(f\in M[x]\) to be a nonconstant polynomial such that \(f(L)\subseteq M\) and \(f\) is a polynomial with minimal degree with respect to this property. Applying the argument above to this \(f\), we know that \(x^{m}\) maps \(L\) to \(M\), where \(m=\deg(f)\). We know that \((x+1)^{m}\) maps \(L\) to \(M\) as well. This means that \((x+1)^{m}-x^{m}=\sum\limits_{i=0}^{m-1}\binom{m}{i}x^{i}\) maps \(L\) to \(M\). Since \(m\) was chosen to be minimal, we have that \(\deg((x+1)^{m}-x^{m})=0\) and thus \(\binom{m}{i}=0\) for all \(i=1,\ldots,m-1\).
Let \(p\) be the characteristic of \(M\). If \(p=0\), then \(\binom{m}{i}=0\) cannot happen. Thus, suppose that \(p>0\). Since \(\binom{m}{1}=m=0\), we have that \(p\) divides \(m\). Suppose that \(m\neq p^{r}\) for any power \(r\in\mathbb{Z}_{>0}\). Write the base \(p\) expansion \(m=m_{k}p^{k}+m_{k-1}p^{k-1}+\cdots+m_{1}p+m_{0}\) with \(0\leq m_{i}\leq p-1\) and \(m_{k}\neq 0\). Also, \(m_{0}=0\) since \(p\) divides \(m\). Since \(m\neq p^{k}\), we get that \(m_{k}\geq 2\) or \(m_{i}\neq 0\) for some \(i=1,\ldots,k-1\). Either way, \(m_{i}\neq 0\) for some \(i=1,\ldots,k\) with \(0<p^{i}<m\). Then Lucas's Theorem says \(\binom{m}{p^{i}}\equiv\binom{m_{k}}{0}\cdots\binom{m_{i+1}}{0}\binom{m_{i}}{1}\binom{m_{i-1}}{0}\cdots\binom{m_{0}}{0}\equiv m_{i}\not\equiv 0\pmod{p}\). Thus, \(\binom{m}{p^{i}}\neq 0\) in \(M\), a contradiction. Thus \(m=p^{r}\) for some \(r\in\mathbb{Z}_{>0}\).
However, this implies that \(x^{p^{r}}\) maps \(L\) to \(M\), so \(L/M\) is purely inseparable. If \(L/M\) is not of finite exponent, then there exists \(d\in L\) such that \(e[d:M]>r\), contradicting the fact that \(x^{p^{r}}\) maps \(L\) to \(M\). Thus, a nonconstant polynomial \(f\in M[x]\) cannot map \(L\) into \(M\).
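As an illustration in characteristic \(0\): for \(L/M=\mathbb{C}/\mathbb{R}\) (a separable extension), a nonconstant \(f\in\mathbb{R}[x]\) is an open map on \(\mathbb{C}\), so \(f^{-1}(\mathbb{C}\setminus\mathbb{R})\) is a nonempty open, hence infinite, subset of \(\mathbb{C}\); thus \(f(d)\in\mathbb{R}\) fails for infinitely many \(d\in\mathbb{C}\), as the lemma predicts.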
**Lemma 6.2**.: _Let \(L/M\) be a field extension that is not purely inseparable of finite exponent. Additionally, suppose that \(L\) is an infinite field. Then there does not exist a nonconstant rational function \(\varphi\in M(x)\) such that \(\varphi(d)\in M\) for all but finitely many \(d\in L\)._
Proof.: First, we will handle the case when \(M=\{d_{1},\ldots,d_{k}\}\) is finite. If \(\varphi\in M(x)\) such that \(\varphi(L)\subseteq M\), then \((\varphi-d_{1})\cdots(\varphi-d_{k})\) evaluates to \(0\) at all but finitely many elements of \(L\), which is infinite. Therefore, \((\varphi-d_{1})\cdots(\varphi-d_{k})=0\), forcing \(\varphi\) to be constant.
Now we assume that \(M\) is infinite from here on. Suppose there exists a nonconstant \(\varphi\in M(x)\) such that \(\varphi(d)\in M\) for all but finitely many \(d\in L\). Write \(\varphi=\frac{f}{g}\) with \(f,g\in M[x]\) coprime. If there exists \(d\in L\) that is transcendental over \(M\), then \(\varphi(d)=\frac{f(d)}{g(d)}=c\) for some \(c\in M\). Thus, \(f(d)-cg(d)=0\), so \(f(x)-cg(x)\) must be identically zero since otherwise \(d\) would be algebraic over \(M\). But this means that \(\frac{f(x)}{g(x)}=c\), contradicting the fact that \(\varphi\) is nonconstant. Thus, \(L/M\) is an algebraic field extension.
Let \(\alpha\in L\setminus M\) be an element algebraic over \(M\). Let \(n\) be the degree of \(\alpha\) over \(M\). Set \(\psi(x_{0},x_{1},\ldots,x_{n-1})\coloneqq\varphi(x_{0}+x_{1}\alpha+x_{2}\alpha^{2}+\cdots+x_{n-1}\alpha^{n-1})=\frac{f(x_{0}+x_{1}\alpha+x_{2}\alpha^{2}+\cdots+x_{n-1}\alpha^{n-1})}{g(x_{0}+x_{1}\alpha+x_{2}\alpha^{2}+\cdots+x_{n-1}\alpha^{n-1})}\), where
\(x_{0},\ldots,x_{n-1}\) are indeterminates, and write
\[f(x_{0}+x_{1}\alpha+x_{2}\alpha^{2}+\cdots+x_{n-1}\alpha^{n-1})=f_{0}+f_{1}\alpha +f_{2}\alpha^{2}+\cdots+f_{n-1}\alpha^{n-1}\]
and
\[g(x_{0}+x_{1}\alpha+x_{2}\alpha^{2}+\cdots+x_{n-1}\alpha^{n-1})=g_{0}+g_{1} \alpha+g_{2}\alpha^{2}+\cdots+g_{n-1}\alpha^{n-1},\]
where \(f_{i},g_{i}\in M[x_{0},x_{1},\ldots,x_{n-1}]\) since \(1,\alpha,\alpha^{2},\ldots,\alpha^{n-1}\) are linearly independent over \(M\), so \(1,\alpha,\alpha^{2},\ldots,\alpha^{n-1}\) are linearly independent over \(M(x_{0},x_{1},\ldots,x_{n-1})\).
Now we evaluate \(\psi\) at \(\mathbf{a}=(a_{0},\ldots,a_{n-1})\), where \(a_{0},\ldots,a_{n-1}\in M\). We get that
\[\frac{f_{0}(\mathbf{a})+f_{1}(\mathbf{a})\alpha+\cdots+f_{n-1}(\mathbf{a}) \alpha^{n-1}}{g_{0}(\mathbf{a})+g_{1}(\mathbf{a})\alpha+\cdots+g_{n-1}( \mathbf{a})\alpha^{n-1}}=\psi(\mathbf{a})=\varphi(a_{0}+a_{1}\alpha+\cdots+a _{n-1}\alpha^{n-1})\in M.\]
Because \(\psi(\mathbf{a})\) is in \(M\) and \(1,\alpha,\ldots,\alpha^{n-1}\) are linearly independent over \(M\), for each \(i=0,1,\ldots,n-1\), we have \(f_{i}(\mathbf{a})=\psi(\mathbf{a})g_{i}(\mathbf{a})\). Then \(f_{i}-\psi g_{i}\) is a rational function that evaluates to \(0\) for all but finitely many \(\mathbf{a}\in M^{n}\), which means that \(f_{i}-\psi g_{i}=0\). Rearranging yields \(f_{i}=\psi g_{i}\) for \(i=0,1,\ldots,n-1\).
We claim that \(g_{0}\) is not identically zero. Suppose on the contrary that \(g_{0}=0\). Then \(g(x)=g_{0}(x,0,\ldots,0)+g_{1}(x,0,\ldots,0)\alpha+\cdots+g_{n-1}(x,0,\ldots, 0)\alpha^{n-1}\). Since \(g(x)\in M[x]\), we have that \(g_{1}(x,0,\ldots,0)=\cdots=g_{n-1}(x,0,\ldots,0)=0\). However, this means that \(g(x)\) is identically zero, which is impossible.
Now we have
\[\psi(x_{0},x_{1},\ldots,x_{n-1})=\varphi(x_{0}+x_{1}\alpha+x_{2}\alpha^{2}+ \cdots+x_{n-1}\alpha^{n-1})=\frac{f_{0}}{g_{0}}.\]
Note that this implies that \(\frac{f_{0}}{g_{0}}(d,0,\ldots,0)\) is defined for all but finitely many \(d\in M\). Thus,
\[\frac{f_{0}+f_{1}\alpha+f_{2}\alpha^{2}+\cdots+f_{n-1}\alpha^{n-1}}{g_{0}+g_{1 }\alpha+g_{2}\alpha^{2}+\cdots+g_{n-1}\alpha^{n-1}}=\frac{f_{0}}{g_{0}}.\]
Cross-multiplying and subtracting yields
\[(f_{1}g_{0}-f_{0}g_{1})\alpha+\cdots+(f_{n-1}g_{0}-f_{0}g_{n-1})\alpha^{n-1}=0.\]
Suppose for a contradiction that there exists \(i\in\{1,2,\ldots,n-1\}\) such that \(g_{i}\neq 0\). Then \(f_{i}g_{0}-f_{0}g_{i}=0\) implies that \(\frac{f_{0}}{g_{0}}=\frac{f_{i}}{g_{i}}\). Thus, for all but finitely many \(d\in M\), we have \(\varphi(d)=\frac{f_{0}}{g_{0}}(d,0,\ldots,0)=\frac{f_{i}}{g_{i}}(d,0,\ldots,0)=0\). The last equality is due to the fact that \(f(x)\in M[x]\) implies \(f_{j}(x,0,\ldots,0)=0\) for all \(j\in\{1,2,\ldots,n-1\}\). This forces \(\varphi=0\), but we assumed that \(\varphi\) is nonconstant, so this is a contradiction. Therefore, \(g_{1}=\cdots=g_{n-1}=0\). This means \(g(x_{0}+x_{1}\alpha+x_{2}\alpha^{2}+\cdots+x_{n-1}\alpha^{n-1})=g_{0}(x_{0},\ldots,x_{n-1})\), and thus \(g\) is a polynomial with coefficients in \(M\) such that \(g(d)\in M\) for all but finitely many \(d\in M[\alpha]\).
If \(L/M\) is not purely inseparable, we can choose \(\alpha\in L\) to be separable over \(M\), so by Lemma 6.1 the polynomial \(g\) is constant. If \(L/M\) is purely inseparable of infinite exponent, then \(g\) must be constant as well, since otherwise \(\deg g>0\) implies that \(\deg g\geq p^{e[\alpha:M]}\) for each \(\alpha\in L\), which is unbounded. In both cases, \(\varphi\) is a polynomial in \(M[x]\) such that \(\varphi(d)\in M\) for all but finitely many \(d\in L\), but applying Lemma 6.1 again shows that \(\varphi\) must be constant, a contradiction.
Our goal now is to upgrade the previous lemma to the nonexistence of such a rational function in \(L(x)\).
**Proposition 6.3**.: _[_10_, Proposition X.1.4]_ _Let \(D\) be a domain with field of fractions \(K\), \(E\) be an infinite subset of \(K\), and \(L\) be a field extension of \(K\). If \(\varphi\in L(x)\) is such that \(\varphi(E)\subseteq D\), then, in fact, \(\varphi\in K(x)\)._
The following is a stronger version of Lemma 6.2. When \(L/M\) is a field extension that is not purely inseparable with \(L\) being infinite, not only does there not exist a nonconstant rational function \(\varphi\in M(x)\) such that \(\varphi(d)\in M\) for all but finitely many \(d\in L\), but there does not exist such a rational function in \(L(x)\) either.
**Lemma 6.4**.: _Let \(L/M\) be a field extension that is not purely inseparable of finite exponent. Additionally, suppose that \(L\) is an infinite field. Then there does not exist a nonconstant rational function \(\varphi\in L(x)\) such that \(\varphi(d)\in M\) for all but finitely many \(d\in L\)._
Proof.: Proceeding with proof by contradiction, let \(\varphi\in L(x)\) be a nonconstant rational function such that \(\varphi(x)\in M\) for all but finitely many \(x\in L\).
First, we will handle the case when \(M=\{d_{1},\ldots,d_{k}\}\) is finite. If \(\varphi\in L(x)\) is such that \(\varphi(d)\in M\) for all but finitely many \(d\in L\), then \((\varphi-d_{1})\cdots(\varphi-d_{k})\) evaluates to \(0\) at all but finitely many elements of \(L\), which is infinite. Therefore, \((\varphi-d_{1})\cdots(\varphi-d_{k})=0\), forcing \(\varphi\) to be constant.
Now we assume that \(M\) is infinite. Following the notation of Proposition 6.3, we take \(D=K=M\) (so that the field of fractions of \(D\) is \(M\) itself) and keep \(L\) as our extension field. Let \(E\) be the set of elements \(d\in M\) such that \(\varphi(d)\in M\); since \(M\) is infinite and there are only finitely many exceptions, \(E\) is infinite. Then \(\varphi(E)\subseteq M\), so Proposition 6.3 gives \(\varphi\in M(x)\). However, \(\varphi\in M(x)\) is then a nonconstant rational function such that \(\varphi(d)\in M\) for all but finitely many \(d\in L\), contradicting Lemma 6.2.
|
2302.04276
|
The Simons Observatory: pipeline comparison and validation for
large-scale B-modes
|
The upcoming Simons Observatory Small Aperture Telescopes aim at achieving a
constraint on the primordial tensor-to-scalar ratio $r$ at the level of
$\sigma(r=0)\lesssim0.003$, observing the polarized CMB in the presence of
partial sky coverage, cosmic variance, inhomogeneous non-white noise, and
Galactic foregrounds. We present three different analysis pipelines able to
constrain $r$ given the latest available instrument performance, and compare
their predictions on a set of sky simulations that allow us to explore a number
of Galactic foreground models and elements of instrumental noise, relevant for
the Simons Observatory. The three pipelines employ different combinations of
parametric and non-parametric component separation at the map and power
spectrum levels, and use B-mode purification to estimate the CMB B-mode power
spectrum. We applied them to a common set of simulated realistic frequency
maps, and compared and validated them with focus on their ability to extract
robust constraints on the tensor-to-scalar ratio $r$. We evaluated their
performance in terms of bias and statistical uncertainty on this parameter. In
most of the scenarios the three methodologies achieve similar performance.
Nevertheless, several simulations with complex foreground signals lead to a
$>2\sigma$ bias on $r$ if analyzed with the default versions of these
pipelines, highlighting the need for more sophisticated pipeline components
that marginalize over foreground residuals. We show two such extensions, using
power-spectrum-based and map-based methods, that are able to fully reduce the
bias on $r$ below the statistical uncertainties in all foreground models
explored, at a moderate cost in terms of $\sigma(r)$.
|
K. Wolz, S. Azzoni, C. Hervias-Caimapo, J. Errard, N. Krachmalnicoff, D. Alonso, C. Baccigalupi, A. Baleato Lizancos, M. L. Brown, E. Calabrese, J. Chluba, J. Dunkley, G. Fabbian, N. Galitzki, B. Jost, M. Morshed, F. Nati
|
2023-02-08T19:00:00Z
|
http://arxiv.org/abs/2302.04276v2
|
# The Simons Observatory: pipeline comparison and validation for large-scale \(B\)-modes
###### Abstract
Context:The upcoming Simons Observatory Small Aperture Telescopes aim at achieving a constraint on the primordial tensor-to-scalar ratio \(r\) at the level of \(\sigma(r=0)\lesssim 0.003\), observing the polarized CMB in the presence of partial sky coverage, cosmic variance, inhomogeneous non-white noise, and Galactic foregrounds.
Aims:We present three different analysis pipelines able to constrain \(r\) given the latest available instrument performance, and compare their predictions on a set of sky simulations that allow us to explore a number of Galactic foreground models and elements of instrumental noise, relevant for the Simons Observatory.
Methods:The three pipelines use different combinations of parametric and non-parametric component separation at the map and power spectrum levels, and employ \(B\)-mode purification to estimate the CMB \(B\)-mode power spectrum. They are tested and compared regarding their capability to analyze a common set of simulated realistic frequency maps, and to extract constraints on the tensor-to-scalar ratio \(r\). Their performance is evaluated in terms of bias and statistical uncertainty on this parameter.
Results:In most of the scenarios the three methodologies achieve similar performance. Nevertheless, several simulations with complex foreground signals lead to a \(>2\sigma\) bias on \(r\) if analyzed with the default versions of these pipelines, highlighting the need for more sophisticated pipeline components that marginalize over foreground residuals. We show two such extensions, using power-spectrum-based and map-based methods, that are able to fully reduce the bias on \(r\) below the statistical uncertainties in all foreground models explored, at a moderate cost in terms of \(\sigma(r)\).
## 1 Introduction
One of the next frontiers in cosmological science using the Cosmic Microwave Background (CMB) is the observation of large-scale \(B\)-mode polarization, and the consequential potential detection of primordial gravitational waves. Such a detection would let us glance into the early Universe and its very high energy physics, at scales unattainable by any other experiment. Primordial tensor perturbations, which would constitute a stochastic background of primordial gravitational waves, would source a parity-odd \(B\)-mode component in the polarization of the CMB (Kamionkowski et al. 1997; Seljak 1997; Seljak & Zaldarriaga 1997; Zaldarriaga & Seljak 1997). The ratio between the amplitudes of the primordial power spectrum of these tensor perturbations
and the primordial spectrum of the scalar perturbations is referred to as the tensor-to-scalar ratio \(r\). This ratio covers a broad class of models of the early Universe, allowing us to test and discriminate between models that predict a wide range of values of \(r\). These include vanishingly small values, as resulting from models of quantum gravity (e.g. Ijjas & Steinhardt, 2018, 2019), as well as those expected to soon enter the detectable range, predicted by models of inflation (Starobinskii, 1979; Abbott & Wise, 1984; Martin et al., 2014, 2014, 2015). An unequivocal measurement of \(r\), or a stringent upper bound, would thus greatly constrain the landscape of theories on the early Universe.
Although there is no evidence of primordial \(B\)-modes yet, current CMB experiments have placed stringent constraints on their amplitude, finding \(r<0.036\) at 95% confidence (BICEP/Keck Collaboration, 2021) when evaluated at a pivot scale of 0.05 Mpc\({}^{-1}\). At the same time, these experiments have firmly established that the power spectrum of primordial scalar perturbations is not exactly scale-independent, with the scalar spectral index \(n_{s}-1\sim-0.03\) (e.g. Planck Collaboration VI, 2020). Given this measurement, several classes of inflationary models predict \(r\) to be in the \(\sim 10^{-3}\) range (see Kamionkowski & Kovetz, 2016, and references therein).
Even though the only source of primordial large-scale \(B\)-modes at linear order are tensor fluctuations, in practice a measurement is complicated by several factors: first, the gravitational deflection of the background CMB photons by the cosmic large-scale structure creates coherent sub-degree distortions in the CMB, known as CMB lensing (Lewis & Challinor, 2006). Through this mechanism, the nonlinear scalar perturbations from the late Universe transform a fraction of the parity-even \(E\)-modes into \(B\)-modes at intermediate and small scales (Zaldarriaga & Seljak, 1998). Second, diffuse Galactic foregrounds have significant polarized emission, and in particular foreground components such as synchrotron radiation and thermal emission from dust produce \(B\)-modes with a significant amplitude. Component separation methods, which exploit the different spectral energy distributions (SED) of the CMB and foregrounds to separate the different components, are thus of vital importance (Delabrouille & Cardoso, 2007; Leach et al., 2008). Practical implementations of these methods must also be able to carry out this separation in the presence of instrumental noise and systematic effects (e.g. Natoli et al., 2018; Abitbol et al., 2021).
Polarized Galactic foregrounds pose a formidable obstacle when attempting to measure primordial \(B\)-modes at the level of \(r\sim 10^{-3}\). Current measurements of Galactic emission demonstrate that the Galactic \(B\)-mode signal is dominant over the cosmological signal on the relevant scales (Planck Collaboration X, 2016; Planck Collaboration Int. XXX, 2016; Planck Collaboration IV, 2020; Planck Collaboration XI, 2020). At the minimum of polarized Galactic thermal dust and synchrotron, around 80 GHz, their \(B\)-mode signal represents an effective tensor-to-scalar ratio with amplitude larger than the target CMB signal, even in the cleanest regions of the sky (Krachmalnicoff et al., 2016). Component separation methods are able to clean most of this, but small residuals left after the cleaning could be comparable to the primordial \(B\)-mode signal we want to measure. In recent years, many works have analyzed this problem and made forecasts of how well we could potentially measure \(r\) with different ground-based and satellite experiments (e.g. Betoule et al., 2009; Bonaldi & Ricciardi, 2011; Katayama & Komatsu, 2011; Armitage-Caplan et al., 2012; Errard & Stompor, 2012; Remazeilles et al., 2016; Stompor et al., 2016; Errard et al., 2016; Hervias-Caimapo et al., 2017; Alonso et al., 2017; Remazeilles et al., 2018, 2018; Errard & Stompor, 2019; Thorne et al., 2019; Remazeilles et al., 2021; Azzoni et al., 2021; Hervias-Caimapo et al., 2022; CMB-S4 Collaboration, 2022; Vacher et al., 2022; LiteBIRD Collaboration, 2022). In general, these works have highlighted how, if left untreated, systematic residuals arising from a simplistic characterization of foregrounds will bias an \(r\sim 10^{-3}\) measurement by several \(\sigma\). Thus, it is of vital importance to model the required foreground complexity when cleaning the multi-frequency CMB observations and to keep a tight control of systematics without introduction of bias.
Multiple upcoming CMB experiments rank the detection of large-scale primordial \(B\)-modes among their primary science targets. Near-future experiments such as the BICEP Array (Hui et al., 2018) target a detection at the level of \(r\sim 0.01\), while in the following decade, next-generation projects such as LiteBIRD (Hazumi et al., 2019) and CMB-S4 (Abazajian et al., 2016) aim at \(r\sim 0.001\).
The Simons Observatory (SO), like the BICEP Array, targets the detection of primordial gravitational waves at the level of \(r\sim 0.01\) (see "The Simons Observatory: science goals and forecasts", SO Collaboration, 2019), and its performance at realizing this goal is the main focus of this paper. SO is a ground-based experiment, located at the Cerro Toco site in the Chilean Atacama desert, which will observe the microwave sky in six frequency channels, from 27 to 280 GHz, due to start in 2023. SO consists of two main instruments. On the one hand, a Large Aperture Telescope (LAT) with a 6m diameter aperture will target small-scale CMB physics, secondary anisotropies and the CMB lensing signal. Measurements of the latter will serve to subtract lensing-induced \(B\)-modes from the CMB signal to retrieve primordial \(B\)-modes (using a technique known as "delensing", see Namikawa et al., 2022). On the other hand, multiple Small Aperture Telescopes (SATs) with 0.5m diameter apertures will make large-scale, deep observations of \(\sim 10\%\) of the sky, with the main aim of constraining the primordial \(B\)-mode signal, peaking on scales \(\ell\sim 80\) (the so-called "recombination bump"). See SO Collaboration (2019) for an extended discussion on experimental capabilities.
In this paper, we aim at validating three independent \(B\)-mode analysis pipelines. We compare their performance regarding a potential \(r\) measurement by the SO SATs, and evaluate the capability of the survey to constrain \(\sigma(r=0)\leq 0.003\) in the presence of foreground contamination and instrumental noise. To that end, we produce sky simulations encompassing different levels of foreground complexity, CMB with different values of \(r\) and different amounts of residual lensing contamination, and various levels of the latest available instrumental noise\({}^{1}\), calculated from the parametric models presented in SO Collaboration (2019).
Footnote 1: We note that the pipelines are still agnostic to some aspects of the instrumental noise such as filtering, which may impact the overall forecasted scientific performance. We anticipate studying these in detail in future work.
We feed these simulations through the analysis pipelines and test their performance, quantifying the bias and statistical uncertainty on \(r\) as a function of foreground and noise
complexity. The three pipelines are described in detail in Section 2. Section 3 presents the simulations used in the analysis, including the models used to produce CMB and foreground sky maps, as well as instrumental noise. In Section 4, we present our forecasts for \(r\), accompanied by the power spectrum products and a channel weights comparison. Section 5 shows preliminary results on a set of new, complex foreground simulations. In Section 6 we summarize and draw our conclusions. Finally, Appendix A summarizes the \(\chi^{2}\) analysis performed on the Cross-\(C_{\ell}\) cleaning pipeline, while Appendix B discusses biases on Gaussian simulations observed with the NILC cleaning pipeline.
## 2 Methods, pipelines
In this section we present our three component separation pipelines, which adopt complementary approaches widely used in the literature: power-spectrum-based parametric cleaning (BICEP2 Collaboration and Keck Array Collaboration, 2016, 2018), Needlet Internal Linear Combination (NILC) blind cleaning (Delabrouille et al., 2009; Basak and Delabrouille, 2012, 2013), and map-based parametric cleaning (Poletti and Errard, 2023). In the following, these are denoted pipelines A, B and C, respectively. The cleaning algorithms operate on different data spaces (harmonic, needlet and pixel space) and vary in their cleaning strategy (parametric, meaning that we assume an explicit model for the frequency spectrum of the foreground components, or blind, meaning that we do not model the foregrounds or make any assumptions on how their frequency spectrum should be). Hence, they do not share the same set of method-induced systematic errors. This will serve as an important argument in favor of claiming robustness of our inference results.
Table 1 lists the three pipelines and their main properties. Although there are some similarities between these analysis pipelines and the forecasting frameworks that were exploited in SO Collaboration (2019), the tools developed for this paper are novel implementations designed to deal with realistic SO data-like inputs, including complex noise and more exotic foreground simulations compared to what was considered in the previous work. We stress again that no filtering or other systematic effects were included in the noise maps.
### Pipeline A: Cross-\(C_{\ell}\) cleaning
Pipeline A is based on a multi-frequency power-spectrum-based component separation method, similar to that used in the latest analysis carried out by the BICEP/_Keck_ collaboration (BICEP2 Collaboration and Keck Array Collaboration, 2016, 2018). The data vector is the full set of cross-power spectra between all frequency maps, \(C_{\ell}^{\nu\nu^{\prime}}\). The likelihood compares this against a theoretical prediction that propagates the map-level sky and instrument model onto the corresponding power spectra. The full pipeline is publicly available\({}^{2}\), and a schematic overview is provided in Figure 1.
Footnote 2: See github.com/simonsobs/BBPower.
In step 1, power spectra are measured using a pseudo-\(C_{\ell}\) approach with \(B\)-mode purification as implemented in NaMaster(Alonso et al., 2019). As described in Smith and Zaldarriaga (2007) the presence of a sky mask leads to the presence of ambiguous modes contaminated by full-sky \(E\)-modes. These must be removed at the map level to avoid the contribution to the power spectrum uncertainties from the leaked \(E\)-modes. The mask used for this analysis traces the hits count map released in SO Collaboration (2019) (see Figure 2), and its edges are apodized using a C1-type kernel (see Grain et al., 2009) with an apodization scale of 10 degrees, yielding an effective sky coverage of \(f_{\rm sky}\sim 10\%\). Each power spectrum is calculated in bandpower windows with constant bin width \(\Delta\ell=10\), of which we only keep the range \(30\leq\ell\leq 300\). Our assumption is that on real data, larger scales are contaminated by atmospheric noise and filtering, whereas smaller scales, targeted by the SO-LAT and useful for constraining lensing \(B\)-modes, do not contain any significant primordial \(B\)-mode contribution. To avoid a significant bias in the auto-correlations when removing the impact of instrumental noise, a precise noise model is required that may not be available in practice. We address this issue by using data splits, which in the case of real data may be formed by subdividing data among observation periods, sets of detectors, sky patches or by other means, while in this paper, we resort to simulations.
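For concreteness, step 1 can be sketched with NaMaster's Python wrapper pymaster as follows. This is a simplified illustration rather than the released BBPower code: the binary mask and the per-split \(Q/U\) maps (`mask_binary`, `map_q`, `map_u`) are placeholders, and we assume the current pymaster interface (`mask_apodization`, `NmtField`, `NmtBin`, `compute_full_master`).

```python
import numpy as np
import pymaster as nmt

nside = 512

# C1-type apodization with a 10-degree scale, as described in the text.
mask_apo = nmt.mask_apodization(mask_binary, 10.0, apotype="C1")

# Spin-2 field with B-mode purification; map_q, map_u are one split map.
field = nmt.NmtField(mask_apo, [map_q, map_u], purify_b=True)

# Constant bandpowers of width Delta_ell = 10.
bins = nmt.NmtBin.from_nside_linear(nside, nlb=10)

# Mode-coupling-corrected spectra, ordered (EE, EB, BE, BB) for spin-2 x spin-2.
cl_bb = nmt.compute_full_master(field, field, bins)[3]

# Keep only the bandpowers within 30 <= ell <= 300.
ell_eff = bins.get_effective_ells()
cl_bb = cl_bb[(ell_eff >= 30) & (ell_eff <= 300)]
```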
We construct simulated observations for each sky realization comprising \(S=4\) independent splits with the same sky but different noise realizations (each with a commensurately larger noise amplitude). We compute \(BB\) power spectra from pairs of maps, each associated with a given data split and a given frequency channel. For any fixed channel pair combination, we average over the corresponding set of \(S(S-1)/2=6\) power spectra with unequal split pairings. For \(N=6\) SAT frequency channels, this results in a collection of \(N(N+1)/2=21\) noise-debiased multi-frequency power spectra, shown in Figure 3. Note that we could have modeled and subtracted the noise bias explicitly, since we have full control over the noise properties in our simulations. In realistic settings, however, the accuracy of an assumed noise model may be limited. While inaccurate noise modeling would affect the statistical uncertainty \(\sigma(r)\) through the covariance matrix calculated from simulations, the cross-split approach ensures robustness of the inferred value of \(r\) against noise-induced bias.
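Schematically, the noise debiasing by split cross-correlation amounts to the following average, where `cl` is a placeholder array of \(BB\) bandpowers indexed by split pair and channel pair:

```python
import itertools
import numpy as np

S, N = 4, 6  # data splits, frequency channels
# cl: placeholder array of shape (S, S, N, N, n_bandpowers).

# S*(S-1)/2 = 6 unequal split pairings: independent noise never multiplies
# itself, so the noise bias drops out of the average.
pairs = list(itertools.combinations(range(S), 2))

cl_debiased = np.zeros((N, N, cl.shape[-1]))
for nu1 in range(N):
    for nu2 in range(nu1, N):  # N*(N+1)/2 = 21 channel combinations
        cl_debiased[nu1, nu2] = np.mean(
            [cl[s1, s2, nu1, nu2] for s1, s2 in pairs], axis=0)
```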
In step 2, we estimate the bandpower covariance matrix from simulations, assuming no correlations between different multipole windows. Note that, to include foreground signal variance in the budget, Gaussian foreground simulations (see Section 3) are considered for the covariance computation step, since our realistic foreground templates cannot be used as statistical samples. As we show in Appendix A, this covariance matrix is indeed appropriate, as it leads to the theoretically expected empirical distribution of the \(\hat{\chi}_{\rm min}^{2}\) statistic not only in the case of Gaussian foregrounds, but also for the non-Gaussian foreground simulations. Inaccurate covariance estimates would make this statistic peak at higher or lower values, which we do not observe.
Step 3 is the parameter inference stage. We use a Gaussian likelihood when comparing the multi-frequency power spectra with their theoretical prediction. Note that, in general, the power spectrum likelihood is non-Gaussian, and Hamimeche and Lewis (2008) provide an approximate likelihood that is able to account for this non-Gaussianity. We explicitly verified that both likelihoods lead to equivalent parameter constraints, and thus choose the simpler Gaussian option. The validity of the Gaussian approximation
is a consequence of the central limit theorem, since each measured bandpower consists of effectively averaging over \(N_{\text{mode}}\simeq\Delta\ell\times f_{\text{sky}}\times(2\ell+1)>61\) independent squared modes on the scales used here. Note that this assumption is valid thanks to the relatively large SO-SAT sky patch and would not hold any longer for BICEP/Keck-like patch sizes. The default sky model is the same as that described in Abitbol et al. (2021). We model the angular power spectra of dust and synchrotron as power laws of the form \(D_{\ell}=A_{c}(\ell/\ell_{0})^{\alpha_{c}}\), with \(\ell_{0}=80\), and \(c=d\) or \(s\) for dust and synchrotron respectively. The dust SED is modeled as a modified black-body spectrum with spectral index \(\beta_{d}\) and temperature \(\Theta_{d}\), which we fix to \(\Theta_{d}=20\,\text{K}\). The synchrotron SED is modeled as a power law with spectral index \(\beta_{s}\). Finally, we consider a dust-synchrotron correlation parameter \(\epsilon_{ds}\). Including the tensor-to-scalar ratio \(r\) and a free lensing \(B\)-mode amplitude \(A_{\text{lens}}\), this fiducial model has 9 free parameters:
\[\{A_{\text{lens}},r,A_{d},\alpha_{d},\beta_{d},A_{s},\alpha_{s},\beta_{s}, \epsilon_{ds}\}. \tag{1}\]
We will refer to results using this model as "\(C_{\ell}\)-fiducial". Table 2 lists the priors on its 9 parameters.
The main drawback of power-spectrum-based pipelines, in their simplest incarnation, is their inability to account for spatial variation in the foreground spectra. If ignored, this spatial variation can lead to biases at the level of \(r\sim O(10^{-3})\), which are significant for the SO target. At the power spectrum level, spatially-varying SEDs give rise to frequency decorrelation, which can be included in the model. Here, we will show results for an extended model that uses the moment-based\({}^{3}\) parameterization of Azzoni et al. (2021) to describe the spatial variation of \(\beta_{d}\) and \(\beta_{s}\). The model introduces 4 new parameters
Footnote 3: See Tegmark (1998); Chluba et al. (2017); Vacher et al. (2023) for more details on the moment-expansion formalism in the context of CMB foregrounds, and Mangilli et al. (2021) as an alternative power-spectrum-based description.
\[\{B_{s},\gamma_{s},B_{d},\gamma_{d}\}\,, \tag{2}\]
where \(B_{c}\) parameterizes the amplitude of the spatial variations in the spectral index of component \(c\), and \(\gamma_{c}\) is their power spectrum slope (see Azzoni et al. (2021) for further details). We will refer to results using this method as "\(C_{\ell}\)-moments", or "A + moments". The priors in the shared parameter space are the same as in \(C_{\ell}\)-fiducial, and Table 2 lists the priors on its additional 4 parameters. For both methods, we sample posteriors using the emcee code (Foreman-Mackey et al., 2013). Note that we assume a top-hat prior on \(r\) in the range \([-0.1,\,0.1]\) instead of imposing \(r>0\). The reason is that we would like to remain sensitive to potential negative biases on \(r\). While negative \(r\) values do not make sense physically, they may result from e.g. volume effects caused by choosing specific priors on other parameters that we have marginalized over. Opening the prior on \(r\) to negative values allows us to monitor these unwanted effects, offering a simple robustness check. On real data, this will be replaced by a positivity prior \(r>0\), but only after ensuring that our specific prior choices on the other parameters do not bias \(r\), which is the focus of a future work.
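The structure of the inference step can be illustrated with a stripped-down sketch: a power-law foreground \(D_\ell\) model, a Gaussian likelihood over the stacked multi-frequency bandpowers, and the top-hat prior on \(r\) described above. Here `model_fn`, `data`, `icov` and `theta_start` are placeholders standing in for the full 9- or 13-parameter model prediction, the data vector, its inverse covariance and an initial guess.

```python
import numpy as np
import emcee

ELL0 = 80.0

def fg_dl(ell, amp, alpha):
    """Power-law foreground spectrum D_ell = A * (ell / ell0)^alpha."""
    return amp * (ell / ELL0) ** alpha

def log_prob(theta, data, model_fn, icov):
    r = theta[1]                    # assumed ordering: (A_lens, r, ...)
    if not -0.1 < r < 0.1:          # top-hat prior on r, open to r < 0
        return -np.inf
    resid = data - model_fn(theta)  # model_fn stacks all 21 cross-spectra
    return -0.5 * resid @ icov @ resid

ndim, nwalkers = 9, 32
p0 = theta_start + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(data, model_fn, icov))
sampler.run_mcmc(p0, 5000, progress=True)
```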
### Pipeline B: NILC cleaning
Our second pipeline is based on the blind Internal Linear Combination (ILC) method, which makes no assumptions about the foregrounds whatsoever, and instead only assumes that the observed data contains one signal of interest (the CMB)
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Pipeline & method & data space & blind / parametric & \(r\) inference step \\ \hline A & Cross-\(C_{\ell}\) cleaning & harmonic (power spectra) & parametric & multi-frequency \(C_{\ell}\) likelihood \\ B & NILC cleaning & needlets (maps) & blind & CMB-only \(C_{\ell}\) likelihood \\ C & map-based cleaning & pixels (maps) & parametric & CMB-only \(C_{\ell}\) likelihood \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of the component separation pipelines used to infer \(r\)
Figure 1: Schematic of pipeline A. Orange colors mark steps that are repeated 500 times, once for each simulation.
Figure 2: Apodized SAT hits map with effective \(f_{\text{sky}}=10\%\) used in this paper, shown in equatorial projection. Its edges are apodized using a C1-type kernel with an apodization scale of 10 degrees.
plus noise and contaminants (Bennett et al. 2003). The method assumes a simple model for the observed multi-frequency maps \(\mathbf{d}_{\nu}\) at \(N_{\nu}\) frequency channels (in either pixel or harmonic space)
\[\mathbf{d}_{\nu}=a_{\nu}\mathbf{s}+\mathbf{n}_{\nu}, \tag{3}\]
where \(a_{\nu}\) is the black-body spectrum of the CMB, \(\mathbf{s}\) is the amplitude of the true CMB signal and \(\mathbf{n}_{\nu}\) is the contamination in channel \(\nu\), which includes the foregrounds and instrumental noise. ILC exploits the difference between the black-body spectrum of the CMB and the SED(s) of other components that may be present in the data. The method aims at reconstructing a map of the CMB component \(\tilde{\mathbf{s}}\) as a linear combination of the data with a set of weights \(w_{\nu}\) allowed to vary across the map,
\[\tilde{\mathbf{s}}=\sum_{\nu}w_{\nu}\mathbf{d}_{\nu}=\mathbf{w}^{T}\hat{ \mathbf{d}}\,, \tag{4}\]
where both \(\mathbf{w}\) and \(\hat{\mathbf{d}}\) are \(N_{\nu}\times N_{\mathrm{pix}}\) matrices, with \(N_{\mathrm{pix}}\) being the number of pixels. Optimal weights are found by minimizing the variance of \(\tilde{\mathbf{s}}\). The result is given by:
\[\mathbf{w}^{T}=\frac{\mathbf{a}^{T}\hat{\mathbf{c}}^{-1}}{\mathbf{a}^{T}\hat {\mathbf{c}}^{-1}\mathbf{a}}, \tag{5}\]
where \(\mathbf{a}\) is the black-body spectrum of the CMB (i.e. a vector filled with ones if maps are in thermodynamic temperature units) and \(\hat{\mathbf{C}}=\langle\hat{\mathbf{d}}\,\hat{\mathbf{d}}^{T}\rangle\) is the frequency-frequency covariance matrix per pixel of the observed data. Note that
\begin{table}
\begin{tabular}{c c c c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{ model} & \multicolumn{6}{c}{\(C_{\ell}\)-fiducial and \(C_{\ell}\)-moments} & \multicolumn{6}{c}{\(C_{\ell}\)-moments only} \\ \hline parameter & \(A_{\mathrm{lens}}\) & \(r\) & \(A_{d}\) & \(\alpha_{d}\) & \(\beta_{d}\) & \(A_{s}\) & \(\alpha_{s}\) & \(\beta_{s}\) & \(\epsilon_{ds}\) & \(B_{s}\) & \(\gamma_{s}\) & \(B_{d}\) & \(\gamma_{d}\) \\ prior type & TH & TH & TH & TH & G & TH & TH & G & TH & TH & TH & TH & TH \\ center value & 1.0 & 0.0 & 25 & 0.0 & 1.54 & 2.0 & \(-1.0\) & \(-3.0\) & 0.0 & 0.0 & \(-4.0\) & 5.0 & \(-4.0\) \\ half width & 1.0 & 0.1 & 25 & 0.5 & 0.11 & 2.0 & 1.0 & 0.3 & 1.0 & 10.0 & 2.0 & 5.0 & 2.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameter priors for pipeline A, considering both the \(C_{\ell}\)-fiducial model and the \(C_{\ell}\)-moments model. Prior types are either Gaussian (G) or top-hat (TH), considered distributed symmetrically around the center value with half width meaning the standard deviation (Gaussian) or the half width (top-hat).
Figure 3: Power spectrum data analyzed by pipeline A, showing a single realization of CMB signal and Gaussian foregrounds. Blue shaded areas quantify the \(1\sigma\) Gaussian standard deviation calculated from simulations of CMB, noise and Gaussian foregrounds. Note that negative auto-spectra can occur at noise-dominated scales as a result of cross-correlating data splits.
we assume no correlation between pixels for the optimal weights.
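In code, Equation 5 is a small linear-algebra exercise per pixel. The sketch below (plain NumPy, with `cov` and `d` as placeholder arrays) computes the weights and the reconstructed CMB map:

```python
import numpy as np

def ilc_weights(cov):
    """ILC weights w = C^{-1} a / (a^T C^{-1} a), as in Equation 5.

    cov: placeholder (npix, nfreq, nfreq) frequency-frequency covariance;
    in NILC one such matrix is built per pixel and needlet scale.
    """
    nfreq = cov.shape[-1]
    a = np.ones(nfreq)            # black-body CMB SED in thermodynamic units
    cinv = np.linalg.inv(cov)     # (npix, nfreq, nfreq)
    cinv_a = cinv @ a             # C^{-1} a, shape (npix, nfreq)
    return cinv_a / (cinv_a @ a)[:, None]

# Reconstructed CMB (Equation 4): per-pixel weighted sum over channels,
# with d a placeholder (nfreq, npix) array of observed maps.
w = ilc_weights(cov)
cmb_map = np.einsum('pf,fp->p', w, d)
```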
In our particular implementation, we use the Needlet Internal Linear Combination method (NILC, Delabrouille et al., 2009; Basak & Delabrouille, 2012, 2013). NILC uses localization in pixel and harmonic space by finding different weights \(\mathbf{w}\) for a set of harmonic filters, called "needlet windows". These windows are defined in harmonic space \(h_{i}(\ell)\) for \(i=0,...,n_{\text{windows}}-1\) and must satisfy the constraint \(\sum_{i=0}^{n_{\text{windows}}-1}h_{i}(\ell)^{2}=1\) in order to preserve the power of the reconstructed CMB. We use \(n_{\text{windows}}=5\) needlet windows shown in Figure 4, and defined by
\[h_{i}(\ell)=\begin{cases}\cos(\frac{\pi}{2}(\ell_{i}^{\text{peak}}-\ell)/( \ell_{i}^{\text{peak}}-\ell_{i}^{\text{min}}))&\text{if }\ell_{i}^{\text{min}}\leq\ell<\ell_{i}^{\text{peak}}\\ 1&\text{if }\ell=\ell_{i}^{\text{peak}}\\ \cos(\frac{\pi}{2}(\ell-\ell_{i}^{\text{peak}})/(\ell_{i}^{\text{max}}-\ell_{ i}^{\text{peak}}))&\text{if }\ell_{i}^{\text{peak}}<\ell\leq\ell_{i}^{\text{max}}\end{cases}, \tag{6}\]
with \(\ell_{\text{min}}=\{\) 0, 0, 100, 200, 350\(\}\), \(\ell_{\text{max}}=\{\) 100, 200, 350, 500, 500\(\}\), and \(\ell_{\text{peak}}=\{\) 0, 100, 200, 350, 500\(\}\) for the corresponding 5 needlet windows. Even though we do not use the full 500-\(\ell\) range where windows are defined for the likelihood sampling, we still perform the component separation on all 5 windows up to multipoles beyond our upper limit of \(\ell=300\), in order to avoid edge effects on the smaller scales.
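The window definition in Equation 6 translates directly into code. The following sketch builds the five windows and verifies numerically that they satisfy the power-preservation constraint \(\sum_i h_i(\ell)^2=1\):

```python
import numpy as np

L_MIN  = [0,   0,   100, 200, 350]
L_PEAK = [0,   100, 200, 350, 500]
L_MAX  = [100, 200, 350, 500, 500]

def needlet_window(ell, lmin, lpeak, lmax):
    """Cosine needlet window of Equation 6, vectorized over multipoles."""
    h = np.zeros(ell.shape)
    rise = (ell >= lmin) & (ell < lpeak)
    h[rise] = np.cos(0.5 * np.pi * (lpeak - ell[rise]) / (lpeak - lmin))
    h[ell == lpeak] = 1.0
    fall = (ell > lpeak) & (ell <= lmax)
    h[fall] = np.cos(0.5 * np.pi * (ell[fall] - lpeak) / (lmax - lpeak))
    return h

ell = np.arange(501)
windows = np.array([needlet_window(ell, lo, pk, hi)
                    for lo, pk, hi in zip(L_MIN, L_PEAK, L_MAX)])

# Power preservation: sum_i h_i(ell)^2 = 1 at every multipole.
assert np.allclose((windows ** 2).sum(axis=0), 1.0)
```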
Let us now describe the NILC procedure as illustrated in Figure 5. In step 1, we perform our CMB reconstruction in the \(E\) and \(B\) fields instead of \(Q\) and \(U\). We transform the observed maps to \(a_{\ell m}^{X}\) with \(X\in\{E,B\}\). All frequency channels are then brought to a common beam resolution by rescaling the harmonic coefficients with an appropriate harmonic beam window function. The common beam we adopt is the one from the third frequency channel at 93 GHz, which corresponds to a FWHM of 30 arcmin. For each needlet window index \(i\), we multiply \(a_{\ell m}^{X}\) with \(h_{i}(\ell)\) as a harmonic filter. Since different frequency channels have different limiting resolutions, we do not use all channels in every needlet window: the first 2 windows use all 6 frequency channels, the third window does not use the 27 GHz channel and the last 2 needlet windows do not use the 27 and 39 GHz channels. The covariance matrix \(\hat{\mathsf{C}}\) has dimensions \(N_{\nu}\times N_{\nu}\times N_{\text{pix}}\). For each pixel \(p\), its corresponding \(N_{\nu}\times N_{\nu}\) elements are computed directly from the data, averaging over the pixels inside a given pixel domain \(\mathcal{D}(p,i)\) around each pixel. In practice, the element \(\nu,\nu^{\prime}\) of the covariance matrix is calculated by multiplying the two filtered maps at channels \(\nu\) and \(\nu^{\prime}\), then smoothing that map with a Gaussian kernel with FWHM equal to the size of the pixel domain \(\mathcal{D}(p,i)\). The FWHMs for the pixel domain size are 185, 72, 44, 31, and 39 degrees for each needlet window, respectively.\({}^{4}\)
Footnote 4: The domain sizes are estimated directly from the needlet window scale (see details in the appendix A of Delabrouille et al., 2009). The ILC bias can be minimized by enlarging the pixel domains to be big enough to include a higher number of modes. We choose the resulting ILC bias to not exceed 0.2%, for which we need pixel domain sizes large enough so that each needlet window contains at least 2500 modes.
We then proceed to calculate the weights \(\mathbf{w}^{T}\) (see Equation 5) for window \(i\), which is an array with shape (2, \(N_{\nu}\), \(N_{\text{pixels}}^{i}\)), with the first dimension corresponding to the \(E\) and \(B\) fields. Note that the number of pixels \(N_{\text{pixels}}^{i}\) is different for each needlet window, since we use different pixel resolutions depending on the smallest scale covered by the window. Finally, we apply Equation 4 to obtain an ILC-reconstructed CMB map for window \(i\). The final step is to filter this map in harmonic space for a second time with the \(h_{i}(\ell)\) window. The final reconstructed CMB map is the sum of these maps for all five needlet windows.
In step 2, the reconstructed CMB maps are compressed into power spectra using NaMaster, deconvolving to the common beam resolution. We use \(B\)-mode purification as implemented in the software and the mask shown in Figure 2. We estimate the noise bias \(N_{\ell}\) in the final map by computing the power spectrum of noise-only simulations processed with the needlet weights and windows obtained from the simulated data as described above. \(N_{\ell}\) is averaged over simulations and subtracted from the \(C_{\ell}\) of the reconstructed maps.
Finally, in step 3, we run a Markov Chain Monte Carlo (MCMC) over the reconstructed \(BB\) spectrum (we ignore \(EE\) and \(EB\)) with only two free parameters: the tensor-to-scalar ratio \(r\) and the amplitude of the \(BB\) lensing spectrum \(A_{\text{lens}}\), using the Python package emcee.
Figure 4: The five needlet windows used in pipeline B. The black dashed lines are the beam transfer functions \(b_{\ell}^{\nu}\) for the six SO-SAT frequency channels. The FWHM of the beams is listed in Table 3. The gray shaded areas mark the multipole ranges we do not use in the likelihood sampling to estimate \(r\).
Figure 5: Schematic of pipeline B. Orange colors mark steps that are repeated 500 times, once for each simulation.
Both parameters have a top-hat prior (between 0 and 2 for \(A_{\rm lens}\), and between \(-0.013\) and infinity for \(r\)). The covariance matrix is calculated directly over 500 simulations with the same setup but with Gaussian foregrounds. As likelihood, we use the same Gaussian likelihood used in pipeline A and restrict the inference to the multipole range \(30<\ell\leq 300\).
While the NILC implementation described above is blind, it can be extended to a semi-blind approach that introduces a certain level of foreground modeling. For example, constrained ILC (cILC, Remazeilles et al., 2011) explicitly nullifies one or more contaminant components in the observed data (such as thermal dust) by including their modeled SED in the variance minimization problem that calculates the weights in Equation 5. This foreground modeling can be further extended to include the moment expansion of the SED described in Section 2.1. This method, known as constrained moment ILC (cMILC, Remazeilles et al., 2021), has been shown to be effective for cleaning the large-scale \(B\)-mode contamination for space experiments such as LiteBIRD. While not used in this work, these extensions and others will be considered in future analyses with more complex foregrounds and systematics.
### Pipeline C: map-based cleaning
Our third pipeline is a map-based parametric pipeline built on the fgbuster code (Poletti and Errard, 2023). This approach is based on the following data model:
\[\mathbf{d}=\hat{\mathbf{A}}\mathbf{s}+\mathbf{n} \tag{7}\]
where \(\mathbf{d}\) is a vector containing the polarized frequency maps, \(\hat{\mathbf{A}}=\hat{\mathbf{A}}(\boldsymbol{\beta})\) is the so-called mixing matrix assumed to be parameterized by a set of spectral indices \(\boldsymbol{\beta}\), \(\mathbf{s}\) is a vector containing the \(Q\) and \(U\) amplitudes of the sky signals (CMB, foregrounds) and \(\mathbf{n}\) is the noise contained in each frequency map. Starting from the input observed data sets \(\mathbf{d}\), Figure 6 shows a schematic of the pipeline, which contains four steps.
Step 0 is the preprocessing of input simulations: for each simulation, we combine the simulated noise maps, the foreground and CMB maps and save them on disk. We create a new set of frequency maps, \(\bar{\mathbf{d}}\), smoothed with a common Gaussian kernel of \(100^{\prime}\) FWHM.
Step 1 is the actual component separation stage. We optimize the spectral likelihood defined in Stompor et al. (2009):
\[-2\log\left(\mathcal{L}_{\rm spec}(\boldsymbol{\beta})\right)=-\left(\hat{\mathbf{A}}^{T}\hat{\mathbf{N}}^{-1}\bar{\mathbf{d}}\right)^{T}\left(\hat{\mathbf{A}}^{T}\hat{\mathbf{N}}^{-1}\hat{\mathbf{A}}\right)^{-1}\left(\hat{\mathbf{A}}^{T}\hat{\mathbf{N}}^{-1}\bar{\mathbf{d}}\right) \tag{8}\]
which uses the common resolution frequency maps, \(\bar{\mathbf{d}}\), built during step 0. The right hand side of Equation 8 contains a sum over the observed sky pixels, assumed to have uncorrelated noise - the diagonal noise covariance matrix \(\hat{\mathbf{N}}\) is computed from 500 noise-only simulations. Although, in principle, \(\hat{\mathbf{N}}\) can be non-diagonal, we do not observe any significant bias of the spectral likelihood due to this approximation in this study. By minimizing Equation 8 we estimate the best-fit spectral indices \(\boldsymbol{\tilde{\beta}}\) and the corresponding mixing matrix \(\hat{\mathbf{A}}\equiv\hat{\mathbf{A}}(\boldsymbol{\tilde{\beta}})\). We also estimate the uncertainties on the recovered spectral indices as provided by the minimizer, a truncated-Newton algorithm (Nash, 1984) as implemented in scipy (Virtanen et al., 2020). Having thus obtained estimators of the foreground SEDs, we can recover the sky component maps with the generalized least-square equation
\[\tilde{\mathbf{s}}=\left(\hat{\mathbf{A}}^{T}\hat{\mathbf{N}}^{-1}\hat{ \mathbf{A}}\right)^{-1}\hat{\mathbf{A}}^{T}\hat{\mathbf{N}}^{-1}\mathbf{d} \equiv\hat{\mathbf{W}}\mathbf{d}\, \tag{9}\]
where \(\mathbf{d}\) is the input raw data, and not the common resolution maps. In steps 1 and 2, we have the possibility to use an inhomogeneous noise covariance matrix i.e. \(\hat{\mathbf{N}}=\hat{\mathbf{N}}(\hat{\mathbf{n}})\) and, although this is not exploited in this work, a spatially varying mixing matrix \(\boldsymbol{\beta}=\boldsymbol{\beta}(\hat{\mathbf{n}})\). For the latter, one can use the multi-patch or clustering methods implemented in fgbuster(Errard and Stompor, 2019; Puglisi et al., 2022).
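The core of steps 0-1 can be condensed into a few lines of NumPy/SciPy. This is a simplified sketch rather than the actual fgbuster implementation: `mixing_matrix` (a function returning \(\hat{\mathbf{A}}(\boldsymbol{\beta})\)), the common-resolution maps `d_bar`, the raw maps `d_raw`, the diagonal inverse noise `n_inv` and the starting values in `x0` are all placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def spectral_neg2logL(beta, d_bar, n_inv, mixing_matrix):
    """Equation 8 (up to an additive constant), summed over pixels.

    d_bar: (nfreq, npix) common-resolution maps; n_inv: (nfreq,) inverse
    noise variances (diagonal, pixel-uncorrelated); mixing_matrix(beta)
    returns the (nfreq, ncomp) matrix A(beta).
    """
    A = mixing_matrix(beta)
    At_N_d = np.einsum('fc,f,fp->cp', A, n_inv, d_bar)
    At_N_A = np.einsum('fc,f,fk->ck', A, n_inv, A)
    # Minimizing the negative quadratic form maximizes the likelihood.
    return -np.einsum('cp,ck,kp->', At_N_d, np.linalg.inv(At_N_A), At_N_d)

def gls_components(beta, d_raw, n_inv, mixing_matrix):
    """Equation 9: s = (A^T N^-1 A)^-1 A^T N^-1 d, applied to the raw maps."""
    A = mixing_matrix(beta)
    At_N_A = np.einsum('fc,f,fk->ck', A, n_inv, A)
    W = np.linalg.solve(At_N_A, (A * n_inv[:, None]).T)  # (ncomp, nfreq)
    return W @ d_raw                                     # (ncomp, npix)

# Truncated-Newton optimization of the spectral indices (beta_d, beta_s).
res = minimize(spectral_neg2logL, x0=np.array([1.54, -3.0]),
               args=(d_bar, n_inv, mixing_matrix), method='TNC')
s_hat = gls_components(res.x, d_raw, n_inv, mixing_matrix)
```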
Step 2 comprises the calculation of angular power spectra. The recovered CMB polarization map is transformed to harmonic space using NaMaster. We estimate an effective transfer function, \(\mathbf{B}_{\ell}^{\rm eff}=\hat{\mathbf{W}}\mathbf{B}_{\ell}\) associated with the reconstructed components \(\tilde{\mathbf{s}}\), from the channel-specific beams \(\mathbf{B}_{\ell}\). Correcting for the impact of this effective beam is vital to obtain an unbiased \(BB\) spectrum of the foreground-cleaned CMB, \(\tilde{C}_{\ell}^{\rm CMB}\). In the second step, we also estimate the noise bias from noise simulations, i.e.
\[\tilde{\mathbf{N}}_{\ell}=\frac{1}{N_{\rm sim}}\sum_{\rm sims}\sum_{m=-\ell}^{\ell}\frac{\tilde{\mathbf{n}}_{\ell,m}\tilde{\mathbf{n}}_{\ell,m}^{\dagger}}{2\ell+1}, \tag{10}\]
where \(\tilde{\mathbf{n}}=\hat{\mathbf{W}}\mathbf{n}^{\rm sim}\) is the noise in the recovered component-separated sky maps. We consider 500 simulations to estimate the noise bias.
Step 3 is the cosmological analysis stage. We model the angular power spectrum of the component-separated CMB map, including the noise contribution, as
\[C_{\ell}^{\rm CMB}(r,A_{\rm lens})\equiv C_{\ell}^{\rm prim}(r)+C_{\ell}^{\rm lens }(A_{\rm lens})+\tilde{N}_{\ell}^{\rm CMB} \tag{11}\]
and compare data and model with the cosmological likelihood
\[-2\log\mathcal{L}^{\rm cosmo}=\sum_{\ell}\left(2\ell+1\right)f_{\rm sky}\left(\frac{\tilde{C}_{\ell}^{\rm CMB}}{C_{\ell}^{\rm CMB}}+\log(C_{\ell}^{\rm CMB})\right). \tag{12}\]
Figure 6: Schematic of pipeline C. Orange colors indicate repetition for each simulation.
It is worth noting that this is only an approximation to the true map-level Gaussian likelihood which approximates the effective number of modes in each multipole after masking and purification as \(f_{\rm sky}(2\ell+1)\), thus neglecting any mode-coupling effects induced by the survey footprint. We grid the likelihood above along the two dimensions \(r\) and \(A_{\rm lens}\). For each simulation we then estimate the maximum-likelihood values and 68% credible intervals from the marginal distributions of \(r\) and \(A_{\rm lens}\). We verified that the distributions of recovered \(\{r,\,A_{\rm lens}\}\) across simulations are well described by a Gaussian, hence supporting the Gaussian likelihood in Equation 12.
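A minimal sketch of the gridded likelihood of Equation 12 follows. Here `cl_hat`, the unit-amplitude templates `cl_prim` (primordial \(BB\) for \(r=1\), which scales linearly with \(r\)) and `cl_lens` (lensing \(BB\) for \(A_{\rm lens}=1\)), and the noise bias `nl` are placeholder arrays evaluated at the multipoles `ell`.

```python
import numpy as np

FSKY = 0.1

def neg2_loglike(r, alens, cl_hat, cl_prim, cl_lens, nl, ell):
    """Equation 12 with C_ell^CMB(r, A_lens) = r*cl_prim + alens*cl_lens + nl."""
    cl_model = r * cl_prim + alens * cl_lens + nl
    return np.sum((2 * ell + 1) * FSKY * (cl_hat / cl_model + np.log(cl_model)))

# Grid over (r, A_lens) and marginalize to obtain the posterior on r.
r_grid = np.linspace(-0.01, 0.05, 200)
alens_grid = np.linspace(0.0, 2.0, 200)
chi2 = np.array([[neg2_loglike(r, a, cl_hat, cl_prim, cl_lens, nl, ell)
                  for a in alens_grid] for r in r_grid])
posterior = np.exp(-0.5 * (chi2 - chi2.min()))
p_r = posterior.sum(axis=1)        # marginal distribution of r
r_best = r_grid[np.argmax(p_r)]
```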
Pipeline C also offers the option to marginalize over a dust template. The recovered components in \(\tilde{\bf s}\), Equation 9, include the dust \(Q\) and \(U\) maps which are typically recovered with high signal-to-noise. In the same way that we compute \(\tilde{C}_{\ell}^{\rm CMB}\) in step 2, we compute the \(BB\) component of the recovered dust map, \(\tilde{C}_{\ell}^{\rm dust}\). We then update our cosmological likelihood, Equation 11, by adding a dust term:
\[C_{\ell}^{\rm CMB}=C_{\ell}^{\rm CMB}(r,\,A_{\rm lens})+A_{\rm dust}\tilde{C}_ {\ell}^{\rm dust}. \tag{13}\]
This is a similar approach to earlier methods (Errard and Stompor, 2019; LiteBIRD Collaboration, 2022). When choosing this approach, the inference of \(r\) during step 3 therefore involves the marginalization over both parameters \(A_{\rm lens}\) and \(A_{\rm dust}\). In principle one could add synchrotron or other terms in Equation 13 but we limit ourselves to dust as it turns out to be the largest contamination, and, in practice, marginalizing over it allows us to get unbiased estimates of cosmological parameters. In the remainder of this paper, we will refer to this method as "C + dust marginalization".
## 3 Description of input simulations
We build a set of dedicated simulations on which to test our data analysis pipelines and compare results. The simulated maps include cosmological CMB signal, Galactic foreground emission as well as instrumental noise.
### Instrumental specifications and noise
We simulate polarized Stokes \(Q\) and \(U\) sky maps as observed by the SO-SAT telescopes. All maps are simulated using the HEALPix pixelation scheme (Gorski et al., 2005) with resolution parameter \(N_{\rm side}=512\).
We model the SO-SAT noise power spectra as
\[N_{\ell}=N_{\rm white}\Big{[}1+\Big{(}\frac{\ell}{\ell_{\rm knee}}\Big{)}^{ \alpha_{\rm knee}}\Big{]}, \tag{14}\]
where \(N_{\rm white}\) is the white noise component while \(\ell_{\rm knee}\) and \(\alpha_{\rm knee}\) describe the contribution from \(1/f\) noise. Following SO Collaboration (2019) (hereinafter SO2019), we consider different possible scenarios: "baseline" and "goal" levels for the white noise component, and "pessimistic" and "optimistic" correlated noise. The values of \((N_{\rm white},\ell_{\rm knee},\alpha_{\rm knee})\) associated with the different cases are reported in Table 3. Note that the values of \(N_{\rm white}\) correspond to noise on a sky fraction of \(f_{\rm sky}=10\%\) and 5 years of observation time, as in SO2019. Differently from SO2019, we cite polarization noise levels corresponding to a uniform map coverage, accounting for the factor of \(\sim 1.3\) difference compared to Table 1 in SO2019. We simulate noise maps as Gaussian realizations of the \(N_{\ell}\) power spectra. In our main analysis, we use noise maps with pixel weights computed from the SO-SAT hits map (see Figure 2) and refer to this as "inhomogeneous noise". In Section 4.1, we briefly present results obtained from equally weighted noise pixels, which we refer to as "homogeneous noise". Otherwise, all results in this paper assume inhomogeneous noise. Note that, although inhomogeneous, the noise realizations used here lack some of the important anisotropic properties of realistic \(1/f\) noise, such as stripes due to the scanning strategy. Thus, together with the impact of other time-domain effects (e.g. filtering), we leave a more thorough study of the impact of instrumental noise properties for future work.
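As an illustration, a homogeneous Gaussian noise realization for one channel can be drawn as follows. The µK-arcmin to \(C_\ell\) conversion is the standard white-noise convention, the Table 3 values used here are the 93 GHz goal/optimistic ones, and the hits-map weighting for the inhomogeneous case would be applied afterwards.

```python
import numpy as np
import healpy as hp

NSIDE = 512
LMAX = 3 * NSIDE - 1

def noise_cl(n_white_uk_arcmin, l_knee, alpha_knee, lmax=LMAX):
    """Polarization noise spectrum of Equation 14, in uK^2 C_ell units."""
    n_white = (n_white_uk_arcmin * np.pi / (180.0 * 60.0)) ** 2  # uK-arcmin -> uK^2 sr
    nl = np.zeros(lmax + 1)
    ell = np.arange(1, lmax + 1)
    nl[1:] = n_white * (1.0 + (ell / l_knee) ** alpha_knee)
    return nl

# 93 GHz, goal white noise and optimistic 1/f (Table 3).
nl = noise_cl(2.5, 25, -2.5)

# Gaussian Q/U realization with this spectrum in both E and B
# (TT and TE set to zero).
zeros = np.zeros_like(nl)
noise_t, noise_q, noise_u = hp.synfast([zeros, nl, nl, zeros],
                                       NSIDE, pol=True, new=True)
```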
### CMB
We simulate the CMB signal as Gaussian random realizations following a power spectrum given by the _Planck_ 2018 best-fit \(\Lambda\)CDM parameters. Our baseline model does not include any primordial tensor signal (\(r=0\)) but incorporates lensing power in the \(BB\) spectra (\(A_{\rm lens}=1\)). We also consider two modifications of this model: (i) primordial tensor signal with \(r=0.01\), representing a \(\gtrsim 3\sigma\) target detection for SO with \(\sigma(r)=0.003\), as forecasted by SO2019; (ii) reduced lensing power with \(A_{\rm lens}=0.5\), corresponding to a 50% delensing efficiency, achievable for SO (Namikawa et al., 2022).
For all the different scenarios we simulate 500 realizations of the CMB signal, convolved with Gaussian beams for each frequency channel, with FWHMs as reported in Table 3.
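The per-channel CMB simulation step amounts to drawing one set of \(a_{\ell m}\) and convolving it with each channel's Gaussian beam. A healpy sketch is shown below, where `cl_theory` stands for _Planck_ 2018 best-fit \([TT,EE,BB,TE]\) spectra (e.g. from a Boltzmann code such as CAMB) and is a placeholder here.

```python
import numpy as np
import healpy as hp

NSIDE, LMAX = 512, 3 * 512 - 1
FWHM_ARCMIN = {27: 91, 39: 63, 93: 30, 145: 17, 225: 11, 280: 9}  # Table 3

# One Gaussian CMB realization, shared by all channels.
alm = hp.synalm(cl_theory, lmax=LMAX, new=True)   # (almT, almE, almB)

cmb_maps = {}
for nu, fwhm in FWHM_ARCMIN.items():
    # Gaussian beam transfer functions for T, E, B.
    bl = hp.gauss_beam(np.radians(fwhm / 60.0), lmax=LMAX, pol=True)
    alm_nu = [hp.almxfl(a.copy(), bl[:, i]) for i, a in enumerate(alm)]
    cmb_maps[nu] = hp.alm2map(alm_nu, NSIDE, pol=True)  # (T, Q, U) maps
```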
### Foregrounds
Thermal emission from Galactic dust grains and synchrotron radiation are known to be the two main contaminants to CMB observations in polarization at intermediate and large angular scales, therefore impacting measurements of the primordial \(BB\) signal. Many studies have been performed in the past years on the characterization of polarized Galactic foreground emission, thanks to the analysis of _WMAP_ and _Planck_ data, as well as low frequency surveys (Harper et al., 2022; Krachmalnicoff et al., 2018). However, many aspects of their emission remain unconstrained including, in particular, the characterization of their SEDs and their variation across the sky. For this reason, to properly assess the impact of foreground emission on component separation and \(r\) constraints we use four sets of models to simulate the sky emission. As specified in the following, we use the Python sky model (PySM) package (Thorne et al., 2017) to simulate polarized foreground components, with some additional modifications:
* **Gaussian foregrounds**: we simulate thermal dust emission and synchrotron radiation as Gaussian realizations of power law \(EE\) and \(BB\) power spectra. Although inaccurate, since foregrounds are highly non-Gaussian, this idealized model has been used to validate the different pipelines and to build approximate signal covariance matrices from 500 random realizations. In particular, we estimate the amplitudes of the foreground signal (evaluated
for \(D_{\ell}=\ell(\ell+1)C_{\ell}/2\pi\) at \(\ell=80\)) and the slope of their polarization angular power spectra from the synchrotron and thermal dust templates of PySM, evaluated in the SO-SAT sky patch, leading to the following values (\(d\): thermal dust at 353 GHz; \(s\): synchrotron at 23 GHz): \(A_{EE}^{d}=56\ \mu K_{\rm CMB}^{2}\), \(A_{BB}^{d}=28\ \mu K_{\rm CMB}^{2}\), \(\alpha_{EE}^{d}=-0.32\), \(\alpha_{BB}^{d}=-0.16\); \(A_{EE}^{s}=9\ \mu K_{\rm CMB}^{2}\), \(A_{BB}^{s}=1.6\ \mu K_{\rm CMB}^{2}\), \(\alpha_{EE}^{s}=-0.7\), \(\alpha_{BB}^{s}=-0.93\). The frequency scaling of the maps at the SO frequencies has been considered to be a modified black body for thermal dust emission, with fixed spectral parameter \(\beta_{d}=1.54\) and \(T_{d}=20\) K, and a power law for synchrotron with \(\beta_{s}=-3\) (in antenna temperature units).
* d0s0 **model**: in this case multi-frequency maps have been obtained using the d0s0 PySM model. This model includes templates for thermal dust emission coming from _Planck_ high frequency observations, and for synchrotron radiation from _WMAP_ 23 GHz maps. SEDs are considered to be uniform across the sky with the same values of the spectral parameters used for the Gaussian simulations.
* d1s1 **model**: this model uses the same foreground amplitude templates as d0s0, but with the inclusion of spatial variability for spectral parameters, as described in Thorne et al. (2017).
* dmsm **model**: this model represents a modification of the d1s1 spatial variation of spectral parameters. For thermal dust we smoothed the \(\beta_{d}\) and \(T_{d}\) templates at an angular resolution of 2 degrees, in order to down-weight the contribution of instrumental noise fluctuations in the original PySM maps. For synchrotron emission we modified the \(\beta_{s}\) PySM template in order to account for the additional information coming from the analysis of S-PASS data at 2.3 GHz (see Krachmalnicoff et al. (2018)). In particular, S-PASS data show that the synchrotron spectral index presents enhanced variations with respect to the PySM template. We therefore multiplied the fluctuations in the \(\beta_{s}\) map by a factor of 1.6 to account for these larger variations. Moreover, small-scale fluctuations (with a minimum angular resolution of 2 degrees) have been added as a Gaussian realization of a power-law power spectrum with slope \(-2.6\) (see Figure 11 in Krachmalnicoff et al. (2018)).
Note that this quartet of foreground models includes the model d1s1, used for large-scale \(B\)-mode forecasts in SO Collaboration (2019), and therefore represents an extension of the Galactic foreground scenarios probed in that past analysis. As for the CMB simulations, the multi-frequency foreground maps at the SO reference frequencies have been convolved with Gaussian beams, to reach the expected angular resolution. We assume delta-like frequency bandpasses in order to accelerate the production of these simulations, although all pipelines are able to handle finite bandpasses. Therefore this approximation should not impact the performance of any of the pipelines presented here.
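For the PySM-based models, the multi-frequency foreground maps can be generated along these lines (a sketch assuming the pysm3 interface; the preset strings cover d0s0 and d1s1, while the modified dmsm templates would have to be loaded as custom components):

```python
import pysm3
import pysm3.units as u

NSIDE = 512
FREQS_GHZ = [27, 39, 93, 145, 225, 280]

# "d1", "s1": dust and synchrotron with spatially varying spectral
# parameters; use ["d0", "s0"] for the uniform-SED d0s0 model.
sky = pysm3.Sky(nside=NSIDE, preset_strings=["d1", "s1"])

fg_maps = {}
for nu in FREQS_GHZ:
    m_rj = sky.get_emission(nu * u.GHz)          # (I, Q, U) in uK_RJ
    fg_maps[nu] = m_rj.to(u.uK_CMB,
                          equivalencies=u.cmb_equivalencies(nu * u.GHz))
```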
## 4 Results and discussion
Simulations were generated for 4 different noise models and 4 different foreground models (see Section 3), for a total of 16 different foreground-noise combinations. For the main analysis, we consider a fiducial CMB model with \(A_{\rm lens}=1\) (no delensing) and \(r=0\) (no primordial tensor fluctuations). In addition, we explored three departures from the fiducial CMB model, with input parameters (\(A_{\rm lens}=0.5\), \(r=0\)), (\(A_{\rm lens}=1\), \(r=0.01\)) and (\(A_{\rm lens}=0.5\), \(r=0.01\)). Here we report the results found for all these cases.
### Constraints on \(r\)
Let us start by examining the final constraints on \(r\) obtained by each pipeline applied to 500 simulations. These results are summarized in Figure 7 and Table 4. Figure 7 shows the mean \(r\) and (16, 84)% credible intervals found by each pipeline as a function of the input foreground model (labels on the \(x\) axis). Results are shown for 5 pipeline setups: pipeline A using the \(C_{\ell}\)-fiducial model (red), pipeline A using the \(C_{\ell}\)-moments model (yellow), pipeline B (blue), pipeline C (green) and pipeline C including the marginalization over the dust amplitude parameter (cyan). For each pipeline, we show two points with error bars. The dot markers and smaller error bars correspond to the results found in the best-case instrument scenario (goal noise level, optimistic \(1/f\) component), while the cross markers and larger error bars correspond to the baseline noise level and pessimistic \(1/f\) component. The quantitative results are reported in Table 4.
We will first discuss the nominal pipelines A, B and C without considering any extensions. We find that for the simpler Gaussian and dos0 foregrounds, the nominal pipelines obtain unbiased results, as expected. Pipeline B shows a slight positive bias for Gaussian foregrounds, in combination with inhomogeneous noise only. This bias is absent for homogeneous noise and can be traced back to
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & & Baseline & Goal & Pessimistic & Optimistic & \\ Frequency [GHz] & FWHM [arcmin] & \(N_{\rm white}\) [\(\mu\)K-arcmin] & \(N_{\rm white}\) [\(\mu\)K-arcmin] & \(\ell_{\rm knee}\) & \(\ell_{\rm knee}\) & \(\alpha_{\rm knee}\) \\ \hline
27 & 91 & 46 & 33 & 30 & 15 & -2.4 \\
39 & 63 & 28 & 22 & 30 & 15 & -2.4 \\
93 & 30 & 3.5 & 2.5 & 50 & 25 & -2.5 \\
145 & 17 & 4.4 & 2.8 & 50 & 25 & -3.0 \\
225 & 11 & 8.4 & 5.5 & 70 & 35 & -3.0 \\
280 & 9 & 21 & 14 & 100 & 40 & -3.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Instrument and noise specifications used to produce the simulations in this work. Note that these levels correspond to homogeneous noise, while our default analysis assumes noise maps weighted according to the SAT hits map.
the pixel covariance matrix used to construct the NILC weights. We will discuss this in more detail in Appendix B. For now, we show the results using a more constrained mask that is closer to homogeneity. We stress that these results, marked with a \({}^{\dagger}\), are not comparable to the rest in Table 4, since they are calculated on a different mask. The more complex d1s1 foregrounds lead to a \(\sim 1\sigma\) bias in the goal and optimistic noise scenario. The dmsm foregrounds lead to a noticeable increase of the bias of up to \(\sim 2\sigma\), seen with pipeline C in all noise scenarios and with pipeline A in the goal-optimistic case, and slightly less with pipeline B. The modifications introduced in the dmsm foreground model include a larger spatial variation in the synchrotron spectral index \(\beta_{s}\) with respect to d1s1, and are a plausible reason for the increased bias on \(r\).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & & \multicolumn{5}{c}{\(10^{3}\times(r\pm\sigma(r))\)} \\ \hline Noise & FG model & Pipeline A & + moments & Pipeline B & Pipeline C & + dust marg. \\ \hline \multirow{4}{*}{Goal rms, optimistic \(1/f\)} & Gaussian & \(-0.1\pm 2.1\) & \(0.0\pm 2.8\) & \(0.6\pm 2.6^{\dagger}\) & \(1.6\pm 2.7\) & \(-1.8\pm 4.4\) \\ & d0s0 & \(-0.4\pm 2.1\) & \(-0.5\pm 2.7\) & \(-0.1\pm 2.1\) & \(0.5\pm 2.3\) & \(-1.7\pm 3.5\) \\ & d1s1 & \(1.8\pm 2.1\) & \(-0.2\pm 2.8\) & \(2.1\pm 2.1\) & \(2.6\pm 2.3\) & \(0.0\pm 3.3\) \\ & dmsm & \(3.9\pm 2.1\) & \(0.3\pm 2.7\) & \(3.8\pm 2.1\) & \(5.3\pm 2.4\) & \(0.2\pm 3.0\) \\ \hline \multirow{4}{*}{Goal rms, pessimistic \(1/f\)} & Gaussian & \(-0.2\pm 2.5\) & \(-0.1\pm 2.7\) & \(1.1\pm 3.3^{\dagger}\) & \(0.9\pm 2.1\) & \(-0.9\pm 5.3\) \\ & d0s0 & \(-0.6\pm 2.5\) & \(-0.5\pm 2.8\) & \(-0.5\pm 2.8\) & \(0.1\pm 2.5\) & \(-0.9\pm 4.0\) \\ \cline{1-1} & d1s1 & \(1.3\pm 2.5\) & \(0.1\pm 3.0\) & \(1.2\pm 2.8\) & \(3.4\pm 3.1\) & \(-0.0\pm 3.9\) \\ \cline{1-1} & dmsm & \(3.2\pm 2.6\) & \(0.3\pm 3.9\) & \(2.1\pm 2.8\) & \(5.5\pm 2.4\) & \(0.6\pm 4.2\) \\ \hline \multirow{4}{*}{Baseline optimistic \(1/f\)} & Gaussian & \(-0.1\pm 2.6\) & \(-0.3\pm 3.3\) & \(0.5\pm 3.3^{\dagger}\) & \(0.5\pm 3.2\) & \(-1.9\pm 5.9\) \\ \cline{1-1} & d0s0 & \(-0.4\pm 2.6\) & \(-0.3\pm 3.3\) & \(-0.9\pm 2.7\) & \(0.7\pm 2.9\) & \(-1.8\pm 4.4\) \\ \cline{1-1} & d1s1 & \(1.7\pm 2.6\) & \(-0.2\pm 3.4\) & \(1.0\pm 2.7\) & \(1.8\pm 2.7\) & \(-0.8\pm 4.8\) \\ \cline{1-1} & dmsm & \(3.9\pm 2.6\) & \(0.3\pm 3.5\) & \(2.5\pm 2.7\) & \(5.5\pm 3.2\) & \(0.4\pm 5.0\) \\ \hline \multirow{4}{*}{Baseline pessimistic \(1/f\)} & Gaussian & \(-0.3\pm 3.4\) & \(-0.3\pm 3.8\) & \(1.6\pm 4.1^{\dagger}\) & \(0.0\pm 2.9\) & \(-1.6\pm 5.3\) \\ \cline{1-1} & d0s0 & \(-0.7\pm 3.4\) & \(-0.06\pm 3.9\) & \(-0.6\pm 3.6\) & \(1.1\pm 3.2\) & \(-1.1\pm 5.3\) \\ \cline{1-1} & d1s1 & \(1.1\pm 3.4\) & \(-0.6\pm 4.0\) & \(0.5\pm 3.6\) & \(3.8\pm 3.2\) & \(-1.2\pm 5.3\) \\ \cline{1-1} & dmsm & \(2.8\pm 3.4\) & \(-0.6\pm 4.0\) & \(1.2\pm 3.6\) & \(6.0\pm 3.1\) & \(-0.5\pm 5.1\) \\ \hline \hline \end{tabular} \({}^{\dagger}\) These results are calculated on a smaller, more homogeneous mask, shown in Figure 1. This is explained in Appendix B.
\end{table}
Table 4: Results on the mean of \(r\) and the (16, 84)% credible interval derived from 500 simulations, as inferred by three pipelines (and two extensions) on four foreground models and four noise cases, two of which are shown in Figure 7. No delensing is assumed in these fiducial results.
Figure 7: Compilation of the mean \(r\) with its (16, 84)% credible interval as derived from 500 simulations, applying the three nominal pipelines (plus extensions) to four foreground scenarios of increasing complexity. We assume a fiducial cosmology with \(r=0\) and \(A_{\rm lens}=1\), inhomogeneous noise with goal sensitivity and optimistic \(1/f\) noise component (dot markers), and inhomogeneous noise with baseline sensitivity and pessimistic \(1/f\) noise component (cross markers). Note that the NILC results for Gaussian foregrounds are based on a smaller sky mask, see Appendix B.
Remarkably, we find that, in their simplest incarnation, all pipelines achieve comparable statistical uncertainty on \(r\), ranging from \(\sigma(r)\simeq 2.1\times 10^{-3}\) to \(\sigma(r)\simeq 3.6\times 10^{-3}\) (a 70% increase), depending on the noise model. Changing between the goal and baseline white noise levels results in an increase of \(\sigma(r)\) of \(\sim 20-30\%\). Changing between the optimistic and pessimistic \(1/f\) noise has a similar effect on the results from pipelines A and B, although \(\sigma(r)\) does not increase by more than 10% when changing to pessimistic \(1/f\) noise for pipeline C. These results are in reasonable agreement with the forecasts presented in SO Collaboration (2019).
Now let us have a look at the pipeline extensions, A + moments and C + dust marginalization. Notably, in all noise and foreground scenarios, the two extensions are able to reduce the bias on \(r\) to below \(1\sigma\). For the Gaussian and d0s0 foregrounds, we consistently observe a small negative bias (at the \(\sim 0.1\sigma\) level for A + moments and \(<0.5\sigma\) for C + dust marginalization). This bias may be caused by the introduction of extra parameters that are prior dominated, like the dust template's amplitude in the absence of residual dust contamination, or the moment parameters in the absence of varying spectral indices of foregrounds. If those extra parameters are weakly degenerate with the tensor-to-scalar ratio, the marginal \(r\) posterior will shift according to the choice of the prior on the extra parameters. The observed shifts in the tensor-to-scalar ratio and their possible relation with these volume effects will be investigated in a future work. For the more complex d1s1 and dmsm, both pipeline extensions effectively remove the bias observed in the nominal pipelines, achieving a \(\sim 0.5\sigma\) bias and lower.
The statistical uncertainty \(\sigma(r)\) increases for both pipeline extensions, although by largely different factors. While C + dust marginalization yields \(\sigma(r)\) between \(3.0\times 10^{-3}\) and \(5.9\times 10^{-3}\), the loss in precision for A + moments is significantly smaller, with \(\sigma(r)\) varying between \(2.7\times 10^{-3}\) and \(4.0\times 10^{-3}\) depending on the noise scenario, an average increase of \(\sim 25\%\) compared to pipeline A. In any case, within the assumptions made regarding the SO noise properties, it should be possible to detect a primordial \(B\)-mode signal with \(r=0.01\) at the 2-3\(\sigma\) level with no delensing. The impact of other effects, such as time domain filtering or anisotropic noise may affect these forecasts, and will be studied in more detail in the future.
This analysis was repeated for input CMB maps generated assuming \(r=0\) or \(0.01\) and \(A_{\rm lens}=0.5\) or \(1\). For simplicity, in these cases we considered only the baseline white noise level with optimistic \(1/f\) noise, and the moderately complex d1s1 foreground model. The results can be found in Figure 8 and Table 5. A 50% delensing efficiency results in a reduction in the final \(\sigma(r)\) by 25-30% for pipelines A and B, \(\sim\)10-20% for A + moments, and 0-33% for C + dust marginalization. The presence of primordial \(B\)-modes with a detectable amplitude increases the contribution from cosmic variance to the error budget, with \(\sigma(r)\) growing by up to 40% if \(r=0.01\), in agreement with theoretical expectations. Using C + dust marginalization, and considering no delensing, we even find \(\sigma(r)\) decreasing, hinting at the possible breaking of the degeneracy between \(r\) and \(A_{\rm dust}\). We conclude that all pipelines are able to detect the \(r=0.01\) signal at the level of \(\sim 3\sigma\). As before, we observe a 0.5-1.2\(\sigma\) bias on the recovered \(r\) that is eliminated by both the moment expansion method and the dust marginalization method.
Finally, we have explored how cosmological constraints and the pipelines' performances are affected by noise inhomogeneity resulting from weighting the noise pixels according to the SO-SAT hits map. The geographical location of SO, and the size of the SAT field of view, place constraints on the possible scanning strategies. In particular, it forces SO to target a patch with a relatively large sky fraction \(f_{\rm sky}\sim 0.15\), and surrounded by a \(\sim 10\) degree wide boundary with significantly higher noise (see hits map in Figure 2). The lower panel of Figure 9 shows the ratio between the values of \(\sigma(r)\) found using inhomogeneous noise realizations and those with homogeneous noise in the baseline-optimistic noise model with d0s0 foregrounds, averaged over 500 simulations. We see that for all pipeline scenarios, \(\sigma(r)\) increases by \(\sim 30\%\) due to the noise inhomogeneity.
### Power spectra
Having presented the constraints on \(r\) found by each pipeline, let us now examine the CMB power spectrum products. Pipelines B and C produce CMB-only maps and base their inference of \(r\) on the resulting power spectra, whereas pipeline A works directly with the cross-frequency power spectra of the original multi-frequency maps. Nevertheless, CMB power spectra are an important data product that every pipeline should be able to provide. Following the methods presented in Dunkley et al. (2013) and Planck Collaboration XI (2016); Planck Collaboration V (2020), we use a modified version of pipeline A that retrieves CMB-only bandpowers from multi-frequency power spectra, marginalizing over foregrounds with an MCMC sampler as presented in Section 2.1. Note that this method, originally developed for high-\(\ell\) CMB science, is applicable since we are in the Gaussian likelihood regime. By re-inserting this cleaned CMB spectrum into a Gaussian likelihood with parameters (\(r\), \(A_{\rm lens}\)), we obtain constraints that are consistent with the results shown in Table 4.
Figure 8: Results on \(r\) and the \((16,\,84)\%\) credible interval derived from 500 simulations, applying the three nominal pipelines plus extensions, assuming input models including primordial \(B\)-modes and 50% delensing efficiency. We assume baseline noise level with optimistic \(1/f\) component and the d1s1 foreground template.
Figure 10 shows the CMB power spectra for the three complex foreground simulations d0s0, d1s1 and dmsm (upper, middle and lower panel, respectively) while considering the goal-optimistic noise scenario. The various markers with error bars denote the measured CMB power spectra and their \(1\sigma\) standard deviation across 500 simulations, while the black solid line denotes the input CMB power spectrum. Results are shown as gold triangles, blue circles and turquoise diamonds for pipelines A, B and C, respectively. The dotted lines show the best-fit CMB model for the three nominal pipelines (using the same color scheme). Only in the dmsm foreground scenario, which is the most complex considered here, do we also show the results from pipeline C + dust marginalization (dark red squares with error bars) and the best-fit CMB power spectrum from A + moments (pink dot-dashed line) and C + dust marginalization (dark red dot-dashed line).
For the nominal pipelines (A, B and C) without extensions, the measured power spectra display a deviation from the input CMB at low multipoles, increasing with rising foreground complexity. For dmsm at multipoles \(\lesssim 50\), this bias amounts to about \(1.5\sigma\) and it goes down to less than \(0.5\sigma\) at \(80\lesssim\ell\lesssim 250\). The three pipelines agree reasonably well, while pipeline A appears slightly less biased for the lowest multipoles. Pipelines B and C show an additional mild excess of power in their highest multipole bins, with a \(<0.3\sigma\) increase in pipeline C for \(130\lesssim\ell\lesssim 170\) and up to \(1\sigma\) for the highest multipole (\(\ell=297\)) in pipeline B. This might indicate power leakage from the multiple operations on map resolutions implemented in pipelines B and C. In pipeline B, these systematics could come from first decomposing the multi-frequency maps and then convolving them with a common beam in order to bring them to a common resolution, whereas in pipeline C, the leakage is likely due to the linear combination of the multi-resolution frequency maps following Equation (9). Other multipole powers lie within the \(1\sigma\) standard deviation from simulations for all three pipelines.
Both extensions, A + moments and C + dust marginalization, lead to an unbiased CMB power spectrum model, as shown by the pink and dark red dot-dashed lines and the square markers in the lower panel of Figure 10. In the case of pipelines B and C, comparing the best-fit models obtained from the measured power spectra to the input CMB model, we find sub-sigma bias for all bins with \(\ell>100\). We show, however, that marginalizing over additional foreground residuals (e.g. the dust-template marginalization in pipeline C) reduces this bias on all scales, at the cost of increased uncertainties. Implementing this capability in the blind NILC pipeline B would likely allow it to reduce the bias observed in the figure.
The SO-SATs are expected to constrain the amplitude of CMB lensing \(B\)-modes to an unprecedented precision. As can be seen from Figure 10, individual, cleaned CMB bandpowers without delensing at multipoles \(\ell\gtrsim 150\) achieve a signal-to-noise ratio of about 10, corresponding to a combined precision on the lensing amplitude of \(\sigma(A_{\rm lens})\lesssim 0.03\) when considering multipoles up to \(\ell_{\rm max}=300\). This is consistent with the inference results obtained by pipelines A and B.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & \multicolumn{4}{c}{\(10^{3}\times(r\pm\sigma(r))\)} \\ \hline Input CMB & Pipeline A & + moments & Pipeline B & Pipeline C & + dust marg. \\ \((r=0\), \(A_{\rm lens}=1)\) & \(1.7\pm 2.6\) & \(-0.2\pm 3.4\) & \(1.0\pm 2.7\) & \(1.8\pm 2.7\) & \(-0.8\pm 4.8\) \\ \((r=0\), \(A_{\rm lens}=0.5)\) & \(1.7\pm 2.0\) & \(0.7\pm 2.7\) & \(0.7\pm 2.4\) & \(3.3\pm 2.7\) & \(-0.3\pm 3.6\) \\ \((r=0.01\), \(A_{\rm lens}=1)\) & \(11.5\pm 3.1\) & \(10.6\pm 3.2\) & \(11.0\pm 3.2\) & \(13.0\pm 3.1\) & \(13.4\pm 4.3\) \\ \((r=0.01\), \(A_{\rm lens}=0.5)\) & \(11.8\pm 2.4\) & \(10.3\pm 2.8\) & \(10.5\pm 3.0\) & \(13.0\pm 3.5\) & \(9.4\pm 4.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results on \(r\) with its (16, 84)% credible interval derived from 500 simulations, applying the three nominal pipelines with extensions, assuming input models including primordial \(B\)-modes and 50% delensing efficiency. We assume baseline noise level with optimistic \(1/f\) component and d1s1 foregrounds, see Figure 8.
Figure 9: Results on \(r\) with (16, 84)% credible levels derived from 500 simulations, applying the three nominal pipelines plus extensions, assuming the d0s0 foreground scenario with baseline white noise level and optimistic \(1/f\) component. Cross markers with smaller error bars correspond to homogeneous noise across the SAT field of view, dot markers with larger error bars correspond to inhomogeneous noise, and the relative increase in \(\sigma(r)\) between both is shown in the bottom panel.
### Channel weights
Our three baseline pipelines differ fundamentally in how they separate the sky components. One common feature among all pipelines is the use of six frequency channels to distinguish components by means of their different SEDs. In Figure 11 we visualize the channel weights as a function of the band center frequency, showing the pipelines in three vertically stacked panels. In the upper panel, we show the effective weights for the CMB applied to the noise-debiased raw power spectra used by pipeline A, distinguishing between weights for each harmonic bin:
\[{\bf w}_{\ell}^{T}=\frac{{\bf a}^{T}\hat{\bf C}_{\ell}^{-1}}{{\bf a}^{T}\hat{\bf C}_{\ell}^{-1}{\bf a}}. \tag{15}\]
Here, \(\hat{\bf C}_{\ell}\) is the \(6\times 6\) matrix of raw cross-frequency power spectra from noisy sky maps, and \({\bf a}\) is a vector of length 6 filled with ones. This is equivalent to the weights employed by the SMICA component separation method (Cardoso et al., 2008) and by ILC as explained in Section 2.2. The middle panel shows the pixel-averaged NILC weights for the five needlet windows (Figure 4) used in pipeline B. In the lower panel we show the CMB weights calculated with the map-based component separation, pipeline C, averaged over the observed pixels to yield an array of 6 numbers. All channel weights are averaged over 100 simulations containing CMB (\(r=0\), \(A_{\rm lens}=1\)), d1s1 foregrounds and two noise models: goal white noise with optimistic \(1/f\) noise is shown as dashed lines, whereas baseline white noise with pessimistic \(1/f\) noise is shown as solid lines. Moreover, the gray shaded areas quantify the 1-\(\sigma\) uncertainty region of these weights in the baseline/pessimistic case estimated from 100 simulations. We see from Figure 11 that the average channel weights agree well between pipelines A, B and C. Mid-frequency channels at 93 and 145 GHz are assigned positive CMB weights throughout all pipelines, while high- and low-frequency channels tend to be suppressed owing to a larger dust and synchrotron contamination, respectively. In more detail, the 280 GHz channel is given negative weight in all pipelines, while average weights at 27, 39 and 225 GHz are negative with pipeline C and either positive or negative in pipelines A and B, depending on the angular scale. The CMB channel weight tends to consistently increase for pipelines A and B as a function of multipole, a fact well exemplified by NILC at 225 GHz, matching the expectation that the CMB lensing signal becomes more important at high \(\ell\). Overall, Figure 11 illustrates that foregrounds at low and high frequencies are consistently subtracted by the three component separation pipelines, with the expected scale dependence in pipelines A and B. Moreover, at every frequency, the channel weights are non-negligible and of similar size across the pipelines, meaning that all channels give a relevant contribution to component separation for all pipelines.
Figure 10: CMB-only power spectra resulting from component separation with pipelines A, B and C showing the non-Gaussian foreground scenarios d0s0 (_top panel_), d1s1 (_middle panel_) and dmsm (_bottom panel_) while considering the goal-optimistic noise scenario. The different colored markers with error bars show the mean of 500 simulations and the scatter between them (corresponding to the statistical uncertainties of a single realization). The dotted lines in the corresponding colors indicate the best-fit power spectrum model. In the dmsm case, we show the extended pipeline results from A + moments and C + dust marginalization with the best-fit models shown as dot-dashed lines. The black solid line is the input CMB model containing lensing \(B\)-modes only. Note that pipeline C only considers multipoles up to \(\ell=180\) in the power spectrum likelihood.
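As a minimal numerical illustration of Equation (15), the Python sketch below computes the CMB channel weights for a single multipole bin from a toy \(6\times 6\) covariance (a diagonal noise term plus a common unit-amplitude CMB term); the noise values are purely illustrative and are not the SO noise model.
```python
# Toy evaluation of the harmonic-domain CMB weights of Equation (15):
# w_ell^T = a^T C^-1 / (a^T C^-1 a), with a = (1, ..., 1) since the maps
# are assumed to be in CMB thermodynamic units.
import numpy as np

def cmb_channel_weights(C_ell: np.ndarray) -> np.ndarray:
    a = np.ones(C_ell.shape[0])
    Cinv_a = np.linalg.solve(C_ell, a)   # C^-1 a without forming the inverse
    return Cinv_a / (a @ Cinv_a)

# Illustrative covariance: noisy low/high frequency channels plus a common CMB term.
noise = np.array([50.0, 25.0, 4.0, 3.0, 30.0, 80.0])   # toy per-channel variances
C_ell = np.diag(noise) + 1.0                           # rank-one CMB contribution
w = cmb_channel_weights(C_ell)
print(w, w.sum())   # the weights sum to 1, preserving the CMB signal
```
The toy output reproduces the qualitative behavior described above: the low-noise mid-frequency channels receive the largest positive weights, while the noisy outer channels are down-weighted.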
## 5 More complex foregrounds: d10s5
During the completion of this paper, the new PySM3 Galactic foreground models were made publicly available. In particular, in these new models, templates for polarized thermal dust and synchrotron radiation have been updated including the following changes:
1. Large-scale thermal dust emission is based on the GNILC maps (Planck Collaboration IV 2020), which present a lower contamination from CIB emission with respect to the d1 model, based on Commander templates.
2. For both thermal dust and synchrotron radiation, small-scale structure is added by modifying the logarithm of the polarization fraction tensor\({}^{5}\).
Footnote 5: See pysm3.readthedocs.io/en/latest/preprocess-templates.
3. Thermal dust spectral parameters are based on GNILC products, with larger variations of the \(\beta_{d}\) and \(T_{d}\) parameters at low resolution compared to the d1 model. Small-scale structure is also added as Gaussian realizations of power-law power spectra.
4. The new template for \(\beta_{s}\) includes information from the analysis of S-PASS data (Krachmalnicoff et al. 2018), in a similar way to the s1 model adopted in this work. In addition, small-scale structure is present at sub-degree angular scales.
These modifications are encoded in the models called d10 and s5 in the updated version of PySM. Although these models are still to be considered preliminary, both in terms of their implementation details in PySM\({}^{6}\) and in general, being based on datasets that may not fully capture the unknown level of foreground complexity, we decided to dedicate an extra section to their analysis. For computational speed, we ran the five pipeline set-ups on a reduced set of 100 simulations containing the new d10s5 foregrounds template, CMB with a standard cosmology (\(r=0\), \(A_{\rm lens}=1\)) and inhomogeneous noise in the goal-optimistic scenario. The resulting marginalized posterior mean and \((16,\,84)\%\) credible intervals on \(r\), averaged over 100 simulations, are:
Footnote 6: The PySM library is currently under development, with beta versions including minor modifications of the foreground templates being released regularly. In this part of our analysis we make use of PySM v3.4.0b3.
\[\begin{split} r&=0.0194\pm 0.0021&\text{(pipeline A)}\\ r&=0.0025\pm 0.0031&\text{(A + moments)}\\ r&=0.0144\pm 0.0023&\text{(pipeline B)}\\ r&=0.0220\pm 0.0026&\text{(pipeline C)}\\ r&=-0.0015\pm 0.0051&\text{(C + dust marg.)}\\ \end{split} \tag{16}\]
Note that the respective biases obtained with pipelines A, B and C are at the 9, 6 and 8\(\sigma\) level, quadrupling the bias of the dmsm foreground model. Crucially, this bias is reduced to \(\sim 0.8\sigma\) with the A + moments pipeline, with a 40% increase in \(\sigma(r)\) compared to pipeline A, and to \(\sim 0.3\sigma\) with the C + dust-marginalization pipeline, with a 95% increase in \(\sigma(r)\) compared to pipeline C. This makes A + moments the unbiased method with the lowest statistical error.
The \(C_{\ell}\)-fiducial model achieves minimum \(\chi^{2}\) values of \(\approx 600\pm 40\). Although this is an increase of \(\Delta\chi^{2}\sim 30\) with respect to the less complex foreground simulations (see Appendix A), the associated probability to exceed is \(p\sim 0.16\) (assuming our null distribution is a \(\chi^{2}\) with \(N_{\rm data}-N_{\rm parameters}=567\) degrees of freedom), and therefore it would not be possible to identify the presence of a foreground bias by virtue of the model providing a bad fit to the data. The \(\chi^{2}\) value we find also confirms that the covariance matrix calculated from Gaussian simulations is still appropriate for the non-Gaussian d10s5 template. On the other hand, A + moments achieves best-fit \(\chi^{2}\) values of \(\approx 531\pm 33\), with a \(<5\%\) decrease in \(\Delta\chi^{2}\) with respect to less complex foreground simulations, indicating an improved fitting accuracy.
Figure 11: Channel weights for pipelines A, B and C. We show the SMICA weights for 27 different \(\ell\)-bins calculated from raw, noisy \(C_{\ell}\)s (pipeline A, upper panel), pixel-averaged NILC weights for 5 needlet windows (pipeline B, middle panel) and pixel-averaged weights from parametric map-based component separation (pipeline C, lower panel). Weights are averaged over 100 simulations, shown are goal white + optimistic \(1/f\) noise (dashed lines) as well as baseline white + pessimistic \(1/f\) noise (solid lines). The semitransparent gray areas represent the channel weights’ 1-\(\sigma\) standard deviation across 100 simulations, covering baseline + pessimistic noise.
As shown in Figure 12, the relative model odds between \(C_{\ell}\)-fiducial and \(C_{\ell}\)-moments (see Appendix A for more details) vary between \(10^{-26}\) and \(10^{-2}\), clearly favoring \(C_{\ell}\)-moments. Out of 100 d10s5 simulations, 99 yield model odds below 1% and 78 below \(10^{-5}\). As opposed to the less complex foreground simulations (d0s0, d1s1 and dmsm), d10s5 gives strong preference to using the moment expansion in the power spectrum model. Note that the AIC-based model odds are computed from the differences of \(\chi^{2}\) values that stem from the same simulation seed, while the \(\chi^{2}\) analysis only considers the models individually, which explains why the former makes for a much more powerful model comparison test.
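For concreteness, the short sketch below reproduces both checks under the standard definitions, a \(\chi^{2}\) probability to exceed and AIC-based odds \(e^{-\Delta{\rm AIC}/2}\) with \({\rm AIC}=\chi^{2}_{\rm min}+2k\); the number of extra moment parameters used here (4) is an illustrative assumption, not the actual count in pipeline A.
```python
# Goodness of fit and AIC-based model odds, using the average chi^2 values
# quoted in the text for the d10s5 simulations.
import math
from scipy.stats import chi2

chi2_fid, dof = 600.0, 567                      # dof = N_data - N_parameters
print(f"PTE = {chi2.sf(chi2_fid, dof):.2f}")    # ~0.16: formally still an acceptable fit

# Relative odds between Cl-fiducial and Cl-moments for a single realization.
chi2_mom, k_extra = 531.0, 4                    # k_extra: assumed extra moment parameters
d_aic = chi2_fid - (chi2_mom + 2 * k_extra)     # AIC difference, fiducial minus moments
print(f"odds ~ {math.exp(-0.5 * d_aic):.1e}")   # << 1, strongly favoring Cl-moments
```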
These results consider only the most optimistic noise scenario. Other cases would likely lead to larger uncertainties and, as a consequence, lower relative biases. In this regard, it is highly encouraging to see two pipeline extensions being able to robustly separate the cosmological signal from Galactic synchrotron and dust emission at this level of complexity. This highlights the importance of accounting for and marginalizing over residual foreground contamination due to frequency decorrelation at the level of sensitivity that SO and other next-generation observatories will achieve.
The contrast between the results obtained on the dmsm and d10s5 simulations gives us an opportunity to reflect on the strategy one should follow when determining the fiducial component separation method to use in primordial \(B\)-mode searches. Although the dmsm model leads to a \(\sim 2\sigma\) bias on \(r\) under the simplest component separation algorithms, simple model-selection metrics are not able to provide significant evidence that a more sophisticated modeling of foregrounds is needed. The situation changes with d10s5. A conservative approach is therefore to select the level of complexity needed for component separation by ensuring that unbiased constraints are obtained for all existing foreground models consistent with currently available data. The analysis methods passing this test can then form the basis for the fiducial \(B\)-mode constraints. Alternative results can then be obtained with less conservative component separation techniques, but their goodness of fit (or any similar model selection metric) should be compared with that of the fiducial methods. These results should also be accompanied by a comprehensive set of robustness tests able to identify signatures of foreground contamination in the data. This will form the basis of a future work. In a follow-up paper, we will also explore the new set of complex PySM3 foreground templates in more detail.
## 6 Conclusions
In this paper, we have presented three different component separation pipelines designed to place constraints on the amplitude of cosmological \(B\)-modes from polarized maps of the SO Small Aperture Telescopes. The pipelines are based on multi-frequency \(C_{\ell}\) parametric cleaning (Pipeline A), blind Needlet ILC cleaning (Pipeline B), and map-based parametric cleaning (Pipeline C). We have also studied extensions of pipelines A and C that marginalize over additional residual foreground contamination, using a moment expansion or a dust power spectrum template, respectively. We have tested and compared their performance on a set of simulated maps containing lensing \(B\)-modes with different scenarios of instrumental noise and Galactic foreground complexity. The presence of additional instrumental complexity, such as time-domain filtering or anisotropic noise, is likely to affect our results. The impact of these effects will be more thoroughly studied in future work.
We find the inferred uncertainty on the tensor-to-scalar ratio \(\sigma(r)\) to be compatible between the three pipelines. While the simpler foreground scenarios (Gaussian, d0s0) do not bias \(r\), spectral index variations can cause an increased bias of 1-2\(\sigma\) if left untreated, as seen with more complex foreground scenarios (d1s1, dmsm). Modeling and marginalizing over the spectral residuals is vital to obtain unbiased \(B\)-mode estimates. The extensions to pipelines A and C are thus able to yield unbiased estimates on all foreground scenarios, albeit with a respective increase in \(\sigma(r)\) by \(\sim 20\%\) (A + moments) and \(>30\%\) (C + dust marginalization). These results are in good agreement with the forecasts presented in SO Collaboration (2019).
After testing on simulations with an \(r=0.01\) cosmology, we conclude that under realistic conditions and if the forecasted map-noise levels and characteristics are achieved, SO should be able to detect an \(r=0.01\) signal at \(\sim 2\)-3\(\sigma\) after 5 years of observation. Inhomogeneous noise from the SAT map-making scanning strategy brings about a \(\sim 30\%\) increase in \(\sigma(r)\) as compared to homogeneous noise. Analyzing the per-channel weights for our pipelines, we find all frequency channels to be relevant for the CMB signal extraction and all pipelines to be in good agreement. These forecasts cover the nominal SO survey, and can be considered pessimistic in the light of prospective additional SATs that will further improve the sensitivity on large angular scales.
Figure 12: Empirical distribution of the AIC-based relative model odds (see Appendix A for details) between the \(C_{\ell}\)-fiducial and the \(C_{\ell}\)-moments model, evaluated for 100 simulations and generated with five different Galactic foreground templates, including the new PySM foreground model d10s5. We find strong preference for the \(C_{\ell}\)-moments model in the d10s5 foreground scenario, and only then.
We have also carried out a preliminary analysis of new, more complex, foreground models recently implemented in PySM3, in particular the d10s5 foreground template. The much higher level of spatial SED variation allowed by this model leads to a drastic increase in the bias on \(r\) by up to \(\sim 9\sigma\), when analyzed with the nominal pipelines A, B and C. Fortunately, this bias can be reduced to below 1\(\sigma\) when using A + moments and C + dust marginalization. These extensions lead to a 40% and 95% degradation of the error bars, respectively. Our results highlight the importance of marginalizing over residuals caused by frequency decorrelation for SO-like sensitivities. Although we have not analyzed d10s5 to the same depth as the other foreground models presented here, it is encouraging to confirm that we have the tools at hand to obtain robust, unbiased constraints on the tensor-to-scalar ratio in the presence of such complex Galactic foregrounds.
In preparation for the data collected by SO in the near future, we will continue our investigations into Galactic foreground models with other levels of complexity as the field progresses. Nevertheless, the current work has shown that the analysis pipelines in place for SO are able to obtain robust constraints on the amplitude of primordial \(B\) modes in the presence of Galactic foregrounds covering the full range of complexity envisaged by current, state-of-the-art models.
###### Acknowledgements.
The authors would like to thank Ken Ganga and Arthur Kosowsky for useful feedback. The group at SISSA acknowledges support from the COSMOS Network of the Italian Space Agency and the InDark Initiative of the National Institute for Nuclear Physics (INFN). KW is funded by a SISSA PhD fellowship. SA is funded by a Kavli/IPMU doctoral studentship. CRC acknowledges NSF award 1815887 and FONDECYT Postdoc fellowship 2022055. DA is supported by the Science and Technology Facilities Council through an Ernest Rutherford Fellowship, grant reference ST/P004474. This work is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (PI: Josquin Errard, Grant agreement No. 101044073). ABL is a BCCP fellow at UC Berkeley and Lawrence Berkeley National Laboratory. MLB acknowledges funding from UKRI and STFC (Grant awards ST/X006344/1 and ST/X006336/1). EC acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No. 849169). JC was furthermore supported by the ERC Consolidator Grant _CMBSPEC_ (No. 725456) and the Royal Society as a Royal Society University Research Fellow at the University of Manchester, UK (No. URF/R/191023). GF acknowledges the support of the European Research Council under the Marie Sklodowska Curie actions through the Individual Global Fellowship No. 892041 EUCOAGMBAS. We acknowledge the use of CAMB (Lewis et al., 2000), healpy (Zonca et al., 2019), numpy (Harris et al., 2020), matplotlib (Hunter, 2007), emcee (Foreman-Mackey et al., 2013) and fgbuster (Errard & Stompor, 2019; Puglisi et al., 2022) software packages.
|
2301.05438
|
Mid-infrared assisted water electrolysis as a method to substantially
push down the overpotential of the oxygen evolution reaction
|
Photocatalytic- and Photoelectrochemical water splitting is currently
performed using radiation sources with wavelengths < 1000 nm, i.e. in the near
infrared range (NIR). The fact that water has a broad absorption band, which
lies at wavenumbers between 3000 and 3700 cm-1 (stretching mode) and 1650 cm-1
(bending mode), has not been taken into account so far. We irradiated the steel
anode with a mid-infrared LED (lambda=3300 nm) while water electrolysis was
performed in pH 7-corrected phosphate buffer solution. A significant shift (270
mV) of the cyclic voltammetry (CV) curve towards lower potential values was
obtained when the radiation source was on. To the best of our knowledge, this
is the first report describing the use of a mid-infrared radiation source to
increase the efficiency of water electrolysis.
|
Klara Ruewe, Helmut Schaefer
|
2023-01-13T08:47:38Z
|
http://arxiv.org/abs/2301.05438v1
|
Mid-infrared assisted water electrolysis as a method to substantially push down the overpotential of the oxygen evolution reaction
###### Abstract
Photocatalytic- and Photoelectrochemical water splitting is currently performed using radiation sources with wavelengths < 1000 nm, _i.e._ in the near infrared range (NIR). The fact that water has a broad absorption band, which lies at wavenumbers between 3000 and 3700 cm-1 (stretching mode) and 1650 cm-1 (bending mode), has not been taken into account so far. We irradiated the steel anode with a mid-infrared LED (\(\lambda\)=3300 nm) while water electrolysis was performed in pH 7-corrected phosphate buffer solution. A significant shift (270 mV) of the cyclic voltammetry (CV) curve towards lower potential values was obtained when the radiation source was on. To the best of our knowledge, this is the first report describing the use of a mid-infrared radiation source to increase the efficiency of water electrolysis.
## Introduction
Electrocatalytic, photocatalytic and photoelectrochemical splitting of water into its cleavage products, when sourced by renewable energy, represents a promising, CO2-footprint-free solar-to-fuel conversion route [1, 2, 3, 4].
The efficiency of water electrolysis (electrocatalytically initiated water splitting) stands and falls with the cell voltage, which is the sum of the thermodynamic decomposition voltage (1.23 V) plus the overvoltages occurring at both electrodes [5]. Therefore, high overpotentials, which can be attributed to both half-cell reactions, the hydrogen evolution reaction as well as the oxygen evolution reaction, still represent a hurdle to be overcome. Currently, the optimization of electrocatalytically initiated water splitting is based on the improvement of the electrode materials [6; 7; 8] and of the electrolyte [9].
To the best of our knowledge, most photocatalytic or photoelectrochemical approaches to assist the splitting of water molecules into their fission products utilize visible (400-800 nm) or near-infrared (800-1000 nm) radiation sources, and are therefore operated with diodes that work at \(\lambda\) = 400-1000 nm [10; 11].
It is known that, due to the excitation of the stretching and bending vibrations, water has broad absorption bands in the infrared spectral range [12; 13], for example at wavelengths around 2800 nm and 6000 nm. Irradiation of water with diode lasers of appropriate wavelength should weaken the intramolecular O-H bond and lead to a significant reduction in overpotentials. We conducted experiments with a mid-infrared LED with a wavelength of 3300 nm and indeed achieved a significant reduction in the overall cell voltage. In particular, we found that the potential of the anode at which oxygen evolution begins is significantly reduced under mid-IR irradiation. To the best of our knowledge, most of the photocatalytic approaches described so far to improve water electrolysis (photoelectrochemical water splitting) only use radiation sources with significantly shorter wavelengths.
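As a quick plausibility check (a short Python snippet using the standard conversion \(\lambda\) [nm] = 10\({}^{7}\)/wavenumber [cm\({}^{-1}\)]), the 3300 nm emission of the LED used below indeed falls inside the O-H stretching band:
```python
# Convert the water absorption bands quoted above from wavenumber to wavelength.
for wavenumber in (3000, 3700, 1650):   # stretching (3000-3700) and bending (1650) modes
    print(f"{wavenumber} cm^-1 -> {1e7 / wavenumber:.0f} nm")
# 3000 cm^-1 -> 3333 nm and 3700 cm^-1 -> 2703 nm bracket the stretching band,
# so the 3300 nm LED (~3030 cm^-1) lies within it; the bending mode sits near 6061 nm.
```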
## Results and discussion
In order to show the effect of a Mid-IR radiation source on the water splitting capabilities of a steel anode (steel X20CoCrWMo10-9, see experimental section), water electrolysis was realized by the simplest approach imaginable from an experimental point of view, using a glass beaker (single-compartment conditions) and a three-electrode configuration consisting of a steel anode, a Pt counter electrode and a reversible hydrogen electrode. Water splitting was first carried out (for comparison) in a pH 7 corrected mixture of 0.1 molar KH\({}_{2}\)PO\({}_{4}\) and 0.1 molar K\({}_{2}\)HPO\({}_{4}\) solution without using radiation; the CV curve can be taken from Figure 1 (black curve). Then the LED (4 x 4 mm in size) was positioned directly in front of the steel anode (between the reference electrode and the anode) and the experiment was repeated; the resulting CV curve is also shown in Figure 1 (blue curve). The substantial shift of the CV curve towards lower potentials can be clearly seen. The potential at which oxygen evolution begins (onset of OER), defined as the potential (in V vs. RHE) at which the current density exceeds 100 \(\mu\)A/cm\({}^{2}\), was reduced by 270 mV, i.e. the CV curve shifted from 1.57 V vs. RHE (without irradiation) to 1.3 V vs. RHE (with irradiation). It should be mentioned at this point that non-activated X20CoCrWMo10-9 steel was used as electrode material, which is known to have rather slow OER activity (see sample Co in our previous publication [6]). Therefore, an onset potential of 1.3 V vs. RHE determined in pH 7 medium has to be seen as a remarkable value achieved with a rather inactive anode material. In addition, it turned out that the overall cell voltage was also reduced. These findings impressively prove the suitability of this Mid-IR-based approach to improve the water splitting capabilities of known steel-based electrode materials. To rule out possible charging of thin oxide layers while recording CVs (layers formed on the steel surface at positive anode potentials), we repeated the measurements at a lower scan rate (10 mV/s). However, it turned out that this did not influence the gap between the CV curves recorded with and without the Mid-IR source.
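The onset criterion stated above is straightforward to automate. The Python sketch below extracts the onset of OER as the lowest potential at which the current density exceeds 100 \(\mu\)A/cm\({}^{2}\); the synthetic Tafel-like traces are placeholders for the measured CV data and merely reproduce the reported 270 mV shift.
```python
import numpy as np

def onset_potential(potential, current, area_cm2=2.0, j_thresh=100e-6):
    """Lowest potential (V vs. RHE) where current density exceeds j_thresh (A/cm^2)."""
    j = np.asarray(current) / area_cm2
    above = np.flatnonzero(j > j_thresh)
    return potential[above[0]] if above.size else None

# Synthetic forward sweeps (stand-ins for the recorded CV curves):
v = np.linspace(1.0, 1.9, 451)             # potential grid, 2 mV steps
j_dark = 1e-4 * np.exp((v - 1.57) / 0.04)  # crosses 100 uA/cm^2 near 1.57 V
j_ir = 1e-4 * np.exp((v - 1.30) / 0.04)    # same shape shifted by 270 mV (Mid-IR on)
print(onset_potential(v, 2.0 * j_dark), onset_potential(v, 2.0 * j_ir))  # ~1.57, ~1.30
```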
## Conclusions
We have shown, using rather simple electrochemical measurements (quasi-steady-state conditions), that Mid-IR irradiation of the anode is a reasonable and effective method to substantially improve the water electrolysis performance.
Figure 1: Results from cyclic voltametric measurements. Electrolyte: pH 7 corrected phosphate buffer solution. Anode: untreated X20CoCrWMo10-9 steel, electrode area: 2 cm\({}^{2}\). Scan rate: 20 mV/s. The CV curve (with radiation) was recorded whilst the LED was switched on.
## Experimental part
### Electrochemical measurements
A three-electrode set-up was used for all electrochemical measurements. The working electrode (WE), consisting of electro-activated AISI 304 steel (X20CoCrWMo10-9) (sample Co), was prepared as described in our previous publication [6]. A Pt wire electrode (4x5 cm geometric area) was used as counter electrode (CE), and a reversible hydrogen electrode (RHE, HydroFlex, Gaskatel Gesellschaft fur Gassysteme durch Katalyse und Elektrochemie mbH, D-34127 Kassel, Germany) served as reference electrode (RE); all voltages are therefore quoted against this reference.
For all measurements the RE was placed between the WE and the CE. The measurements were performed in a pH 7 corrected 0.1 M KH\({}_{2}\)PO\({}_{4}\)/K\({}_{2}\)HPO\({}_{4}\) solution, prepared as follows: aqueous solutions of 0.1 M K\({}_{2}\)HPO\({}_{4}\) and KH\({}_{2}\)PO\({}_{4}\) (VWR, Darmstadt, Germany) were mixed until the resulting solution reached a pH value of 7.0. The distance between the WE and the RE was adjusted to 1 mm, and the distance between the RE and the CE was adjusted to 4-5 mm. Irradiation of the anode was realized using a MIR LED (NanoPlus, D-97218 Gerbrunn, Germany). The distance between anode and LED was adjusted to < 1 mm. All electrochemical data were recorded digitally using a Keithley Tektronix 2460 SourceMeter potentiostat [8].
Cyclic voltammograms (CVs) were recorded in 100 mL of electrolyte in a 150 mL glass beaker under stirring (180 r min\({}^{-1}\)) using a magnetic stirrer (15 mm stirring bar). The scan rate was set to 20 mV s\({}^{-1}\) and the step size was 2 mV. The potential was cyclically varied between 1 and 1.9 V vs. RHE. No iR compensation was performed whilst recording the CV plots.
|
2303.13423
|
CE$ν$NS Experiment Proposal at CSNS
|
The detection and cross-section measurement of Coherent Elastic
Neutrino-Nucleus Scattering (CE{\nu}NS) are vital for particle physics,
astrophysics, and nuclear physics. Therefore, a new CE{\nu}NS detection
experiment is proposed in China. Undoped CsI crystals, each coupled with two
Photon Multiplier Tubes (PMTs), will be cooled down to 77K and placed at the
China Spallation Neutron Source (CSNS) to detect the CE{\nu}NS signals produced
by neutrinos from stopped pion decays happening within the Tungsten target of
CSNS. Owing to the extremely high light yield of pure CsI at 77K, even though
it only has a neutrino flux 60\% weaker than the COHERENT experiment, the
detectable signal event rate is still expected to be 0.074/day/kg (0.053/day/kg
for COHERENT). Low radioactivity materials and devices will be used to
construct the detector, and strong shielding will be applied to reduce the
radioactive and neutron background. Dual-PMT readout should be able to reject
PMT-related background, such as Cherenkov light and PMT dark noise. With all
the strategies mentioned above, we hope to reach a 5.1{\sigma} signal detection
significance within six months of data taking with a 12kg CsI. This
presentation will discuss the experiment's design, as well as the estimation of
the signal, various kinds of background, and expected signal sensitivity.
|
Chenguang Su, Qian Liu, Tianjiao Liang
|
2023-03-23T16:44:48Z
|
http://arxiv.org/abs/2303.13423v3
|
# CE\(\nu\)NS Experiment Proposal at CSNS
###### Abstract
The detection and cross section measurement of Coherent Elastic Neutrino-Nucleus Scattering (CE\(\nu\)NS) are vital for particle physics, astrophysics and nuclear physics. Therefore, a new CE\(\nu\)NS detection experiment is proposed in China. Undoped CsI crystals, each coupled with two Photomultiplier Tubes (PMTs), will be cooled down to 77K and placed at the China Spallation Neutron Source (CSNS) to detect the CE\(\nu\)NS signals produced by neutrinos from stopped pion decays happening within the Tungsten target of CSNS. Owing to the extremely high light yield of pure CsI at 77K, even though the neutrino flux is 60% weaker than that of COHERENT, the detectable signal event rate is still expected to be \(0.14/day/kg\). Low radioactivity materials and devices will be used to construct the detector, and strong shielding will be applied to reduce the radioactive and neutron background. The dual-PMT readout should be able to reject PMT-related background like Cherenkov light and PMT dark noise. With all the strategies above, we hope to reach a \(5.1\sigma\) signal detection significance with half a year of data taking with \(12kg\) of CsI. In this presentation, the design of the experiment is presented. In addition, the estimation of the signal, various kinds of background and the expected signal sensitivity are discussed.
neutrino scattering physics; neutrino detectors; spallation neutron source +
Footnote †: journal: Physics Letters
to the structure of the nucleus but also small enough to keep the scattering elastic. In addition, the CE\(\nu\)NS process has no threshold limit. Hence, the part of the reactor neutrino spectrum below the 1.8 MeV threshold of the IBD process can be measured via the CE\(\nu\)NS process, and the experimental result could be a good test of nuclear physics models.
Since the measurement of the CE\(\nu\)NS process is vital for different fields of physics, many scientists have been devoting effort to this area since the first theoretical prediction was published by Daniel Z. Freedman in 1974 [2]. Owing to the coherent enhancement, its cross section is approximately proportional to the square of the neutron number of the nucleus [3], making it much larger than that of any other neutrino-matter interaction. However, the signal produced by the recoiled nucleus is so weak that the CE\(\nu\)NS signal was not observed until 2017, by the COHERENT collaboration, using \(\pi^{+}\) decay-at-rest neutrinos from the Spallation Neutron Source (SNS) in Oak Ridge [4]. There are also many other groups trying to detect the CE\(\nu\)NS signal with reactor neutrinos, applying various kinds of technologies. For instance, a cryogenic superconductor calorimeter is selected by NUCLEUS in France [5], while an astronomical CCD is applied by CONNIE in Mexico [6].
Although much effort has been put into the search for the CE\(\nu\)NS signal, an independent verification of its detection still remains a blank to fill. Here we propose a CE\(\nu\)NS experiment at the China Spallation Neutron Source (CSNS), which provides neutrinos with almost the same spectrum as the SNS in Oak Ridge. The experiment design and the estimation of signal and background are discussed in section 2 and section 3. A sensitivity estimation and the experiment schedule are presented in section 4 and section 5.
## 2 Experiment Design
The common difficulties faced by neutrino detection experiments are the low cross section, weak signal and high background. The CE\(\nu\)NS experiment shares the last two of these, while the first is relieved by the coherent enhancement of the cross section. The observable energy generated by the recoiled nucleus is only several \(keV\), which requires the threshold of the detector to be very low. Since the signals are weak, the background must be strongly suppressed to make sure the signals are not overwhelmed. Therefore, an optimized shielding structure is a must. The following part of this section describes our selection of the neutrino source and our design of the detector and shielding structure.
### Selection of neutrino source
The China Spallation Neutron Source (CSNS) is selected as our neutrino source. It is located in Dongguan, Guangdong province, China. At CSNS, a beam of protons is accelerated to \(1.6GeV\) and impinges on a Tungsten target with a repetition rate of \(25Hz\). \(\nu_{\mu}\), \(\bar{\nu_{\mu}}\) and \(\nu_{e}\) are generated by the decay of target-stopped \(\pi^{+}\) produced by the impingement. Thus, the neutrinos are highly pulsed, which is very beneficial for the suppression of background evenly distributed in time. The energy of neutrinos from \(\pi^{+}\) decay-at-rest extends mainly between \(20-50MeV\), about one order of magnitude higher than that of reactor neutrinos (Fig. 1), making the detection of the CE\(\nu\)NS signal much easier.
CSNS is currently running with a beam power of \(140kW\). According to a simulation implemented in FLUKA, the neutrino production rate is about \(0.17/proton/flavor\) at CSNS [7]. Our detector is to be placed on a platform \(8.2m\) right above the target. Taking the thickness of the shielding structure and detector encapsulation to be \(2.3m\), the neutrino flux at the detector location is calculated to be \(2.42\times 10^{10}/cm^{2}/h\). This is about \(40\%\) of the flux in COHERENT [8]. Fig. 2 shows a scene picture of the platform (left) and its relative position in CSNS (right).
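The quoted flux can be cross-checked with a simple back-of-the-envelope estimate (Python sketch below, assuming isotropic emission from the target; it reproduces the quoted number when read as a per-flavor flux):
```python
# Back-of-the-envelope neutrino flux at the detector location, 10.5 m from the target.
import math

E_p = 1.6e9 * 1.602e-19          # proton kinetic energy in J (1.6 GeV)
protons_per_s = 140e3 / E_p      # 140 kW beam power -> ~5.5e14 protons/s
nu_per_s = 0.17 * protons_per_s  # neutrinos per flavor per second (FLUKA estimate)

d_cm = (8.2 + 2.3) * 100         # platform height plus shielding/encapsulation
flux = nu_per_s / (4 * math.pi * d_cm**2) * 3600
print(f"{flux:.2e} /cm^2/h")     # ~2.4e10, matching the quoted 2.42e10 /cm^2/h
```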
### Detector design
To achieve a threshold of around \(1keV\) recoil energy, a detector as shown in Fig. 3 is designed. The detector would contain several sub-detectors held in a big Dewar. Each sub-detector is composed of a PTFE fabrication shell, two Hamamatsu R11065 photomultiplier tubes (PMTs) and one \(3kg\) undoped cesium iodide (CsI) crystal. The Dewar will be filled with liquid nitrogen to immerse the sub-detectors and provide a stable cryogenic temperature of \(77K\). Even though the light yield of undoped CsI is much lower than that of CsI(Na) and CsI(Tl) at room temperature [11; 12], it increases more than 15 times when cooled down to \(77K\) [11; 13]. When coupled with R11065 PMTs, the light yield of undoped CsI was reported to reach \(33.5PE/keV_{ee}\) [14], more than twice that of CsI(Na) at room temperature measured by the COHERENT collaboration [8]. This high light yield enables us to lower the threshold.
The two PMTs in each sub-detector form a coincidence system to reject PMT-related backgrounds, including dark counts from electron emission on the cathode or dynodes and Cherenkov light generated by charged particles passing through the PMT window. This background dominates in the COHERENT CsI(Na) experiment [8] in the few photoelectron (PE) region. Since the PMT dark count background is independent from one PMT to another, it can be suppressed by 3 orders of magnitude by applying a two-PMT readout coincidence requiring that at least one photoelectron is detected in each PMT. The suppression effect is discussed in detail in section 3.2.
Figure 1: Neutrino energy spectra of reactors (**red**) and spallation neutron source (**blue**). The energy of neutrinos from a spallation neutron source is significantly higher than that of reactor neutrinos [9; 10].
Figure 2: Scene picture of the platform (**left**) and its relative position in CSNS (**right**). The platform is \(8.2m\) right above the Tungsten target.
The four sub-detectors also form an anti-coincidence system. The probability of one particle producing signals in more than one sub-detector is negligible for neutrinos but much larger for fast neutrons and \(\gamma\) rays. Events in which scintillation signals are observed in more than one sub-detector are regarded as background. This strategy can strongly reduce the fast neutron background. Details of the effect of this strategy are given in section 3.2.
### Shielding structure
In order to achieve a \(5\sigma\) detection of the CE\(\nu\)NS signal in one year or so, the event rate of the background needs to be reduced to the same magnitude as the CE\(\nu\)NS signal. A preliminary shielding structure as shown in Fig. 4 is designed to achieve this goal. Inside the Dewar (grey), four sub-detectors are surrounded by \(5cm\) of OHFC (oxygen-free high-conductivity copper) to shield the radioactive background produced by the stainless steel in the Dewar and the inner layer shielding materials. Outside the Dewar is a \(30cm\) thick layer of HDPE (high density polyethylene), followed by \(60cm\) of Lead. The Lead shield aims at reducing the \(\gamma\) ray background, while the innermost layer of HDPE slows down and stops fast neutrons produced by high energy neutrons (\(>\)50MeV) reacting with lead nuclei. The Lead shield is encased by a 5cm thick \(\mu\) veto plastic scintillator to tag cosmic ray events. The outermost layer, \(80cm\) of HDPE, serves as a strong moderator for fast neutrons. The total thickness of HDPE reaches 1.1m in this design because fast neutrons escaping from the Tungsten target are expected to be the main background on the platform. A detailed discussion of the fast neutron background is given in section 3.2.
This shielding structure is just a preliminary one. As the on-site background measurement on the platform at CSNS is underway, this design will be adjusted and optimized according to our measurement results.
Figure 3: A schematic of the detector. The detector contains four sub-detectors in a Dewar filled with liquid nitrogen. Each sub-detector is composed of one \(3kg\) undoped CsI and two R11065 PMTs.
### Data taking strategy and event selection
A data taking strategy is proposed with the characteristics of the neutrino source and the detector taken into consideration. It is stated as follows:
1. The 25\(Hz\) proton beam trigger signal provided by CSNS would be taken as an external trigger for the experiment data taking, suppressing steady-state backgrounds like cosmic rays and environmental background by 4 orders of magnitude (see the short check after this list). Each trigger corresponds to one event referred to later.
2. A complete waveform signal of each PMT would be recorded by a flash ADC with a sampling rate of 1\(GHz\). Each waveform would extend over 50\(\mu s\), with a 10\(\mu s\) signal region and a 40\(\mu s\) pretrace. Offline waveform analysis would be applied to extract CE\(\nu\)NS signal candidates.
3. Every event is recorded with a time tag and a \(\mu\) veto tag. By referring the time tag to the beam power monitor of CSNS, the proton beam power fluctuation can be neutralized. The \(\mu\) veto tag from the \(\mu\) veto system rejects events possibly contaminated by cosmic rays.
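A one-line check of the duty-factor suppression mentioned in item 1 (Python, taking the \(10\mu s\) signal region of item 2 and the \(25Hz\) repetition rate):
```python
# Fraction of wall-clock time covered by the signal region after the beam trigger.
duty = 10e-6 / (1.0 / 25)           # 10 us window per 40 ms beam period
print(f"duty factor = {duty:.1e}")  # 2.5e-4: steady-state background is suppressed
                                    # by roughly 4 orders of magnitude
```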
Although the 25\(Hz\) proton beam trigger can reduce the steady-state background by 4 orders of magnitude, some event selection criteria are still needed to further reduce the background to meet the standard mentioned in section 2.3. They are listed as follows.
1. The event is not tagged by the \(\mu\) veto system, to reject events possibly contaminated by cosmic rays.
Figure 4: A schematic of the preliminary shielding structure design. The shielding components from the inside out are as follows. (**1**) Yellow green: 5\(cm\) OHFC. (**2**) Grey: Dewar. (**3**) Yellow: 30\(cm\) HDPE. (**4**) Green: 60\(cm\) Lead. (**5**) Red: 5\(cm\)\(\mu\) veto plastic scintillator. (**6**) Blue: 80\(cm\) HDPE.
2. For each waveform, the PE number found in the pretrace should be smaller than 3, to suppress the afterglow background introduced by other particles hitting the CsI detector just a few microseconds before the trigger.
3. For each sub-detector, at least one PE should be detected in each of the two PMTs. The purpose of this criterion is to reduce the PMT dark count background, which is independent from one PMT to another.
4. For the whole detector system, events with more than one sub-detector satisfying criterion 3 would be excluded, since neutrons and \(\gamma\) rays are much more likely to produce signals in different sub-detectors.
The selection efficiencies of all criteria for the CE\(\nu\)NS signal are also investigated. The efficiencies of the first two criteria need to be determined on site with the whole detector and shield constructed, which is not ready yet. Hence, the efficiencies of similar cuts applied by the COHERENT collaboration are adopted in the sensitivity estimation in section 4 [8]. The efficiency is 98.9% for criterion 1 and 73.8% for criterion 2. Considering that the cosmic ray level and other kinds of steady-state background responsible for most afterglow events should be similar at CSNS and SNS, while the afterglow of undoped CsI at 77K is much weaker than that of CsI(Na) at room temperature, this estimation of the efficiencies of the first two criteria should be conservative overall.
The efficiency of criterion 3 is evaluated by assuming that the probabilities of a scintillation photon being detected by the two PMTs are equal in an average sense, which should be reasonable considering that the detector is longitudinally symmetric. An analytical calculation based on the binomial distribution has been carried out to calculate the selection efficiency of scintillation signals with different total NPE. The efficiency of criterion 4 is taken as 100%, since the possibility of one neutrino producing signals in different sub-detectors is negligible.
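A minimal sketch of that analytical calculation (Python; the only assumption is the stated one, that each detected photoelectron is equally likely to land in either PMT):
```python
# Criterion 3 efficiency for a scintillation event with n_pe total photoelectrons:
# P(each PMT sees >= 1 PE) = 1 - 2 * (1/2)^n_pe from the binomial distribution.
def coincidence_efficiency(n_pe: int) -> float:
    return max(0.0, 1.0 - 2.0 * 0.5 ** n_pe)   # 0 for n_pe <= 1, as required

for n in range(1, 11):
    print(n, f"{coincidence_efficiency(n):.3f}")
# 0.000, 0.500, 0.750, 0.875, ...: the rising shape of the black curve in Fig. 5.
```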
Fig. 5 shows the estimated CE\(\nu\)NS signal selection efficiency with respect to the number of detected NPE. The green line and blue line show the efficiencies of the \(\mu\) veto cut and the afterglow cut taken from [8]. They are both overall cuts holding the same value for all events. The black line shows the total efficiency of all criteria, obtained by multiplying the efficiencies of the individual cuts together. Criterion 3 gives the rising shape of the curve, and its efficiency applies to all scintillation events, including signals generated by particles depositing energy in the CsI.
Figure 5: Efficiency of the event selection criteria. **Green**: Efficiency of the \(\mu\) veto cut [8]. **Blue**: Efficiency of the afterglow cut [8]. **Black**: Total efficiency.
## 3 Estimation of CE\(\nu\)NS signal and background
With the experiment design described above, the estimation of the CE\(\nu\)NS signal and background in this experiment can be implemented. In this part, the detector is assumed to be composed of four \(3kg\) undoped CsI sub-detectors, \(12kg\) of undoped CsI in total, and the data taking time is taken as half a year. Based on the results of this part, the expected sensitivity of this experiment can be obtained.
### Estimation of CE\(\nu\)NS signal
The estimation of the CE\(\nu\)NS signal is composed of two steps. First, the recoil energy distribution of the nuclear recoils induced by neutrinos should be considered. Second, the energy response of the detector to recoiled nuclei should be taken into account to acquire an expected spectrum of the NPE detected by the PMTs.
The expected CE\(\nu\)NS recoil energy distribution can be calculated numerically when the neutrino flux, detector mass and CE\(\nu\)NS differential cross section [3] are considered. Fig. 6 shows the calculated result. The total event rate distribution as well as the event rates for the three different neutrino flavors generated at CSNS are shown. The total CE\(\nu\)NS event rate reaches \(303/half\,year/12kg\), equivalently \(0.14/day/kg\).
Figure 6: Expected recoil energy distribution of CE\(\nu\)NS interactions detected by a \(12kg\) cryogenic undoped CsI detector \(10.5m\) away from the Tungsten target with a half-year data taking. The contributions from different flavors of neutrinos and different isotopes are also shown.
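For orientation, the sketch below evaluates the standard CE\(\nu\)NS differential cross section (Freedman's formula) for \({}^{127}\)I at a typical stopped-pion neutrino energy; the nuclear form factor, which the full calculation includes, is neglected here, so this is only an order-of-magnitude check.
```python
# Standard CEvNS differential cross section without the nuclear form factor.
import numpy as np

GF = 1.1664e-5        # Fermi constant, GeV^-2
S2W = 0.2386          # low-energy weak mixing angle sin^2(theta_W)
HBARC2 = 3.894e-28    # (hbar c)^2 in GeV^2 cm^2

def dsigma_dT(E_nu, T, Z, N, M):
    """dsigma/dT in cm^2/GeV; E_nu, recoil T and nuclear mass M in GeV."""
    Qw = N - (1.0 - 4.0 * S2W) * Z                          # weak nuclear charge
    val = GF**2 * M / (8.0 * np.pi) * Qw**2 * (2.0 - M * T / E_nu**2)
    return np.clip(val, 0.0, None) * HBARC2

# Total cross section on 127I at E_nu = 30 MeV by numerical integration:
E, Z, N, M = 0.030, 53, 74, 127 * 0.9315                    # M in GeV
T = np.linspace(0.0, 2 * E**2 / M, 2000)                    # up to the kinematic endpoint
print(f"sigma ~ {np.trapz(dsigma_dT(E, T, Z, N, M), T):.1e} cm^2")   # ~2e-38 cm^2
```
The kinematic endpoint \(T_{max}=2E_{\nu}^{2}/M\) evaluates to about 15 keV here, consistent with the several-keV recoils discussed above.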
The energy response of the detector to recoiled nuclei also includes two steps. First, the light yield with respect to the energy deposited through the ionization process, which is often calibrated with \(\gamma\) rays or \(\beta\) rays. Second, the quenching factor (QF) of the detector material, which is the efficiency of nuclear recoil energy transforming into ionization energy. A \(33.5PE/keV_{ee}\) light yield of undoped CsI at 77K [14] is adopted in this estimation, and the quenching factor of undoped CsI uses the result measured by the COHERENT collaboration [15].
Convolving the recoil energy distribution with the energy response of the detector to nuclear recoils yields Fig. 7, which shows the expected spectrum of the NPE detected by the PMTs.
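A minimal sketch of this response chain (Python; the flat quenching factor of 0.08 is an illustrative placeholder for the measured, energy-dependent values of [15]):
```python
# Recoil energy -> expected photoelectrons via quenching factor and light yield,
# with Poisson fluctuations on the detected NPE.
import numpy as np

rng = np.random.default_rng(0)
LIGHT_YIELD = 33.5   # PE per keV_ee at 77 K [14]
QF = 0.08            # illustrative flat quenching factor (keV_nr -> keV_ee)

def detected_npe(T_nr_kev: float, n_events: int = 100_000) -> np.ndarray:
    mean_pe = T_nr_kev * QF * LIGHT_YIELD
    return rng.poisson(mean_pe, size=n_events)

for T in (2.0, 5.0, 10.0):                 # typical CEvNS recoil energies, keV_nr
    pe = detected_npe(T)
    print(T, pe.mean(), (pe >= 2).mean())  # mean NPE; fraction with >= 2 PE in total,
                                           # a necessary condition for criterion 3
```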
### Estimation of background
The background comes from various sources. (i) Many beam related neutrons (BRN) are produced when the protons impinge on the Tungsten target. Even though a shield composed of \(7m\) of steel and \(1m\) of concrete is installed between the target and the platform, some fast neutrons can still escape and reach the platform. These fast neutrons generate nuclear recoil signals by scattering off Cs and I nuclei, which are indistinguishable from real CE\(\nu\)NS signals and cannot be reduced by the proton beam trigger. (ii) PMT dark count events can happen to fall within the signal region of the recorded waveform. Since most CE\(\nu\)NS signals and most PMT dark count signals both generate only a few detectable PE, the PMT dark count background can be important in the low NPE region. (iii) Materials and devices used to construct the detector and shield unavoidably contain some long-lived radioactive isotopes. Their decay also introduces \(\gamma\) and \(\beta\) background in the detector. (iv) The environmental \(\gamma\) background from the decay of long-lived radioactive isotopes in rock and building materials exists everywhere, including the platform. A simulation software framework has been developed based on Geant4 to evaluate the influence of these backgrounds on this experiment. The following parts show the detailed studies of the different backgrounds. All results are obtained assuming a detector containing four \(3kg\) undoped CsI sub-detectors and a data taking time of half a year.
Figure 7: Expected NPE spectra of the CE\(\nu\)NS signal. The contributions from different neutrino flavors are also shown.
#### 3.2.1 Beam related neutron background
A \({}^{3}\)He multi-sphere neutron spectrometer has been used to measure the neutron spectrum on the platform and outside the facility. Fig. 8 shows the neutron spectrum on the platform, unfolded by a method similar to the one described in [16]. The integrated neutron flux of Fig. 8 is \(4.8\times 10^{-2}n/cm^{2}/s\), about one order of magnitude higher than that measured outside the facility, which is \(5.3\times 10^{-3}n/cm^{2}/s\). Taking this spectrum as input and with the whole shielding structure considered, a simulation has been performed to assess how much neutron background would be generated. After all the event selection criteria are applied, the surviving neutron background spectrum is shown by the orange line in Fig. 12. It is worth mentioning that event selection criterion 4 rejects more than 70% of the neutron background according to the simulation.
#### 3.2.2 PMT dark count background
A PMT dark count spectrum at 77\(K\) has been taken by setting the data taking ADC to a self-trigger mode and lowering the threshold enough to trigger on single photoelectron (SPE) signals. The dark count rate is measured to be \(111Hz\) on average and stays stable over a 24\(h\) data taking period. A toy Monte-Carlo analysis has been carried out to investigate the effect of criterion 3 on this background. The four pairs of PMTs of the four sub-detectors are considered. Fig. 9 shows the background level with and without applying event selection criterion 3. The PMT dark count background can be suppressed by three orders of magnitude.
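The size of this suppression can be cross-checked analytically: for two uncorrelated pulse trains of rate \(R\), the accidental two-PMT coincidence rate within a window of \(\pm w\) is approximately \(2R^{2}w\). The toy Monte-Carlo sketch below reproduces this estimate; the coincidence window used here is an assumed placeholder, not the value used in the experiment.

```python
import numpy as np
rng = np.random.default_rng(0)

RATE, WINDOW, T_OBS = 111.0, 200e-9, 3600.0   # Hz, s (window assumed), s live time

def coincidence_rate(rate, window, t_obs):
    t1 = np.sort(rng.uniform(0, t_obs, rng.poisson(rate * t_obs)))
    t2 = np.sort(rng.uniform(0, t_obs, rng.poisson(rate * t_obs)))
    i = np.searchsorted(t2, t1)                     # nearest t2 around each t1 pulse
    lo = np.abs(t1 - t2[np.clip(i - 1, 0, len(t2) - 1)]) < window
    hi = np.abs(t2[np.clip(i, 0, len(t2) - 1)] - t1) < window
    return np.count_nonzero(lo | hi) / t_obs

print(coincidence_rate(RATE, WINDOW, T_OBS))    # toy MC, ~ 5e-3 Hz
print(2 * RATE**2 * WINDOW)                     # analytic estimate 2 R^2 w
```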
Figure 8: Neutron spectrum (**black**) measured by a \({}^{3}\)He multi-sphere neutron spectrometer and the initial guess spectrum (**red**) fed into the unfolding program.
#### 3.2.3 Long-lived radioactive isotopes background
The concentrations of long-lived radioactive isotopes in the various materials and devices are listed in Table 1. The influence of the decay radiation of these isotopes is simulated. The decay chains of the isotopes are considered and decay equilibrium is assumed to be reached. Fig. 10 shows the spectra of energy deposited in CsI from the different materials and devices. It can be clearly seen that the radioactivity of the CsI crystal itself dominates the radioactive background.
#### 3.2.4 Environmental \(\gamma\) background
The contribution of the environmental \(\gamma\) background is estimated using the spectrum shown in Fig. 11 as input. The spectrum was measured by the CDEX collaboration in CJPL (China Jinping Underground Laboratory) [17]. Since stronger radioactivity from \({}^{222}Rn\) and other radioactive isotopes in the rock is expected in an underground environment like CJPL, this estimation should be conservative. According to our simulation, owing to the strong shielding effect of lead against \(\gamma\) rays, the number of \(\gamma\) able to penetrate the shield and deposit energy in the CsI is so low that the contribution of environmental \(\gamma\) is invisible in Fig. 12.
#### 3.2.5 Summary of background
After all event selection criteria are employed, Fig. 12 summarizes the contributions of the background surviving all cuts from the different sources. In the region of NPE smaller than 5, the background from PMT dark counts dominates, while the BRN background prevails in the higher NPE region. The radioactive background contributes a rather low, flat component and the environmental background is too weak to be visible.
Figure 9: PMT dark count spectra with (**blue**) and without (**orange**) applying event selection criterion 3. The four pairs of PMTs of the four sub-detectors are considered. Event selection criterion 3 suppresses this background by three orders of magnitude.
There are also other possible background sources such as neutrino induced neutrons (NIN) and cosmic ray induced short-lived radioactive isotopes (CRSRI). They were found to be negligible by the COHERENT collaboration [8]. The event rates of both depend on the shielding structure. The NIN event rate also depends on the neutrino flux, and the CRSRI event rate is proportional to the cosmic ray rate. Since our experiment shares a similar neutrino flux, shielding structure and overburden against cosmic rays with the COHERENT experiment, with differences within an order of magnitude, these backgrounds are neglected in this estimation.
Figure 11: The environmental \(\gamma\) background, redrawn from the measurement by the CDEX collaboration in CJPL [17].
Figure 10: The spectra of energy deposited in CsI from different materials and devices. Four 3\(kg\) undoped CsI sub-detectors are assumed. The background from CsI (**blue**) is the dominant component, followed by the backgrounds from the stainless steel of the Dewar (**purple**), the PMTs (**red**) and the Copper (**orange**). The contribution from liquid nitrogen (**green**) is very small.
## 4 Expected Sensitivity
Using the estimated signal and background spectra above, the expected sensitivity of this experiment can be evaluated. Fig. 13 shows the expected spectra of CE\(\nu\)NS events, background events and their sum with all event selection criteria applied, considering
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline & **PTFE** & **Fe** & **HDPE** & **PMT** & **Lead** & **LN2\({}^{1}\)** & **CsI** & **OFHC** \\ \hline
**K40** & 0.343 & 60 & 43.3 & 37.1 & - & - & - & - \\
**Ra226** & 0.12 & - & - & - & 3 & - & - & - \\
**Ra228** & 0.11 & - & - & - & - & - & - & - \\
**Th228** & 0.065 & - & - & - & 1 & - & - & - \\
**U238** & 1.96 & - & 19.8 & 5.2 & 0.12 & - & - & 0.077 \\
**Ac228** & - & 70 & - & - & - & - & - & - \\
**Bi214** & - & 25 & - & - & - & - & - & - \\
**Pb212** & - & 70 & - & - & - & - & - & - \\
**Pb214** & - & 25 & - & - & - & - & - & - \\
**Th234** & - & 200 & - & - & - & - & - & - \\
**Tl208** & - & 70 & - & - & - & - & - & - \\
**Th232** & - & - & 12.2 & 13.4 & - & - & - & 0.005 \\
**Pb210** & - & - & - & - & 240000 & - & - & - \\
**Ar39** & - & - & - & - & - & 0.01 & - & - \\
**Co60** & - & 17 & - & - & - & - & - & - \\
**Cs137** & 0.17 & 6 & - & - & - & - & 150 & - \\
**Cs134** & - & - & - & - & - & - & 50 & - \\ \hline
**Unit** & mBq/kg & mBq/kg & mBq/kg & mBq/PMT & mBq/kg & mBq/kg & mBq/kg & mBq/kg \\ \hline
**Reference** & Xenon1T [18] & ILIAS [19] & Xenon1T [18] & Taishan [18] & EDELWEISS [21] & Taishan [22] & XENON1T [18] \\ \hline \end{tabular}
\({}^{1}\) The background of the liquid nitrogen is estimated by assuming that the nitrogen reaches a purity of 99.99% and that the remaining impurities are all argon. The radioactive background level of atmospheric argon is taken from [20]
\end{table}
Table 1: The concentration of radioactive isotopes in different materials and devices.
Figure 12: The summary of contributions of background from difference sources after all cuts applied. PMT dark count background (**green**) dominate in low NPE region while the BRN background (**orange**) prevails in high NPE region. The radioactive background (**red**) contribute a low-flat component and the environmental background (**purple**) is too weak to be seen. Four \(3kg\) undoped CsI sub-detectors are considered.
a detector containing four \(3kg\) undoped CsI sub-detectors and a data taking time of half a year. A signal region between [4, 72] NPE is selected to maximize the signal to background ratio. The 4 NPE lower limit is chosen to exclude most of the PMT dark count background and corresponds to an equivalent detection threshold of \(1.5keV_{nr}\) for nuclear recoils. Within this signal region, the total signal event rate is \(0.074/day/kg\) and the total background event rate is \(0.386/day/kg\), equivalently \(160/half\,year/12kg\) and \(833/half\,year/12kg\) respectively. Their composition is listed in Table 2. The expected confidence level (C.L.) of this experiment is calculated by the following formula:
\[C.L.=\frac{N_{sig}}{\sqrt{N_{sig}+N_{bkg}}}, \tag{1}\]
\(N_{sig}\) is the expected number of CE\(\nu\)NS signal events and \(N_{bkg}\) is that of background events. With the experimental setup assumed above, the C.L. is expected to reach \(5.1\sigma\) in half a year. Fig. 14 shows the expected C.L. for different detector masses and data taking times. Note that the contribution to the C.L. from the arrival time profile of events relative to the proton beam trigger is not yet considered in this estimation. Since the arrival time profile of CE\(\nu\)NS signals is highly correlated with that of the proton beam, it differs significantly from those of the PMT dark count and radioactive backgrounds, which are evenly distributed in time. Thus, if the contribution from the arrival time distribution were taken into account as well, the confidence level would certainly improve.
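Eq. (1) with the rates quoted above indeed reproduces the stated significance, and since both counts scale linearly with the exposure (mass \(\times\) time), the C.L. grows as the square root of the exposure, which is the behavior displayed in Fig. 14:

```python
import math

n_sig, n_bkg = 160.0, 833.0                  # counts in [4, 72] NPE, half a year, 12 kg
print(n_sig / math.sqrt(n_sig + n_bkg))      # -> 5.08, i.e. the quoted 5.1 sigma

# counts scale linearly with exposure, so the C.L. grows as sqrt(exposure)
for k in (0.5, 1.0, 2.0, 4.0):
    print(k, round(n_sig * k / math.sqrt((n_sig + n_bkg) * k), 2))
```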
Figure 13: The expected spectra of CE\(\nu\)NS events (**dashed red**), background events (**shadowed grey**) and their sum (**solid blue**) with all event selection criteria applied. Four \(3kg\) undoped CsI sub-detectors are considered. The first few bins of the background and of the sum reach beyond the y-axis range because the PMT dark count background is very high in this region.
## 5 Experiment schedule
Referring to Table 2, it is clear that the BRN is the critical background in this experiment. An excellent knowledge of the flux, spectrum and field distribution of the beam related neutrons on the platform is therefore necessary. Several liquid scintillator detectors capable of \(n/\gamma\) discrimination have been placed on the platform to acquire a more precise measurement of the neutron background.
Meanwhile, the test of a detector prototype able to contain two \(3kg\) CsI crystals is also ongoing. The cryogenic system works stably and the feasibility of this detector design has been verified.
In 2023, we plan to finish the commissioning of the detector and the construction of the shield. If everything goes well, hopefully the data taking could start in 2024.
## 6 Summary
The measurement of the CE\(\nu\)NS signal is of great significance for various aspects of physics. By placing a \(12kg\) cryogenic undoped CsI detector inside a strong shield on a platform \(8.2m\) away from the Tungsten target at CSNS, and employing several event selection criteria to enhance the signal to background ratio, the detector threshold is expected to be lowered to \(1.5keV_{nr}\) nuclear recoil energy. The detectable CE\(\nu\)NS event rate is expected to be \(160/half\,year\)
\begin{table}
\begin{tabular}{|c|c|} \hline
**Background Type** & **Event rate in signal region after cuts (/ half year / 12kg)** \\ \hline
**Beam related neutron** & 666 \\ \hline
**PMT dark count** & 160 \\ \hline
**Radioactive isotopes** & 7 \\ \hline
**Environmental \(\gamma\)** & negligible \\ \hline
**Neutrino induced neutron** & negligible \\ \hline
**Cosmic ray induced radioactive isotopes** & negligible \\ \hline \end{tabular}
\end{table}
Table 2: The event rates in the signal region after cuts for the different background sources.
Figure 14: The expected confidence level varying with different detector mass and data taking time. If a \(12kg\) CsI detector is employed and taking data for half an year (180 days), the C.L. can reach \(5.1\sigma\) (the pentagram).
and the total background rate could be suppressed to \(833/half\,year\). Within a half-year data taking period, a \(5\sigma\) detection of the CE\(\nu\)NS signal is anticipated. If the dedicated measurement of the neutron background and the test of the prototype progress smoothly, data taking is anticipated to start in 2 years.
This work was supported by the National Natural Science Foundation of China (Grant No. 12221005 and 12175241) and the Fundamental Research Funds for the Central Universities.
The authors declare no conflict of interest.
|
2308.09709
|
Neural-network quantum state study of the long-range antiferromagnetic
Ising chain
|
We investigate quantum phase transitions in the transverse field Ising chain
with algebraically decaying long-range (LR) antiferromagnetic interactions
using the variational Monte Carlo method with the restricted Boltzmann machine
employed as a trial wave function ansatz. First, we measure the critical
exponents and the central charge through the finite-size scaling analysis,
verifying the contrasting observations in the previous tensor network studies.
The correlation function exponent and the central charge deviate from the
short-range (SR) Ising values at a small decay exponent $\alpha_\mathrm{LR}$,
while the other critical exponents examined are very close to the SR Ising
exponents regardless of $\alpha_\mathrm{LR}$ examined. However, in the further
test of the critical Binder ratio, we find that the universal ratio of the SR
limit does not hold for $\alpha_\mathrm{LR} < 2$, implying a deviation in the
criticality. On the other hand, we find evidence of the conformal invariance
breakdown in the conformal field theory (CFT) test of the correlation function.
The deviation from the CFT description becomes more pronounced as
$\alpha_\mathrm{LR}$ decreases, although a precise breakdown threshold is yet
to be determined.
|
Jicheol Kim, Dongkyu Kim, Dong-Hee Kim
|
2023-08-18T17:58:36Z
|
http://arxiv.org/abs/2308.09709v3
|
# Neural-network quantum state study of the long-range antiferromagnetic Ising chain
###### Abstract
We investigate quantum phase transitions in the transverse field Ising chain with algebraically decaying long-range antiferromagnetic interactions by using the variational Monte Carlo method with the restricted Boltzmann machine being employed as a trial wave function ansatz. In the finite-size scaling analysis with the order parameter and the second Renyi entropy, we find that the central charge deviates from 1/2 at a small decay exponent \(\alpha_{\rm LR}\) in contrast to the critical exponents staying very close to the short-range (SR) Ising values regardless of \(\alpha_{\rm LR}\) examined, supporting the previously proposed scenario of conformal invariance breakdown. To identify the threshold of the Ising universality and the conformal symmetry, we perform two additional tests for the universal Binder ratio and the conformal field theory (CFT) description of the correlation function. It turns out that both indicate a noticeable deviation from the SR Ising class at \(\alpha_{\rm LR}<2\). However, a closer look at the scaled correlation function for \(\alpha_{\rm LR}\geq 2\) shows a gradual change from the asymptotic line of the CFT verified at \(\alpha_{\rm LR}=3\), providing a rough estimate of the threshold being in the range of \(2\lesssim\alpha_{\rm LR}<3\).
## I Introduction
Artificial neural networks and machine learning have been influencing the paradigm of physics research with a growing number of applications to various subjects, including phase transitions and critical phenomena in classical and quantum many-body systems [1; 2; 3; 4]. In particular, the representation of a quantum wave function by a neural network [5] provides an alternative numerical platform combined with the variational Monte Carlo (VMC) method to find the ground state of a many-body Hamiltonian. The neural-network quantum state (NQS) has extended its area of applications to the Fermi and Bose Hubbard models [6; 7], real-time dynamics [5; 8], open quantum systems [9; 10; 11; 12], quantum state tomography [13; 14], frustrated systems [15; 16; 17; 18; 19; 20; 21], and _ab initio_ simulations of molecules [22; 23; 24]. The NQS ansatz offers a high expressive capacity, often assessed in terms of entanglement scaling [25; 26; 27; 28; 29], providing a complementary tool to conventional numerical methods for studying quantum criticality.
In this paper, we investigate quantum phase transitions in the transverse field Ising chain (TFIC) with algebraically decaying long-range (LR) antiferromagnetic (AF) interactions by employing the NQS ansatz for the VMC calculations. LR-interacting quantum systems have attracted growing attention, both theoretical and experimental [30]. The trapped-ion quantum simulation [31] realized the TFIC Hamiltonian with an LR interaction that maps to the form of \(1/r^{\alpha_{\rm LR}}\) with a tunable exponent \(\alpha_{\rm LR}\), providing a controllable experimental platform to study quantum phase transitions at and out of equilibrium [32; 33; 34]. The nearest-neighbor-interacting short-range (SR) TFIC is a textbook example of quantum critical behavior in one dimension (1D) that belongs to the universality class of the classical two-dimensional (2D) Ising model [35]. However, such quantum-classical correspondence to the universality of critical phenomena becomes nontrivial in the presence of LR interactions. The central question of how criticality depends on \(\alpha_{\rm LR}\) is still an active subject of various numerical and analytical studies [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62].
We revisit this question in the AF side of the LR interactions for TFIC where the breakdown of the Ising class in the critical ground state seems to be very different from what is established in the ferromagnetic (FM) counterpart [47; 48; 49; 50; 51; 52; 56; 57; 58]. Because an exact solution is not available, constructing the picture of how its criticality deviates from the Ising class as \(\alpha_{\rm LR}\) decreases relies primarily on the collection of numerical observations. Despite various numerical studies characterizing the quantum phase transition in AF-LR-TFIC at equilibrium [53; 54; 55; 56; 57; 59] and out of equilibrium [56; 60], the picture remains incomplete in some parts, which requires more numerical evidence for clarification. Using the restricted Boltzmann machine (RBM) for the NQS ansatz [5], we consider the moments of staggered magnetization including the order parameter and the Binder ratio, the two-point correlation function, and the entanglement entropy to examine the present picture and find clearer signatures of the breakdown of the SR Ising class and the conformal invariance along the critical line.
We begin with brief reviews of previous results on the characterization of the criticality. The first study of AF-LR-TFIC [53] using the time-dependent variational principle (TDVP) found a phase transition for all \(\alpha_{\rm LR}>0\), and showed that the critical exponent of the correlation function decreases from the SR Ising value for \(\alpha_{\rm LR}\lesssim 2\). A significant increase of the central charge from the Ising value \(1/2\) was observed at \(\alpha_{\rm LR}\lesssim 1\) in the TDVP [53] and density matrix renormalization group (DMRG) [54] calculations, based on which the breakdown of conformal invariance was proposed [54]. While here we focus on the critical ground state, a violation of the area law for the entanglement entropy was observed in the off-critical region [53; 54; 59], and it was shown that the area law of the noncritical ground state holds for \(\alpha_{\rm LR}>2\)[63].
On the other hand, contrasting evidence was found in other DMRG calculations [55; 56], where the estimates of the critical exponents \(\nu\simeq 1\) and \(\beta\simeq 1/8\) and the dynamic exponent \(z\simeq 1\) were in agreement with the SR Ising values for all examined \(\alpha_{\rm LR}\) between 0.4 and 3. These DMRG estimates of the critical exponents have, however, not been fully verified by other approaches. Linked cluster expansion calculations [57] reported \(z\nu=1.7(5)\) for \(\alpha_{\rm LR}=2\) while
\(z\nu\approx 1\) for \(\alpha_{\rm LR}=9/4\). Previous quantum Monte Carlo (QMC) calculations with stochastic series expansion [58] provided lower values of \(\nu\) and \(\beta\) in the examined range of \(\alpha_{\rm LR}\geq 2\). However, these TDVP and DMRG results taken together suggest an interesting possibility that some of the exponents can still be very close to the SR Ising values even for a small \(\alpha_{\rm LR}\) where the central charge indicates a deviation.
The same scenario was proposed in the study of the Kitaev chain with LR pairing [61; 62], which becomes equivalent to the Ising chain only in the SR limit. Along the critical line of a positive chemical potential, the conformal invariance is broken for \(\alpha_{\rm LR}<2\) while the Ising exponent \(\beta\) is unchanged in the test of a quantity that corresponds to the Ising order parameter in the SR limit. Although there is no rigorous mapping between the Kitaev chain and the Ising model at a finite \(\alpha_{\rm LR}\), the empirical similarity between the scenario of conformal symmetry breakdown in the Kitaev chain and the previous observations in AF-LR-TFIC motivates us to revisit the phase transition in this LR Ising system to examine the breakdown of the Ising universality and the conformal invariance in different numerical approaches.
Our VMC+RBM calculations investigate this scenario. In the finite-size scaling (FSS) analysis of the order parameter extracted from the ground-state RBM wave function for \(0.5\leq\alpha_{\rm LR}\leq 3\), we find that our estimates of the critical exponents are indeed very close to the SR Ising values for all the values of \(\alpha_{\rm LR}\) examined, in agreement with the previous DMRG results [55; 56]. On the other hand, we find that the exponent of the correlation function and the estimate of the central charge exhibit deviations from the SR Ising values for a small \(\alpha_{\rm LR}\). Although an increasing numerical difference in the central charge is observed for a small \(\alpha_{\rm LR}\) between our estimate extracted from the second Renyi entropy under periodic boundary conditions (PBC) and the previous measurement with the von Neumann entropy in an open chain [53; 54], both results agree on the deviation found at \(\alpha_{\rm LR}<1\), supporting the scenario of conformal invariance breakdown occurring at a sufficiently small \(\alpha_{\rm LR}\).
To identify the threshold for the breakdown of the SR Ising class beyond the implications of the critical exponents and the central charge, we additionally examine the critical Binder ratio [40] and the CFT description of the universal form of the correlation function [64]. Both tests show a clear signal of such breakdown at \(\alpha_{\rm LR}<2\), which strengthens the similar evidence found in the measurement of the critical exponent of the correlation function. However, above \(\alpha_{\rm LR}=2\), the detailed view of the scaled correlation function [64] still indicates a gradual deviation from the asymptotic line predicted by the CFT, raising a possibility that a precise value of the threshold can be above \(\alpha_{\rm LR}=2\) while it is less than \(\alpha_{\rm LR}=3\) where the description of the CFT is well verified.
This paper is organized as follows. The AF-LR-TFIC model Hamiltonian and the numerical details of the VMC+RBM calculations are described in Sec. II. The main results are given in Sec. III. In the subsections, the FSS analysis for the estimate of the critical exponents and the extraction of the central charge from the second Renyi entropy are presented, and then the test of the critical Binder ratio and the comparison with the CFT prediction of the correlation function are given to identify the threshold. The conclusions are given in Sec. IV.
## II Model and VMC+RBM calculations
We consider the AF-LR-TFIC Hamiltonian [53] given as
\[\hat{H}=\sin\theta\sum_{i<j}J_{ij}\hat{\sigma}_{i}^{x}\hat{\sigma}_{j}^{x}+ \cos\theta\sum_{i}\hat{\sigma}_{i}^{z}, \tag{1}\]
where \(\theta\) is in the range of \(0<\theta<\pi/2\) for the AF coupling, and the site indices \(i\) and \(j\) run from \(1\) to \(L\) in the chain of length \(L\). We impose PBC as the boundary conditions that are necessary for the test of the CFT description of the correlation function constructed in a cylindrical space-time geometry. In the implementation of the algebraically decaying LR interaction under PBC, we choose to write \(J_{ij}\) with a range cutoff that increases with the system size \(L\) by adopting the formulation used in the LR-Kitaev chain [61; 62] as
\[J_{ij}=\begin{cases}|i-j|^{-\alpha_{\rm LR}}&\text{for }|i-j|<L/2,\\ (L-|i-j|)^{-\alpha_{\rm LR}}&\text{for }|i-j|\geq L/2.\end{cases} \tag{2}\]
We choose RBM as an ansatz of a trial wave function for VMC simulations to find an approximate ground state [5]. A trial state can be written as \(|\Psi\rangle=\sum_{\bf s}\Psi({\bf s};\mathcal{W})|{\bf s}\rangle\) with the visible variables \({\bf s}=(s_{1},s_{2},\ldots,s_{L})\) of RBM, where \(s_{i}\) indicates \(\sigma_{i}^{x}\) for the \(\hat{\sigma}^{x}\)-basis representation of the given Hamiltonian. We impose the translation symmetry under PBC to reduce the number of variational parameters. Following the procedures of Ref. [5], after integrating out the hidden layer, one can express the RBM wave function as
\[\Psi({\bf s};\mathcal{W})=e^{a\sum_{j=1}^{L}s_{j}}\prod_{m=1}^{L}\prod_{i =1}^{n_{h}}\cosh\left[b_{i}+\sum_{j=1}^{L}W_{ij}T_{m}(s_{j})\right], \tag{3}\]
where the translation operator \(T\) is defined as \(T_{m}(s_{j})=s_{j+m}\) with periodicity \(s_{j+L}=s_{j}\), and \(n_{h}\) is the number of filters given for the symmetry. On a diagram of RBM, one may illustrate the hidden layer with \(N_{h}=Ln_{h}\) neurons with \(L\)-fold degeneracy of the neural variables enforcing the translation invariance. In Eq. (3), there are \((1+n_{h}+Ln_{h})\) RBM parameters of \(\mathcal{W}\equiv\{a,{\bf b},{\bf W}\}\) to be optimized using the VMC method. We adopt complex-valued parameters as suggested in Ref. [5] for better convergence, although the TFIC Hamiltonian is stoquastic [65]. We initialize the RBM by setting \(a=0\) and assigning Gaussian random numbers with zero mean and variance of \(1/(Ln_{h})\) to \({\bf b}\) and \({\bf W}\).
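As a concrete illustration of Eq. (3), the log-amplitude of this symmetrized RBM can be evaluated as in the following minimal sketch (the variable names are ours; complex-valued parameters are used as described above):

```python
import numpy as np

def log_psi(s, a, b, W):
    """log Psi(s; W) for the translation-invariant RBM of Eq. (3).
    s: config in {-1,+1}^L; a: complex scalar; b: (n_h,); W: (n_h, L)."""
    L = s.size
    shifts = np.stack([np.roll(s, m) for m in range(L)])   # all L translations T_m(s)
    theta = b[None, :] + shifts @ W.T                      # (L, n_h) filter activations
    return a * s.sum() + np.log(np.cosh(theta)).sum()
```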
In VMC calculations, we optimize the RBM parameters using the stochastic reconfiguration (SR) method to construct the natural gradient [66; 67; 68]. The SR method can be described as the imaginary-time evolution of a trial state, providing a new state projected onto the space of \(\{|\Psi\rangle,\partial_{1}|\Psi\rangle,\partial_{2}|\Psi\rangle,\ldots\}\), where \(\partial_{i}|\Psi\rangle\equiv\frac{\partial|\Psi\rangle}{\partial\mathcal{W}_ {i}}\). These procedures impose an update of the variational parameters as \(\mathcal{W}_{i}^{\rm new}=\mathcal{W}_{i}^{\rm old}+\gamma_{\rm SR}\delta \mathcal{W}_{i}\), where \(\delta\mathcal{W}_{i}\) is determined by solving the linear equation \({\bf S}\delta\mathcal{W}=-{\bf f}\).
The essential numerical procedures are to evaluate the overlap matrix \(\mathbf{S}\) and the force vector \(\mathbf{f}\),
\[S_{ij}=\left\langle\Delta_{i}^{*}\Delta_{j}\right\rangle_{\mathrm{mc}}-\left\langle\Delta_{i}^{*}\right\rangle_{\mathrm{mc}}\left\langle\Delta_{j}\right\rangle_{\mathrm{mc}}, \tag{4}\]
\[f_{i}=\left\langle\Delta_{i}^{*}E_{\mathrm{loc}}\right\rangle_{\mathrm{mc}}-\left\langle\Delta_{i}^{*}\right\rangle_{\mathrm{mc}}\left\langle E_{\mathrm{loc}}\right\rangle_{\mathrm{mc}}, \tag{5}\]
where the derivative \(\Delta_{i}\) and the local energy \(E_{\mathrm{loc}}\) are
\[\Delta_{i}\equiv\frac{\partial_{i}\Psi(\mathbf{s};\mathcal{W})}{ \Psi(\mathbf{s};\mathcal{W})}\quad\text{and}\quad E_{\mathrm{loc}}\equiv\sum_ {\mathbf{s}^{\prime}}\langle\mathbf{s}|\hat{H}|\mathbf{s}^{\prime}\rangle \frac{\Psi(\mathbf{s}^{\prime};\mathcal{W})}{\Psi(\mathbf{s};\mathcal{W})}. \tag{6}\]
The expression \(\langle A\rangle_{\mathrm{mc}}\equiv\sum_{\mathbf{s}}P(\mathbf{s})A(\mathbf{ s})\) denotes the Monte Carlo (MC) measurement of \(A(\mathbf{s})\) with probability \(P(\mathbf{s})\propto|\Psi(\mathbf{s};\mathcal{W})|^{2}\). We use the conjugate gradient algorithm with the Jacobi preconditioner to solve the linear equation without explicitly storing the \(S\) matrix, following the strategy to reduce computational costs proposed in Ref. [68]. For numerical stability, we use the regularization scheme introduced in Ref. [5], where at the \(p\)-th SR iteration, \(S_{ij}\) is replaced by \(S_{ij}(1+\lambda_{p}\delta_{ij})\) with \(\lambda_{p}=\max(\lambda_{0}b^{p},\lambda_{\mathrm{min}})\). We use the parameters \(\lambda_{0}=100\), \(b=0.9\), and \(\lambda_{\mathrm{min}}=0.01\). The learning rate \(\gamma_{\mathrm{SR}}\) is initially set to \(0.1\) and increased by \(0.1\) for every \(10000\) SR iterations until it becomes unity.
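A compact sketch of one SR update assembled from MC samples is given below, solving \({\bf S}\delta\mathcal{W}=-{\bf f}\) matrix-free with conjugate gradients as described above; the regularization enters through the diagonal term, and the Jacobi preconditioner is omitted for brevity:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sr_step(Delta, E_loc, lam, gamma):
    """One stochastic-reconfiguration update.
    Delta: (n_mc, n_params) log-derivatives; E_loc: (n_mc,) local energies."""
    D = Delta - Delta.mean(axis=0)                 # centered, so <.><.> terms drop out
    E = E_loc - E_loc.mean()
    n = len(E)
    f = D.conj().T @ E / n                         # force vector of Eq. (5)
    diag = (np.abs(D)**2).mean(axis=0)             # S_ii, used for the regularization

    def matvec(x):                                 # S x without storing S, cf. Ref. [68]
        return D.conj().T @ (D @ x) / n + lam * diag * x

    S = LinearOperator((D.shape[1],) * 2, matvec=matvec, dtype=complex)
    dW, _ = cg(S, -f)
    return gamma * dW                              # parameter increment gamma_SR * dW
```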
We monitor the convergence of \(|\Psi\rangle\) to the ground state by evaluating \(\langle\hat{H}\rangle\) and the relative variance defined as
\[\tilde{\sigma}_{E}\equiv\frac{\langle\hat{H}^{2}\rangle-\langle\hat{H}\rangle ^{2}}{\langle\hat{H}\rangle^{2}}. \tag{7}\]
The relative variance \(\tilde{\sigma}_{E}\) should be precisely zero when \(|\Psi\rangle\) becomes an exact eigenstate. However, in practice, it does not decrease below a certain value in VMC simulations. Probable systematic causes may include the limited expressive power of a finite-size neural network with a finite \(n_{h}\), despite the universal approximation theorem, and the stochastic fluctuations in MC measurements that can affect the linear solver.
Figure 1 presents an example of the convergence test performed at the critical point in the system of size \(L=64\) for the LR exponent \(\alpha_{\mathrm{LR}}=0.5\). Convergence tends to slow down as \(\alpha_{\mathrm{LR}}\) decreases in this LR-AF system. At the critical point, it typically takes about an order of \(10^{5}\) SR iterations until the energy and variance become saturated within the scale of their fluctuations over the iterations. We find that the accuracy level indicated by \(\tilde{\sigma}_{E}\) after saturation depends essentially on the number of filters \(n_{h}\). In our VMC calculations for the ground state, we set the convergence criterion as \(\tilde{\sigma}_{E}<10^{-6}\), which, for example, is achieved for \(n_{h}>8\) in Fig. 1. In our tests, \(n_{h}=16\) suffices for system sizes up to \(L=128\) and for the values of \(\alpha_{\mathrm{LR}}\) that we consider in this study.
## III Results and discussions
Using the RBM wave function \(\Psi(\mathbf{s})\) obtained in the VMC optimizations at a given \(\theta\), we measure the moments of staggered magnetization including the AF order parameter, the two-point correlation function, and the second Renyi entanglement entropy. For a given RBM sample, the MC averages are calculated with \(4\times 10^{8}\) configurations of \(\mathbf{s}\) sampled from the probability distribution \(P(\mathbf{s})\propto|\Psi(\mathbf{s})|^{2}\) using the Metropolis algorithm. We obtain ten RBM wave function samples from independent VMC calculations. We find that the standard error of the measurement based on one RBM sample is typically smaller than the sample-to-sample fluctuations, and thus we estimate the error bar by the standard deviation of the measurements over the RBM samples.
In this section, we first present the FSS analysis to estimate the critical exponents and the central charge for comparison with the previous TDVP and DMRG results. Then, we proceed to present our additional tests with the critical Binder ratio and the universal form of the correlation function to identify the threshold for the breakdown of the SR Ising universality and the conformal symmetry.
### Order parameter and critical exponents
The emergence of the AF order can be detected by measuring the staggered magnetization in the input layer of the RBM. In the AF phase, the operator \(\hat{M}_{s}=\sum_{i}(-1)^{i}\hat{\sigma}_{i}^{x}\) in each parity sector of the \(\mathbb{Z}_{2}\) symmetry indicates a finite positive or
Figure 1: Convergence test of the RBM wave function in the VMC search for the ground state. The case of a system of size \(L=64\) for \(\alpha_{\mathrm{LR}}=0.5\) is shown as an example. The estimates of (a) energy density \(E_{0}/L\) and (b) relative variance \((\langle\hat{H}^{2}\rangle-\langle\hat{H}\rangle^{2})/\langle\hat{H}\rangle^{2}\) measured after \(2\times 10^{5}\) SR updates are plotted as a function of \(n_{h}\). The insets show the same quantities for a fixed number of filters \(n_{h}=16\) monitored during the SR iterations of the parameter updates. The data points in the insets represent the averages measured in the logarithmic bins of iteration numbers. The error bars are measured with ten independent RBM wave function samples.
negative expectation value. Although our MC sampling does not fix the parity, an alternative quantity \(M_{s}(\mathbf{s})=|\sum_{i}(-1)^{i}s_{i}|\) can characterize the order-disorder phase transition at the level of the RBM wave function. We write the order parameter as
\[m_{s}=\frac{1}{L}\left\langle M_{s}\right\rangle_{\mathrm{mc}}. \tag{8}\]
Near a critical point \(\theta_{c}\), the order parameter measured in a finite system of size \(L\) is expected to behave asymptotically as \(m_{s}(\theta,L)\sim L^{-\beta/\nu}\mathcal{M}_{\mathrm{o}}^{(\pm)}(|\theta- \theta_{c}|L^{1/\nu})\) with the critical exponents \(\beta\) and \(\nu\), where \(\mathcal{M}_{\mathrm{o}}^{(\pm)}\) is a size-independent scaling function. The corresponding susceptibility can also be defined by the fluctuations of \(M_{s}\) as
\[\chi_{s}=\left\langle M_{s}^{2}\right\rangle_{\mathrm{mc}}-\left\langle M_{s} \right\rangle_{\mathrm{mc}}^{2}, \tag{9}\]
which is expected to follow the FSS ansatz of \(\chi_{s}(\theta,L)\sim L^{\gamma/\nu}\mathcal{Z}_{\mathrm{o}}((\theta-\theta_ {c})L^{1/\nu})\) associated with the exponent \(\gamma\).
First we determine the critical point \(\theta_{c}\) for a given \(\alpha_{\mathrm{LR}}\) by locating a crossing point of the Binder's fourth-order cumulant,
\[U_{4}=1-\frac{\left\langle M_{s}^{4}\right\rangle_{\mathrm{mc}}}{3\left\langle M _{s}^{2}\right\rangle_{\mathrm{mc}}^{2}}, \tag{10}\]
between the curves of different \(L\)'s. The FSS ansatz of the cumulant is given as \(U_{4}(\theta,L)\sim\mathcal{U}_{\mathrm{o}}((\theta-\theta_{c})L^{1/\nu})\). Although \(\mathcal{U}_{\mathrm{o}}\) becomes independent of \(L\) for a large \(L\), a finite-size correction can appear for small \(L\)'s. The finite-size correction of the leading order is usually assumed to be in the form of \(\theta_{L,2L}^{*}-\theta_{c}\propto L^{-\tilde{\omega}}\) for a crossing point \(\theta_{L,2L}^{*}\) identified between two adjacent curves of system sizes \(L\) and \(2L\). We determine \(\theta_{c}\) based on this correction-to-scaling ansatz with the extrapolation to infinite size.
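The two-step procedure can be summarized by the following sketch under the stated correction-to-scaling ansatz (the data arrays are hypothetical; the actual estimates and errors in this work come from the pyfssa package cited below):

```python
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq, curve_fit

def crossing(theta, u_small, u_large):
    """theta*_{L,2L}: where the U4 curves of sizes L and 2L intersect."""
    d = interp1d(theta, np.asarray(u_small) - np.asarray(u_large), kind="cubic")
    return brentq(d, theta[0], theta[-1])

def extrapolate_theta_c(Ls, theta_stars):
    """Fit theta*_{L,2L} = theta_c + a L^{-omega} and return theta_c."""
    model = lambda L, tc, a, om: tc + a * L**(-om)
    (tc, a, om), _ = curve_fit(model, np.asarray(Ls, float),
                               np.asarray(theta_stars), p0=(theta_stars[-1], 0.1, 1.0))
    return tc
```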
After locating the critical point \(\theta_{c}\), we estimate the critical exponents \(\nu\), \(\beta\), and \(\gamma\) by performing the standard FSS analysis with the FSS ansatz of \(m_{s}\), \(\chi_{s}\), and \(U_{4}\) in the critical region. Figure 2 presents an example of the FSS analysis for \(\alpha_{\mathrm{LR}}=0.5\), showing that the data points of different \(L\)'s fall well on a common scaling curve with our estimates of the critical exponents. The numerical estimates of the critical exponents and errors are measured using the pyfssa package [69; 70]. We tabulate our estimate of \(\theta_{c}\) and the critical exponents in Table 1. Within the error bars, our estimates of the critical exponents are very close to the SR Ising values for all the values of \(\alpha_{\mathrm{LR}}\) examined as shown in Fig. 2(a), which is consistent with the previous DMRG results [55; 56].
### Second Renyi entropy and central charge
The logarithmic system-size scaling of the entanglement entropy at a critical point in 1D is a useful universal property to measure the central charge of the CFT that characterizes the phase transition [71; 72; 73]. In the previous estimates of the central charge using the TDVP [53], DMRG [54], and generalized Hartree-Fock [59] methods, the von Neumann entanglement entropy was examined under open boundary conditions (OBC). Instead, we consider the second Renyi entropy for the measurement using the RBM wave function under PBC. For the bipartition of a system into subsystems \(A\) and \(B\), the Renyi
\begin{table}
\begin{tabular}{c c c c c c c} \(\alpha_{\mathrm{LR}}\) & \(\theta_{c}\) & \(\nu\) & \(\beta\) & \(\gamma\) & \(\eta\) & \(c_{\infty}\) \\ \hline
3.0 & 0.8714(7) & 1.00(4) & 0.128(5) & 1.77(5) & 0.2510(4) & 0.496(5) \\
2.5 & 0.9041(6) & 1.01(2) & 0.122(3) & 1.76(5) & 0.2491(2) & 0.500(4) \\
2.0 & 0.9489(7) & 1.00(2) & 0.121(7) & 1.76(7) & 0.2518(7) & 0.502(5) \\
1.5 & 1.012(1) & 1.00(1) & 0.126(9) & 1.77(4) & 0.2450(24) & 0.508(4) \\
1.0 & 1.103(1) & 1.01(3) & 0.126(6) & 1.78(7) & 0.2398(35) & 0.491(5) \\
0.5 & 1.251(1) & 1.01(3) & 0.127(5) & 1.76(4) & 0.2363(15) & 0.454(8) \\ \end{tabular}
\end{table}
Table 1: List of the critical points and exponents. Critical exponents \(\nu\), \(\beta\), and \(\gamma\) are determined in the FSS analysis of the collapse of the scaling curve. The exponent \(\eta\) is measured from the scaling of the spin-spin correlation function along a fixed \(r/L=1/4\) at the critical point \(\theta_{c}\). The central charge \(c_{\infty}\) is extracted from the logarithmic scaling of the second Rényi entropy.
Figure 2: FSS analysis of RBM observables. (a) The estimates of the critical exponents \(\nu\), \(\beta\), \(\gamma\) are plotted in the range of \(\alpha_{\mathrm{LR}}\) between 0.5 and 3. The dotted lines are given for comparison with the SR Ising values. The FSS collapse tests with the critical exponents are demonstrated at \(\alpha_{\mathrm{LR}}=0.5\) for the data of (b) the Binder’s cumulant \(U_{4}\), (c) the AF order parameter \(m_{s}\), and (d) the susceptibility \(\chi_{s}\). The inset of (b) shows the crossing point of \(U_{4}\) locating the critical point \(\theta_{c}\).
entanglement entropy of order \(n\) for \(\rho_{A}\) is written as
\[S_{n}(\rho_{A})=\frac{1}{1-n}\ln\mathrm{tr}\rho_{A}^{n}, \tag{11}\]
where \(\rho_{A}\equiv\mathrm{tr}_{B}\rho\) is the reduced density matrix of \(A\) for a pure state \(\rho\). The von Neumann entropy is recovered at the limit of \(n=1\). For the universality class fixed by the CFT, the von Neumann and Renyi entropies at the critical point indicate the same central charge \(c\) in the leading-order FSS behavior. For PBC, the asymptotic scaling behavior of \(S_{n}\)[73] for half-chain bipartition is written as
\[S_{n}=\frac{c}{6}\left(1+\frac{1}{n}\right)\ln L+c_{n}^{\prime}, \tag{12}\]
where \(c_{n}^{\prime}\) is a nonuniversal constant.
The second Renyi entropy \(S_{2}\) can be reliably measured in QMC calculations by using the replica trick [74], which has been successfully applied to the VMC calculations with the RBM wave function [13]. We consider only \(S_{2}\), but a method was proposed to compute \(S_{n}\) of the higher \(n\) and to approximate \(S_{1}\) in a different NQS representation [75]. Measuring \(S_{2}\) requires two copies of the RBM state, namely \(\mathbf{s}^{(1)}\) and \(\mathbf{s}^{(2)}\), sampled from the joint probability distribution \(P(\mathbf{s}^{(1)},\mathbf{s}^{(2)})\propto|\Psi(\mathbf{s}^{(1)})|^{2}| \Psi(\mathbf{s}^{(2)})|^{2}\). Each copy can be rewritten in a bipartite basis of \(\mathbf{s}\equiv(\mathbf{s}_{A},\mathbf{s}_{B})\), where \(\mathbf{s}_{A}\) and \(\mathbf{s}_{B}\) are associated with the subsystems \(A\) and \(B\). Then, one can obtain \(e^{-S_{2}}\) by measuring the swapping operator on \(A\) as
\[e^{-S_{2}}=\left\langle\frac{\Psi(\mathbf{s}_{A}^{(2)},\mathbf{s}_{B}^{(1)})} {\Psi(\mathbf{s}_{A}^{(1)},\mathbf{s}_{B}^{(1)})}\frac{\Psi(\mathbf{s}_{A}^{(1 )},\mathbf{s}_{B}^{(2)})}{\Psi(\mathbf{s}_{A}^{(2)},\mathbf{s}_{B}^{(2)})} \right\rangle_{\mathrm{mc}}. \tag{13}\]
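A direct transcription of Eq. (13) into code reads as follows (a sketch; `log_psi` stands for any callable returning \(\ln\Psi(\mathbf{s})\), e.g. the RBM amplitude of Eq. (3)):

```python
import numpy as np

def renyi2(samples1, samples2, log_psi, n_A):
    """Replica-trick estimate of S2 from Eq. (13).
    samples1/2: (n_mc, L) configs drawn independently from |Psi|^2; n_A: size of A."""
    acc = []
    for s1, s2 in zip(samples1, samples2):
        sw1 = np.concatenate([s2[:n_A], s1[n_A:]])     # replica 1 with A swapped in
        sw2 = np.concatenate([s1[:n_A], s2[n_A:]])     # replica 2 with A swapped in
        acc.append(np.exp(log_psi(sw1) + log_psi(sw2)
                          - log_psi(s1) - log_psi(s2)))
    return -np.log(np.mean(acc))                       # S2 = -ln <swap>
```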
We extract the central charge from the asymptotic behavior \(S_{2}(L)=\frac{c}{4}\ln L+c_{2}^{\prime}\), which follows from Eq. (12) at \(n=2\). To deal with finite-size corrections, we measure \(S_{2}\) for two system sizes \(L\) and \(L/2\) to define the effective central charge,
\[c_{\mathrm{eff}}(L)=\frac{4}{\ln 2}\big{[}S_{2}(L)-S_{2}(L/2)\big{]}, \tag{14}\]
which would explicitly reveal finite-size behavior. The central charge is then formally written as \(c=c_{\infty}\equiv\lim_{L\to\infty}c_{\mathrm{eff}}(L)\), which can be evaluated by extrapolating \(c_{\mathrm{eff}}(L)\) to infinite \(L\). Figure 3 describes such extrapolation procedures to evaluate \(c_{\infty}\) with finite-size data of \(S_{2}(L)\). We observe that \(c_{\mathrm{eff}}(L)\) exhibits the power-law convergence of \(|c_{\mathrm{eff}}(L)-c_{\infty}|\propto 1/L\). This behavior of \(c_{\mathrm{eff}}(L)\) is consistent with the previous discussion on the finite-size correction of \(L^{-1/\nu}\) in the FSS analysis of the entanglement entropy [76].
Our estimate of \(c_{\infty}\) shows good agreement with \(c=1/2\) of the SR Ising class for \(\alpha_{\mathrm{LR}}\gtrsim 2\). For \(\alpha_{\mathrm{LR}}=1.5\) and \(1\), the values of \(c_{\infty}\) are still close to \(1/2\) within the deviation of \(0.01\), but the finite-size corrections become systematic and stronger. For \(\alpha_{\mathrm{LR}}=0.5\), the deviation of \(c_{\infty}\) from \(1/2\) is much larger than the error bar, implying the breakdown of the conformal symmetry of the SR Ising class. Our results are obtained from the second Renyi entropy under PBC, providing an interesting comparison with the previous results based on the von Neumann entropy under OBC [59, 53, 54]. All studies agree on a significant deviation from \(1/2\) for \(\alpha_{\mathrm{LR}}<1\). However, we observe the tendency of \(c_{\infty}\) to decrease below \(1/2\), which is in contrast to the increase of \(c\) above \(1/2\) previously observed from the von Neumann entropy under OBC. Although we cannot rule out finite-size influences, the inconsistent trend of \(c\) found in the different measures of entanglement with different boundary conditions may also be related to the breakdown of the conformal symmetry.
### Critical Binder ratio
Our RBM estimates of the critical exponents and the central charge are consistent with the combined results of the previous TDVP and DMRG studies, supporting the scenario that the conformal symmetry is broken at a sufficiently small \(\alpha_{\mathrm{LR}}\) while some of the critical exponents are very close to the SR Ising values. However, there is still a great uncertainty in finding the threshold value of \(\alpha_{\mathrm{LR}}\) for the breakdown of the conformal invariance and the universality class of the SR Ising model. Despite the fact that the deviation in the central charge is visibly large only at \(\alpha_{\mathrm{LR}}=0.5\), the strong finite-size correction observed at a larger \(\alpha_{\mathrm{LR}}\) implies that the threshold can be much larger than \(\alpha_{\mathrm{LR}}=0.5\). Therefore, we need more reliable indicators that can go beyond the estimates of the critical exponents and central charge to test the breakdown of the SR Ising class and the associated conformal invariance.
For such an alternative indicator, we consider the Binder ratio, \(Q\equiv\langle M_{s}^{2}\rangle_{\mathrm{mc}}^{2}/\langle M_{s}^{4}\rangle_{ \mathrm{mc}}\), of the second and fourth moments of the staggered magnetization. The Binder ratio at a critical point takes a particular value that is itself universal, while this value depends on the boundary conditions and the aspect ratio of the system (see, for instance, Refs. [77, 78] and references therein). The
Figure 3: Estimate of the central charge. (a) The second Rényi entropy \(S_{2}\) of a half chain is plotted at the critical point \(\theta_{c}\) as a function of system size \(L\). (b) The effective central charge \(c_{\mathrm{eff}}(L)\) is extrapolated to determine (c) \(c_{\infty}\) for the estimate of the central charge.
critical Binder ratio has been used as a reliable ingredient to identify the universality class in the classical long-range Ising model [40], which inspires us to perform the same test of how the Binder ratio depends on \(\alpha_{\text{LR}}\) for the critical RBM wave function of the AF-LR-TFIC.
In the SR limit, at the exact critical point \(\theta_{c}=\pi/4\), we obtain the value of \(Q_{\text{SR}}^{*}=0.689(4)\) from the power-law extrapolation of \(Q(L)\) to infinite \(L\). This particular value of the ratio has not previously been known for the AF-TFIC, but it turns out that the corresponding value of the cumulant \(U_{4}^{*}=0.516(3)\) is very similar to the previous MC estimate of \(U_{4}^{*}=0.514(1)\) reported in the classical 2D Ising model subject to the mixed boundary conditions where the system is periodic in one direction and open in the other direction [79]. The implicit connection between the mixed boundary conditions and the cylindrical geometry of our periodic chain under the imaginary-time evolution at zero temperature may explain this universal value of the Binder ratio in the SR limit.
For a finite \(\alpha_{\text{LR}}\), we consider the indicator called the self-combined Binder ratio proposed in Ref. [40],
\[S_{\text{SR}}(L)=\frac{1}{Q_{\text{SR}}^{*}}Q(L)+\frac{1}{Q(L)}Q_{\text{SR}}^ {*}-2, \tag{15}\]
which removes the leading-order finite-size correction in \(Q(L)\) and thus exhibits better convergence with increasing \(L\) if an accurate value of \(Q_{\text{SR}}^{*}\) is provided. Figure 4 displays the value of \(S_{\text{SR}}^{*}\equiv\lim_{L\to\infty}S_{\text{SR}}(L)\) obtained from the power-law extrapolation to infinite \(L\). It turns out that while \(S_{\text{SR}}^{*}\) is almost zero for \(\alpha_{\text{LR}}=3\) and \(2.5\), a deviation of \(S_{\text{SR}}^{*}\) appears for \(\alpha_{\text{LR}}\lesssim 2\) and increases as \(\alpha_{\text{LR}}\) decreases. The estimate of \(Q^{*}=\lim_{L\to\infty}Q(L)\) shows a similar increase from the value of the SR limit as \(\alpha_{\text{LR}}\) decreases, although it still indicates a slight deviation even for \(\alpha_{\text{LR}}=2.5\) and \(3\) where \(S_{\text{SR}}^{*}\simeq 0\). This is consistent with the observation in Ref. [40], verifying that \(S_{\text{SR}}(L)\) converges better at a finite \(L\). Our data suggest that the threshold for the SR Ising universality is possibly around \(\alpha_{\text{LR}}=2\), above which \(S_{\text{SR}}^{*}\) is zero within the error bars.
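For reference, the self-combined ratio is straightforward to evaluate once \(Q(L)\) has been measured; a minimal sketch using the SR-limit value quoted above:

```python
Q_SR_STAR = 0.689   # critical Binder ratio of the SR limit, from the text

def self_combined(Q):
    """S_SR(L) of Eq. (15); zero when Q(L) equals the SR-limit value."""
    return Q / Q_SR_STAR + Q_SR_STAR / Q - 2.0
```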
### CFT prediction of the correlation function
The other test for the conformal invariance of the SR Ising class concerns the correlation function to compare its asymptotic scaling behavior with the CFT description given at the SR limit, following the strategy proposed in Ref. [64]. The PBC imposed on our RBM wave function is essential for this test. We consider the spin-spin correlation function,
\[C_{xx}(r)=\frac{1}{L-r}\sum_{i=1}^{L-r}\langle\sigma_{i}^{x}\sigma_{i+r}^ {x}\rangle=\frac{1}{L-r}\sum_{i=1}^{L-r}\langle s_{i}s_{i+r}\rangle_{ \text{mc}}\,, \tag{16}\]
where the distance \(r\) runs from \(1\) to \(L/2\) in the periodic chain, and the average over the sites is taken for better statistics in the MC measurements.
The CFT in a cylindrical space-time geometry predicts the asymptotic form of the two-point correlation function [80; 81]. In the SR limit, the 2D Ising universality class and its CFT predict that the correlation function in Eq. (16) behaves as
\[C_{xx}(r)\propto\left(\frac{1}{L\sin(\pi r/L)}\right)^{2\Delta_{\sigma}} \tag{17}\]
with the scaling dimension \(\Delta_{\sigma}=1/8\). A partial test of this prediction includes verification of the scaling dimension that is equivalent to the exponent \(\eta=2\Delta_{\sigma}\) characterizing the algebraic decay of \(C_{xx}(r)\sim r^{-\eta}\) at the critical point. A more comprehensive test is to directly compare the form of the measured correlation function with the description of the CFT. This strategy was originally proposed in the projector QMC
Figure 4: Test of the critical Binder ratio. The self-combined ratio \(S_{\text{SR}}^{*}\) and the Binder ratio \(Q^{*}\) at the critical point are plotted as a function of \(\alpha_{\text{LR}}\). The data points are extrapolated to infinite size. The horizontal solid lines indicate the SR limit.
Figure 5: Critical exponent of the spin-spin correlation function. The correlation function \(C_{xx}(r)\) at \(r=L/4\) is plotted as a function of the system size \(L\). The inset shows the exponent \(\eta\) extracted from the data fitting to \(C_{xx}(L/4)\propto L^{-\eta}\). The dotted lines indicating the SR Ising exponent \(\eta=1/4\) are given for comparison.
simulations [64], which we employ here for the RBM wave function obtained in the VMC simulations.
First, we measure the critical exponent \(\eta\) from the FSS behavior of \(C_{xx}(r)\) along a fixed \(r/L=1/4\). We obtain the estimate of \(\eta\) from the linear fit to the ansatz of \(C_{xx}(L/4)\propto L^{-\eta}\), which is displayed as a function of \(L\) on the logarithmic scale in Fig. 5. For \(\alpha_{\rm LR}\geq 2\), the estimate of \(\eta\) is consistent with the SR Ising value \(1/4\). However, as \(\alpha_{\rm LR}\) decreases below \(2\), it turns out that the estimate of \(\eta\) decreases below \(1/4\), implying that the SR Ising class does not hold below \(\alpha_{\rm LR}=2\). These observations are consistent with the previous TDVP result [53], where the threshold value of \(\alpha_{\rm LR}=2.25\) was suggested based on their estimate of the scaling dimension.
However, it is certainly preferable to find stronger supporting evidence because the numerical deviation of our estimate of \(\eta\) from \(1/4\) is rather small and we cannot rule out finite-size influences in the data fitting. In fact, as shown in Fig. 5, the data points match well with the lines of \(L^{-1/4}\) within the available system sizes. Additionally, assuming that the hyperscaling relation still holds, \(\eta<1/4\) implies \(\gamma>7/4\) and \(\beta<1/8\) if \(\nu=1\) is fixed. We cannot resolve such small changes of the exponents in our FSS analysis of data collapse. Within our limited numerical accuracy, it is thus difficult to precisely determine the breakdown of the universality class solely on the basis of the critical exponents.
For the direct test of the CFT-predicted form of the correlation function in Eq. (17), we perform the FSS analysis with data collapse of \(L^{2\Delta_{\sigma}}|C_{xx}(r)|\) as shown in Fig. 6. We fix the exponent \(2\Delta_{\sigma}\) at the SR Ising value \(1/4\) for the test of the SR Ising class. Despite the fact that the measured values of \(\eta\) are not used, we still observe a good collapse of the data points falling on a common scaling curve at all \(\alpha_{\rm LR}\) except with slight deviations found at \(\alpha_{\rm LR}=0.5\) where \(\eta\approx 0.236\) is maximally different from \(1/4\). In the graphical comparison between the collapsed data curve and the CFT prediction of \(L^{2\Delta_{\sigma}}|C_{xx}(r)|\propto[\sin(\pi r/L)]^{-2\Delta_{\sigma}}\), we observe that the data curve starts to deviate from the CFT prediction at \(\alpha_{\rm LR}\lesssim 2\), and the deviations increase with decreasing \(\alpha_{\rm LR}\), which is consistent with the evidence found in the critical Binder ratio.
To compare the measured correlation function with the CFT description in higher resolution, we examine the scaled correlation function \(C_{\rm sc}(r/L)\) [64], defined as the ratio of the measured \(C_{xx}(r)\) to the form predicted in Eq. (17),
\[C_{\rm sc}(r/L)=\left[L\sin\left(\pi\frac{r}{L}\right)\right]^{2\Delta_{\sigma }}|C_{xx}(r)|\,, \tag{18}\]
where we also fix \(2\Delta_{\sigma}\) at \(1/4\). If the CFT of the 2D Ising universality class holds for this LR system, one can expect an asymptotically flat tail in this scaled correlation function. Figure 7 displays \(C_{\rm sc}(r/L)\) for \(\alpha_{\rm LR}\) between \(2\) and \(3\).
Figure 7: Scaled correlation function for the test of the CFT description. Equation (18) is examined with ten samples of the RBM wave functions independently obtained in the VMC simulations at the critical point. The markers indicate the data of \(L=64\) where the fluctuations in the RBM samples are smaller than the marker size. The gray solid line is the average over the samples for \(L=128\). The shade is filled between the minimum and maximum magnitudes of the data in the RBM wave function samples for \(L=128\).
Figure 6: FSS analysis of the spin-spin correlation function. The data collapse of \(L^{2\Delta_{\sigma}}|C_{xx}(r)|\) is examined as a function of \(r/L\) with the exponent \(2\Delta_{\sigma}\) being fixed at the SR Ising value \(1/4\). The solid line indicates the CFT-predicted form of \(a[\sin(\pi r/L)]^{-2\Delta_{\sigma}}\) given for comparison with the scaled curve of the measured correlation function.
At \(\alpha_{\rm LR}=3\), one can clearly notice the flat line, which indicates the validity of the CFT description. Although not shown here, our test in the SR limit exhibits the same flatness of \(C_{\rm sc}(r/L)\) as displayed for \(\alpha_{\rm LR}=3\), within the fluctuations of the examined RBM samples.
It turns out that the shape of \(C_{\rm sc}(r/L)\) becomes more curved as \(\alpha_{\rm LR}\) decreases. As shown in Fig. 7, the flat tail in \(C_{\rm sc}(r/L)\) becomes harder to identify for smaller \(\alpha_{\rm LR}\), and this change in the shape of the correlation function develops gradually as \(\alpha_{\rm LR}\) decreases below 3. Within the achievable accuracy of our present calculations, it is difficult to detect a precise threshold value of \(\alpha_{\rm LR}\) at which the asymptotic flat line disappears, because the fluctuations among RBM samples tend to increase with system size. Our test of \(C_{\rm sc}(r/L)\) suggests that the threshold for the CFT of the SR Ising class may be higher than \(\alpha_{\rm LR}=2\), although identifying a precise threshold requires much more accurate calculations of the correlation function or more sensitive indicators.
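For completeness, Eq. (18) in code; a flat tail of the returned array over \(r/L\) signals agreement with the SR-Ising CFT form of Eq. (17):

```python
import numpy as np

def scaled_correlation(C_xx, L, two_delta=0.25):
    """C_sc(r/L) of Eq. (18) with 2*Delta_sigma fixed at the SR Ising value 1/4.
    C_xx: measured correlation function for r = 1, ..., L/2."""
    r = np.arange(1, len(C_xx) + 1)
    return (L * np.sin(np.pi * r / L))**two_delta * np.abs(C_xx)
```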
## IV Summary and conclusions
We have studied a quantum phase transition in the AF-LR-TFIC using VMC methods with the RBM trial wave function ansatz. Based on the FSS analysis to measure the critical exponents of the order parameter and the estimate of the central charge from the second Renyi entropy, we have verified the previous TDVP and DMRG results [53; 54; 55; 56], supporting the scenario [62] that the conformal symmetry is broken at a sufficiently small LR exponent \(\alpha_{\rm LR}\) while some critical exponents are very close to the SR Ising values regardless of \(\alpha_{\rm LR}\). To identify the threshold for the breakdown of the SR Ising universality and the conformal symmetry, we have performed two additional tests that do not rely on the critical exponents and the central charge. Our first test using the self-combined version of the Binder ratio [40] finds that the universal Binder ratio holds for \(\alpha_{\rm LR}\gtrsim 2\), below which the ratio increases significantly as \(\alpha_{\rm LR}\) decreases. In the test of the CFT description of the spin-spin correlation function, our FSS analysis finds a qualitative difference between the measured data curve and the description of the CFT for \(\alpha_{\rm LR}<2\). The detailed view given by the scaled correlation function [64] indicates a gradual change that still occurs above \(\alpha_{\rm LR}=2\), raising the possibility that the threshold may be larger than \(\alpha_{\rm LR}=2\) while it is less than \(\alpha_{\rm LR}=3\), where the measured data reproduce the CFT description.
Our measurement of the critical Binder ratio and the test of the CFT-predicted form of the correlation function reveal numerical evidence supporting the breakdown of the Ising universality and the conformal symmetry in the AF-LR-TFIC. However, our rough estimate of the threshold value of \(\alpha_{\rm LR}\) for the breakdown also indicates the need for further intensive studies to precisely determine the threshold with more sensitive indicators of conformal symmetry, such as Klein bottle entropy [82]. In addition, the apparent mismatch between the correlation function exponent and the other critical exponents needs to be further investigated to properly examine the hyperscaling relations with higher numerical accuracy.
Our VMC+RBM calculations for the FSS analysis of the criticality in this LR interacting system exemplify the practical applicability of the NQS framework to the study of quantum phase transitions. While we have only considered a stoquastic Hamiltonian, the accuracy of the RBM wave function shown in our analysis of the critical ground state demonstrates its potential as an alternative or complementary tool to conventional zero-temperature methods.
###### Acknowledgements.
J.K. and D.K. contributed equally to this work. We thank Synge Todo and Hong-Hao Tu for fruitful discussions in the ASG meeting at the PCS-IBS. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF-2019R1F1A106321) and also by the KIAS associate member program. Computing resources are provided by the KISTI supercomputing center (KSC-2021-CRE-0165). We appreciate APCTP and PCS-IBS for their hospitality during the completion of this work.
|
2307.00816
|
Index of the Kontsevich-Zorich monodromy of origamis in $\mathcal{H}(2)$
|
The Kontsevich-Zorich monodromy of an origami is the image of the action of
the Veech group on the non-tautological part of the homology. In this paper we
make some progress toward showing that for origamis in the stratum
$\mathcal{H}(2)$ the index of the Kontsevich-Zorich monodromy in
$SL_2(\mathbb{Z})$ is either 1 or 3.
|
Pascal Kattler
|
2023-07-03T07:56:45Z
|
http://arxiv.org/abs/2307.00816v1
|
# Index of the Kontsevich-Zorich monodromy of origamis in \(\mathcal{H}(2)\)
###### Abstract.
The Kontsevich-Zorich monodromy of an origami is the image of the action of the Veech group on the non-tautological part of the homology. In this paper we make some progress toward showing that for origamis in the stratum \(\mathcal{H}(2)\) the index of the Kontsevich-Zorich monodromy in \(SL_{2}(\mathbb{Z})\) is either 1 or 3.
## 1. Introduction
In this article we prove most parts of a conjecture from [1], namely that the index of the Kontsevich-Zorich monodromy of primitive origamis of degree \(d\) in the stratum \(\mathcal{H}(2)\) is either 1 or 3 in \(\mathrm{SL}_{2}(\mathbb{Z})\). Hubert and Lelievre showed in [2] that there are two \(\mathrm{SL}_{2}(\mathbb{Z})\)-orbits \(\mathcal{A}_{d}\) and \(\mathcal{B}_{d}\) if the degree \(d\) is odd, and one \(\mathrm{SL}_{2}(\mathbb{Z})\)-orbit if the degree is even, distinguished by their HLK-invariant. Furthermore, each orbit has as a representative an **L-origami** \(L(n,m)\). This is an L-shaped origami as in Figure 1, where opposite edges are glued, \(n\) is the number of squares in the horizontal direction and \(m\) in the vertical direction. In the even case a representative is \(L(n,m)\), where \(n\) is even and \(m\) is odd (or reversed). In the odd case representatives are \(L(n,m)\), where \(m\) and \(n\) are even, for \(\mathcal{A}_{d}\), and \(L(n,m)\), where \(m\) and \(n\) are odd, for \(\mathcal{B}_{d}\). So we will show the following theorem.
**Theorem 1.1**.: _Let \(\mathcal{O}\) be an origami of degree \(d\) and genus 2 and \(\Gamma\subseteq\mathrm{SL}_{2}(\mathbb{Z})\) the Kontsevich-Zorich monodromy of \(\mathcal{O}\)._
1. _The index of_ \(\Gamma\) _in_ \(\mathrm{SL}_{2}(\mathbb{Z})\) _is at most 3, if_ \(d\) _is even._
2. _The index of_ \(\Gamma\) _in_ \(\mathrm{SL}_{2}(\mathbb{Z})\) _is 1, if_ \(d\) _is odd and_ \(\mathcal{O}\) _lies in_ \(\mathcal{A}_{d}\)_._
We proceed as follows. We choose for each degree and for each orbit an L-origami \(\mathcal{O}\) as a representative. Then we take two directions and their corresponding Dehn multitwists, which are elements of the Veech group of \(\mathcal{O}\), as in [5] (Proposition 2.4). Finally we compute the actions of these Dehn multitwists on the non-tautological part of the homology and show that the index of the group generated by them is 1 or 3.
The following statements are still missing to complete the proof of the entire conjecture: in the index-3 case, we only showed that the index is at most 3. Moreover, in the odd case, the following conjecture remains open.
**Conjecture 1.2**.: _In the setting of Theorem 1.1, the index of \(\Gamma\) in \(\mathrm{SL}_{2}(\mathbb{Z})\) is 3, if \(d\) is odd and \(\mathcal{O}\) lies in \(\mathcal{B}_{d}\)._
**Acknowledgments.** I am grateful for the support provided by my supervisor Gabriela Weitze-Schmithusen throughout my work on this paper. This work was funded by the Project-ID 286237555--TRR 195--by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation).
## 2. Computations of the indices
### The odd case and the orbit \(\mathcal{A}_{d}\)
We first show that the index of the Kontsevich-Zorich monodromy of the L-origami \(L(2,2n)\) for \(n\in\mathbb{N}\) (see picture below) is \(1\). In fact we show that the Kontsevich-Zorich monodromy is generated by the actions of the Dehn multitwists of the cylinders in directions \((0,1)\) and \((n,n+1)\) on the homology. We show that these actions are given by the matrices \(\begin{pmatrix}1&0\\ -1&1\end{pmatrix}\) and \(\begin{pmatrix}2&1\\ -1&0\end{pmatrix}\), with respect to a given basis of the homology. These matrices generate \(\mathrm{SL}_{2}(\mathbb{Z}).\) The cylinder decomposition in the direction \((n,n+1)\) is as shown in Figure 1 and Figure 2:
We always get three saddle connections in the direction \((n,n+1)\). The green saddle connection is the one starting in the left bottom corner of the rightmost square. The two red saddle connections \(r_{1}\) and \(r_{2}\) are the other ones. See Section 3 for the proofs of the statements presented in the following. We call
1. \(\Theta_{g}\) the green cylinder. That is the cylinder with the green saddle connection as the upper boundary.
Figure 1. Cylinder decomposition of \(L(2,2n)\) with \(n=2\) in direction \((2,3)\)
Figure 2. Cylinder decomposition of \(L(2,2n)\) with \(n=3\) in direction: \((3,4)\)
2. \(\Theta_{r}\) the red cylinder. That is the cylinder with the red saddle connections as the upper boundary.
3. \(Y_{1}\) is the small vertical cylinder.
4. \(Y_{2}\) is the large vertical cylinder.
Note that we use the names of the cylinders also as names for their mid curves, which we consider as elements of the homology.
Now we recall the definition of the combinatorial length (or width).
**Definition 2.1**.: The **combinatorial length** of a cylinder of an origami \(\pi\colon\mathcal{O}\to E\) with mid curve \(\gamma\) is the multiplicity of the curve \(\pi\circ\gamma\), i.e. \(\#\{t\in(0,1],\pi(\gamma(t))=\pi(\gamma(0))\}\).
Note that the definition of the combinatorial length is equivalent to the definition of the combinatorial width from [1]. The cylinders above have the combinatorial lengths \(f_{\Theta_{r}}=2n-1,f_{\Theta_{g}}=2,f_{Y_{2}}=2n,f_{Y_{1}}=1\), and the combinatorial heights (these are the \(c_{i}\) defined below) of these cylinders are \(1\), since the cylinders \(\Theta_{g}\) and \(\Theta_{r}\) have the same height, and the same holds for \(Y_{1}\) and \(Y_{2}\). The cylinder decomposition in direction \((n,n+1)\) defines (up to inverse) a unique affine map, which is a minimal Dehn multitwist along the core curves of these cylinders. Let \(D_{\Theta}\) be the Dehn multitwist in direction \((n,n+1)\) and analogously let \(D_{Y}\) be the Dehn multitwist in direction \((0,1)\).
We take as a basis of the homology the horizontal and vertical mid curves \(X_{1},X_{2},Y_{1},Y_{2}\) as in Figure 3, and as a basis of the non-tautological part
\[X =X_{2}-2X_{1}\] \[Y =Y_{2}-2nY_{1}\]
Let \(D\) be the Dehn multitwist in a two-cylinder direction with mid curves \(\gamma_{1}\) and \(\gamma_{2}\). We compute the action \(D_{*}\) of the Dehn multitwist \(D\) on the non-tautological part of the homology with the formula
\[D_{*}=\operatorname{Id}+c_{1}f_{2}\Omega(\cdot,\gamma_{1})\gamma_{1}+c_{2}f_{ 1}\Omega(\cdot,\gamma_{2})\gamma_{2}\]
from [1] (Chapter 2.4). The \(f_{i}\) are the combinatorial lengths of \(\gamma_{i}\), and the \(c_{i}\) are the smallest integers such that \(c_{1}h(\gamma_{2})=c_{2}h(\gamma_{1})\), where \(h(\gamma_{i})\) is the height of \(\gamma_{i}\).
In Figure 4 we summarize the needed intersection numbers. Note that the sign of the intersection number \(\Omega(s,t)\) of curves \(s\) and \(t\) in directions \(v\) and \(w\) is the sign
Figure 3. the basis of the homology
of the determinant of the matrix \(\begin{pmatrix}v&w\end{pmatrix}\). We compute the non-obvious intersection numbers in Section 3.
Now we compute the cylinder middles \(\Theta_{r}\) and \(\Theta_{g}\) as linear combinations of our chosen basis \((X_{1},X_{2},Y_{1},Y_{2})\) of the homology:
\[\Theta_{r}=aX_{1}+bX_{2}+cY_{1}+dY_{2}\]
Let
\[A=\begin{pmatrix}0&0&0&1\\ 0&0&1&1\\ 0&-1&0&0\\ -1&-1&0&0\end{pmatrix}\]
be the fundamental matrix of the intersection form with respect to the basis of the homology. Because of bilinearity, \(x=(a,b,c,d)\) is the unique solution of the equation \(Ax=(\Omega(\Theta_{r},X_{1}),\Omega(\Theta_{r},X_{2}),\Omega(\Theta_{r},Y_{1}),\Omega(\Theta_{r},Y_{2}))^{t}.\) The same holds for \(\Theta_{g}\), hence we get
\[\Theta_{r} =(n-1)X_{2}+((2n-3)n+2)X_{1}+nY_{2}+(n-1)Y_{1}\text{ and }\] \[\Theta_{g} =X_{2}+2(n-1)X_{1}+Y_{2}+2Y_{1}.\]
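This linear-algebra step is small enough to sanity-check numerically. The following minimal sketch (ours, not the paper's) verifies that \(A\) is alternating and unimodular, so \(Ax=b\) always has a unique integer solution, and round-trips the stated solution for \(\Theta_{r}\) at \(n=2\); the actual right-hand side comes from the intersection numbers in Figure 4, which are not reproduced here.

```python
import numpy as np

# A is the fundamental matrix of the intersection form in the basis
# (X1, X2, Y1, Y2), copied from the display above.
A = np.array([[ 0,  0, 0, 1],
              [ 0,  0, 1, 1],
              [ 0, -1, 0, 0],
              [-1, -1, 0, 0]])
assert (A.T == -A).all()             # the intersection form is alternating
assert round(np.linalg.det(A)) == 1  # A is unimodular

# Round-trip the stated solution for Theta_r at n = 2, with coefficients
# in the order (X1, X2, Y1, Y2).
n = 2
x_stated = np.array([(2*n - 3)*n + 2, n - 1, n - 1, n])
b = A @ x_stated                     # the intersection numbers this encodes
assert np.allclose(np.linalg.solve(A, b), x_stated)
```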
It follows
\[D_{\Theta}(X) =X+c_{\Theta_{g}}f_{\Theta_{r}}\Omega(X,\Theta_{g})\Theta_{g}+c_{\Theta_{r}}f_{\Theta_{g}}\Omega(X,\Theta_{r})\Theta_{r}\] \[=X+(2n-1)\Theta_{g}-2\Theta_{r}\] \[=X+(2n-1)X_{2}+2(2n-1)(n-1)X_{1}+(2n-1)Y_{2}+2(2n-1)Y_{1}\] \[\quad-2(n-1)X_{2}-2((2n-3)n+2)X_{1}-2nY_{2}-2(n-1)Y_{1}\] \[=X+X_{2}-2X_{1}+2nY_{1}-Y_{2}=2X-Y\] \[D_{\Theta}(Y) =Y+c_{\Theta_{g}}f_{\Theta_{r}}\Omega(Y,\Theta_{g})\Theta_{g}+c_{\Theta_{r}}f_{\Theta_{g}}\Omega(Y,\Theta_{r})\Theta_{r}\] \[=Y+(2n-1)\Theta_{g}-2\Theta_{r}\] \[=Y+(2n-1)X_{2}+2(2n-1)(n-1)X_{1}+(2n-1)Y_{2}+2(2n-1)Y_{1}\] \[\quad-2(n-1)X_{2}-2((2n-3)n+2)X_{1}-2nY_{2}-2(n-1)Y_{1}\] \[=Y+X_{2}-2X_{1}+2nY_{1}-Y_{2}=X\]
and
\[D_{Y}(X) =X+c_{Y_{2}}f_{Y_{1}}\Omega(X,Y_{2})Y_{2}+c_{Y_{1}}f_{Y_{2}}\Omega(X,Y_{1})Y_{1}\] \[=X-Y_{2}+2nY_{1}=X-Y\] \[D_{Y}(Y) =Y\]
So we have
\[D_{\Theta}=\begin{pmatrix}2&1\\ -1&0\end{pmatrix}\text{ and }D_{Y}=\begin{pmatrix}1&0\\ -1&1\end{pmatrix}\]
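As a quick cross-check (our sketch, not part of the paper), a few integer matrix products confirm that these two matrices generate \(\mathrm{SL}_{2}(\mathbb{Z})\): conjugating \(D_{\Theta}\) by \(D_{Y}\) yields the standard generator \(T\), and \(D_{Y}^{-1}\) is the standard generator \(L\).

```python
import numpy as np

D_Theta = np.array([[2, 1], [-1, 0]])
D_Y = np.array([[1, 0], [-1, 1]])

T = np.array([[1, 1], [0, 1]])  # standard generator of SL_2(Z)
L = np.array([[1, 0], [1, 1]])  # standard generator of SL_2(Z)

D_Y_inv = np.array([[1, 0], [1, 1]])        # integer inverse of D_Y
assert (D_Y @ D_Y_inv == np.eye(2)).all()
assert (D_Y_inv @ D_Theta @ D_Y == T).all()  # T lies in <D_Theta, D_Y>
assert (D_Y_inv == L).all()                  # L lies in <D_Theta, D_Y>
# T and L are well known to generate SL_2(Z), so <D_Theta, D_Y> = SL_2(Z).
```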
### The even case
In this chapter we treat the origamis in \(\mathcal{H}(2)\) of even degree. In this case we choose as representatives \(L(2,2n+1),n\in\mathbb{N}\). We show that the index of the Kontsevich-Zorich monodromy in \(\operatorname{SL}_{2}(\mathbb{Z})\) is at most 3. In fact we show that the group generated by the actions of the Dehn multitwists in directions \((2n+2,2n+1)\) and \((2n+1,2n+3)\) is the index-3 subgroup generated by the matrices \(\begin{pmatrix}3&2\\ -2&-1\end{pmatrix}\) and \(\begin{pmatrix}1&0\\ -1&1\end{pmatrix}\). We get three saddle connections in direction
\((2n+1,2n+3)\) (see Figure 5 and Figure 6). The green saddle connection \(g\) is the one starting in the left bottom corner of the rightmost square. The two red saddle connections \(r_{1}\) and \(r_{2}\) are the other ones. And we get three saddle connections in direction \((2n+2,2n+1)\) (see Figure 7 and Figure 8). The blue saddle connection \(b\) is the one starting in the left bottom corner of the second square from the bottom. The two magenta saddle connections \(m_{1}\) and \(m_{2}\) are the other ones.
These cylinders have the combinatorial lengths \(f_{\Psi_{r}}=2n,f_{\Psi_{g}}=1,f_{\Theta_{m}}=2n+1,f_{\Theta_{b}}=1.\) In Figure 9 we collect all necessary intersection numbers. Note that \(c_{\Psi_{r}}=c_{\Theta_{m}}=c_{\Theta_{b}}=1\) and \(c_{\Psi_{g}}=2.\)
Figure 8. Cylinder decomposition of \(L(2,2n+1)\) with \(n=2\) in direction (6,5)
Figure 6. Cylinder decomposition of \(L(2,2n+1)\) with \(n=2\) in direction (5,7)
Figure 7. Cylinder decomposition of \(L(2,2n+1)\) with \(n=1\) in direction \((4,3)\)
It holds
\[\Theta_{m} =(2n+1)X_{2}+(2n+1)2nX_{1}+2nY_{2}+(2n+1)Y_{1}\] \[\Theta_{b} =X_{2}+2nX_{1}+Y_{2}\] \[\Psi_{r} =(2n-1)X_{2}+(2(n-1)(2n+1)+4)X_{1}+(2n+1)Y_{2}+(2n-1)Y_{1}\] \[\Psi_{g} =X_{2}+(2n-1)X_{1}+Y_{2}+2Y_{1}\]
and
\[D_{\Psi}(X) =X+c_{\Psi_{r}}f_{\Psi_{g}}\Omega(X,\Psi_{r})\Psi_{r}+c_{\Psi_{g}} f_{\Psi_{r}}\Omega(X,\Psi_{g})\Psi_{g}\] \[=X-2\Psi_{r}+2\cdot 2n\Psi_{g}\] \[=X+4nX_{2}+4n(2n-1)X_{1}+4nY_{2}+8nY_{1}\] \[\quad-2(2n-1)X_{2}-2(2(n-1)(2n+1)+4)X_{1}-2(2n+1)Y_{2}-2(2n-1)Y_{1}\] \[=X+2X_{2}-4X_{1}-2Y_{2}+(4n+2)Y_{1}=3X-2Y\] \[D_{\Psi}(Y) =Y+c_{\Psi_{r}}f_{\Psi_{g}}\Omega(Y,\Psi_{r})\Psi_{r}+c_{\Psi_{g }}f_{\Psi_{r}}\Omega(Y,\Psi_{g})\Psi_{g}\] \[=Y-2\Psi_{r}+4n\Psi_{g}\] \[=2X-Y\]
and
\[D_{\Theta}(X) =X+c_{\Theta_{b}}f_{\Theta_{m}}\Omega(X,\Theta_{b})\Theta_{b}+c_{\Theta_{m}}f_{\Theta_{b}}\Omega(X,\Theta_{m})\Theta_{m}\] \[=X-(2n+1)\Theta_{b}+\Theta_{m}\] \[=X-(2n+1)X_{2}-(2n+1)2nX_{1}-(2n+1)Y_{2}\] \[\quad+(2n+1)X_{2}+(2n+1)2nX_{1}+2nY_{2}+(2n+1)Y_{1}\] \[=X-Y_{2}+(2n+1)Y_{1}=X-Y\] \[D_{\Theta}(Y) =Y+c_{\Theta_{b}}f_{\Theta_{m}}\Omega(Y,\Theta_{b})\Theta_{b}+c_{\Theta_{m}}f_{\Theta_{b}}\Omega(Y,\Theta_{m})\Theta_{m}=Y\]
So we have
\[D_{\Psi}=\begin{pmatrix}3&2\\ -2&-1\end{pmatrix}\text{ and }D_{\Theta}=\begin{pmatrix}1&0\\ -1&1\end{pmatrix}\]
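A small computation (our sketch, not part of the paper) makes the index 3 plausible from the other side: reducing the two matrices mod 2 maps them into \(\mathrm{SL}_{2}(\mathbb{F}_{2})\), a group of order 6, where their images generate a subgroup of order 2. Hence the group they generate in \(\mathrm{SL}_{2}(\mathbb{Z})\) is contained in a subgroup of index 3.

```python
import numpy as np

D_Psi   = np.array([[3, 2], [-2, -1]])
D_Theta = np.array([[1, 0], [-1, 1]])

# Close the set of mod-2 images under multiplication by the generators.
gens = [D_Psi % 2, D_Theta % 2]
elems = {tuple(np.eye(2, dtype=int).flatten())}
frontier = [np.eye(2, dtype=int)]
while frontier:
    g = frontier.pop()
    for h in gens:
        new = (g @ h) % 2
        key = tuple(new.flatten())
        if key not in elems:
            elems.add(key)
            frontier.append(new)
print("order of the image subgroup:", len(elems))  # -> 2, index 3 in SL_2(F_2)
```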
## 3. Intersection number and combinatorial length
Let \(\mathcal{O}=L(2,2n)=O((1\dots 2n),(1\;\;(2n+1)))\) be an origami and \(\pi\colon\mathcal{O}\to E\) the corresponding covering of the standard torus \(E=\mathbb{C}/\mathbb{Z}^{2}\) (see e.g. Figure 1). Let \(\sigma\) be the cycle \((2\dots 2n\;1\;(2n+1))\). In order to compute the intersection numbers, we introduce some special lattice points. This was inspired by [3].
**Definition 3.1**.:
1. An \((n,n+1)\)**-lattice point** is a point \(x\in\mathcal{O},\) such that \(\pi(x)=(\frac{a}{n+1},\frac{b}{n})\) with \(a,b\in\mathbb{Z}.\)
2. A **horizontal \((n,n+1)\)-lattice point** is a point \(x\in\mathcal{O},\) such that \(\pi(x)=(\frac{a}{n+1},0)\) with \(a\in\mathbb{Z}.\)
3. A **vertical \((n,n+1)\)-lattice point** is a point \(x\in\mathcal{O},\) such that \(\pi(x)=(0,\frac{a}{n})\) with \(a\in\mathbb{Z}.\)
We notice that the geodesic line in direction \((n,n+1)\) through an \((n,n+1)\)-lattice point meets again an \((n,n+1)\)-lattice point whenever it meets the horizontal edge of a square.
**Lemma 3.2**.: _Let \(x\) be a horizontal \((n,n+1)\)-lattice point at the lower edge of square \(i\), which is no singularity and let \(\gamma\) be the geodesic line through \(x\) in direction \((n,n+1).\)_
1. _Let_ \(y\) _be the point where_ \(\gamma\) _meets the lower edge of a square the next time, and assume that_ \(y\) _is no singularity. Then_ \(y\) _lies at the lower edge of square_ \(\sigma(i)\)_._
2. _If_ \(\pi(x)=(\frac{a}{n+1},0)\)_, then_ \(\gamma\) _meets the point_ \(y\) _with_ \(\pi(y)=(\frac{a}{n+1},0)\)_, and_ \(y\) _lies at the upper edge of square_ \(\sigma^{n+1}(i)\)_, if it meets no singularity._
Proof.:
1. Let \(2\leq i\leq 2n\). Then \(\gamma\) leaves square \(i\) when it reaches the upper edge. This is the lower edge of square \(\sigma(i)\). If \(i=1\), then the first coordinate of \(\pi(x)\) is greater than \(\frac{1}{n+1}\), since \(x\) and \(y\) are no singularities. Then \(\gamma\) leaves square \(1\)
Figure 9. Intersection numbers of cylinder middles and the homology
over the right edge and reaches the lower edge of square \(2n+1=\sigma(1).\) The case \(i=2n+1\) is treated similarly.
2. If \(\pi(x)=(\frac{a}{n+1},0)\), then \(\pi(\gamma(t))=(\frac{a-1}{n+1}\mod 1,0)\) when \(\gamma\) meets the edge of a square the next time. Then we can apply part (a) \(n+1\) times.
**Lemma 3.3**.: _Each geodesic line \(\gamma\) through an \((n,n+1)\)-lattice point \(x\) in direction \((n,n+1)\) defines a saddle connection._
Proof.: Let \(x=\gamma(0)\) be no singularity. We can assume that \(x\) is a horizontal \((n,n+1)\)-lattice point at the lower edge of some square \(i\), because \(\gamma\) meets one the next time it crosses an edge of a square. If \(\pi(x)=(\frac{a}{n+1},0)\), then \(\pi(\gamma(t))=(\frac{a-1}{n+1}\mod 1,0)\) when \(\gamma\) meets the edge of a square the next time. So we can assume that \(x\) lies over \((0,0)\).
The numbers \(n+1\) and \(2n+1=n+(n+1)\) are coprime. (A common divisor of \(n+1\) and \(2n+1\) would be a common divisor of \(n+1\) and \(n=2n+1-(n+1)\).) So we get integers \(a,b\in\mathbb{Z}\) with \(a(n+1)+b(2n+1)=1\). So by Lemma 3.2 either \(\gamma\) meets the lower left point of square \(\sigma^{a(n+1)}(i)=\sigma(i)\), or it meets a singularity before. So \(\gamma\) meets a singularity at the latest when the lower left point of square \(\sigma^{k}(i)\) is a singularity.
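The coprimality used here is easy to check computationally; the following minimal sketch (ours) verifies \(\gcd(n+1,2n+1)=1\) and produces a Bezout pair \(a(n+1)+b(2n+1)=1\) for small \(n\).

```python
import math

for n in range(1, 8):
    assert math.gcd(n + 1, 2*n + 1) == 1
    a = pow(n + 1, -1, 2*n + 1)        # modular inverse of n+1 mod 2n+1
    b = (1 - a*(n + 1)) // (2*n + 1)   # exact integer by construction
    assert a*(n + 1) + b*(2*n + 1) == 1
    print(f"n={n}: a={a}, b={b}")
```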
Since \(\mathcal{O}\) has one singularity of degree \(3,\) there are \(3\) saddle connections in direction \((n,n+1).\)
1. \(r_{1}\) starts in the lower left vertex of square \(1\) and ends in the upper right vertex of square \(2n+1\).
2. \(g\) starts in the lower left vertex of square \(2n+1\) and ends in the upper right vertex of square \(1\).
3. Hence the last saddle connection \(r_{2}\) starts at the lower left vertex of square \(2\) and ends at the upper right vertex of square \(2n.\)
We will now show that the red saddle connections \(r_{1}\) and \(r_{2}\) are the upper boundary of a cylinder, namely \(\Theta_{r}\), and the green saddle connection \(g\) is the upper boundary of a cylinder, namely \(\Theta_{g}\). For this we use separatrix diagrams. There is a nice introduction to separatrix diagrams in [4]. We will use just the ribbon graph structure of the separatrix diagram (without the pairings of the boundary components). The separatrix diagram of the origami \(L(2,2n)\) with saddle connections in direction \((n,n+1)\) is shown in Figure 10. The cyclic order at the vertex is as drawn.
We can find the cylinders with \(r_{1}\) as a part of the upper boundary as follows: We follow the edge \(r_{1}\) until we reach a vertex again. Then we follow the next edge in the reversed cyclic order of the vertex. This is the edge \(r_{2}\). Then we follow the next edge in the reversed cyclic order, which is again \(r_{1}\), with which we started. So we have found the upper boundary of a cylinder. With the same procedure we see that \(g\) is the upper boundary of a cylinder.
We now show why this procedure gives the boundaries of the cylinders. We follow a saddle connection until we reach a singularity (which corresponds to a vertex in the separatrix diagram). If we want to know the boundary of the cylinder adjacent to our starting saddle connection, we follow a small path from the saddle connection anticlockwise around the singularity, until we reach a saddle connection again. In the separatrix diagram this is the next edge in the reversed cyclic order.
We have found the entire upper boundary of the cylinder when we reach the starting saddle connection again.
Let us count the intersection numbers of the saddle connections and our chosen basis of the homology, as stated in Figure 4. We can represent an element of our basis of the homology by a horizontal or vertical curve \(c\) through \((n,n+1)\)-lattice points. Then the saddle connection meets \(c\) exactly at the \((n,n+1)\)-lattice points.
We will first compute the intersection numbers with \(g\) and conclude the intersection numbers with \(\Theta_{r}\) from this.
Let us compute the intersection number \(\Omega(Y_{2},\Theta_{g})\). We represent \(Y_{2}\) by the left border of the origami. The saddle connection \(g\) runs through square \(2n+1\) to square \(1\), entering at the point that lies over \((0,\frac{1}{n})\) on the right edge of square \(1\), and then goes up through the squares \(2,\ldots,n\), while it meets the left border once in each of these squares. For square \(n\) this is the left upper vertex, which is equal to the right upper vertex. Next \(\Theta_{g}\) runs through square \(n+1\) without hitting its vertical edge. Finally \(\Theta_{g}\) runs through squares \(n+2,\ldots,2n\), hitting the left edge of each of them once, until it reaches square \(1\), where it runs into a singularity. During this \(\Theta_{g}\) meets the left border \(n-1\) times. In total we have \(\Omega(Y_{2},\Theta_{g})=-(n+(n-1))=-(2n-1)\).
The other intersection numbers with \(\Theta_{g}\) can be computed similarly.
By Lemma 3.3, each \((n,n+1)\)-lattice point lies on a saddle connection. So any lattice point which does not lie on the green saddle connection meets a red one. So we have
\[\Omega(X_{1},r)=(n+1)-1=n,\]
because \(X_{1}\) contains \(n+1\) lattice points, of which \(1\) is green. Analogously
\[\Omega(X_{2},r) =2(n+1)-3=2n-1\] \[\Omega(Y_{1},r) =n-1\] \[\Omega(Y_{2},r) =2n\cdot n-(2n-1)=2n(n-1)+1\]
Finally, we determine the combinatorial lengths of both cylinders.
**Lemma 3.4**.:
1. _The combinatorial length of_ \(g\) _is 2._
2. _The combinatorial length of_ \(r\) _is_ \(2n-1\)_._
Proof.: We note that the combinatorial length of a curve \(\gamma\colon[0,1]\to X\) of a covering \(\pi\colon X\to Y\) is the multiplicity of the curve \(\pi\circ\gamma\). Since the geodesic line is determined
Figure 10. The separatrix diagram of \(L(2,2n)\) for saddle connections in direction \((n,n+1)\)
by the direction and a point of the geodesic line, the multiplicity of the curve \(\gamma\) is the number \(\#\{t\in(0,1],\pi(\gamma(t))=\pi(\gamma(0))\}\).
1. The green saddle connection meets a point \(x\) with \(\pi(x)=(0,0)\) at the upper right vertex of square \(n\) and at the upper right vertex of the square \(1\). Hence the combinatorial length is \(2\).
2. By Lemma 3.3 the red saddle connection meets the upper right vertex of each square which does not meet the green saddle connection. Hence the combinatorial length is \(2n+1-2=2n-1\).
|
2301.02357
|
Well Cement Degradation and Wellbore Integrity in Geological CO2
Storages: A Literature Review
|
Carbon capture and storage (CCS) has emerged as the most effective method to
curb the CO2 concentration in the atmosphere. It can store up to 5 billion tons
of CO2 per year. To guarantee a safe and economical geological storage, the
well cement degradation and wellbore integrity need to be studied thoroughly.
This review paper is designed to provide a fundamental background of well
cement degradation and wellbore integrity in geological CO2 storages to support
the researchers in further investigation. The review mainly focuses on
mechanical, thermal, chemical property changes and corrosion time for cement in
experiments and simulation during geological CO2 storage. However, the
debonding interface between casing/cement or cement/formation has not been
addressed profoundly. A further investigation should inspect how pressure,
temperature, and chemical reaction affect the micro-annuli of casing/cement or
cement/formation. Also, a mathematical model should be established to predict
the corrosion rate in geological CO2 storage.
|
Vu Nguyen, Olatunji Olayiwola, Boyun Guo, Ning Liu
|
2023-01-06T02:00:35Z
|
http://arxiv.org/abs/2301.02357v2
|
# Well Cement Degradation and Wellbore Integrity in Geological CO2 Storages: A Literature Review
###### Abstract
Carbon capture and storage (CCS) has emerged as the most effective method to curb the CO2 concentration in the atmosphere. It can store up to 5 billion tons of CO2 per year. To guarantee a safe and economical geological storage, the well cement degradation and wellbore integrity need to be studied thoroughly. This review paper is designed to provide a fundamental background of well cement degradation and wellbore integrity in geological CO2 storages to support the researchers in further investigation. The review mainly focuses on mechanical, thermal, chemical property changes and corrosion time for cement in experiments and simulation during geological CO2 storage. However, the debonding interface between casing/cement or cement/formation has not been addressed profoundly. A further investigation should inspect how pressure, temperature, and chemical reaction affect the micro-annuli of casing/cement or cement/formation. Also, a mathematical model should be established to predict the corrosion rate in geological CO2 storage.
Geological; Corrosion; CO2 concentration; Storage; Well cement
\[Ca_{3}Si_{2}O_{7}\cdot 3H_{2}O+3CO_{2}\to 3CaCO_{3}+2SiO_{2}+3H_{2}O \tag{4}\]
Cement quality is reduced when it is converted to bicarbonate in the presence of excess CO2 [5], as shown in Equation 5.
\[CO_{2}+CaCO_{3}+H_{2}O\to Ca\big{(}HCO_{3}\big{)}_{2} \tag{5}\]
Numerous failure mechanisms can take place in a cement sheath, such as inner debonding, outer debonding, radial cracks, shear cracks, etc. Figure 2 illustrates these mechanisms.
Orlic B, et al. [16] examined the poro-mechanical effects for geological CO2 storage in a depleted gas field [17, 18] and a deep saline aquifer in the Netherlands.
wellbore materials. The thermal expansion coefficient measures how much the cement shrinks or swells due to a change in temperature.
Experimental and numerical models have been used to study the thermal effect on wellbore integrity. Experiments were performed by Shadravan A, et al. [21] by applying different pressures on the casing at high temperatures. Teodoriu C, et al. [22] designed a ring similar to a cement sample, which was exposed to various internal pressures at high temperatures. Boukhelifa L, et al. [23] performed studies on sealants under different wellbore conditions, including pressure, temperature, and geometry changes. Cracks tending to be perpendicular to the radius were observed, in accordance with theory: the casing radial displacement is more prominent than its axial displacement, so the possibility of cement sheath cracks orthogonal to the radius is high. One of the most significant constraints is the mechanical loading corresponding to the temperature changes, which has not been fully investigated. Furthermore, the thermal properties of the materials were not taken into serious account. Todorovic J, et al. [24] revealed that water saturation is a critical parameter for damage to wellbore integrity under harsh cooling conditions. This indicates that during CO2 injection the possibility of cement and formation failure would surge. Aursand P, et al. [25] used a mathematical model to couple the two-phase flow of CO2 and the radial heat transfer between the CO2 flow and the well geometry. The study shows that the largest downhole temperature variations take place in the bottom part of the well. It also states that parameters such as injection temperature, injection flow rate, injection duration, and downtime affect the thermal stress leading to damage of wellbore integrity. Lund H, et al. [26] introduced a heat-conduction model to compute the radial heat transfer from the well to the casing, annular seal, and rock formation. The model reveals that replacing cement with an annular sealant material with higher thermal conductivity would reduce the temperature variation between the casing/seal interface and the seal/rock interface. Ruan B, et al. [27] set up a two-dimensional radial wellbore flow model and solved the mass, momentum, and energy equations to investigate the thermal behavior of CO2. The study showed the temperature profile along the radial and axial directions, which is crucial information for predicting the thermal stresses along the casing pipe and the outer cement. Lavrov A, et al. [28] initiated a study which found that the part most sensitive to tensile cracking during CO2 injection is the cement adjacent to the casing pipe. The study suggested that reducing the stiffness and increasing the thermal conductivity of damaged materials would inhibit the number of tensile cracks.
The thermal stress is proportional to Young's modulus \(E\) (MPa), the linear thermal expansion coefficient \(\alpha\), and the temperature change \(\Delta T\), and inversely proportional to \(1-\nu\), where \(\nu\) is Poisson's ratio. The relationship between them is shown in Equation 10:
\[\Delta\sigma_{T}=\frac{\alpha E\Delta T}{1-\nu} \tag{10}\]
Figure 4: Growth of the minimum in situ stress and reservoir pressure compared to the bottom hole pressure (BHP) during CO2 injection [16].
To illustrate the thermal effects on the stability of the wellbore, the simulation was performed at a sandstone depth of 1300 m with a temperature difference of 20 °C between the CO2 and the reservoir rock. As shown in Figure 4, for the high injection rate the minimum horizontal stress with the low case and thermal effects is lower than the bottom hole pressure. This signifies that fracturing would take place.
The typical properties of Portland cement concrete are presented in Table 2. The values of the thermal expansion coefficient and Poisson's ratio are small, while the modulus of elasticity is large. Therefore, based on Equation 10, reducing the cement elastic modulus is an effective way to decrease thermal stress. The relationship between thermal stress and elastic modulus was also studied thoroughly in Thiercelin, et al. [30]. This study concluded that increasing the temperature change would increase the thermal stress, and a low cement elastic modulus would adapt better than a high one.
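As a rough numerical reading of Equation 10 (our sketch, using the Table 2 ranges and the 20 °C contrast from the simulation above), the implied thermal stresses are on the order of a few to ten MPa:

```python
# Thermal stress from Equation 10 with Table 2 values (illustrative only).
alpha = 1e-5   # 1/C, coefficient of thermal expansion (Table 2)
nu = 0.2       # Poisson's ratio (Table 2)
dT = 20.0      # C, CO2/rock temperature difference used in the simulation

for E_gpa in (14.0, 41.0):  # modulus of elasticity range (Table 2)
    d_sigma = alpha * (E_gpa * 1e9) * dT / (1.0 - nu)  # Pa
    print(f"E = {E_gpa:4.1f} GPa -> thermal stress ~ {d_sigma/1e6:.1f} MPa")
```

This makes the text's point concrete: halving the elastic modulus roughly halves the thermal stress, all else being equal.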
## Chemical Effect
The injection of CO2 also affects rock stability in terms of chemical effects. Previous studies were carried out to demonstrate the chemical effects. One of them can be found in Orlic B, et al. [16]. The Permian Zechstein formation was selected for investigating the effect of CO2 storage. The design was composed of circular rosette elements (60% volume) of millimeter size, implanted in a matrix of 3 types of anhydrite circular elements with sizes of 50-83 micrometers. The conditions for simulation are vertical stress = 50 MPa, horizontal stress = 40 MPa, and T = 80 °C for 50,000 years. The reaction of anhydrite with CO2 and water is given in Equation 11:
\[CaSO_{4}+CO_{2}+H_{2}O\to CaCO_{3}+H_{2}SO_{4} \tag{11}\]
According to the authors, the composition of the anhydrite caprock was selected from the Permian Zechstein, which was already studied by Hangx SJT, et al. [31], in order to compare the results. The conditions for vertical stress, horizontal stress, and temperature in this model emulate the conditions for caprock buried at 2.5 km depth.
The anhydrite failure strength was significantly reduced, by 25%, over the course of 50,000 years in a CO2-rich environment. Over 1000 years, the anhydrite failure strength reduction is inconsiderable. These results match those of Hangx SJT, et al. [31].
For injecting CO2 into a deep aquifer or depleted oil reservoir, CO2 dissolves into brine to create carbonic acid, H2CO3, which then reacts with the rock formation (carbonate). The chemical reactions are presented in Equation 12 and Equation 13. The dissolution of rock by carbonic acid causes changes in rock properties, both geomechanical and petrophysical, which were investigated in Kim K, et al. [32]; Charalampidou EM, et al. [33]; Luquot L, et al. [34]; Rohmer J, et al. [35]; Bemer E, et al. [36]; Vanorio TV, et al. [37]; Iyer J, et al. [38]; Davila G, et al. [39]. These studies revealed that injecting CO2 would increase the rock porosity and decrease the elastic moduli.
\[CO_{2}+H_{2}O\leftrightarrow H^{+}+HCO_{3}^{-} \tag{12}\]
\[MCO_{3}+H^{+}\leftrightarrow M^{2+}+HCO_{3}^{-} \tag{13}\]
Where M is metal such as Ca, Mg.
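For intuition about Equation 12, here is a back-of-the-envelope estimate (our sketch; the Henry's-law and dissociation constants below are assumed textbook values at 25 °C, not taken from this review, and fugacity corrections at elevated pressure are ignored) of the pH of CO2-saturated water:

```python
import math

# Neglecting the second dissociation, [H+] ~ sqrt(K1 * KH * pCO2).
KH = 3.4e-2   # Henry's law constant for CO2, mol/(L*atm) -- assumed value
K1 = 4.45e-7  # first dissociation constant of carbonic acid -- assumed value

for p_co2 in (0.1, 1.0, 10.0):  # CO2 partial pressure, atm
    h_plus = math.sqrt(K1 * KH * p_co2)
    print(f"pCO2 = {p_co2:5.1f} atm -> pH ~ {-math.log10(h_plus):.2f}")
```

The estimate shows why CO2-rich brines sit in the acidic range (pH roughly 3-4.5) where carbonate dissolution per Equation 13 becomes significant.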
Tang Y, et al. [40] carried out dynamic and static experiments to interpret the impact of CO2-brine-rock interaction in a gas reservoir with an aquifer. The results showed that CO2-brine-rock interaction takes place in both the gas zone and the water zone, because water vaporizes into the gas zone and contacts CO2 to form carbonic acid. Six cores representing three reservoir types, differing in length, diameter, porosity, and permeability, were selected for the investigation. In general, the core porosity increased and the core permeability decreased, as presented in Figure 5 and Figure 6. This can be explained by mineral dissolution and particle migration in the pore space: mineral dissolution increases the rock porosity, while particle migration in the pore space retards the flow, resulting in decreased rock permeability. However, two irregular cases were observed. The porosity of core #1 decreases, and the permeability of core #6 increases. This can be explained by the characteristics of cores #1 and #6. The pore diameter in core #1 is small, so dissolved minerals cannot be driven out easily, causing the decrease in porosity. In contrast, the pore size of core #6 is large, and the free grains move out of the pores smoothly, resulting in the increase in rock permeability.
\begin{table}
\begin{tabular}{c|c} \hline Coefficient of thermal expansion & \(10^{-5}\) (1/°C) \\ \hline Poisson’s ratio & 0.2–0.21 \\ \hline Modulus of elasticity & 14–41 GPa \\ \hline \end{tabular}
\end{table}
Table 2: Typical properties of Portland cement concrete [29].
There are still several studies (Santra A, et al. [41]; Ojala IO [42]; Morris JP, et al. [43]; Karimnezhad M, et al. [44]) which discussed in detail the changes of mechanical, chemical, and thermal properties induced by CO2 injection. These studies revealed that the tensile strength depends on rock porosity. In particular, increasing porosity decreases the tensile strength by an exponential function.
## Corrosion
Corrosion costs billions of dollars annually and damages our environment through the leakage of unwanted fluids. An understanding of the process is crucial to reduce costs and manage the process. There are two different corrosion mechanisms that occur at the wellbore cement: the first due to electrochemical reactions and the second caused by the carbonation reaction [45, 46].
### Electrochemical Corrosion Mechanism
This is metal corrosion. Oxidation-reduction reactions take place at the anode and cathode. In particular, the anodic reaction (Equation 14) is the oxidation of iron and the cathodic reaction (Equation 15) is hydrogen evolution.
Anode:
\[Fe\leftrightarrow Fe^{2+}+2e^{-} \tag{14}\]
Cathode:
\[2H^{+}+2e^{-}\to H_{2} \tag{15}\]
Figure 5: The difference of porosity before and after experiment [40].
Figure 6: The difference of permeability before and after experiment [40].
Several researchers have carried out studies to inspect the metal corrosion rate. Nevertheless, the corrosion process under high pressure of CO2 (above the critical point at 7.38 MPa and 31.1 °C, as seen in Figure 7) needs to be investigated further.
Russick EM, et al. [48] carried out experiments on stainless steels (304L and 316), copper (CDA 101), aluminum alloys (2024, 6061, and 7075), and carbon steel (1018) in contact with pure supercritical CO2, water-saturated CO2, a mixture of supercritical CO2 with 10 wt% of methanol, and supercritical CO2 with 4 wt% of tetrahydrofurfuryl alcohol (THFA), at 3500 psi and 50 °C. No sign of corrosion on any metal was observed when in contact with pure supercritical CO2. For water-saturated CO2, only carbon steel 1018 was sensitive, while the others were not influenced. The copper CDA 101 and aluminum 2024 became corroded with the combination of supercritical CO2 and 10 wt% of methanol. The mixture of supercritical CO2 with 4 wt% of THFA caused almost no corrosion on any metal. The THFA comprises an organic additive, Polygard, which acts as a corrosion inhibitor.
Seiersten M, et al. [49] studied the impact of CO2 pressure up to 80 bar and temperature up to 50 °C on the corrosion rate of carbon steel X65. The study showed that dry CO2 and CO2 not saturated with water did not cause corrosion of the carbon steel. At 50 °C, in systems consisting of only water, the corrosion rate correlates positively with CO2 partial pressure. The corrosion rate reaches a maximum value of 6.9 mm/year at 40 bar. Seiersten M [50] revealed that at 40 °C, increasing the CO2 partial pressure would decrease the corrosion rate. The maximum value of the corrosion rate is about 5.6 mm/year at 10 bar. The difference between the two observations above can be explained by the formation of the FeCO3 film. At 40 °C, increasing the CO2 partial pressure would decrease the saturation, which leads to non-creation of the film. In contrast, at 50 °C the solution saturation has a positive correlation with CO2 partial pressure.
Choi YS, et al. [51] investigated the behavior of carbon steel in the CO2-saturated water phase and the water-saturated CO2 phase, with and without the presence of oxygen. The research showed that oxygen makes the corrosion rate faster. The presence of oxygen inhibits the formation of the protective FeCO3 film layer, which leads to an increase in the corrosion rate. The increased corrosion rate in the presence of oxygen is also explained by the oxidation-reduction reaction mechanism: oxygen acts as an oxidizing agent, which drives the redox reaction between iron, oxygen, and water. SEM and EDS techniques were also used to confirm the results.
Lin G, et al. [52] examined the influence of CO2 at different temperatures and pressures in autoclaves on three types of carbon steel: N80, P110, and J55. At 6.89 MPa and 90 °C, the corrosion rates are 1.752 mm/y, 2.403 mm/y, and 1.854 mm/y for N80, P110, and J55, respectively. On the other hand, at the higher pressure of 10.34 MPa and 90 °C, those values decrease: they are 0.922 mm/y, 1.054 mm/y, and 1.105 mm/y. All values are plotted in Figure 8. The average percentage decrease of the corrosion rate for N80, P110, and J55 is 95%, and the largest decrease is for P110, with 127%. P110 is also the most corrosive steel in this study. The explanation for the most corrosive characteristic of P110 lies in its composition, as presented in Table 3. Steel P110 contains the most manganese among the three types of steel N80, P110, and J55, and manganese is a very strong oxidizing agent. Therefore, the corrosion rate of P110 is higher compared to N80 and J55.
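The quoted percentages can be reproduced directly from the rates above (a quick check of ours; the "decrease" is evidently expressed relative to the high-pressure value):

```python
# Corrosion rates (mm/y) from Lin G, et al. [52] as quoted above.
rates_low_p  = {"N80": 1.752, "P110": 2.403, "J55": 1.854}  # 6.89 MPa, 90 C
rates_high_p = {"N80": 0.922, "P110": 1.054, "J55": 1.105}  # 10.34 MPa, 90 C

drops = {s: 100 * (rates_low_p[s] - rates_high_p[s]) / rates_high_p[s]
         for s in rates_low_p}
for steel, drop in drops.items():
    print(f"{steel}: {drop:.0f}%")                 # P110 -> ~128%, the largest
print(f"average: {sum(drops.values())/3:.0f}%")    # ~95%, matching the text
```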
### Cement Corrosion Mechanism
Understanding cement corrosion during geological CO2 storage is necessary to manage the process correctly. This includes the CO2 leakage paths and the carbonation time. Recognizing all possible leakage paths helps in finding reasonable solutions to cracking problems. Interpreting the carbonation time contributes to evaluating the safety of the process over an extended period.
#### CO2 Leakage Path
There are many leakage pathways for CO2. It could be the formation-cement or cement-casing interface, or cement cracking, as shown in Figure 9.
#### Estimated Time for Carbonation Process
Many research groups have predicted the carbonation times of cement exposed to CO2 by using experimental and mathematical models. The carbonation time is a substantial variable for evaluating the quality of carbon capture storage; therefore, it is essential to obtain it. Experimental and numerical methods have their advantages and limitations. The empirical model in some situations produces inaccurate results due to equipment error; occasionally it is hard to manage the procedure properly due to external conditions beyond our observation; and experiments are typically expensive and time-consuming. In contrast, the numerical model is usually much easier to carry out. It eliminates errors from equipment and humans, and it produces results faster. The most important thing for a numerical model is to set up a governing equation with correct initial and boundary conditions. Although both experimental and numerical models have limitations, they are used to verify one another. That is a reason why it is necessary to study both.
#### Experimental Measurements
Duguid A [53] designed an experiment to forecast the time to deteriorate the cement sheath in a well exposed to carbonated brine. The samples were created by drilling a stone cylinder 55 mm in diameter and 10 mm in height with a 25-mm axial hole. The minimum depth from the outside of the cylinder to the boundary of the hole was 3 mm. According to Duguid A [53], the depth of reaction was quantified at five different locations: 0, 45, 90, 135, and 180 degrees, as presented in Figure 10, in order to evaluate how differently the cement surface would react with carbonated brine.
The relationship between carbonation depth and the square root of time is linear in most cases. The most linear relationship is at the condition pH = 3 and T = 50 °C, and this condition also causes the most carbonation of the cement. This is in agreement with Duguid A, et al. [54]. The predicted time for a 25 mm cement sheath to be deteriorated is approximately 30,000 to 70,000 years, if a favorable cement is selected and a good cementing job is done.
Figure 8: Corrosion rate for N80, P110, J55 on different conditions [52].
Figure 9: The possible leakage paths of CO2[53].
Kutchko BG [55] showed the reaction rate when the cement was exposed to supercritical CO2 and CO2-saturated brine. Supercritical CO2 is a separate free phase causing hydrodynamic trapping. On the other hand, some CO2 dissolves in the brine, existing in CO2-saturated brine form, which induces solubility trapping. The cement samples were immersed in 1% NaCl at 30.3 MPa and 50 °C under static conditions. The estimated penetration depth after 30 years is 1\(\pm\)0.07 mm for the CO2-saturated brine and 2.9\(\pm\)0.89 mm for the supercritical CO2. This indicated that supercritical CO2 would degrade Portland cement faster than CO2-saturated brine, which is comprehensible because the supercritical CO2 condition involves high pressure and high temperature. The penetration depth over time for the CO2-saturated brine and supercritical CO2 can be found in Figure 11.
Zhang L, et al. [56] introduced another approach to estimate the penetration depth over time. Fick's diffusion law and Elovich's equation were fit to experimental data. Elovich's equation (Equation 16) is given in Allen JA, et al. [57] and Kutchko BG, et al. [58].
\[\frac{dL}{dt}=a\cdot\exp\left(-bL\right) \tag{16}\]
where L is the penetration depth (mm) at time t (days) of exposure, and a, b are constants determined from experimental data.
Integrating Equation 16 above with respect to t yields Equation 17:
\[L=\frac{1}{b}\ln\left(t\right)+\frac{1}{b}\ln\left(ab\right) \tag{17}\]
where a and b are estimated by fitting the data: a = 2.47, b = 22.08.
Estimation of the penetration depth with Elovich's equation is more accurate than with Fick's diffusion law, as shown in Figure 12. Elovich's equation has been used in several kinetic studies [59, 60, 61], and these outcomes reaffirm Elovich's equation as a powerful method to measure the CO2 penetration depth.
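To make Equation 17 concrete, here is a minimal sketch (ours, not from [56]) evaluating the integrated Elovich law with the fitted constants quoted above; we use the exact integral \(L=\ln(abt+1)/b\), which satisfies \(L(0)=0\) and reduces to Equation 17 for large \(t\):

```python
import math

a, b = 2.47, 22.08   # fitted constants from the text (t in days, L in mm)

for years in (1, 10, 30, 50):
    t = years * 365.25
    L = math.log(a * b * t + 1.0) / b
    print(f"{years:3d} years -> penetration depth ~ {L:.2f} mm")
```

The logarithmic growth explains why penetration depths stay sub-millimeter over decades in this fit.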
In Duguid A, et al. [54], experiments for limestone- and sandstone-like conditions were executed. Class H cement pastes were exposed to temperatures from 20 to 50 °C and pH 2.4 to 5. The samples were then interpreted using multiple techniques such as Inductively Coupled Plasma Optical Emission Spectroscopy (ICP-OES), optical microscopy, X-ray diffraction, and Electron Probe Microanalysis (EPMA). The experimental model was designed as presented in Figure 13. CO2 gas was percolated into the carbonated brine, which was then pumped to the reactor vessel containing the cement sample. The purpose of the recirculated flow was to ensure that it was saturated with CaCO3 before reaching the reaction vessel. No observable degradation in the limestone-like condition was observed. Under the sandstone-like condition, there are
Figure 11: The penetration depth over time when cement exposed to CO2 saturated brine and Supercritical CO2 [55].
Figure 10: The top view of sample of experiment [53].
5 distinct layers that appeared: orange, brown, white, gray, and core. Each layer exposes a different behavior towards carbonated brine. The orange and brown parts display a leached region. The white layer shows a carbonated region. The gray section depicts a calcium hydroxide dissolution region, and the core section shows no change. The outer layer was degraded fully at pH = 2.4 and 3.7, and temperatures 20 °C and 50 °C. The sample got the most damage at pH = 2.4 and T = 20 °C, and the least degradation occurred at pH = 5 and T = 50 °C. This conclusion is explained by the dissolution of carbon dioxide in water: the carbon dioxide solubility in water increases with decreasing temperature, and more carbonic acid is formed. Therefore, the most degradation occurred at pH = 2.4 and a temperature of 20 °C.
Carey JW, et al. [62] and Carey JW, et al. [63] investigated the behavior of wellbore integrity and CO2-brine flow along the casing-cement micro-annulus. The core-flood examination was performed at 40 °C, 14 MPa pore pressure, and 28 MPa confining pressure. The experimental system included a 10 cm length of limestone with a rectangular steel piece embedded in the cement. The blended solution, 50% supercritical CO2 and 50% brine, was run through the limestone/cement combination. There are two corrosion processes: steel corrosion and cement corrosion. The corrosion of the steel occurs by the electrochemical mechanism. An FeCO3 film layer was formed from the CO2-rich fluid in contact with the steel. The film layer protected the steel from deeper penetration of the CO2-rich fluid. However, the solubility of the FeCO3 layer increases as the pH decreases; as a result, the corrosion rate increases with increasing flow rate of the CO2-rich fluid. For cement degradation, the rate depends on the cement properties and the flow rate of the CO2-rich fluid. The diffusion coefficient was found to be in the interval from 10\({}^{-12}\) to 10\({}^{-10}\) cm\({}^{2}\)/s by assuming a 1D diffusion problem with characteristic diffusion time \(x^{2}/D\) and penetration depths from 50-250 μm.
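For scale, the reported ranges imply characteristic diffusion times \(t\sim x^{2}/D\) spanning days to decades (a rough estimate of ours, not from [62, 63]):

```python
# Characteristic 1D diffusion times implied by the reported ranges.
for D in (1e-12, 1e-10):          # diffusion coefficient, cm^2/s
    for x_um in (50, 250):        # penetration depth, micrometers
        x_cm = x_um * 1e-4
        t_days = x_cm**2 / D / 86400.0
        print(f"D = {D:.0e} cm^2/s, x = {x_um:3d} um -> t ~ {t_days:,.0f} days")
```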
Adeoye JT, et al. [64] carried out experiments on a novel engineered cementitious composite (ECC) exposed to CO2-saturated water under static and flow conditions at 10 MPa and 50 °C. The depth of alteration was estimated at 72 mm over 50 years by Fick's law. In a similar study performed by Kutchko BG, et al. [65] at 15 MPa and 50 °C, the depth of alteration was 224 mm over 50 years. The higher pressure of CO2 in the study of Kutchko BG, et al. [65] was probably not the main reason for the threefold increase of carbonation depth. Nonetheless, the interesting finding lay in the pozzolan-to-cement ratio. The material used in the study of Kutchko BG, et al. [65] has a pozzolan-to-cement ratio of 65:35, while this ratio in the study of Adeoye JT, et al. [64] is 45:55. Thus, increasing the pozzolan content would lead to faster carbonation.
**Mathematical Prediction:** Along with experimental works, there are very few studies using robust mathematical models to predict the penetration depth during geological CO2 storage. A mathematical model may accompany experimental work, which conserves resources compared to purely experimental work. Below are some remarkable studies which have been done so far.
Tao Q, et al. [66, 67] developed a mathematical model to investigate CO2-brine flow along a leaky wellbore during geological storage. The leaking pathway of CO2 includes two parts, as shown in Figure 14a: the bottom, due to cement degradation, and the top, a water-saturated porous medium for which an assumption of no resistance was made. Figure 14b shows how the pressure of the reservoir changes during CO2 injection: the reservoir pressure increased as CO2 was injected and decreased after the injection was stopped.
If only the buoyancy force causes CO2 flow, the potential gradient of CO2 is calculated by Equation 18:
Figure 14: a) Two different leaking pathways. b) Deviated reservoir pressure during CO2 injection [67].
Figure 13: Experimental system for sandstone like condition (top) and limestone like condition(bottom) reactor [54].
\[\nabla\varphi=\nabla(\Delta\rho gz) \tag{18}\]
where \(\Delta\rho\) is the density difference between H2O and CO2, and \(z\) and \(g\) are the depth and the gravitational acceleration, respectively.
During the injection period, both buoyancy and pressure elevation contribute to driving the CO2. Therefore, the potential gradient of CO2 is computed by Equation 19:
\[\nabla\varphi=\nabla(\Delta\rho gz)+\nabla p_{c} \tag{19}\]
where \(p_{c}\) is the capillary pressure.
Several wells were investigated during geological CO2 storage. The CO2 flux was found to be sensitive to the pressure elevation and the leakage depth. The CO2 flux decreases with increasing leakage depth due to the increasing CO2 density. The relationship between the CO2 leakage flux and the injection pressure at a shallow leak (4000 ft) and a deep leak (10000 ft) was investigated. The CO2 leakage flux increases with increasing injection pressure, but the rates are different: the CO2 leakage flux increases faster at the deep leak than at the shallow leak. According to the authors, this is because the injection pressure would overcome the buoyancy pressure at a shallow leak, whereas at a deep leak the buoyancy pressure decreases more gradually due to the increasing CO2 density.
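A quick numerical reading of Equation 18 (our sketch, with assumed representative densities) shows why deeper leaks feel less buoyancy: supercritical CO2 gets denser with depth, shrinking \(\Delta\rho\):

```python
# Buoyancy-only driving force per meter of rise, from Equation 18.
g = 9.81             # m/s^2
rho_brine = 1050.0   # kg/m^3 -- assumed representative brine density

for rho_co2 in (300.0, 600.0, 800.0):  # kg/m^3, shallow to deep -- assumed
    grad = (rho_brine - rho_co2) * g    # Pa per meter
    print(f"rho_CO2 = {rho_co2:5.0f} kg/m^3 -> potential gradient ~ {grad:,.0f} Pa/m")
```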
Deremble L, et al. [68] simulated the evolution of layers under CO2-rich brine flow. The thickness of the calcite or silica gel layer grows proportionally to the square root of time, while the flux of calcium or CO2 is inversely proportional to it. An increase in the CO2 flow rate increases the calcium dissolution rate until calcite equilibrium is reached at point \(\lambda\), as presented in Figure 15. Then calcium ions cannot be discharged anymore. As a result, CO2 reacts with other species in the cement until it approaches point \(\delta\), where all species are gone. The mathematical model takes into consideration additional physical aspects, including the micro-annulus geometry, the Peclet number, and the characteristic length scales of a defect. The model uses an implicit algorithm to solve for a solution. It also affirms that the penetration depth is proportional to the square root of time.
Huet BM, et al. [69] coupled geochemical and transport modules to simulate the degradation of cement during geological CO2 storage. Dynaflow was adopted to solve a non-linear system of partial differential equations. Both Galerkin finite element and vertex-centered finite volume space discretizations of the transport equations were implemented. An implicit backward finite difference time stepping of the transport equation was applied to produce the results. The effective diffusion coefficient was assumed to be on the order of 10\({}^{-11}\) m\({}^{2}\)/s, following Bentz DP, et al. [70]. The thickness of the calcite layer is also proportional to the square root of time. The difference between the experimental data and the model arises from the estimated diffusion coefficient. The growth of the calcium carbonate concentration over time, as presented in Figure 16, provides insight into the transport of CO2 to the cement. The carbonate concentration increases very rapidly while its radius decreases with elapsed time. There are two distinct regions with different transport mechanisms. In the first region, with exposure time less than 60 days, carbonate species (CO2, HCO3\({}^{-}\)) disperse into the sample through the calcite and the silica gel layers. The second region is where the calcite layer dissolves.
Also, well integrity has been investigated extensively in Hawkes C, et al. [71]; Hawkes CD, et al. [72]; Scherer GW, et al. [73]; Neuville N, et al. [74] to evaluate long-term CO2 storage. Downhole testing programs, such as cement sheath pressure transient testing, mini-frac testing, cement
Figure 16: Carbonate concentration profile over time [69].
Figure 15: The Calcite solubility diagram [68].
sampling, and fluid sampling, were performed.
## Outlook
Traditional Portland cement carries many advantages, such as low cost, high compressive strength, low alkali content, and long-term stability. Nevertheless, cement is sensitive to an acidic environment, and the cement industry is itself a source of CO2 emissions. Therefore, Portland cement is not the ideal sealant material for CCS projects. Based on the nature of CCS projects, an optimal cement should resist corrosion in an acidic environment. Also, the cement should possess low permeability and porosity, and high mechanical strength.
Mahmoud AA, et al. [75] proposed adding Synthetic Polypropylene Fiber (PPF) to Class G cement to improve it. Four samples with 0% (PPF0), 0.125% (PPF1), 0.25% (PPF2), and 0.375% (PPF3) of PPF were prepared for the experiment. The results of this study showed that the carbonation depth and carbonation rate decreased while the compressive strength and tensile strength increased, as presented in Figures 17-20, respectively. The decrease in carbonation depth and carbonation rate indicates a reduction of the cement permeability.
Nanomaterials such as nano-silica (nano SiO\({}_{2}\)) [8], nano-alumina (nano Al\({}_{2}\)O\({}_{3}\)) [76], nano-titanium dioxide (nano TiO\({}_{2}\)) [9], carbon nanotubes (CNTs) [77], polymer/clay nanocomposites [78], and nano glass flakes (NGFs) [79] are considered good additives to improve cement quality because of their large surface area and reactivity. The nanomaterials can solidify the cement microstructure and lessen the porosity, and thus enhance the mechanical strength.
Ponzi GGD, et al. [80] proposed basalt powder as an additive material in cement formulations because basalt powder has low pozzolanic activity, a large inert fraction, and small particle size. The basalt powder acts as a filler for the porous cement network to curb fluid intrusion. The experimental results showed that formulations with low basalt powder content (\(\lesssim 0.5\) wt%) exhibited more resistance to CO\({}_{2}\) degradation, lower porosity and permeability, and stronger mechanical properties.
Other sealant materials can replace traditional Portland cement, such as geopolymer cement, resins, biofilm barriers, and foams [81]. Geopolymers, such as zeolites, were discovered to have better resistance to CO\({}_{2}\)-rich brine because they contain less calcium oxide than Portland cement does. Resins are particle-free fluids with low mobility that cure into hard, rigid, and impermeable materials. They include phenolic,
Figure 19: Compressive strength after experiment 10 days [75].
Figure 20: Tensile Strength after experiment 10 days [75].
epoxy, and furan resins. Biofilm sealants include urea, Ca2+, a nutrient feed, and micro-organisms; the principle is to promote calcite precipitation from the calcium to seal fractures. Foam is a gas-liquid blend; it can block the flow of CO2 in porous media and increase the CO2 viscosity.
Because carbon capture and storage projects are expanding, cement consumption is increasing. The materials introduced above have many advantages, but disadvantages still exist. Therefore, new research should focus on improving cement quality. For example, geopolymer is detrimental to human health; thus, new components should be studied to make it more human-friendly.
## Conclusions
This study's major goal was to review previous papers on carbon capture and storage projects. Many experiments have been performed to assess well integrity and predict the degradation time in geological CO2 storage, but there are very few mathematical models. Research predicts 30,000-70,000 years for 25 mm of cement to be carbonated. Some studies fitted the experimental data to a particular equation and found that the penetration depth is proportional to the square root of time. Furthermore, the debonding interface between casing/cement or cement/formation is the primary cause of CO2 leakage; however, it has not been thoroughly investigated. Some substantial conclusions can be drawn from this review:
* A more accurate mathematical model to evaluate well integrity and anticipate the corrosion rate during geological CO2 storage is crucially needed.
* A further investigation should identify the debonding interface issue between casing/cement or cement/formation and predict how the micro-annuli of casing/cement or cement/formation behave with variations of temperature, stress, and chemical reactions during geological CO2 storage.
* The diffusion coefficient is one of the most crucial parameters in the corrosion process. However, it has not been studied sufficiently in petroleum corrosion. Hence, it should be an interesting topic for future studies.
* Improving the cement properties is one of the means to curb the corrosion rate. Future studies should further investigate how to reduce reactive species and add more inhibitors to improve Portland cement quality.
If these tasks are done properly, it will clear up a considerable concern and make the operation more predictable and manageable.
## Author Contribution
Writing- original draft preparation, Nguyen V; writing-review and editing, Nguyen V, Olatunji O, Guo B and Ning Liu. All authors have read and agreed to the published version of the manuscript.
## Funding
This research received no external funding
## Conflicts of Interest
The authors declare no conflict of interest.
|
2304.11287
|
Simultaneous Reduction of Number of Spots and Energy Layers in Intensity
Modulated Proton Therapy for Rapid Spot Scanning Delivery
|
Reducing proton treatment time improves patient comfort and decreases the
risk of error from intra-fractional motion, but must be balanced against
clinical goals and treatment plan quality. We formulated the proton treatment
planning problem as a convex optimization problem with a cost function
consisting of a dosimetric plan quality term plus a weighted $l_1$
regularization term. We iteratively solved this problem and adaptively updated
the regularization weights to promote the sparsity of both the spots and energy
layers. The proposed algorithm was tested on four head-and-neck cancer
patients, and its performance was compared with existing standard $l_1$ and
group $l_2$ regularization methods. We also compared the effectiveness of the
three methods ($l_1$, group $l_2$, and reweighted $l_1$) at improving plan
delivery efficiency without compromising dosimetric plan quality by
constructing each of their Pareto surfaces charting the trade-off between plan
delivery and plan quality. The reweighted $l_1$ regularization method reduced
the number of spots and energy layers by an average over all patients of 40%
and 35%, respectively, with an insignificant cost to dosimetric plan quality.
From the Pareto surfaces, it is clear that reweighted $l_1$ provided a better
trade-off between plan delivery efficiency and dosimetric plan quality than
standard $l_1$ or group $l_2$ regularization, requiring the lowest cost to
quality to achieve any given level of delivery efficiency. In summary,
reweighted $l_1$ regularization is a powerful method for simultaneously
promoting the sparsity of spots and energy layers at a small cost to dosimetric
plan quality. This sparsity reduces the time required for spot scanning and
energy layer switching, thereby improving the delivery efficiency of proton
plans.
|
Anqi Fu, Vicki T. Taasti, Masoud Zarepisheh
|
2023-04-22T01:24:28Z
|
http://arxiv.org/abs/2304.11287v2
|
Simultaneous Reduction of Number of Spots and Energy Layers in Intensity Modulated Proton Therapy for Rapid Spot Scanning Delivery
###### Abstract
**Objective**: To improve the delivery efficiency of spot scanning proton therapy by simultaneously reducing the number of spots and energy layers using the reweighted \(l_{1}\) regularization method.
**Approach**: We formulated the proton treatment planning problem as a convex optimization problem with a cost function consisting of a dosimetric plan quality term plus a weighted \(l_{1}\) regularization term. We iteratively solved this problem and adaptively updated the regularization weights to promote the sparsity of both the spots and energy layers. The proposed algorithm was tested on four head-and-neck patients, and its performance, in terms of reducing the number of spots and energy layers, was compared with existing standard \(l_{1}\) and group \(l_{2}\) regularization methods. We also compared the effectiveness of the three methods (\(l_{1}\), group \(l_{2}\), and reweighted \(l_{1}\)) at improving plan delivery efficiency without compromising dosimetric plan quality by constructing each of their Pareto surfaces charting the trade-off between plan delivery and plan quality.
**Main results**: The reweighted \(l_{1}\) regularization method reduced the number of spots and layers by an average of 40% and 35%, respectively, with an insignificant cost to dosimetric plan quality. From the Pareto surfaces, it is clear that reweighted \(l_{1}\) provided a better trade-off between plan delivery efficiency and dosimetric plan quality than standard \(l_{1}\) or group \(l_{2}\) regularization, requiring the lowest cost to quality to achieve any given level of delivery efficiency.
**Significance**: Reweighted \(l_{1}\) regularization is a powerful method for simultaneously promoting the sparsity of spots and energy layers at a small cost to dosimetric plan quality. This sparsity reduces the time required for spot scanning and energy layer switching, thereby improving the delivery efficiency of proton plans.
## 1 Introduction
Intensity modulated proton therapy (IMPT) is typically delivered via the pencil beam scanning technique. The patient is irradiated by a sequence of proton spots, arranged laterally
to cover the treatment volume, where the depth of penetration of each spot is determined by its energy layer. During IMPT, protons are transmitted spot-by-spot within every energy layer, and layer-by-layer within every beam over a set of fixed-angle beams. The total plan delivery time is roughly the sum of the switching time between energy layers, the travel time between spots, and the dose delivery time at each spot [GCM\({}^{+}\)20, ZSL\({}^{+}\)22].
In this study, we seek to reduce IMPT delivery time by reducing the number of proton spots and energy layers. A shorter treatment time is desirable because it improves patient comfort, increases patient throughput (hence lowering treatment costs), and decreases the risk of error or uncertainty due to intra-fractional motion. Simultaneously, we want to ensure clinical goals are met and the quality of the treatment plan remains uncompromised. This trade-off between delivery time and plan quality has been the subject of abundant research.
One thread of research focuses on greedy algorithms for energy layer assignment. These algorithms combine a variety of techniques for control point sampling, energy layer distribution, energy layer filtration, and spot optimization [DLZ\({}^{+}\)16]. The goal is to reduce energy layer switching time by pruning the number of energy layers and sequencing them so layer switches only occur from low-to-high energy level [LLZ\({}^{+}\)20, EBW\({}^{+}\)22].
Another strand of research takes a structured optimization-based approach. For example, [uW16] directly minimize the sum of spot intensities as part of a prioritized optimization routine. Other authors formulate the proton treatment delivery problem as a mixed-integer program (MIP), where each energy layer [CLL\({}^{+}\)14] or path between layers [WSGJ\({}^{+}\)22, WZSG\({}^{+}\)22] is associated with a binary indicator variable. Their objective is to minimize the dose fidelity (e.g., over/underdose to the target) plus some penalty or constraints on the energy layers, which promote a lower switching time. Although mathematically elegant, these MIPs are computationally difficult to solve, as they scale poorly with the number of energy layers due to the combinatorial nature of the problem.
To avoid this issue, researchers turned their attention to continuous optimization models. A proton treatment planning problem in this category only contains continuous variables, like spot intensities and doses. Typically, the objective includes a regularization function that is selected to encourage sparsity (i.e., more zero elements) in the spots and energy layers. The regularizer applies a penalty to the total spot intensity within each layer group. A variety of options have been proposed for this penalty function: logarithm [vdWKHH15, vdWBA\({}^{+}\)20], \(l_{2,1/2}\)-norm [GRL\({}^{+}\)20], and \(l_{2}\)-norm [JHP\({}^{+}\)18, LCL\({}^{+}\)19, GCM\({}^{+}\)20, WSGJ\({}^{+}\)22]. The last of these is of particular interest because it is convex and widely used in statistics for promoting group sparsity; the associated regularizer is known as the group lasso [YL06, MvdGB08, LH15, IPR16].
The standard lasso (i.e., \(l_{1}\)-norm penalty) promotes sparsity in the spot vector, but ignores the energy layers. The group lasso promotes sparsity of the energy layers, but actually _increases_ the number of nonzero spots. Since IMPT delivery time depends on both the number of spots and energy layers [GCM\({}^{+}\)20], neither of these regularizers is ideal. In this paper, we propose a new regularization method that simultaneously reduces the number of nonzero spots and energy layers, while upholding treatment plan quality. Our reweighted \(l_{1}\) method combines the \(l_{1}\) penalty from standard lasso with a weighting mechanism that
differentiates between the spots of different energy layers, similar to the group lasso. We test the proposed method on four head-and-neck cases and demonstrate its ability to 1) reduce the number of spots and energy layers simultaneously, and 2) provide a better trade-off between dosimetric plan quality and plan delivery efficiency than existing regularization methods (i.e., standard \(l_{1}\) and group lasso).
## 2 Methods and materials
### Problem formulation
We discretize the patient's body into \(m\) voxels and the proton beams into \(n\) spots (a.k.a., beamlets/bixels). For each spot \(i\in\{1,\ldots,n\}\), we calculate the radiation dose delivered by a unit intensity of that spot to voxel \(j\in\{1,\ldots,m\}\) and call this value \(A_{ij}\). The dose influence matrix is then \(A\in\mathbf{R}_{+}^{m\times n}\), where its rows correspond to the voxels and its columns to the spots. Let \(p\in\mathbf{R}_{+}^{m}\) be the prescription vector, i.e., \(p_{i}\) equals the physician-prescribed dose if \(i\) is a target voxel and zero otherwise. The typical treatment planning problem seeks a vector of spot intensities \(x\in\mathbf{R}^{n}\) that minimizes the deviation of the delivered dose, \(d=Ax\), from the prescription \(p\). This deviation can be decomposed into a penalty on the overdose, \(\overline{d}=(d-p)_{+}=\max(d-p,0)\), and the underdose, \(\underline{d}=(d-p)_{-}=-\min(d-p,0)\), which we combine to form the _cost function_
\[f(\overline{d},\underline{d})=\overline{w}\|\overline{d}\|_{2}^{2}+\underline {w}\|\underline{d}\|_{2}^{2}, \tag{1}\]
where \(\overline{w},\underline{w}\in\mathbf{R}_{+}\) are penalty parameters that determine the relative importance of the over/underdose to the treatment plan. (Note the underdose is ignored for non-target voxels \(i\) because \(p_{i}=0\).)
Dose constraints are defined for each anatomical structure. For a given structure \(s\), let \(A^{s}\in\mathbf{R}_{+}^{m_{s}\times n}\) be the row slice of \(A\) containing only the rows of the \(m_{s}\) voxels in \(s\). A maximum dose constraint takes the form of \(A^{s}x\leq d_{s}^{\max}\), where \(d_{s}^{\max}\) is an upper bound. Similarly, a mean dose constraint is of the form \(\frac{1}{m_{s}}\mathbf{1}^{T}A^{s}x\leq d_{s}^{\rm mean}\). By stacking the constraint matrices/vectors for all \(S\) structures, we can represent the set of dose constraints as a single linear inequality \(Bx\leq c\), where \(B=[A^{1},\frac{1}{m_{1}}\mathbf{1}^{T}A^{1},\ldots,A^{S},\frac{1}{m_{S}} \mathbf{1}^{T}A^{S}]\) and \(c=[d_{1}^{\max},d_{1}^{\rm mean},\ldots,d_{S}^{\max},d_{S}^{\rm mean}]\). Then, our treatment planning problem is
\[\begin{array}{ll}\mbox{minimize}&f(\overline{d},\underline{d})\\ \mbox{subject to}&\overline{d}=(Ax-p)_{+},\quad\underline{d}=(Ax-p)_{-},\quad Bx \leq c\\ &x\geq 0,\quad\overline{d}\geq 0,\quad\underline{d}\geq 0\end{array} \tag{2}\]
with variables \(x\in\mathbf{R}^{n},\overline{d}\in\mathbf{R}^{m}\), and \(\underline{d}\in\mathbf{R}^{m}\). Since the objective function \(f\) is monotonically increasing in \(\overline{d}\) and \(\underline{d}\) over the nonnegative reals, we can write this problem equivalently as
\[\begin{array}{ll}\mbox{minimize}&f(\overline{d},\underline{d})\\ \mbox{subject to}&Ax-\overline{d}+\underline{d}=p,\quad Bx\leq c\\ &x\geq 0,\quad\overline{d}\geq 0,\quad\underline{d}\geq 0.\end{array} \tag{3}\]
(The derivation is provided in appendix A). Problem 3 is a convex quadratic program (QP), hence can be solved using standard convex methods, e.g., the alternating direction method of multipliers (ADMM) [1, 12] or interior-point methods [20, 13]. The reader is referred to [1] and [20] for a thorough discussion of convex optimization.
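To make the formulation concrete, the following is a minimal CVXPY sketch of problem 3, in the spirit of the Python/CVXPY implementation described in section 2.4 below. The dose influence matrix, prescription, structure split, and scalar penalty weights are toy stand-ins, not data or code from this study.

```python
# Minimal CVXPY sketch of the unregularized planning QP (problem 3).
# A, p, and the OAR rows are toy stand-ins; in practice the dose
# influence matrix comes from a dose engine such as MatRad.
import numpy as np
import cvxpy as cp

m, n = 200, 50                            # voxels, spots (toy sizes)
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, (m, n))         # stand-in dose influence matrix
p = np.zeros(m)
p[:80] = 70.0                             # first 80 voxels form the target

x = cp.Variable(n, nonneg=True)           # spot intensities
d_over = cp.Variable(m, nonneg=True)      # overdose
d_under = cp.Variable(m, nonneg=True)     # underdose

w_over, w_under = 1.0, 10.0               # scalar over/underdose penalties
cost = w_over * cp.sum_squares(d_over) + w_under * cp.sum_squares(d_under)

A_oar = A[80:]                            # remaining voxels form one OAR
constraints = [
    A @ x - d_over + d_under == p,        # dose decomposition constraint
    A_oar @ x <= 30.0,                    # max dose rows of Bx <= c
    cp.sum(A_oar @ x) / A_oar.shape[0] <= 15.0,  # mean dose row of Bx <= c
]

prob = cp.Problem(cp.Minimize(cost), constraints)
prob.solve()                              # convex QP; any QP-capable solver works
print("optimal plan cost:", prob.value)
```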
### Common regularizers
The cost function defined in 1 focuses solely on the difference of the delivered dose from the prescription, i.e., the dosimetric plan quality. However, in our treatment scenario, we are also interested in reducing the dose delivery time, i.e., the plan delivery efficiency. The delivery time is positively correlated with the number of nonzero spots (spot scanning rate) and nonzero energy layers (energy switching time) [1, 2]. Thus, we want to augment the objective of problem 3 with a _regularization function_\(r:\mathbf{R}^{n}\rightarrow\mathbf{R}\), which penalizes the spot vector \(x\) in a way that reduces the number of nonzero spots/layers, while maintaining high plan quality. The regularized treatment planning problem is
\[\begin{array}{ll}\mbox{minimize}&f(\overline{d},\underline{d})+\lambda r( x)\\ \mbox{subject to}&Ax-\overline{d}+\underline{d}=p,\quad Bx\leq c\\ &x\geq 0,\quad\overline{d}\geq 0,\quad\underline{d}\geq 0\end{array} \tag{4}\]
with respect to \(x,\overline{d}\), and \(\underline{d}\). Here we have introduced a regularization weight \(\lambda\geq 0\) to balance the trade-off between dosimetric plan quality, represented by the cost \(f(\overline{d},\underline{d})\), and plan delivery efficiency, as captured by the regularization term \(r(x)\). A larger value of \(\lambda\) places more importance on delivery.
In the following subsections, we review a few regularization functions that have been suggested in the literature. Let \(\mathcal{I}=\{1,\ldots,n\}\) and \(\mathcal{G}=\{\mathcal{I}_{1},\ldots,\mathcal{I}_{G}\}\) be a set of subsets of \(\mathcal{I}\), where each \(\mathcal{I}_{g}\subseteq\mathcal{I}\) has exactly \(n_{g}\leq n\) elements. Specifically in our setting, \(\mathcal{G}\) represents a partition of \(n\) spots into \(G\) energy layers with \(\mathcal{I}_{g}\) containing the indices of the \(n_{g}\) spots in layer \(g\).
#### 2.2.1 \(l_{0}\) regularizer
One method of reducing the delivery time is to directly penalize the number of nonzero spots. This can be accomplished via the \(l_{0}\) regularizer
\[r_{0}(x)=\|x\|_{0}=\mathbf{card}(\{i:x_{i}\neq 0\}), \tag{5}\]
which we have defined as the number of nonzero elements in \(x\). (Here \(\mathbf{card}(A)\) denotes the cardinality of set \(A\)). Unfortunately, the \(l_{0}\) regularization function is computationally expensive to implement. To solve problem 4 with \(r=r_{0}\), we would need to solve a series of large mixed-integer programs (MIPs) in order to determine the optimal subset of nonzero spots out of all possible combinations from \(\mathcal{I}\)[1]. As the number of spots \(n\) is typically very large (on the order of \(10^{3}\) to \(10^{4}\)), this quickly becomes computationally intractable.
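For illustration, the sketch below shows how an \(l_{0}\)-regularized problem becomes a MIP: each spot receives a binary indicator and a big-M bound, and the regularizer counts the active indicators. This is a generic textbook formulation, not the authors' method; the names and sizes are invented, and it is only tractable at toy scale with a mixed-integer-capable solver.

```python
# Sketch of l0 regularization as a mixed-integer program: binary z_i
# switches spot i on/off, and the penalty counts the active indicators.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n = 60, 15                        # deliberately tiny: MIPs scale poorly
A = rng.uniform(0.0, 1.0, (m, n))
p = np.full(m, 70.0)

x = cp.Variable(n, nonneg=True)
z = cp.Variable(n, boolean=True)     # z_i = 1 iff spot i may be nonzero
M, lam = 1e4, 5.0                    # big-M intensity bound, sparsity weight

prob = cp.Problem(
    cp.Minimize(cp.sum_squares(A @ x - p) + lam * cp.sum(z)),
    [x <= M * z],
)
prob.solve()                         # needs a mixed-integer solver (e.g., SCIP, MOSEK)
print("nonzero spots:", int(np.count_nonzero(x.value > 1e-6)))
```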
Another option is to apply the \(l_{0}\) regularizer to the energy layers:
\[\tilde{r}_{0}(x)=\left\|\left[\sum_{i\in\mathcal{I}_{1}}x_{i},\ldots,\sum_{i\in \mathcal{I}_{G}}x_{i}\right]\right\|_{0}=\mathbf{card}\left(\left\{g:\sum_{i \in\mathcal{I}_{g}}x_{i}\neq 0\right\}\right). \tag{6}\]
In this case, \(\tilde{r}_{0}\) returns the number of nonzero energy layers, where a layer \(g\) is zero if and only if all its spots are zero, i.e., \(\sum_{i\in\mathcal{I}_{g}}x_{i}=0\). The associated combinatorial problem or MIP simplifies to finding the optimal subset of nonzero layers, which is more manageable since \(G\) is typically on the order of \(10^{2}\). [14] developed an iterative method to solve an approximation of this MIP and were able to reduce the number of proton energies in their IMPT plan, while satisfying certain dosimetric criteria. Nevertheless, as combinatorial optimization is still expensive, we turn our attention to a different regularization function.
#### 2.2.2 \(l_{1}\) regularizer
A common approximation of the \(l_{0}\) regularizer is the \(l_{1}\) norm. Define the \(l_{1}\) regularization function to be
\[r_{1}(x)=\|x\|_{1}=\sum_{i=1}^{n}|x_{i}|. \tag{7}\]
This function is closed, convex, and continuous. When used as a regularizer in problem 4, it produces a convex optimization problem -- a form of the lasso problem -- that promotes sparsity in the solution vector \(x\)[15]. The lasso problem is well-studied in the literature [16, 17], and various methods have been developed to solve it quickly and efficiently [18, 19, 20, 21].
One downside of the \(l_{1}\) regularizer is that it does not differentiate between energy layers and is insensitive to the number of layers: since the spot vector is nonnegative, the absolute value of its elements \(|x_{i}|=x_{i}\), and any sum over energy layers decouples into the sum over all spots \(\sum_{g=1}^{G}\sum_{i\in\mathcal{I}_{g}}|x_{i}|=\sum_{i=1}^{n}x_{i}\).
#### 2.2.3 Group \(l_{2}\) regularizer
The group \(l_{2}\) regularizer, also known as the group lasso, provides an alternative method for efficiently implementing group penalties. This regularization function is defined as
\[r_{2}(x)=\sum_{g=1}^{G}\frac{1}{\sqrt{n_{g}}}\|\{x_{i}:i\in\mathcal{I}_{g}\}\|_ {2}=\sum_{g=1}^{G}\sqrt{\frac{1}{n_{g}}\sum_{i\in\mathcal{I}_{g}}x_{i}^{2}}. \tag{8}\]
It is the sum of the \(l_{2}\) norm of the vector corresponding to each group, weighted by the reciprocal of the square root of the total number of group elements [13]. (The weights \(\frac{1}{\sqrt{n_{g}}}\) may differ across applications; see [20] for alternatives). Group lasso has been widely researched in the context of statistical analysis and regression [21, 22, 23, 24], and many algorithms exist for solving the associated optimization problem
effectively [11, 12, 13]. [10] employed a version of the group lasso to perform adaptive IMPT energy layer optimization.
In our proton treatment delivery scenario, the group lasso is capable of differentiating between energy layers, and thus is a good regularizer for reducing the number of nonzero layers. However, it also tends to _increase_ the number of nonzero spots, as the quadratic penalty term in 8 mostly ignores small spot intensities (the square of a small \(x_{i}\) is near zero). This failure to generate sparsity in the spot vector, due to the characteristics of the \(l_{2}\) norm, makes it an inadequate regularizer for our purposes.
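Both penalties are short expressions in CVXPY. The sketch below is illustrative only: `layers` is an assumed partition of spot indices into energy layers, and the commented line indicates where the penalty would enter problem 4.

```python
# The l1 (eq. 7) and group-l2 (eq. 8) regularizers as CVXPY expressions.
import numpy as np
import cvxpy as cp

n, G = 50, 5
layers = np.array_split(np.arange(n), G)    # toy partition of spots into layers

x = cp.Variable(n, nonneg=True)

r1 = cp.norm(x, 1)                                                   # standard lasso
r2 = sum(cp.norm(x[idx], 2) / np.sqrt(len(idx)) for idx in layers)   # group lasso

# Either expression plugs into problem 4 as the penalty term:
# prob = cp.Problem(cp.Minimize(cost + lam * r1), constraints)
```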
### Reweighted \(l_{1}\) method
As we have discussed, the \(l_{1}\) regularization function promotes sparsity of the spots, but not the energy layers. The group \(l_{2}\) regularization function promotes sparsity of the layers, but not the spots - indeed, it tends to produce dense spot vectors due to the \(l_{2}\) norm. In this section, we introduce the reweighted \(l_{1}\) regularization method, which promotes sparsity in both the spots and the energy layers.
The reweighted \(l_{1}\) method assigns a weight to every spot based on its magnitude and energy layer. The weights are chosen to counteract the intensity of each layer, so that all spots, regardless of magnitude, contribute roughly equally to the total regularization penalty. This is done to imitate the "ideal" group \(l_{0}\) regularizer 6, which counts every nonzero layer as one unit (due to **card**) regardless of intensity.
Formally, we define the _weighted group \(l_{1}\) regularizer_
\[r_{3}(x;\beta)=\sum_{g=1}^{G}\beta_{g}\sum_{i\in\mathcal{I}_{g}}|x_{i}| \tag{9}\]
with weight parameters \(\beta_{g}\in\mathbf{R}_{+}\cup\{+\infty\}\) for \(g=1,\ldots,G\). An intuitive way to set \(\beta_{g}\) is to make it inversely proportional to the true total intensity of energy layer \(g\), i.e.,
\[\beta_{g}=\begin{cases}\frac{1}{\sum_{i\in\mathcal{I}_{g}}x_{i}^{\star}}&\sum _{i\in\mathcal{I}_{g}}x_{i}^{\star}\neq 0\\ +\infty&\sum_{i\in\mathcal{I}_{g}}x_{i}^{\star}=0\end{cases},\]
where \(x^{\star}\in\mathbf{R}_{+}^{n}\) is an optimal spot vector. However, we do not know \(x^{\star}\) beforehand. We will approximate this weighting scheme iteratively using the reweighted \(l_{1}\) method.
The reweighted \(l_{1}\) method is a type of majorization-minimization (MM) algorithm, which solves an optimization problem by iteratively minimizing a surrogate function that majorizes the actual objective function. MM algorithms have a rich history in the literature [1, 13, 14, 15], and reweighted \(l_{1}\) in particular has been used to solve problems in portfolio optimization [10], matrix rank minimization [11, 12], and sparse signal recovery [14]. Research has shown that it is fast and robust, outperforming standard \(l_{1}\) regularization in a variety of settings.
We now describe the reweighted \(l_{1}\) method in our treatment planning setting. Let the initial weights \(\beta^{(1)}=\mathbf{1}\). At each iteration \(k=1,2,\ldots\),
1. Set \(x^{(k)}\) to a solution of \[\begin{array}{ll}\mbox{minimize}&f(\overline{d},\underline{d})+r_{3}(x;\beta^{( k)})\\ \mbox{subject to}&Ax-\overline{d}+\underline{d}=p,\quad Bx\leq c\\ &x\geq 0,\quad\overline{d}\geq 0,\quad\underline{d}\geq 0.\end{array}\] (10)
2. Compute the total intensity of each energy layer \(e_{g}^{(k)}=\sum_{i\in\mathcal{I}_{g}}x_{i}^{(k)}\) for \(g=1,\ldots,G\).
3. Lower threshold the solution \[\tilde{e}_{g}^{(k)}=\max(e_{g}^{(k)},\epsilon^{(k)}),\quad g=1,\ldots,G,\] where \(\epsilon^{(k)}=\delta\max_{g^{\prime}}e_{g^{\prime}}^{(k)}\) for some small \(\delta\in(0,1)\).
4. Update the weights. First, compute the standardized reciprocals \[\alpha_{g}^{(k)}=\left(\frac{1}{\tilde{e}_{g}^{(k)}}\right)\Big{/}\left(\sum _{g^{\prime}=1}^{G}\frac{1}{\tilde{e}_{g^{\prime}}^{(k)}}\right),\quad g=1, \ldots,G.\] (11) Then, calculate the scaling term \[\lambda^{(k)}=\sum_{g=1}^{G}\tilde{e}_{g}^{(k)}\Big{/}\left(\sum_{g=1}^{G} \alpha_{g}^{(k)}\tilde{e}_{g}^{(k)}\right).\] (12) The new weights are \(\beta_{g}^{(k+1)}=\lambda^{(k)}\alpha_{g}^{(k)}\).
5. Terminate on convergence of the objective, or when \(k\) reaches a maximum number of iterations \(K\).
Step 3 was introduced to ensure stability of the algorithm, so that a zero energy layer estimate \(e_{g}^{(k)}=0\) would not preclude the subsequent estimate \(e_{g}^{(k+1)}\) from being nonzero. In our computational experiments, we found that a threshold fraction of \(\delta=0.01\) produced good results. Step 4 was added to ensure the \(l_{1}\) regularization term (\(r_{1}(x)\)) and the reweighted \(l_{1}\) term (\(r_{3}(x;\beta)\)) contribute a similar amount to the objective. To this end, the reweighted \(l_{1}\) term is scaled by \(\lambda^{(k)}\) so that \(r_{1}(x^{(k)})=r_{3}(x^{(k)};\beta^{(k+1)})\).
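A compact sketch of this loop is given below. The helper `solve_problem_10(beta)` is a hypothetical stand-in for any routine that solves the weighted-\(l_{1}\) problem 10 (e.g., the CVXPY model above with penalty \(\sum_{g}\beta_{g}\sum_{i\in\mathcal{I}_{g}}x_{i}\)) and returns the optimal spot vector.

```python
# Reweighted l1 loop (steps 1-5), with `solve_problem_10` assumed.
import numpy as np

def reweighted_l1(solve_problem_10, layers, K=3, delta=0.01):
    G = len(layers)
    beta = np.ones(G)                               # initial weights beta^(1) = 1
    for _ in range(K):                              # step 5: at most K iterations
        x = solve_problem_10(beta)                  # step 1: solve weighted-l1 QP
        e = np.array([x[idx].sum() for idx in layers])  # step 2: layer intensities
        e = np.maximum(e, delta * e.max())          # step 3: lower threshold
        alpha = (1.0 / e) / np.sum(1.0 / e)         # step 4: standardized reciprocals
        lam = e.sum() / np.dot(alpha, e)            # step 4: scaling term
        beta = lam * alpha                          # new weights beta^(k+1)
    return x
```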
The reweighted \(l_{1}\) method has a number of advantages over the other regularizers we reviewed. It encourages sparsity in energy layers by properly grouping the \(l_{1}\) penalty. It spreads this penalty evenly across all energy layers using its weighting scheme, rather than prioritizing those spots with large magnitudes. Finally, it is easy to implement: each iteration of the algorithm only requires that we solve a simple convex problem with \(l_{1}\) regularization, which can be done efficiently using many off-the-shelf solvers. Moreover, the number of reweighting iterations needed in practice is typically very low, with most of the improvement coming from the first \(K=2\) to 3 iterations, so its computational cost is overall low. As we will see in the next section, reweighted \(l_{1}\) outperforms regular \(l_{1}\) and group \(l_{2}\) penalties in sparsifying spots/layers.
### Patient population and computational framework
We compared the reweighted \(l_{1}\) method with standard \(l_{1}\) and group \(l_{2}\) regularization on four head-and-neck cancer patient cases from The Cancer Imaging Archive (TCIA) [CVS\({}^{+}\)13, Nol]. For each case, the planning target volume (PTV) was prescribed a dose of \(p=70\) Gy delivered in 35 fractions of 2 Gy per fraction. We created the dose influence matrix using the proton pencil beam calculation engine in the open-source package MatRad [WCW\({}^{+}\)17, WWG\({}^{+}\)18]. The proton spots were situated on a rectangular grid with a spot spacing of 5 mm, and the grid covered the entire PTV plus 1 mm out from its perimeter. Every patient plan was created using two co-planar beams. Table 1 provides more details.
We implemented the standard \(l_{1}\), group \(l_{2}\), and reweighted \(l_{1}\) based treatment planning methods in Python using CVXPY [DB16, AVDB18, DAM\({}^{+}\)22] and solved the associated optimization problems with MOSEK [ApS22]. All computational processes were executed on a 64-bit PC with an AMD Ryzen 9 3900X CPU @ 3.80 GHz/ 12 cores and 128 GB RAM. For reweighted \(l_{1}\), we ran the algorithm for \(K=3\) iterations due to the diminishing benefits of more iterations.
To facilitate comparisons, we scaled the group \(l_{2}\) regularizer so it lay in the same range as the standard \(l_{1}\) regularizer. First, we solved problem 4 with the standard \(l_{1}\) regularizer 7 and \(\lambda=1\). Let us call this solution \(x^{(1)}\). Then, we computed a scaling term \(\eta>0\) such that \(\eta r_{2}(x^{(1)})=r_{1}(x^{(1)})\). When we ran the group \(l_{2}\) method, we used the scaled regularization function \(\tilde{r}_{2}(x;\eta):=\eta r_{2}(x)\) as the regularizer \(r(x)\) in problem 4. This allowed us to obtain better spot/energy layer comparison plots between the standard \(l_{1}\) and group \(l_{2}\) methods. As pointed out earlier, the reweighted \(l_{1}\) method is similarly scaled via step 4 of the algorithm.
After each method finished, we trimmed the optimal spot vector \(x^{\star}\) further to increase sparsity. First, we zeroed out all elements \(x^{\star}_{i}\) that fell below a fraction \(\gamma\in(0,1)\) of the maximum spot intensity, i.e., we set \(x^{\star}_{i}=0\) if \(x^{\star}_{i}<\gamma\max_{j}x^{\star}_{j}\). We then zeroed out all energy layers of the resulting \(\tilde{x}^{\star}\) that fell below the same fraction of the maximum layer intensity: for each \(g\in\{1,\ldots,G\}\), we set \(\tilde{x}^{\star}_{j}=0\) for all \(j\in\mathcal{I}_{g}\) if \(\sum_{j\in\mathcal{I}_{g}}\tilde{x}^{\star}_{j}<\gamma\max_{g^{\prime}}\sum_{j\in\mathcal{I}_{g^{\prime}}}\tilde{x}^{\star}_{j}\). A choice of \(\gamma=0.01\) provided a reasonable trade-off between sparsity and dose coverage in our computational experiments.
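A sketch of this two-stage trimming, again assuming `layers` holds each energy layer's spot indices:

```python
# Post-optimization trimming: zero weak spots, then weak energy layers.
import numpy as np

def trim(x_star, layers, gamma=0.01):
    x = np.where(x_star < gamma * x_star.max(), 0.0, x_star)  # spot-level trim
    e = np.array([x[idx].sum() for idx in layers])            # layer intensities
    for idx, e_g in zip(layers, e):
        if e_g < gamma * e.max():
            x[idx] = 0.0                                      # layer-level trim
    return x
```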
| | Patient 1 | Patient 2 | Patient 3 | Patient 4 |
| --- | --- | --- | --- | --- |
| Beam configuration | \(40^{\circ},90^{\circ}\) | \(74^{\circ},285^{\circ}\) | \(75^{\circ},130^{\circ}\) | \(220^{\circ},290^{\circ}\) |
| PTV volume (cm\({}^{3}\)) | 162.7 | 169.7 | 129.9 | 12.4 |
| Number of voxels | 87012 | 117907 | 110869 | 50728 |
| Number of spots | 6378 | 7011 | 5257 | 713 |
| Number of energy layers | 56 | 70 | 62 | 38 |

Table 1: Beam configuration, PTV volume, and problem size for each patient.
## 3 Results
### Simultaneous reduction of spots and energy layers
We first compared the results of the regularization methods on a single patient. Figure 1 depicts optimal spot intensities of the unregularized model and the \(l_{1}\), group \(l_{2}\), and reweighted \(l_{1}\) regularized models for patient 2. For all three regularizers, a regularization weight of \(\lambda=5\) was used; this choice accentuated the difference between their spot vectors. Without regularization, about one third of the total 7011 spots are nonzero, with individual spot intensities ranging between \(10^{3}\) and \(10^{4}\). Under \(l_{1}\) regularization, that fraction is more than halved, to only 13%, or 918 nonzero spots, as the \(l_{1}\) penalty encourages further sparsity. By contrast, with group \(l_{2}\) regularization, the number of active spots increases significantly to 5099 or 72%, while the average intensity drops to a little over \(10^{3}\). Reweighted \(l_{1}\) regularization produced a spot vector with the smallest number of nonzero elements: just 541 or 7.7% of the spots are nonzero. These active spots tend to reside in the first beam, and their maximum intensity exceeds that of the other methods.
Figure 2 shows the optimal intensity of the energy layers for patient 2, using various regularization models with \(\lambda=5\). Over 95% of the total 70 energy layers are nonzero under the unregularized model. These active layers are divided fairly evenly into two clusters, which coincide with the two beams marked in the previous figure. The total intensity of each energy layer averages between \(10^{4}\) and \(10^{5}\). With \(l_{1}\) regularization, the fraction of nonzero energy layers drops to a modest 80%, where most of that reduction comes from deactivated layers at the edges of the clusters. Group \(l_{2}\) regularization results in a steeper drop in the fraction of active energy layers, down to 61% with additional sparsity in the middle of both beam clusters. However, the reweighted \(l_{1}\) method performs better than both of these methods, cutting the number of nonzero energy layers down to only 18 - a reduction of over 75% - with a commensurate increase in the intensity of the active layers.
A summary of the results from the different regularization methods is given in figure 3. For a fixed \(\lambda\), it is clear that reweighted \(l_{1}\) achieves the lowest number of nonzero spots and nonzero energy layers out of all the methods.
Figure 1: Optimal spot intensities resulting from the unregularized model and the \(l_{1}\), group \(l_{2}\), and reweighted \(l_{1}\) regularized models (\(\lambda=5\)) for patient 2. The vertical red line divides the spots associated with the first beam (1–3516) from the second beam (3517–7011).
Figure 2: Sum of spot intensities in each energy layer (1–70) for the unregularized model and the \(l_{1}\), group \(l_{2}\), and reweighted \(l_{1}\) regularized models (\(\lambda=5\)) for patient 2. The vertical red line divides the layers associated with the first beam (1–33) from the second beam (34–70).
### Trade-off between delivery efficiency and PTV coverage
The regularization weight in the previous section was chosen to highlight the distinctions between the optimal intensity plots. However, \(\lambda\) must be carefully selected to balance the trade-off between the total delivery time (highly correlated with the sparsity of the spots/energy layers) and the quality of the resulting treatment plan. Figure 4 examines this trade-off for reweighted \(l_{1}\) regularization on patient 2 using two measures of plan quality: D98% and D2% for the PTV. For different values of \(\lambda\), we solved problem 4 using the reweighted \(l_{1}\) method, counted up the number of nonzero spots/energy layers, and calculated the optimal dose vector and PTV dose percentiles. We then plotted a point corresponding to this result in each of the subfigures of 4 with the sparsity metric on the vertical axis and the plan quality metric on the horizontal axis (e.g., the unregularized point \(\lambda=0\) is marked by a triangle \(\triangle\)). By connecting the points in each subfigure, we obtained a set of Pareto optimal curves, which show the trade-off between plan delivery efficiency and plan quality.
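The sweep itself is a simple loop. The sketch below uses hypothetical stand-ins `solve_regularized`, `A`, `ptv_rows`, and `layers`; it relies on the fact that D98% (D2%) is the 2nd (98th) percentile of the PTV voxel dose distribution.

```python
# Sweep the regularization weight and record sparsity and PTV coverage.
import numpy as np

results = []
for lam in np.linspace(0.0, 6.0, 25):
    x = solve_regularized(lam)                   # e.g., the reweighted-l1 loop
    d_ptv = A[ptv_rows] @ x                      # dose delivered to PTV voxels
    results.append({
        "lambda": lam,
        "nonzero_spots": int(np.count_nonzero(x)),
        "nonzero_layers": int(sum(x[idx].sum() > 0 for idx in layers)),
        "D98": np.percentile(d_ptv, 2),          # dose covering 98% of the PTV
        "D2": np.percentile(d_ptv, 98),          # near-maximum (hottest 2%) dose
    })
```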
The top left subfigure depicts the number of nonzero spots versus D98% to the PTV for \(\lambda\) ranging from zero to 6.0 (marked by the square). As \(\lambda\) increases, the number of active spots decreases, but so does D98%. A choice of \(\lambda=0.95\) (marked by the star) achieves the lowest number of nonzero spots \(\approx 720\), while still maintaining D98% above 95% of the prescription, indicated by the vertical gray dotted line at 66.5 Gy. A similar plot can be seen in the
Figure 3: Percentage of nonzero spots/energy layers (relative to the total number of beamlets/layers) for the unregularized model and the \(l_{1}\), group \(l_{2}\), and reweighted \(l_{1}\) regularized models (\(\lambda=5\)) for patient 2.
bottom left subfigure, which shows the number of nonzero energy layers versus D98% to the PTV; the same choice of \(\lambda\) yields 27 active layers. On the right-hand side, the subfigures display the number of nonzero spots (top) and energy layers (bottom) versus D2% to the PTV. As the regularization weight increases, D2% also increases, but never exceeds 108% of the prescription (as indicated by the dotted line at 75.6 Gy) for any \(\lambda\leq 0.95\). Thus, out of all the weights, \(\lambda=0.95\) yields a good trade-off between sparsity and PTV coverage: it achieves a reduction of 89% and 61% in the number of spots and energy layers, respectively, while still fulfilling all target dose constraints.
### Trade-off between delivery efficiency and overall plan quality
This section studies the Pareto optimal trade-off curves between spot/energy layer sparsity and treatment plan quality using different regularizers to determine which regularization method provides the _best_ trade-off, i.e., the largest increase in sparsity for the least decrease
Figure 4: Number of nonzero spots (top) and energy layers (bottom) versus PTV coverage for various values of the regularization weight \(\lambda\), computed using the reweighted \(l_{1}\) method on patient 2. As \(\lambda\) increases, the curves sweep from the \(\triangle\) to the \(\blacksquare\) marker. The vertical gray dotted lines indicate clinical dose constraints on the PTV (D98% \(>0.95p\) and D2% \(<1.08p\), where \(p=70\) Gy is the prescription), and the red arrows indicate the directions of desirable change (increasing D98%, decreasing D2%, and decreasing number of nonzero spots/energy layers). A choice of weight \(\lambda=0.95\), marked by the \(\star\), produces a plan with good sparsity that respects the clinical bounds.
in plan quality. Rather than plotting multiple dose-volume metrics, we focus on a single consolidated quality measure: the plan cost function 1, which includes dose fidelity terms for the PTV and all organs-at-risk (OARs). A lower value of \(f(\overline{d},\underline{d})\) at the optimum implies a higher quality treatment plan.
To facilitate comparison, we also focus on the _relative_ change in sparsity (number of nonzero spots/energy layers) and plan cost with respect to the unregularized solution. If \(c_{unreg}\) is the plan cost resulting from the unregularized problem 3 and \(c_{reg}\) is the plan cost resulting from a particular regularization method, the relative percentage change in the plan cost is \(100(c_{reg}-c_{unreg})/c_{unreg}\). The relative change in the number of nonzero spots and nonzero energy layers is defined in a similar fashion as \(100(s_{reg}-s_{unreg})/s_{unreg}\) and \(100(l_{reg}-l_{unreg})/l_{unreg}\), respectively. Thus, to construct the trade-off curve, we solve the regularized problem for various values of \(\lambda\) and plot the relative change in sparsity versus the relative change in plan cost at each solution point.
For every patient, figure 5 depicts the relative percentage change in the number of nonzero spots versus the relative percentage change in plan cost for the \(l_{1}\), group \(l_{2}\), and reweighted \(l_{1}\) regularization methods. The origin corresponds to the unregularized plan (\(\lambda=0\)). Both the \(l_{1}\) and reweighted \(l_{1}\) trade-off curves drop sharply from the origin, attaining on average a 30% to 45% decrease in nonzero spots for a less than 10% increase in plan cost, with reweighted \(l_{1}\) slightly outperforming \(l_{1}\) by on average 5 percentage points over all four patients. By contrast, the number of nonzero spots rises with group \(l_{2}\) regularization, increasing up to 140% within the first 10% to 15% increase in plan cost for all except patient 4. This is consistent with our spot intensity plot (figure 1), which shows the spot distribution is denser under group \(l_{2}\) than without regularization.
Figure 6 depicts the relative percentage change in the number of nonzero energy layers versus the relative percentage change in plan cost for the three regularization methods. Both \(l_{1}\) and group \(l_{2}\) trade-off curves decrease moderately from the origin, with group \(l_{2}\) averaging about 9.5% lower number of nonzero layers for a given percentage increase in cost. This matches our observations in figures 2 and 3 that the group \(l_{2}\) function is more effective at penalizing energy layers than the \(l_{1}\) norm.
However, the reweighted \(l_{1}\) method significantly outperforms both of these regularizers. For patient 2, it achieves an over 50% decrease in the number of nonzero energy layers for a less than 10% increase in plan cost. For the other patients, it provides a 25% to 35% reduction in active energy layers with a less than 15% increase in plan cost. Reweighted \(l_{1}\)'s average reduction in number of nonzero layers exceeds group \(l_{2}\)'s best reduction by 12 percentage points, and the majority of this reduction is realized with only about 10% cost to treatment plan quality, relative to the unregularized plan.
The vertical dotted lines in figures 5 and 6 for patient 2 correspond to a 10% increase in the plan cost. The intersection of these lines with the Pareto curves of different regularization methods demonstrates the reduction in the number of nonzero spots and energy layers obtained using different regularizers. Figure 7 compares the DVH curves of the unregularized plan (solid lines), the \(l_{1}\) regularized plan (dashed lines), and the reweighted \(l_{1}\) regularized plan (dotted lines) at the same 10% relative change in plan cost. Compared
to no regularization, the reweighted \(l_{1}\) method reduces the number of active spots and energy layers by more than 50%, while providing relatively similar DVH curves with different trade-offs (compromised left parotid and PTV coverage/homogeneity, and improved right parotid and mandible). One can re-adjust the PTV/OAR weights in the plan cost function of the reweighted \(l_{1}\) problem to achieve more uniform trade-offs. In the same vein, compared to standard \(l_{1}\) regularization, reweighted \(l_{1}\) reduces the number of active spots and energy layers by about 10% and 40%, respectively, while producing very similar DVH curves.
Figure 5: Relative change in number of nonzero spots versus relative cost to plan quality with respect to the unregularized model. For patient 2, reweighted \(l_{1}\) regularization achieves a 57% reduction in the number of nonzero spots at only a 10% cost to overall plan quality, relative to the unregularized model, as indicated by the vertical gray dotted line.
Figure 6: Relative change in number of nonzero energy layers versus relative cost to plan quality with respect to the unregularized model. For patient 2, reweighted \(l_{1}\) regularization achieves a 50% reduction in the number of nonzero layers at only a 10% cost to overall plan quality, relative to the unregularized model, as indicated by the vertical gray dotted line.
## 4 Discussion
This study proposed a method to improve the delivery of pencil beam scanning proton plans by simultaneously reducing the number of spots and energy layers using reweighted \(l_{1}\) regularization. One can exactly model the spot/energy layer reduction problem using the \(l_{0}\) regularizer, which in principle would improve plan delivery at the smallest possible cost to plan quality, but the \(l_{0}\)-regularized optimization problem is nonconvex and computationally prohibitive to solve. In imaging science and statistics, researchers often employ the \(l_{1}\) norm as a convex surrogate for the \(l_{0}\) norm, and in some cases (e.g., compressed sensing), the \(l_{1}\) norm has proven to be just as effective as the \(l_{0}\) norm at promoting sparsity [14]. The reweighted \(l_{1}\) regularization method was proposed [15] to bridge the gap between the \(l_{0}\) regularizer and the \(l_{1}\) regularizer by better approximating the \(l_{0}\) norm, while retaining the convexity of the \(l_{1}\) norm. In proton treatment planning, this property translates to improving the plan delivery at a lower cost to plan quality, which we have demonstrated in this work. Our limited computational experiments on four head-and-neck patients show that, for the same cost to plan quality, the reweighted \(l_{1}\) method reduced the number of nonzero spots by up to 10 percentage points more than standard \(l_{1}\) and the number of nonzero energy layers by 25 to 30 percentage points more than group \(l_{2}\) regularization.
Promoting spot/energy layer sparsity to improve plan delivery in IMPT is analogous to promoting beam profile smoothness to improve plan delivery in IMRT. Prior research
Figure 7: DVH curves for patient 2 obtained from the unregularized model (solid), and the standard \(l_{1}\) (dashed) and reweighted \(l_{1}\) (dotted) models regularized to 10% relative cost to plan quality (with \(\lambda=1.325\) and \(\lambda=0.3\), respectively). The vertical gray line indicates the prescription \(p=70\) Gy.
has shown that plan delivery efficiency in IMRT can be significantly improved at minimal cost to dosimetric plan quality due to the phenomenon of _degeneracy_[1, 1]. The structure of the treatment planning problem results in a multitude of feasible plans with near-equal objective value (i.e., quality). This same phenomenon has been observed in IMPT planning problems [20, 21, 22, 23], although unlike IMRT, it currently lacks a rigorous mathematical analysis. Our computational experiments demonstrated that with the reweighted \(l_{1}\) method, one can reduce the number of spots and energy layers by on average 40% and 35%, respectively, without significantly compromising the dosimetric plan quality.
In this study, we have adopted a constrained optimization framework, where the dosimetric plan quality is represented by a quadratic term in the objective, the sparsity promotion is carried out via a regularization penalty term, and the mean/max clinical dose criteria are enforced by hard constraints. However, the proposed reweighted \(l_{1}\) method is agnostic to the optimization framework and can also be used in conjunction with an automation tool (e.g., hierarchical optimization [16, 15, 17], multiple criteria optimization (MCO) [14, 15], knowledge-based planning (KBP) [1, 23]). Dose-volume histogram (DVH) constraints and plan robustness may be integrated into the optimization problem using existing techniques in the literature [20, 21, 22, 24, 25]. Previous studies have shown that improving plan delivery does not have a significant adverse impact on robustness [22, 23].
Finally, we mention that in this proof-of-concept work, we have not calculated the delivery time because it requires machine-specific delivery parameters such as dose rate, energy layer switching time, and spot travel time. We have also not enforced the machine-specific minimum-monitor-unit (min-MU) constraint. One can enforce the min-MU constraint using a two-step optimization method as described in [23]: the first step identifies the active spots/energy layers (i.e., those with intensity greater than a pre-determined value), and the second step removes the inactive spots/energy layers and enforces the min-MU constraint on the remaining spots. Increasing the min-MU threshold also allows for a higher dose rate, which can accelerate the delivery of each spot. This is especially important because the intensities of the active spots usually increase with the overall sparsity of the spots/energy layers in the treatment plan. [22] suggested using different min-MU thresholds for each energy layer to further increase the dose rate and expedite spot delivery.
## 5 Conclusion
The reweighted \(l_{1}\) regularization method is capable of simultaneously reducing the number of spots and energy layers in a proton treatment plan, while imposing minimal cost to dosimetric plan quality. Moreover, it achieves a better trade-off between delivery efficiency and plan quality than standard \(l_{1}\) and group \(l_{2}\) regularization. Thus, reweighted \(l_{1}\) regularization is a powerful method for improving the delivery of proton therapy.
## Acknowledgments
This work was partially supported by MSK Cancer Center Support Grant/Core Grant from the NIH (P30 CA008748).
|
2305.07914
|
Quantum Uncertainty Principles for Measurements with Interventions
|
Heisenberg's uncertainty principle implies fundamental constraints on what
properties of a quantum system we can simultaneously learn. However, it
typically assumes that we probe these properties via measurements at a single
point in time. In contrast, inferring causal dependencies in complex processes
often requires interactive experimentation - multiple rounds of interventions
where we adaptively probe the process with different inputs to observe how they
affect outputs. Here we demonstrate universal uncertainty principles for
general interactive measurements involving arbitrary rounds of interventions.
As a case study, we show that they imply an uncertainty trade-off between
measurements compatible with different causal dependencies.
|
Yunlong Xiao, Yuxiang Yang, Ximing Wang, Qing Liu, Mile Gu
|
2023-05-13T13:11:28Z
|
http://arxiv.org/abs/2305.07914v1
|
# Quantum Uncertainty Principles for Measurements with Interventions
###### Abstract
Heisenberg's uncertainty principle implies fundamental constraints on what properties of a quantum system we can simultaneously learn. However, it typically assumes that we probe these properties via measurements at a single point in time. In contrast, inferring causal dependencies in complex processes often requires interactive experimentation - multiple rounds of interventions where we adaptively probe the process with different inputs to observe how they affect outputs. Here we demonstrate universal uncertainty principles for general interactive measurements involving arbitrary rounds of interventions. As a case study, we show that they imply an uncertainty trade-off between measurements compatible with different causal dependencies.
**Introduction -** We learn about physical systems through measurement, and the uncertainty principle fundamentally limits what we can simultaneously learn [1]. Quantum mechanics asserts the existence of incompatible measurements (e.g., position and momentum of a free particle), such that predicting both outcomes to absolute precision is impossible [2; 3; 4]. Subsequent use of information theory led to various entropic uncertainty relations that quantified uncertainty using entropic measures [5], culminating in universal uncertainty relations that provide general constraints on the joint probabilities of incompatible measurements [6; 7; 8; 9].
Yet these relations pertain only to passive measurements, where a system is left to evolve freely before observation (see Fig. 1a). In contrast, the most powerful means of learning involve intervention. When toddlers learn about their environment, they do not merely observe. Instead, they actively intervene - performing various actions, observing the resulting reactions, and adapting future actions based on observations. Such _interactive measurements_ are essential to fully infer causation, so we may know whether one event caused another or whether both emerged from some common cause [10]. Indeed, interactive measurements permeate diverse sciences, whether in using reinforcement learning to identify optimal behavioural strategies or in sending data packets to probe the characteristics of a network [11; 12; 13]. Such interactive measurement processes also describe many quantum protocols, including quantum illumination, quantum-enhanced agents, and non-Markovian open systems [14; 15; 16; 17].
Could uncertainty principles also fundamentally constrain such interactive measurements (see Fig. 1b-d)? How would such principles interplay with interventions
Figure 1: **Interactive Measurements:** Our uncertainty relations apply to all interactive measurements, including (a) passive measurements (framed by standard uncertainty relations) and (b) two-time measurements, where a quantum system first passes a quantum instrument that incorporates both a measurement outcome and the output state and later gets measured, as described by the framework of pseudo-density matrix [18; 19]. Our results also pertain (c) non-Markovian interactive measurements that involve coherently interacting the system with a quantum register \(R\) and doing some joint measurement at a subsequent time-step and, most generally, (d) any interactive measurement \(\mathcal{T}_{x}\) with interventions at \(a-1\) different time-steps.
aimed at discerning causal structure? Here, we explore these questions by deriving a universal uncertainty principle that constrains the joint measurement probabilities of interactive measurements. This principle then pinpoints when two interactive measurements are non-compatible and quantifies the necessary trade-offs in the certainty of their measurement outcomes. Our results make no assumptions on the number of interventions or the causal structure of the processes we probe, and they encompass previous uncertainty relations for states and channels as special cases [20; 21; 22]. We apply them to interactive measurements compatible with direct-cause vs. common-cause structures, showing that they satisfy an uncertainty trade-off analogous to that of position and momentum.
**Framework -** The premise of an interactive measurement consists of an agent that wishes to probe the dynamics of some unknown quantum process \(\Phi\). Here \(\Phi\) can be modelled as an open quantum system, consisting of a system accessible to the agent, with Hilbert space \(\mathcal{H}\), coupled to some generally non-Markovian environment \(E\) (see blue shaded region in Fig. 1d). Initially, the \(\mathcal{H}\)-\(E\) system is in some joint state \(\rho\). At each time-step \(k\), the system and environment jointly evolve under \(\Psi^{k}\). \(\Phi\) is then completely defined by the set \(\{\Psi^{k}\}_{k=2}^{a}\) and the initial state \(\rho\), where \(a\) represents the number of time-steps. In the literature, \(\Phi\) offers the most general representation of non-Markovian quantum stochastic processes [23] and is also closely related to concepts of higher-order quantum maps, adaptive agents, and causal networks [24; 25; 26; 27; 28; 29].
Interactive measurements then represent the most general means for an agent to determine properties of \(\Phi\) (see black shaded region in Fig. 1d): the agent initializes some internal memory register \(R\); between time-steps (i.e., before \(\Psi^{k}\) with \(2\leqslant k\leqslant a\)), the agent performs an _intervention_ - some general quantum operation \(\Lambda^{k}\) that interacts her memory \(R\) with the accessible system \(\mathcal{H}\); after \(a-1\) such interventions, the agent finally makes a joint measurement with respect to some positive operator valued measure (POVM) \(M:=\{M_{x}\}\) on the joint \(\mathcal{H}\)-\(R\) system to obtain some outcome \(x\). Thus, each interactive measurement \(\mathcal{T}\) is completely defined by a set of interventions \(\{\Lambda^{k}\}_{k=1}^{a-1}\) and a POVM \(M\). Just as a conventional POVM measurement on a quantum state induces some probability distribution over measurement outcomes, so does an interactive measurement on a quantum process. Analogous to eigenstates, we say \(\Phi\) is an _eigencircuit_ of \(\mathcal{T}\) if \(\Phi\) always yields a definite outcome when measured by \(\mathcal{T}\).
We make two remarks. (1) Interactive measurements encompass _everything_ an agent can possibly do causally. Notably, \(R\) can also store classical information: for example, the agent may perform a projective measurement and condition her future actions on the system on the results of that measurement. (2) Both \(\Phi\) and \(\mathcal{T}\) have succinct representations using Choi-Jamiolkowski operators, often referred to as quantum combs [25; 26] or process tensors [17]. We provide a rigorous mathematical treatment in the supplemental material [30, Sec. IB and IC].
**Uncertainty Principles -** In conventional quantum theory, certain observables are mutually incompatible. Given an observable \(\mathcal{O}\) whose outcomes \(o_{k}\) occur with probability \(p_{k}\), we can quantify the uncertainty by the Shannon entropy \(H(\mathcal{O}):=-\sum_{k}p_{k}\log p_{k}\). The entropic uncertainty principle then states that there exist mutually non-compatible observables \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\), such that the joint uncertainty \(H(\mathcal{O}_{1})+H(\mathcal{O}_{2})\) is always lower-bounded by some state-independent constant \(C>0\)[31; 32; 33].
Can we identify similar uncertainty relations for general interactive measurements? We answer this question by employing majorization [34]. Consider two probability vectors \(\mathbf{x}\) and \(\mathbf{y}\), whose elements \(x_{k}\) and \(y_{k}\) are arranged in non-increasing order. We say \(\mathbf{x}\) is majorized by \(\mathbf{y}\), written as \(\mathbf{x}\prec\mathbf{y}\), if \(\sum_{k=1}^{i}x_{k}\leqslant\sum_{k=1}^{i}y_{k}\) holds for all indices \(i\). The rationale is that majorization maintains significant connections with entropy since \(\mathbf{x}\prec\mathbf{y}\) implies that \(H(\mathbf{x})\geqslant H(\mathbf{y})\). In fact, \(\mathbf{x}\prec\mathbf{y}\) implies \(f(\mathbf{x})\geqslant f(\mathbf{y})\) for a large class of functions known as _Schur-concave functions_. Such functions align with those that remain non-decreasing under random relabeling of measurement outcomes, and have been proposed as the most general class of uncertainty quantifiers [6]. Thus, majorization constraints on outcome probabilities for conventional quantum measurements are referred to as universal uncertainty relations [6; 7; 8; 9]. Here, we establish such a _universal uncertainty relation for general interactive measurements_ (see the supplemental material [30, Sec. IID] for the proof):
**Lemma 1**.: _Consider two distinct interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) on some dynamical process \(\Phi\), with outcomes described by probability distributions \(\mathbf{p}\) and \(\mathbf{q}\). There then exists a probability vector \(\mathbf{v}(\mathcal{T}_{1},\mathcal{T}_{2})\) such that_
\[\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\prec\mathbf{v}(\mathcal{T}_{1 },\mathcal{T}_{2}). \tag{1}\]
_Here the vector-type bound \(\mathbf{v}(\mathcal{T}_{1},\mathcal{T}_{2})\) is independent of \(\Phi\), and hence captures the essential incompatibility between \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\). Meanwhile, \(\oplus\) represents the concatenation of vectors. For example, \((1,0)\oplus(1/2,1/2)=(1,0,1/2,1/2)\)._
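The majorization relation in Lemma 1 is easy to verify numerically. In the sketch below, the outcome distributions \(\mathbf{p}\), \(\mathbf{q}\) and the bound \(\mathbf{v}\) are invented toy values, not bounds derived for any particular measurement pair.

```python
# Numerically check (1/2)p ⊕ (1/2)q ≺ v by comparing sorted partial sums.
import numpy as np

def majorized_by(x, y, tol=1e-12):
    """True if x ≺ y (both assumed to sum to the same total)."""
    xs = np.sort(x)[::-1]                         # non-increasing order
    ys = np.sort(y)[::-1]
    k = max(len(xs), len(ys))
    xs = np.pad(xs, (0, k - len(xs)))             # pad shorter vector with zeros
    ys = np.pad(ys, (0, k - len(ys)))
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))

p = np.array([0.7, 0.3])                          # toy outcome distributions
q = np.array([0.5, 0.5])
lhs = np.concatenate([p / 2, q / 2])              # (1/2)p ⊕ (1/2)q
v = np.array([0.6, 0.4, 0.0, 0.0])                # hypothetical bound v(T1, T2)
print(majorized_by(lhs, v))                       # True for these toy values
```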
Our result for interactive measurements is also universal in this sense. In particular, it implies an infinite family of uncertainty relations, namely \(f(\mathbf{p}/2\oplus\mathbf{q}/2)\geqslant f(\mathbf{v}(\mathcal{T}_{1},\mathcal{T}_{2}))\) for any Schur-concave function \(f\) (including Rényi entropies). Choosing \(f\) as the Shannon entropy, Lem. 1 then yields entropic bounds for general interactive measurements (see [30, Sec. IID] for details):
**Theorem 1**.: _Given two interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) acting on some dynamical process \(\Phi\). The entropies of their measurement outcomes [35] satisfy_
\[H(\mathcal{T}_{1})_{\Phi}+H(\mathcal{T}_{2})_{\Phi}\geqslant C(\mathcal{T}_{1}, \mathcal{T}_{2}), \tag{2}\]
_where \(C(\mathcal{T}_{1},\mathcal{T}_{2})\) - measuring incompatibility between \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) - is non-negative and independent of \(\Phi\). \(C(\mathcal{T}_{1},\mathcal{T}_{2})\) can be explicitly computed. It is strictly non-zero whenever \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) have no common eigencircuit._
In [30, Sec. IID], we illustrate a choice of \(C(\mathcal{T}_{1},\mathcal{T}_{2})\) that reduces to \(\log(1/c)\) when \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) are standard quantum measurements. Here \(c\) stands for the maximal overlap between measurements [5]. Meanwhile, just as there exist many alternative bounds beyond \(\log(1/c)\)[36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47], there are many other valid bounds for \(H(\mathcal{T}_{1})_{\Phi}+H(\mathcal{T}_{2})_{\Phi}\) (See [30, Sec. IID]). Here we focus on a choice of \(C(\mathcal{T}_{1},\mathcal{T}_{2})\) that can give tighter bounds in causal inference settings. More results are presented in [30, Sec. IIC].
Our formulations of \(\mathbf{v}(\mathcal{T}_{1},\mathcal{T}_{2})\) and \(C(\mathcal{T}_{1},\mathcal{T}_{2})\) carry direct operational meaning in a guessing game which we refer to as the _quantum roulette_. The two-party game consists of (1) Alice, an agent that probes any supplied dynamical process using one of two possible interactive measurements, \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\), and (2) Bob, who can engineer various dynamical processes for Alice to probe (see Fig. 2). In each round, Alice and Bob begin with a 'roulette table', whose layout consists of all tuples \((b,x)\), where \(b\in\{1,2\}\) and \(x\) are all possible measurement outcomes of \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\). Bob begins with \(k\) chips, which he can use to place bets on \(k\) of the possible tuples and supplies Alice with any \(\Phi\) of his choosing. Alice will then select some \(b\in\{1,2\}\) at random and probe \(\Phi\) with \(\mathcal{T}_{b}\). She finally announces both \(b\) and the resulting measurement outcome \(x\). Bob wins if one of his chips is on \((b,x)\).
Let \(p_{k}\) denote Bob's maximum winning probability. Naturally \(p_{0}=0\) and \(p_{k}\) increases monotonically with \(k\), tending to \(1\). We define a probability vector \(\mathbf{w}\) with elements \(w_{k}=p_{k}-p_{k-1}\), \(k=1,2,\ldots\), representing the increase in Bob's probability of winning with \(k\) rather than \(k-1\) chips. In [30, Sec. IID], we show that \(\mathbf{v}(\mathcal{T}_{1},\mathcal{T}_{2}):=\mathbf{w}\) and \(C(\mathcal{T}_{1},\mathcal{T}_{2}):=2H(\mathbf{w})-2\) are bounds for \(\mathbf{p}/2\oplus\mathbf{q}/2\) and \(H(\mathcal{T}_{1})_{\Phi}+H(\mathcal{T}_{2})_{\Phi}\) respectively.
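As a toy numerical illustration of this construction (the winning probabilities \(p_{k}\) below are invented, not computed from any particular measurement pair):

```python
# Build w from Bob's maximum winning probabilities and evaluate the
# entropic bound C(T1, T2) = 2*H(w) - 2 (entropy in bits).
import numpy as np

p_win = np.array([0.0, 0.5, 0.8, 1.0])        # hypothetical p_0, p_1, p_2, p_3
w = np.diff(p_win)                            # w = (0.5, 0.3, 0.2)
H = -np.sum(w[w > 0] * np.log2(w[w > 0]))     # Shannon entropy H(w)
C = 2 * H - 2                                 # ~0.97 > 0 for these toy values
print(f"H(w) = {H:.3f} bits, C = {C:.3f}")
```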
This game gives an operational criterion of non-compatibility for interactive measurements. When two observables are compatible, \(H(\mathbf{w})=1\). This aligns with the scenario that \(\mathbf{w}=(0.5,0.5,0,\ldots,0)\), which occurs when Bob's success rate is limited only by his uncertainty about which measurement Alice makes. That is, placing one chip ensures Bob can correctly predict the outcome of \(\mathcal{T}_{1}\), and two chips give him perfect prediction regardless of \(b\). We see this is only possible if \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) share at least one common eigencircuit. Thus, \(H(\mathcal{T}_{1})_{\Phi}+H(\mathcal{T}_{2})_{\Phi}\) is strictly greater than \(0\) whenever \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) share no common eigencircuit.
**Causal Uncertainty Relations -** The central relevance of interventions in causal inference makes it an appropriate illustrative example [48]. Consider the case where \(\Phi\) represents a \(d\)-level system (the accessible qudit) that evolves while in possible contact with other systems (e.g. a non-Markovian environment \(E\)). Now suppose an agent, Alice, can access this qudit at two different points in time, say \(t_{X}\) and \(t_{Y}\). In general the quantum process \(\Phi\) can fall under three scenarios [49]:
1. The system at \(t_{X}\) is a _direct cause_ of the system at \(t_{Y}\): the qudit at \(t_{Y}\) is the output of some quantum
Figure 2: **The Quantum Roulette:** The quantum roulette is a game that aids in interpreting lower bounds for the combined uncertainty of two general interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\), pictured in (b). Now introduce a quantum 'roulette table' with a \(2\times m\) grid of cells (c), labelled \((b,x)\) with \(b\in\{1,2\}\) and \(x=1,\ldots,m\). In the \(k\)th-order game, Bob begins with \(k\) chips, which he can allocate to \(k\) of these cells. Bob then supplies Alice with a dynamical process \(\Phi\) (a). Alice selects a \(b\) at random, and measures \(\Phi\) with \(\mathcal{T}_{b}\) to obtain outcome \(x\). Bob wins if he has a chip on the cell \((b,x)\). Lem. 1 and Thm. 1 then relate Bob's winning probabilities to the incompatibility between \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\).
map acting on the qudit at \(t_{X}\) (Fig. 3b-i).
2. The system at \(t_{X}\) and \(t_{Y}\) share a _common cause_: the qudit at \(t_{X}\) is correlated with an environmental qudit \(E\). \(E\) is measured at time \(t_{Y}\) (Fig. 3b-ii).
3. A mixture of both, corresponding to a general non-Markovian quantum process (Fig. 3b-iii).
We now introduce two families of interactive measurements: \(\mathcal{M}_{\mathrm{CC}}\) and \(\mathcal{M}_{\mathrm{DC}}\), as depicted in Fig. 4. Each \(\mathcal{T}_{1}\in\mathcal{M}_{\mathrm{CC}}\) is a _maximal common-cause indicator_, such that its eigencircuits imply that \(X\) and \(Y\) are actually two arms of some maximally entangled state (Fig. 3b-ii). Meanwhile, each \(\mathcal{T}_{2}\in\mathcal{M}_{\mathrm{DC}}\) is a _maximal direct-cause indicator_, whose eigencircuits involve a lossless channel from \(X\) to \(Y\) (i.e., Fig. 3b-i where \(\Psi\) is unitary). In [30, Sec. IIIA], we establish the following _causal uncertainty relation_:
\[H(\mathcal{T}_{1})+H(\mathcal{T}_{2})\geqslant 2\log d, \tag{3}\]
for any \(\mathcal{T}_{1}\in\mathcal{M}_{\mathrm{CC}}\) and \(\mathcal{T}_{2}\in\mathcal{M}_{\mathrm{DC}}\). Here \(H(\mathcal{T}_{i})\) (\(i=1,2\)) is the Shannon entropy of the probability distribution of outcomes when \(\mathcal{T}_{i}\) is measured. Furthermore, this bound can be saturated.
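To make the \(d=2\) case of Eq. 3 concrete, the following numpy sketch evaluates both entropies on a purely common-cause process. It is our illustration, not the paper's numerics: Bell-basis measurements stand in for particular members of \(\mathcal{M}_{\mathrm{CC}}\) and \(\mathcal{M}_{\mathrm{DC}}\) (all \(U_{k}\) set to the identity), and the reduced states on the measured wires are written down by hand rather than simulated from a full circuit.

```python
import numpy as np

# Bell basis |Phi_i> = (1 ⊗ sigma_i)|Phi+>, i = 0,...,3
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
bell = [np.kron(I2, s) @ phi_plus for s in (I2, sx, sy, sz)]

def bell_entropy(rho):
    """Shannon entropy (bits) of a Bell-basis measurement on a 2-qubit state."""
    p = np.real(np.array([v.conj() @ rho @ v for v in bell]))
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Purely common-cause process: (A, C) is a source Bell pair, input B discarded.
# The common-cause indicator Bell-measures (A, C) directly; the direct-cause
# indicator feeds half of a fresh Bell pair into B and Bell-measures the
# retained half against C, so it sees two uncorrelated maximally mixed qubits.
rho_AC = np.outer(phi_plus, phi_plus.conj())   # seen by T1 in M_CC
rho_RC = np.eye(4, dtype=complex) / 4          # seen by T2 in M_DC
total = bell_entropy(rho_AC) + bell_entropy(rho_RC)
print(total)   # 0 + 2 = 2 = 2*log2(d): the bound of Eq. 3 is saturated
```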
Consider the application of this uncertainty relation to a specific parameterized quantum circuit \(\Phi_{\alpha,\beta}\) (Fig. 5a) describing a single qubit undergoing non-Markovian evolution. Fig. 5b then demonstrates the combined uncertainty \(H(\mathcal{T}_{1})+H(\mathcal{T}_{2})\) for various values of \(\alpha\) and \(\beta\), including cases where they saturate the lower bound of 2. We also note that, unlike classical processes, which must be purely common-cause, purely direct-cause, or a probabilistic mixture of both, quantum processes can feature richer causal dependencies [50]. Fig. 5c depicts this for the cross-section of \(\alpha=\pi/4\). Such circuits include the coherent superposition of direct and common cause as a special case. Our causal uncertainty relation also applies to these uniquely quantum causal structures.
**Discussion** - The most powerful means of learning involves interactive measurement: a procedure in which we can intervene by injecting (possibly entangled) quantum states into the process over multiple time-steps before observing the final output. Here, we derive entropic uncertainty relations that govern all interactive measurements, bounding their joint uncertainty whenever such measurement outcomes are non-compatible. In the context of causal inference, they predict a uniquely quantum entropic trade-off between measurements that probe for direct and common cause. More generally, our relations encompass all possible means for an agent to interact with and learn about a target quantum system, and thus include previously studied uncertainty relations on states and channels as special cases.
Figure 3: **Quantum Description of Causal Structures:** There are three possible causal structures for two events \(X\) and \(Y\), all of which can be expressed by a quantum dynamical process \(\Phi_{B\to AC}\). In (i) direct-cause, \(\Phi_{B\to AC}\) involves preparing a state \(A\) to be observed at \(X\), whose output is sent directly to \(Y\) via a quantum channel from \(B\) to \(C\). In (ii) common-cause, correlations between \(X\) and \(Y\) can be attributed to measurements on some pre-prepared correlated state \(\rho_{AC}\) (event \(Z\)). Most generally (iii), \(\Phi_{B\to AC}\) consists of a state-preparation process \(\Psi^{\mathrm{Pre}}_{\mathds{C}\to AE}\) and a post-processing quantum channel \(\Psi^{\mathrm{Post}}_{BE\to C}\) (b-iii; \(E\) is an ancillary system). This then corresponds to a (possibly coherent) mixture of direct and common cause.
Figure 4: **Maximal Common-Cause and Direct-Cause Indicators:** We introduce (a) \(\mathcal{M}_{\mathrm{CC}}=\{\mathcal{T}_{\mathrm{CC}}(U_{1},U_{2})\}\) and (b) \(\mathcal{M}_{\mathrm{DC}}=\{\mathcal{T}_{\mathrm{DC}}(U_{3},U_{4})\}\) as two respective families of interactive measurements with a single intervention. Here, systems \(A\), \(B\) and \(C\) are \(d\)-level quantum systems (qudits), each \(U_{k}\), \(k=1,2,3,4\), is some single-qudit unitary, and \(\ket{\Phi_{1}}:=\sum_{k=0}^{d-1}\ket{kk}/\sqrt{d}\). Measurements are performed with respect to a maximally entangled basis \(\{\Phi_{i}\}_{i}\) with \(d^{2}\) possible outcomes. The two measurement families are incompatible, and satisfy the causal uncertainty relation in Eq. 3.
One potential application of such relations is the metrology of unknown quantum processes with memory [51, 52, 53]. In practice, full tomography of a general quantum process can be extremely costly. Even a single non-Markovian qubit measured at two different times requires 54 different interactive measurements [54]. Our result may help us ascertain specific properties of a process while avoiding this costly procedure. In [30, Sec. IVB], we illustrate how our causal uncertainty relations imply that a single interactive measurement can rule out specific causal structures. Indeed, quantum illumination and adaptive sensing can both be cast as measuring desired properties of a candidate quantum process, and thus could benefit from such an approach.
Interactive measurements through repeated interventions also emerge in other settings [55, 56, 57]. In quantum open systems, sequential intervention provides a crucial toolkit for characterizing non-Markovian noise [58, 59, 60, 61]. Meanwhile, in reinforcement learning, quantum agents that continuously probe an environment show enhancements in enacting or learning complex adaptive behaviour [62, 63, 12]. Investigating uncertainty relations specific to such contexts has exciting potential, perhaps revealing new means of probing non-Markovian dynamics, or fundamental constraints on how well an agent can simultaneously optimize two different rewards.
###### Acknowledgements.
We would like to thank Varun Narasimhachar, Jayne Thompson, and Bartosz Regula for fruitful discussions. This work is supported by the Singapore Ministry of Education Tier 2 Grant MOE-T2EP50221-0005; the National Research Foundation, Singapore, and the Agency for Science, Technology and Research (A*STAR) under the QEP2.0 programme (NRF2021-QEP2-02-P06); the Singapore Ministry of Education Tier 1 Grant RG146/20; and FQXi-RFP-1809 (The Role of Quantum Effects in Simplifying Quantum Agents) from the Foundational Questions Institute and Fetzer Franklin Fund (a donor-advised fund of Silicon Valley Community Foundation). Y. X. is supported by A*STAR's Central Research Fund (CRF UIBR). Y. Y. acknowledges support from the Swiss National Science Foundation via the National Center for Competence in Research "QSIT" as well as via project No. 200020_165843, from the Guangdong Basic and Applied Basic Research Foundation (Project No. 2022A1515010340), and from the Hong Kong Research Grants Council (RGC) through the Early Career Scheme (ECS) grant 27310822. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation or the Ministry of Education, Singapore.
## References
* [1] W. Heisenberg, Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik, Zeitschrift für Physik **43**, 172 (1927).
* [2] E. H. Kennard, Zur Quantenmechanik einfacher Bewegungstypen, Zeitschrift für Physik **44**, 326 (1927).
* [3] H. Weyl, _Gruppentheorie und Quantenmechanik_ (S. Hirzel, 1928).
* [4] H. P. Robertson, The uncertainty principle, Phys. Rev. **34**, 163 (1929).
* [5] D. Deutsch, Uncertainty in quantum measurements, Phys. Rev. Lett. **50**, 631 (1983).
* [6] S. Friedland, V. Gheorghiu, and G. Gour, Universal uncertainty relations, Phys. Rev. Lett. **111**, 230401 (2013).
* [7] Z. Puchala, L. Rudnicki, and K. Zyczkowski, Majorization entropic uncertainty relations, Journal of Physics A **46**, 272002 (2013).
* [8] L. Rudnicki, Z. Puchala, and K. Zyczkowski, Strong majorization entropic uncertainty relations, Phys. Rev. A **89**, 052115 (2014).
Figure 5: **Causal Uncertainty Relations on Non-Markovian Dynamics:** Consider a single qubit (the bottom rail of the circuit in (a)) undergoing non-Markovian evolution. Here the systems are initialized in a maximally entangled state, and \(Z(\theta)\) and \(X(\theta)\) represent single-qubit rotation gates about the \(Z\) and \(X\) axes. (b) illustrates the combined uncertainty \(H(\mathcal{T}_{1})+H(\mathcal{T}_{2})\), where \(\mathcal{T}_{1}\in\mathcal{M}_{\mathrm{CC}}\) and \(\mathcal{T}_{2}\in\mathcal{M}_{\mathrm{DC}}\) are respectively the common-cause and direct-cause indicators of Fig. 4 with all \(U_{k}\) set to the identity. Observe that this never goes below the fundamental lower bound of 2 (gray plane). (c) illustrates \(H(\mathcal{T}_{1})\) (green dashed), \(H(\mathcal{T}_{2})\) (red dashed) and their sum (blue solid) for \(\alpha=-\pi/4\) and various values of \(\beta\), corresponding to various coherent superpositions of common-cause and direct-cause circuits.
* [9] Z. Puchala, L. Rudnicki, A. Krawiec, and K. Zyczkowski, Majorization uncertainty relations for mixed quantum states, Journal of Physics A **51**, 175306 (2018).
* [10] J. Pearl, _Causality_, 2nd ed. (Cambridge University Press, 2009).
* [11] G. Lample and D. S. Chaplot, Playing FPS games with deep reinforcement learning, in _Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence_, AAAI'17 (AAAI Press, 2017) pp. 2140-2146.
* [12] G. D. Paparo, V. Dunjko, A. Makmal, M. A. Martin-Delgado, and H. J. Briegel, Quantum speedup for active learning agents, Phys. Rev. X **4**, 031002 (2014).
* [13] H. X. Nguyen and P. Thiran, Active measurement for multiple link failures diagnosis in IP networks, in _Passive and Active Network Measurement_, edited by C. Barakat and I. Pratt (Springer Berlin Heidelberg, Berlin, Heidelberg, 2004) pp. 185-194.
* [14] T. J. Elliott, M. Gu, A. J. P. Garner, and J. Thompson, Quantum adaptive agents with efficient long-term memories, Phys. Rev. X **12**, 011007 (2022).
* [15] S. Lloyd, Enhanced sensitivity of photodetection via quantum illumination, Science **321**, 1463 (2008), https://www.science.org/doi/pdf/10.1126/science.1160627.
* [16] S. Seah, S. Nimmrichter, and V. Scarani, Nonequilibrium dynamics with finite-time repeated interactions, Phys. Rev. E **99**, 042103 (2019).
* [17] F. A. Pollock, C. Rodriguez-Rosario, T. Frauenheim, M. Paternostro, and K. Modi, Non-Markovian quantum processes: Complete framework and efficient characterization, Phys. Rev. A **97**, 012127 (2018).
* [18] J. F. Fitzsimons, J. A. Jones, and V. Vedral, Quantum correlations which imply causation, Scientific Reports **5**, 18281 (2015).
* [19] C. Marletto, V. Vedral, S. Virzi, E. Rebufello, A. Avella, F. Piacentini, M. Gramegna, I. P. Degiovanni, and M. Genovese, Theoretical description and experimental simulation of quantum entanglement near open time-like curves via pseudo-density operators, Nature Communications **10**, 182 (2019).
* [20] K. Kraus, A. Böhm, J. Dollard, and W. Wootters, _States, Effects, and Operations: Fundamental Notions of Quantum Theory_, Lecture Notes in Physics (Springer Berlin Heidelberg, 1983).
* [21] M. Ziman, Process positive-operator-valued measure: A mathematical framework for the description of process tomography experiments, Phys. Rev. A **77**, 062112 (2008).
* [22] Y. Xiao, K. Sengupta, S. Yang, and G. Gour, Uncertainty principle of quantum processes, Phys. Rev. Research **3**, 023077 (2021).
* [23] S. Milz and K. Modi, Quantum stochastic processes and quantum non-Markovian phenomena, PRX Quantum **2**, 030201 (2021).
* [24] G. Chiribella, G. M. D'Ariano, and P. Perinotti, Transforming quantum operations: Quantum supermaps, EPL **83**, 30004 (2008).
* [25] G. Chiribella, G. M. D'Ariano, and P. Perinotti, Quantum circuit architecture, Phys. Rev. Lett. **101**, 060401 (2008).
* [26] G. Chiribella, G. M. D'Ariano, and P. Perinotti, Theoretical framework for quantum networks, Phys. Rev. A **80**, 022339 (2009).
* [27] A. Bisio and P. Perinotti, Theoretical framework for higher-order quantum theory, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences **475**, 20180706 (2019), https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.2018.0706.
* [28] G. Gour, Comparison of quantum channels by superchannels, IEEE Transactions on Information Theory **65**, 5880 (2019).
* [29] J. Wechs, H. Dourdent, A. A. Abbott, and C. Branciard, Quantum circuits with classical versus quantum control of causal order, PRX Quantum **2**, 030335 (2021).
* [30] See Supplemental Material for full proofs and mathematical details of our theorem, an improved entropic uncertainty relation, applications in causal inference, and corresponding numerical experiments, as well as Refs. [64-82].
* [31] K. Kraus, Complementary observables and uncertainty relations, Phys. Rev. D **35**, 3070 (1987).
* [32] H. Maassen and J. B. M. Uffink, Generalized entropic uncertainty relations, Phys. Rev. Lett. **60**, 1103 (1988).
* [33] M. Berta, M. Christandl, R. Colbeck, J. M. Renes, and R. Renner, The uncertainty principle in the presence of quantum memory, Nature Physics **6**, 659 (2010).
* [34] A. Marshall, I. Olkin, and B. Arnold, _Inequalities: Theory of Majorization and Its Applications_, Springer Series in Statistics (Springer New York, 2010).
* [35] Denote the probability distribution of outcomes when \(\mathcal{T}_{1}\) is measured as \(\mathbf{p}\); then the uncertainty of \(\mathcal{T}_{1}\) can be quantified by the Shannon entropy, i.e. \(H(\mathcal{T}_{1})_{\Phi}:=H(\mathbf{p})\).
* [36] J. Sanchez-Ruiz, Optimal entropic uncertainty relation in two-dimensional Hilbert space, Physics Letters A **244**, 189 (1998).
* [37] G. Ghirardi, L. Marinatto, and R. Romano, An optimal entropic uncertainty relation in a two-dimensional Hilbert space, Physics Letters A **317**, 32 (2003).
* [38] J. I. de Vicente and J. Sanchez-Ruiz, Improved bounds on entropic uncertainty relations, Phys. Rev. A **77**, 042110 (2008).
* [39] M. Tomamichel and R. Renner, Uncertainty relation for smooth entropies, Phys. Rev. Lett. **106**, 110506 (2011).
* [40] P. J. Coles, R. Colbeck, L. Yu, and M. Zwolak, Uncertainty relations from simple entropic properties, Phys. Rev. Lett. **108**, 210405 (2012).
* [41] P. J. Coles and M. Piani, Improved entropic uncertainty relations and information exclusion relations, Phys. Rev. A **89**, 022112 (2014).
* [42] L. Rudnicki, Majorization approach to entropic uncertainty relations for coarse-grained observables, Phys. Rev. A **91**, 032123 (2015).
* [43] Y. Xiao, N. Jing, S.-M. Fei, and X. Li-Jost, Improved uncertainty relation in the presence of quantum memory, Journal of Physics A **49**, 49LT01 (2016).
* [44] P. J. Coles, M. Berta, M. Tomamichel, and S. Wehner, Entropic uncertainty relations and their applications, Rev. Mod. Phys. **89**, 015002 (2017).
* [45] Y. Xiao, _A Framework for Uncertainty Relations_, Ph.D. thesis, Leipzig University, Leipzig, Germany (2017).
* [46] P. J. Coles, V. Katariya, S. Lloyd, I. Marvian, and M. M. Wilde, Entropic energy-time uncertainty relation, Phys. Rev. Lett. **122**, 100401 (2019).
* [47] Y. Xiao, Y. Xiang, Q. He, and B. C. Sanders, Quasi-fine-grained uncertainty relations, New Journal of Physics **22**, 073063 (2020).
* [48] H. Reichenbach, M. Reichenbach, and H. Putnam, _The Direction of Time_, California library reprint series (University of California Press, 1956).
* [49] K. Ried, M. Agnew, L. Vermeyden, D. Janzing, R. W. Spekkens, and K. J. Resch, A quantum advantage for inferring causal structure, Nature Physics **11**, 414 (2015).
* [50] J.-P. W. MacLean, K. Ried, R. W. Spekkens, and K. J. Resch, Quantum-coherent mixtures of causal relations, Nature Communications **8**, 15149 (2017).
* [51] Y. Yang, Memory effects in quantum metrology, Phys. Rev. Lett. **123**, 110501 (2019).
* [52] A. Altherr and Y. Yang, Quantum metrology for non-markovian processes, Phys. Rev. Lett. **127**, 060501 (2021).
* [53] W. Gorecki, A. Riccardi, and L. Maccone, Quantum metrology of noisy spreading channels, Phys. Rev. Lett. **129**, 240503 (2022).
* [54] A. Feix and C. Brukner, Quantum superpositions of 'common-cause' and 'direct-cause' causal structures, New Journal of Physics **19**, 123028 (2017).
* [55] O. Guhne, M. Kleinmann, A. Cabello, J.-A. Larsson, G. Kirchmair, F. Zahringer, R. Gerritsma, and C. F. Roos, Compatibility and noncontextuality for sequential measurements, Phys. Rev. A **81**, 022121 (2010).
* [56] M. Gu, K. Wiesner, E. Rieper, and V. Vedral, Quantum mechanics can reduce the complexity of classical models, Nature Communications **3**, 762 (2012).
* [57] D. Tan, S. J. Weber, I. Siddiqi, K. Molmer, and K. W. Murch, Prediction and retrodiction for a continuously monitored superconducting qubit, Phys. Rev. Lett. **114**, 090403 (2015).
* [58] L. Li, M. J. Hall, and H. M. Wiseman, Concepts of quantum non-markovianity: A hierarchy, Physics Reports **759**, 1 (2018).
* [59] S. Cialdi, C. Benedetti, D. Tamascelli, S. Olivares, M. G. A. Paris, and B. Vacchini, Experimental investigation of the effect of classical noise on quantum non-markovian dynamics, Phys. Rev. A **100**, 052104 (2019).
* [60] G. A. L. White, C. D. Hill, F. A. Pollock, L. C. L. Hollenberg, and K. Modi, Demonstration of non-markovian process characterisation and control on a quantum processor, Nature Communications **11**, 6301 (2020).
* [61] S. Virzi, A. Avella, F. Piacentini, M. Gramegna, T. Opatrny, A. G. Kofman, G. Kurizki, S. Gherardini, F. Caruso, I. P. Degiovanni, and M. Genovese, Quantum Zeno and anti-Zeno probes of noise correlations in photon polarization, Phys. Rev. Lett. **129**, 030401 (2022).
* [62] J. Thompson, A. J. P. Garner, V. Vedral, and M. Gu, Using quantum theory to simplify input-output processes, npj Quantum Information **3**, 6 (2017).
* [63] T. J. Elliott, M. Gu, A. J. Garner, and J. Thompson, Quantum adaptive agents with efficient long-term memories, Physical Review X **12**, 011007 (2022).
* [64] M. A. Nielsen and I. L. Chuang, _Quantum Computation and Quantum Information: 10th Anniversary Edition_ (Cambridge University Press, 2010).
* [65] M. M. Wilde, _Quantum Information Theory_ (Cambridge University Press, 2013).
* [66] J. Watrous, _The Theory of Quantum Information_ (Cambridge University Press, 2018).
* [67] A. Jamiolkowski, Linear transformations which preserve trace and positive semidefiniteness of operators, Reports on Mathematical Physics **3**, 275 (1972).
* [68] M.-D. Choi, Completely positive linear maps on complex matrices, Linear Algebra and its Applications **10**, 285 (1975).
* [69] P. Taranto, F. A. Pollock, S. Milz, M. Tomamichel, and K. Modi, Quantum markov order, Phys. Rev. Lett. **122**, 140401 (2019).
* [70] S. Seah, S. Nimmrichter, D. Grimmer, J. P. Santos, V. Scarani, and G. T. Landi, Collisional quantum thermometry, Phys. Rev. Lett. **123**, 180602 (2019).
* [71] A. Abbas, D. Sutter, C. Zoufal, A. Lucchi, A. Figalli, and S. Woerner, The power of quantum neural networks, Nature Computational Science **1**, 403 (2021).
* [72] K. Bharti, A. Cervera-Lierta, T. H. Kyaw, T. Haug, S. Alperin-Lea, A. Anand, M. Degroote, H. Heinonen, J. S. Kottmann, T. Menke, W.-K. Mok, S. Sim, L.-C. Kwek, and A. Aspuru-Guzik, Noisy intermediate-scale quantum algorithms, Rev. Mod. Phys. **94**, 015004 (2022).
* [73] J. Preskill, Quantum Computing in the NISQ era and beyond, Quantum **2**, 79 (2018).
* [74] F. Cicalese and U. Vaccaro, Supermodularity and subadditivity properties of the entropy on the majorization lattice, IEEE Transactions on Information Theory **48**, 933 (2002).
* [75] I. Bialynicki-Birula and J. Mycielski, Uncertainty relations for information entropy in wave mechanics, Communications in Mathematical Physics **44**, 129 (1975).
* [76] L. Vandenberghe and S. Boyd, Semidefinite programming, SIAM Review **38**, 49 (1996), https://doi.org/10.1137/1038003.
* [77] S. Boyd and L. Vandenberghe, _Convex Optimization_ (Cambridge University Press, 2004).
* [78] G. Chiribella, G. M. D'Ariano, P. Perinotti, and B. Valiron, Quantum computations without definite causal structure, Phys. Rev. A **88**, 022318 (2013).
* [79] D. Ebler, S. Salek, and G. Chiribella, Enhanced communication with the assistance of indefinite causal order, Phys. Rev. Lett. **120**, 120502 (2018).
* [80] G. Chiribella, M. Wilson, and H. F. Chau, Quantum and classical data transmission through completely depolarizing channels in a superposition of cyclic orders, Phys. Rev. Lett. **127**, 190502 (2021).
* [81] G. Rubino, L. A. Rozema, D. Ebler, H. Kristjansson, S. Salek, P. Allard Guerin, A. A. Abbott, C. Branciard, Č. Brukner, G. Chiribella, and P. Walther, Experimental quantum communication enhancement by superposing trajectories, Phys. Rev. Research **3**, 013093 (2021).
* [82] V. V. Shende, I. L. Markov, and S. S. Bullock, Minimal universal two-qubit controlled-not-based circuits, Phys. Rev. A **69**, 062321 (2004).
# Quantum Uncertainty Principles for Measurements with Interventions
Supplemental Material
Yunlong Xiao
[email protected] Institute of High Performance Computing (IHPC), Agency for Science Technology and Research (A*STAR), 1 Fusionopolis Way, #16-16 Connexis, Singapore 138632, Republic of Singapore
Yuxiang Yang
[email protected] QICI Quantum Information and Computation Initiative, Department of Computer Science, The University of Hong Kong, Pokfulam Road, Hong Kong Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland
Ximing Wang
Nanyang Quantum Hub, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371, Singapore
Qing Liu
QICI Quantum Information and Computation Initiative, Department of Computer Science, The University of Hong Kong, Pokfulam Road, Hong Kong
Mile Gu
[email protected] Complexity Institute, Nanyang Technological University, Singapore 639673, Singapore
November 6, 2021
###### Abstract
In this supplemental material, we formulate several uncertainty relations for measurements with interventions, extending the results presented in the main text of our work. As a by-product, we have derived causal uncertainty relations for quantum dynamics, establishing a trade-off between common-cause and direct-cause in quantum causal inference. Such a fundamental trade-off has been further utilized to infer the causality associated with parameterized quantum circuits, which are the essential building blocks for Noisy Intermediate-Scale Quantum (NISQ) technologies. Detailed analyses and proofs of the results presented in the main text have also been included. More precisely, to systematically investigate the most general quantum dynamics with definite causal orders and the corresponding measuring processes with interventions, we introduce the framework of _quantum circuit fragments_ and _interactive measurements_ in Sec. I. The uncertainty principle for multiple interactive measurements is demonstrated in Sec. II, and it holds for arbitrary quantum circuit fragments. We further develop the causal uncertainty relation and detail its application to causal inference in Sec. III. Finally, numerical experiments for our results are provided in Sec. IV. It is worth noting that we may reiterate some steps of the main text to make this supplemental material more explicit and self-contained.
###### Contents
* I Quantum Circuit Fragment
* I.1 Quantum Channels and Superchannels
* I.2 Quantum Circuit Fragments: Multiple Quantum Processes with Definite Causal Order
* I.3 Interactive Measurements: How to Measure the Quantum Circuit Fragments?
* I.4 Quantum Causal Maps
* II Universal Uncertainty Relation for Measurements with Interventions
* II.1 Mathematical Toolkit: Majorization Lattice
* II.2 Brief Introduction to Uncertainty Relations
* II.3 Operational Interpretation of Universal Uncertainty Relation: Quantum Roulette
* II.4 Lemma 1 and Theorem 1 of the Main Text: Their Proofs, Improvements, and Generalizations
* III Causal Uncertainty Relation
* III.1 Uncertainty Relation for Common-Cause and Direct-Cause: Eq. 3 of the Main Text and Its Extension
* III.2 Necessary and Sufficient Conditions for Common-Cause and Direct-Cause
* III.3 Coherent Mixture of Common-Cause and Direct-Cause
* IV Numerical Experiments
* IV.1 The Landscape of Joint Uncertainty
* IV.2 Advantage in Inferring Causal Structures
## I Quantum Circuit Fragment
In this section, we introduce our notation and prepare the groundwork for the results presented in the main text of our work. In particular, general quantum dynamics and the corresponding measuring processes with interventions are rigorously formulated as _quantum circuit fragments_ and _interactive measurements_. This preparatory section contains four subsections. In Subsec. I.1, we give a brief introduction to the concept of quantum channels, superchannels, and their Choi-Jamiolkowski operators. Subsequently, in Subsec. I.2, we move on to quantum dynamics with definite causal order, and formally introduce the concept of quantum circuit fragments. The most general measuring processes for quantum circuit fragments, namely interactive measurements, are constructed in Subsec. I.3. Finally, a special type of quantum circuit fragment, known as a causal map in quantum causal inference, is discussed in the last subsection, i.e. Subsec. I.4.
### Quantum Channels and Superchannels
The fundamental building blocks that make up quantum technologies and quantum information processing are quantum channels. Physically, the preparation of quantum states, the implementation of quantum measurements, the noise arising from system-environment interactions, and the operations (including quantum gates) carried out on quantum devices are all characterized by the concept of a quantum channel. In this subsection, we give a brief introduction to quantum channels and the corresponding mathematical toolkit. For a detailed treatment of quantum channels and their mathematical properties, we refer the reader to Refs. [1; 2; 3].
A linear map \(\mathcal{E}\) from system \(A\) to system \(B\) is called a quantum channel if it is both completely positive (CP) and trace-preserving (TP). Complete positivity implies that applying a quantum channel to part of a quantum system still yields a well-defined quantum system. Meanwhile, as indicated by the name, the trace-preserving property guarantees that the channel's output has unit trace whenever the input is a quantum state.
Instead of dealing with a map (i.e. a quantum channel) directly, we usually prefer to investigate the properties of its matrix representation. To do so, let us start with the concept of Choi-Jamiolkowski (CJ) operators [4; 5]. Formally, they are defined as follows.
**Definition I.1** (Choi-Jamiolkowski (CJ) Operator [4; 5]).
For a quantum channel \(\mathcal{E}:A\to B\), its Choi-Jamiolkowski operator \(J^{\mathcal{E}}_{AB}\) is defined as
\[J^{\mathcal{E}}_{AB}:=\text{id}_{A}\otimes\mathcal{E}_{A^{\prime}\to B}(|I \rangle\!\langle I|_{AA^{\prime}}), \tag{1}\]
where \(|I\rangle_{AA^{\prime}}:=\sum_{i}\left|i\right\rangle_{A}\left|i\right\rangle_{ A^{\prime}}\) is the unnormalized maximally entangled state with \(A^{{}^{\prime}}\) being a replica of system \(A\), and \(\{\left|i\right\rangle\}\) being an orthonormal basis on \(A\). See Fig. 1 for an illustration of Eq. 1.
Here the correspondence between a quantum channel \(\mathcal{E}\) and its CJ operator \(J^{\mathcal{E}}\) is referred to as the Choi-Jamiolkowski isomorphism in quantum information theory [1; 2; 3]. For any quantum state \(\rho\) acting on system \(A\) and a quantum channel \(\mathcal{E}:A\to B\), the output state is fully characterized by the CJ operator \(J^{\mathcal{E}}\). More precisely, the output on system \(B\) can be written as
\[\mathcal{E}(\rho)=\text{Tr}_{A}[J^{\mathcal{E}}_{AB}\cdot\rho^{\mathbf{T}}_{A }\otimes\mathbb{1}_{B}]. \tag{2}\]
Using the language of CJ operators, we know that \(\mathcal{E}\) is (i) completely positive (CP) if and only if its CJ operator satisfies \(J^{\mathcal{E}}\geqslant 0\) and (ii) trace-preserving (TP) if and only if its CJ operator satisfies \(\mathrm{Tr}_{B}[J^{\mathcal{E}}_{AB}]=\mathbb{I}_{A}\). Here \(A\) and \(B\) represent the input and output systems of \(\mathcal{E}\) respectively. The notation \(\mathrm{Tr}_{A}\) stands for the partial trace over system \(A\).
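For readers who prefer to experiment, here is a minimal numpy sketch (our illustration; the helper names `choi`, `ptrace` and `apply_via_choi` are ours) that builds \(J^{\mathcal{E}}_{AB}\) from a Kraus decomposition, checks the TP condition, and applies the channel through Eq. 2:

```python
import numpy as np

def choi(kraus, dA, dB):
    """J^E_{AB} = (id ⊗ E)(|I><I|) with |I> = sum_i |ii>  (Eq. 1)."""
    J = np.zeros((dA * dB, dA * dB), dtype=complex)
    for K in kraus:
        v = K.T.reshape(-1)          # (id ⊗ K)|I> has components v[i*dB + j] = K[j, i]
        J += np.outer(v, v.conj())
    return J

def ptrace(op, dA, dB, axis):
    """Partial trace of an operator on A⊗B over subsystem `axis` (0 = A, 1 = B)."""
    T = op.reshape(dA, dB, dA, dB)
    return np.trace(T, axis1=axis, axis2=axis + 2)

def apply_via_choi(J, rho, dA, dB):
    """E(rho) = Tr_A[J · (rho^T ⊗ 1_B)]  (Eq. 2)."""
    return ptrace(J @ np.kron(rho.T, np.eye(dB)), dA, dB, axis=0)

# Example: qubit amplitude-damping channel with decay probability g.
g = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
K1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)
J = choi([K0, K1], 2, 2)

assert np.allclose(ptrace(J, 2, 2, axis=1), np.eye(2))        # TP: Tr_B[J] = 1_A
rho = np.array([[0.25, 0.1], [0.1, 0.75]], dtype=complex)
direct = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
assert np.allclose(apply_via_choi(J, rho, 2, 2), direct)      # Eq. 2 matches the Kraus form
```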
The development of quantum telecommunications and networks requires the investigation of quantum channels with multiple inputs and outputs at different time ticks, complying with causality in the theory of relativity. More precisely, in quantum information theory, such causality is captured by the concept of non-signaling (NS), which is formally defined as follows.
**Definition I.2** (Non-Signaling (NS)).
Given a bipartite channel \(\mathcal{E}:AC\to BD\), it is called non-signaling from \(C\to D\) to \(A\to B\) if the following condition is satisfied
\[\mathrm{Tr}_{D}\circ\mathcal{E}_{AC\to BD}=\mathcal{F}_{A\to B} \otimes\mathrm{Tr}_{C}, \tag{3}\]
for some quantum channel \(\mathcal{F}\) from \(A\) to \(B\). On the other hand, we say \(\mathcal{E}\) is non-signaling from \(A\to B\) to \(C\to D\), if there exists a quantum channel \(\mathcal{G}\) from \(C\) to \(D\) such that
\[\mathrm{Tr}_{B}\circ\mathcal{E}_{AC\to BD}=\mathcal{G}_{C\to D} \otimes\mathrm{Tr}_{A}\,. \tag{4}\]
Finally, the bipartite channel \(\mathcal{E}\) is said to be non-signaling if it meets the conditions of both Eq. 3 and Eq. 4.
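For intuition (a standard observation, spelled out here for convenience): a product channel \(\mathcal{E}=\mathcal{F}_{A\to B}\otimes\mathcal{G}_{C\to D}\) is non-signaling in both directions, since the trace-preserving property of \(\mathcal{G}\) gives
\[\mathrm{Tr}_{D}\circ(\mathcal{F}_{A\to B}\otimes\mathcal{G}_{C\to D})=\mathcal{F}_{A\to B}\otimes(\mathrm{Tr}_{D}\circ\mathcal{G}_{C\to D})=\mathcal{F}_{A\to B}\otimes\mathrm{Tr}_{C},\]
and symmetrically for the other direction. By contrast, the SWAP channel from \(AC\) to \(BD\) (which routes \(A\) to \(D\) and \(C\) to \(B\)) satisfies neither condition, since each output marginal depends on the opposite input.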
Note that in Def. I.2, Eq. 3 means that for any bipartite input state \(\rho_{AC}\) acting on systems \(AC\), we have
\[\mathrm{Tr}_{D}\circ\mathcal{E}_{AC\to BD}(\rho_{AC})=\mathrm{Tr}_{D}[ \mathcal{E}_{AC\to BD}(\rho_{AC})]=\mathcal{F}_{A\to B}\otimes\mathrm{Tr}_{C} (\rho_{AC})=\mathcal{F}_{A\to B}(\rho_{A}), \tag{5}\]
where \(\rho_{A}:=\mathrm{Tr}_{C}[\rho_{AC}]\) is the reduced state of \(\rho_{AC}\) on system \(A\). A similar situation applies to Eq. 4, as shown in Fig. 2.
Physically, Eq. 3 describes a situation where the quantum dynamical process \(C\to D\) happens after the process \(A\to B\). Hence, information cannot be transmitted from the future (i.e. \(C\to D\)) to the past (i.e. \(A\to B\)). A similar statement holds for Eq. 4, where the temporal order of \(C\to D\) and \(A\to B\) is switched.
The non-signaling condition also plays an important role in the composition of dynamical processes. Let us consider a concrete example. Assume a quantum channel \(\mathcal{E}_{1}:A\to BE\) is followed by another channel \(\mathcal{E}_{2}:CE\to D\), where the two channels are connected by a memory system \(E\), as illustrated in Fig. 3. In this case, the whole quantum dynamics \(\mathcal{E}:=\mathcal{E}_{2}\circ\mathcal{E}_{1}\) is a linear map from \(AC\) to \(BD\), satisfying the following conditions:
1. Completely Positive (CP),
2. Trace-Preserving (TP),
Figure 2: (color online) Pictorial demonstrations of non-signaling (NS) bipartite quantum channel \(\mathcal{E}\) from systems \(AC\) to \(BD\): (a) NS from \(C\to D\) to \(A\to B\) (see Eq. 3); (b) NS from \(A\to B\) to \(C\to D\) (see Eq. 4).
3. Non-Signaling (NS) from \(C\to D\) to \(A\to B\).
Denote the CJ operators of quantum channels \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) as \(J^{1}_{ABE}\) and \(J^{2}_{CDE}\) (or simply \(J^{1}\) and \(J^{2}\)) respectively. It now follows immediately that the CJ operator of \(\mathcal{E}=\mathcal{E}_{2}\circ\mathcal{E}_{1}\) is given by [6]
\[J^{\mathcal{E}}=\mathrm{Tr}_{E}[(J^{1})^{\mathbf{T}_{E}}\cdot J^{2}], \tag{6}\]
where \(\mathbf{T}_{E}\) stands for the partial transpose over system \(E\). Here the non-signaling condition indicates the temporal order between processes \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\). As a by-product, Eq. 6 motivates the abstract definition of _link product_\(\star\)[6; 7].
**Definition I.3** (Link Product [6; 7]).
Given two operators \(M\) and \(N\) acting on systems \(XY\) and \(YZ\) respectively, we define their link product \(M\star N\) as
\[M\star N:=\mathrm{Tr}_{Y}[M^{\mathbf{T}_{Y}}\cdot N], \tag{7}\]
where the common space \(Y\) appears as the link between the operators \(M\) and \(N\) and has been "swallowed" by the product \(\star\). Here \(\mathrm{Tr}_{Y}\) and \(\mathbf{T}_{Y}\) represent the partial trace and partial transpose over the system \(Y\) respectively.
Thanks to the link product, Eq. 2 can now be simplified to
\[\mathcal{E}(\rho)=J^{\mathcal{E}}\star\rho. \tag{8}\]
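As a consistency check of Eq. 8 (a one-line computation included for concreteness): for the identity channel, \(J^{\mathrm{id}}=|I\rangle\!\langle I|=\sum_{i,j}|i\rangle\!\langle j|\otimes|i\rangle\!\langle j|\), and the link product returns the input state unchanged,
\[J^{\mathrm{id}}\star\rho=\mathrm{Tr}_{A}\big[(\rho^{\mathbf{T}}\otimes\mathds{1}_{B})\,|I\rangle\!\langle I|\big]=\sum_{i,j}\mathrm{Tr}\big[\rho^{\mathbf{T}}|i\rangle\!\langle j|\big]\,|i\rangle\!\langle j|=\sum_{i,j}\rho_{ij}\,|i\rangle\!\langle j|=\rho.\]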
Equipped with the link product, now we can rewrite the CJ operator of \(\mathcal{E}=\mathcal{E}_{2}\circ\mathcal{E}_{1}\) as
\[J^{\mathcal{E}}=J^{1}\star J^{2}. \tag{9}\]
Here \(\mathcal{E}=\mathcal{E}_{2}\circ\mathcal{E}_{1}\) is a typical example of a quantum superchannel, where the channels \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) are known as the pre-processing and post-processing of \(\mathcal{E}\) respectively. Similar to the case of quantum channels, all of the above restrictions on \(\mathcal{E}\) can be translated into the language of CJ operators; a map \(\mathcal{E}\) from \(AC\) to \(BD\) is (i) completely positive (CP) if and only if \(J^{\mathcal{E}}\geqslant 0\), (ii) trace-preserving (TP) if and only if \(\mathrm{Tr}_{BD}[J^{\mathcal{E}}]=\mathds{1}_{AC}\), and (iii) non-signaling (NS) from \(C\to D\) to \(A\to B\) if and only if \(\mathrm{Tr}_{D}[J^{\mathcal{E}}]=\mathrm{Tr}_{CD}[J^{\mathcal{E}}]\otimes\mathds{1}_{C}/d_{C}\) with \(d_{C}:=\dim C\). Note that here \(\mathrm{Tr}_{CD}[J^{\mathcal{E}}]\big{/}d_{C}\) forms a CJ operator for some quantum channel, as it is both completely positive (CP) and trace-preserving (TP).
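The CJ-level NS condition is also easy to verify numerically. Below is a small numpy sketch (ours; the helper functions and the choice of a product channel are illustrative assumptions) checking \(\mathrm{Tr}_{D}[J^{\mathcal{E}}]=\mathrm{Tr}_{CD}[J^{\mathcal{E}}]\otimes\mathds{1}_{C}/d_{C}\) for a qubit product channel, where it must hold:

```python
import numpy as np

def choi2(kraus):
    """Choi operator of a qubit channel from its Kraus operators (Eq. 1)."""
    J = np.zeros((4, 4), dtype=complex)
    for K in kraus:
        v = K.T.reshape(-1)              # (id ⊗ K)|I>
        J += np.outer(v, v.conj())
    return J

def ptrace_multi(op, dims, drop):
    """Trace out the subsystems listed in `drop` (0-indexed)."""
    n = len(dims)
    T = op.reshape(dims + dims)
    rows = n
    for ax in sorted(drop, reverse=True):
        T = np.trace(T, axis1=ax, axis2=ax + rows)
        rows -= 1
    d = int(np.prod([dims[i] for i in range(n) if i not in drop]))
    return T.reshape(d, d)

# F: A -> B a unitary; G: C -> D a depolarizing channel (Kraus form).
U = np.array([[np.cos(0.7), -np.sin(0.7)], [np.sin(0.7), np.cos(0.7)]], dtype=complex)
p = 0.25
paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
G = [np.sqrt(1 - 3 * p / 4) * paulis[0]] + [np.sqrt(p / 4) * s for s in paulis[1:]]

J = np.kron(choi2([U]), choi2(G))        # J^{F⊗G} on A ⊗ B ⊗ C ⊗ D
dims = [2, 2, 2, 2]
lhs = ptrace_multi(J, dims, drop=[3])                              # Tr_D[J]
rhs = np.kron(ptrace_multi(J, dims, drop=[2, 3]), np.eye(2)) / 2   # Tr_CD[J] ⊗ 1_C / d_C
assert np.allclose(lhs, rhs)             # NS from C -> D to A -> B holds
```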
In fact, all quantum superchannels satisfy the above conditions, namely being completely positive (CP), trace-preserving (TP), and non-signaling (NS) from one process to another. Furthermore, the converse statement is also true [8]: a linear map that is completely positive, trace-preserving, and non-signaling from one process to another forms a quantum superchannel. Mathematically, given a bipartite quantum channel \(\Phi:AC\to BD\) that is non-signaling (NS) from \(C\to D\) to \(A\to B\), there exist two quantum channels, a pre-processing \(\Psi^{\mathrm{Pre}}\) and a post-processing \(\Psi^{\mathrm{Post}}\), such that for any quantum channel \(\mathcal{E}\) from system \(B\) to system \(C\) and any quantum state \(\rho\) acting on system \(A\), we have
\[\Phi(\mathcal{E})(\rho)=\Psi^{\mathrm{Post}}\circ\mathcal{E}\circ\Psi^{\mathrm{ Pre}}(\rho). \tag{10}\]
Here \(\Phi\) represents a general manipulation of quantum channels with definite causal order, which maps a quantum channel of the form \(B\to C\) to a resultant channel of the form \(A\to D\), as visualized in Fig. 4.
Remark that, on the one hand, a quantum channel can be viewed as a special kind of superchannel with only one process (either pre- or post-processing) and no quantum memory system. On the other hand, quantum superchannels can also be recognised as bipartite quantum channels with an additional condition, namely NS from the later process to the earlier one. From this perspective, it is straightforward to formulate the CJ operator of a superchannel such as \(\Phi:AC\to BD\).
\[J^{\Phi}_{ABCD}:=\mathrm{id}_{AC}\otimes\Phi_{A^{\prime}C^{\prime}\to BD}( \left|I\right\rangle\!\!\left\langle I\right|_{AA^{\prime}}\otimes\left|I \right\rangle\!\!\left\langle I\right|_{CC^{\prime}}), \tag{11}\]
where \(\left|I\right\rangle_{AA^{\prime}}\) and \(\left|I\right\rangle_{CC^{\prime}}\) are unnormalized maximally entangled states acting on systems \(AA^{\prime}\) and \(CC^{\prime}\) respectively. For a quantum channel \(\mathcal{E}:B\to C\), its resultant channel under \(\Phi\) has the following CJ operator,
\[J^{\Phi(\mathcal{E})}=J^{\Phi}\star J^{\mathcal{E}}, \tag{12}\]
where \(J^{\mathcal{E}}\) stands for the CJ operator of the channel \(\mathcal{E}\).
### Quantum Circuit Fragments: Multiple Quantum Processes with Definite Causal Order
Current quantum theories typically concern passive measurements, where a system is left to evolve freely before observation. A visual illustration of passive measurement is provided in Fig. 5a, where the blue box represents a state-preparation channel, emitting independent and identically distributed (i.i.d.) copies of the state \(\rho\). On the right side of Fig. 5a, the black box, waiting to receive an input, stands for a general positive operator valued measure (POVM) \(M:=\{M_{x}\}_{x}\). To further investigate the system of interest (characterized by \(\rho\)) and its evolution (illustrated by \(\Psi\)), the observer might implement the measurement \(M:=\{M_{x}\}_{x}\) at two different time points \(t_{1}\) and \(t_{2}\) (\(t_{1}<t_{2}\)) and collect the corresponding outcomes \(x_{1}\) and \(x_{2}\), as demonstrated in Fig. 5b. For simplicity, here we assume that the measurements executed at \(t_{1}\) and \(t_{2}\) are the same; in general, they need not be. In this case, the first measurement, carried out at time \(t_{1}\), interacts with the system and thereby influences the second measurement. Such a measuring protocol (colored black in Fig. 5b) forms a simple example of an interactive measurement. In addition, systems can have prior correlations; the quantum dynamics inside the blue dashed box of Fig. 5c is an example. Based on the measurement outcome \(x_{1}\) obtained at \(t_{1}\), we can then apply an intervention, described by a quantum channel \(\mathcal{F}_{x_{1}}\) (depending on the outcome \(x_{1}\)), and finalize the measuring process by carrying out a joint measurement. An illustration is given in the black dashed box of Fig. 5c. Most generally, systems can have prior correlations across different time points, and measuring processes can contain multiple interventions, as illustrated in Fig. 5d. The quantum dynamics and the measurement with interventions are colored blue and black respectively. In this work, the entire class of quantum dynamics with definite causal order and measurements with interventions mentioned above are characterized by the frameworks of _quantum circuit fragments_ (see Def. I.4 of Subsec. I.2) and _interactive measurements_ (see Def. I.5 of Subsec. I.3) respectively, and our results cover the most general picture of them.
In this subsection, we focus on the idea of quantum circuit fragments. Roughly speaking, quantum channels are employed to characterize quantum dynamics with a single process. Meanwhile, quantum superchannels offer the most general way of manipulating quantum channels [8], constituting two processes that occur at different time ticks. These specific quantum dynamics give us a glimpse of a more general framework, quantum circuit fragments, where multiple quantum processes are connected through quantum memories with definite causal orders. Formally, 'quantum circuit fragments', or briefly 'circuit fragments', are defined as follows.
Figure 4: (color online) Physical realization of the quantum superchannel \(\Phi=\Psi^{\mathrm{Post}}\circ\Psi^{\mathrm{Pre}}\), whose action on input channel is described by Eq. 10. Here channel \(\Psi^{\mathrm{Pre}}\) stands for the pre-processing of superchannel \(\Phi\), channel \(\Psi^{\mathrm{Post}}\) represents the post-processing of superchannel \(\Phi\), and they are connected through the memory system \(E\).
Figure 5: (color online) Quantum circuit fragments and interactive measurements: (a) Passive measurement, where the system described by \(\rho\) is left to evolve before being fed into the measuring process \(M:=\{M_{x}\}_{x}\). (b) Measuring quantum dynamics at two different time points. To investigate the system \(\rho\) and its dynamics \(\Psi\), we execute a quantum instrument \(\overline{M}\) at time point \(t_{1}\), followed by a quantum measurement \(M\) taken at time point \(t_{2}\), where \(t_{1}<t_{2}\). The measurement outcomes obtained at times \(t_{1}\) and \(t_{2}\) are denoted as \(x_{1}\) and \(x_{2}\) respectively. In this example, the entire quantum dynamics, including the preparation of the initial state \(\rho\) and the quantum evolution \(\Psi\), is marked with blue color. Meanwhile, the adaptive measuring protocol forms a simple example of an interactive measurement, which is colored black. (c) Correlated quantum dynamics and measurement with interventions. Based on the previous measurement outcome \(x_{1}\), an intervention \(\mathcal{F}_{x_{1}}\), characterized by a quantum channel, is applied to the quantum dynamics. In this case, quantum information is transmitted to a later time point by means of \(\mathcal{F}_{x_{1}}\). (d) Quantum circuit fragment \(\Phi\) and interactive measurement \(\mathcal{T}:=\{\mathcal{T}_{x}\}_{x}\). The most general picture of quantum dynamics \(\Phi\) is described by a quantum circuit fragment (colored blue). Here \(\mathcal{T}_{x}\) (colored black) represents an interactive measurement. Before a classical outcome \(x\) is obtained, \(\Phi\) and \(\mathcal{T}_{x}\) interact multiple times.
**Definition I.4** (Quantum Circuit Fragment). A bipartite quantum state \(\rho\) is prepared in systems \(\mathcal{H}_{1}E_{1}\) and subjected to \(a-1\) quantum channels \(\{\Psi^{2},\ldots,\Psi^{a}\}\) of the form
\[\Psi^{i}: \;\mathcal{H}_{2i-2}E_{i-1}\rightarrow\mathcal{H}_{2i-1}E_{i}, \quad 2\leqslant i\leqslant a-1, \tag{13}\] \[\Psi^{a}: \;\mathcal{H}_{2a-2}E_{a-1}\rightarrow\mathcal{H}_{2a-1}. \tag{14}\]
Then we call the quantum dynamics
\[\Phi:=\Psi^{a}\circ\Psi^{a-1}\circ\cdots\circ\Psi^{2}(\rho) \tag{15}\]
a quantum circuit fragment (see Fig. 6a). Denote the set of all quantum circuit fragments in the form of Eq. 15 as \(\mathfrak{F}_{a}\).
Here \(\Phi\) is a quantum channel with \(a\) individual processes \(\{\Psi^{i}\}_{i=1}^{a}\), where \(\Psi^{1}:=\rho\) acts on systems \(\mathcal{H}_{1}E_{1}\). Its input systems and output systems are \(\otimes_{i=2}^{a}\mathcal{H}_{2i-2}\) and \(\otimes_{i=1}^{a}\mathcal{H}_{2i-1}\) respectively. More precisely, \(\Phi:\otimes_{i=2}^{a}\mathcal{H}_{2i-2}\rightarrow\otimes_{i=1}^{a}\mathcal{H}_{2i-1}\) is a quantum channel satisfying non-signaling (NS) from \(\mathcal{H}_{2i-2}\rightarrow\mathcal{H}_{2i-1}\) to \(\mathcal{H}_{2i-4}\rightarrow\mathcal{H}_{2i-3}\) for all \(2\leqslant i\leqslant a\), with \(\mathcal{H}_{0}=\mathds{C}\). Note that here all the memory systems \(\{E_{i}\}_{i=1}^{a-1}\) of \(\Phi\) have been swallowed by the composition of quantum channels. Similar to the CJ operators of quantum superchannels discussed in Sec. I.1, the CJ operator of \(\Phi\) can be
obtained by considering the following equation
\[J^{\Phi}:=\mathrm{id}_{\mathbb{S}^{a}_{i=1}\mathcal{H}_{2i-2}} \otimes\Psi^{a}_{\mathcal{H}^{\prime}_{2a-2}E_{a-1}\rightarrow\mathcal{H}_{2a-1 }}\circ\Psi^{a-1}_{\mathcal{H}^{\prime}_{2a-4}E_{a-2}\rightarrow\mathcal{H}_{2 a-3}E_{a-1}}\circ\cdots\] \[\cdots\circ\Psi^{2}_{\mathcal{H}^{\prime}_{2}E_{1}\rightarrow \mathcal{H}_{3}E_{2}}(\rho_{\mathcal{H}_{1}E_{1}}\otimes|I\rangle\!\langle I |_{\mathbb{S}^{a}_{i=2}\mathcal{H}_{2i-2}\mathcal{H}^{\prime}_{2i-2}}), \tag{16}\]
where the multiple copies of unnormalized maximally entangled state \(|I\rangle\!\langle I|_{\mathbb{S}^{a}_{i=2}\mathcal{H}_{2i-2}\mathcal{H}^{ \prime}_{2i-2}}\) is defined as
\[|I\rangle\!\langle I|_{\mathbb{S}^{a}_{i=2}\mathcal{H}_{2i-2} \mathcal{H}^{\prime}_{2i-2}}:=|I\rangle\!\langle I|_{\mathcal{H}_{2}\mathcal{ H}^{\prime}_{2}}\otimes|I\rangle\!\langle I|_{\mathcal{H}_{4}\mathcal{H}^{ \prime}_{4}}\otimes\cdots\otimes|I\rangle\!\langle I|_{\mathcal{H}_{2a-2} \mathcal{H}^{\prime}_{2a-2}}\,. \tag{17}\]
The physical process of generating \(J^{\Phi}\) is demonstrated in Fig. 6b. Denote the CJ operator of \(\Psi^{i}\) (see Def. I.4) as \(J^{i}\) (\(i\in\{2,3,\ldots,a-1\}\)), then it is straightforward to check that \(J^{\Phi}\) can also be written as a result of link product, namely
\[J^{\Phi}=J^{a}\star J^{a-1}\star\cdots\star J^{2}\star\rho. \tag{18}\]
In the literature [6; 7], \(J^{\Phi}\) is called an \(a\)-comb, where \(a\) indicates the number of processes contained in the quantum circuit fragment \(\Phi\). Analogous to the Choi-Jamiolkowski (CJ) isomorphism between quantum channels \(\mathcal{E}:A\to B\) and their CJ operators \(J^{\mathcal{E}}_{AB}\), the correspondence between the quantum circuit fragment \(\Phi\) and its \(a\)-comb \(J^{\Phi}\) forms a bijection. This implies that the physical properties associated with \(\Phi\) are completely characterized by its comb representation \(J^{\Phi}\).
In open quantum systems, a special form of quantum circuit fragment, known as the process tensor [9; 10; 11], has been used to describe system-environment unitary dynamics. In this model, it is assumed that the system-environment is initialized in a state \(\rho_{\mathcal{H}_{1}E_{1}}\). Without loss of generality, we can always assume it is a pure state (namely \(\rho_{\mathcal{H}_{1}E_{1}}=\phi_{\mathcal{H}_{1}E_{1}}\)) by enlarging the environment system \(E_{1}\). After the preparation of \(\phi_{\mathcal{H}_{1}E_{1}}\), the system-environment dynamics undergo unitary interactions \(\{\mathcal{U}_{3:2},\ldots,\mathcal{U}_{2a-1:2a-2}\}\) at \(a-1\) time steps. The whole process \(\Phi\) is then described by the following map
\[\Phi=\mathrm{Tr}_{E_{a}}[\,\mathcal{U}_{2a-1:2a-2}\circ\cdots\circ\mathcal{U}_ {3:2}(\phi_{\mathcal{H}_{i}E_{1}})], \tag{19}\]
where, for any input state \(\sigma\), the output state under \(\mathcal{U}\) is given by \(\mathcal{U}(\sigma)=U\sigma U^{\dagger}\). It is easy to verify that this process tensor can be viewed as a special form of quantum circuit fragment (see Def. I.4) by setting
\[\rho=\phi_{\mathcal{H}_{1}E_{1}}:\,\mathds{C}\rightarrow\mathcal{H }_{1}E_{1}, \tag{20}\] \[\Psi^{i}=\mathcal{U}_{2i-1:2i-2}:\,\mathcal{H}_{2i-2}E_{i-1} \rightarrow\mathcal{H}_{2i-1}E_{i},\quad 2\leqslant i\leqslant a-1,\] (21) \[\Psi^{a}=\mathcal{U}_{2a-1:2a-2}:\,\mathcal{H}_{2a-2}E_{a-1} \rightarrow\mathcal{H}_{2a-1}E_{a}. \tag{22}\]
Remark that, here the \(a\)-th quantum process is a unitary interaction from systems \(\mathcal{H}_{2a-2}E_{a-1}\) to \(\mathcal{H}_{2a-1}E_{a}\) followed by the partial trace over environmental system \(E_{a}\).
Besides the process tensor, different notions of quantum dynamics with multiple processes have been introduced in the context of different theories, such as the causal maps in quantum causal inference [12; 13], the pseudo-density matrices (PDMs) for witnessing temporal correlations [14], the collisional model of quantum thermometry [15], and the quantum neural networks (QNNs) [16] and more general parameterized quantum circuits (PQCs) [17] of the noisy intermediate-scale quantum (NISQ) era [18]. Despite having different names, all these concepts share a common point: they contain multiple quantum processes with definite causal order. Finally, it is worth mentioning that all of them can be treated as special cases of our quantum circuit fragments.
### Interactive Measurements: How to Measure the Quantum Circuit Fragments?
Quantum circuit fragments offer a general way of describing multiple quantum processes with definite causal order. To investigate the physical properties associated with quantum circuit fragments and to decode information from them, a measuring process is needed. Unlike the state case, where a system is left to evolve freely before observation, here we can interact with the quantum dynamics and make multiple interventions before the final measurement. The whole process is called an interactive measurement.
Let us consider the quantum circuit fragment \(\Phi\) demonstrated in Eq. 15. The corresponding interactive measurement, denoted as \(\mathcal{T}\), contains \(a-1\) rounds of interventions \(\{\Lambda^{i}\}_{i=1}^{a-1}\) and a joint measurement \(\{M_{x}\}_{x}\). Here the general interventions are described by quantum channels. In particular, the interactive measurement for quantum circuit fragment \(\Phi\) is defined as
**Definition I.5** (Interactive Measurement). An interactive measurement \(\mathcal{T}\) designed for a quantum circuit fragment \(\Phi\) (see Eq. 15) consists of \(a-1\) rounds of interventions \(\{\Lambda^{i}\}_{i=1}^{a-1}\) of the form
\[\Lambda^{1} :\,\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}R_{1}, \tag{23}\] \[\Lambda^{i} :\,\mathcal{H}_{2i-1}R_{i-1}\rightarrow\mathcal{H}_{2i}R_{i}, \quad 2\leqslant i\leqslant a-1, \tag{24}\]
and a joint measurement \(\{M_{x}\}_{x}\) acting on systems \(\mathcal{H}_{2a-1}R_{a-1}\), namely \(M_{x}:\mathcal{H}_{2a-1}R_{a-1}\rightarrow\mathds{C}\). Here each intervention \(\Lambda^{i}\) forms a quantum channel, and different interventions are connected through memory systems \(\{R_{i}\}_{i=1}^{a-1}\). Mathematically, the interactive measurement \(\mathcal{T}\) is characterized by the following maps \(\{\mathcal{T}_{x}\}_{x}\) (see Fig. 7a); that are
\[\mathcal{T}_{x}(\cdot):=\mathrm{Tr}\big{[}M_{x}\cdot\Lambda^{a-1}\circ\Lambda^ {a-2}\circ\cdots\circ\Lambda^{1}(\cdot)\big{]}. \tag{25}\]
Denote the set of all interactive measurements in the form of Eq. 25 as \(\mathfrak{T}_{a}\).
Within the framework of interactive measurements, earlier interventions can be used to improve the performance of later ones. To be more specific, taking \(\mathcal{T}\) for instance, the interventions \(\{\Lambda^{i}\}_{i=1}^{k-1}\) performed before \(\Lambda^{k}\) strengthen our capability of gaining information from the subsequent interventions \(\{\Lambda^{i}\}_{i=k}^{a-1}\). Given a quantum circuit fragment \(\Phi\), the probability of obtaining outcome \(x\) when measuring with the interactive measurement \(\mathcal{T}\) is given by
\[p_{x}(\Phi,\mathcal{T}):=\mathrm{Tr}\big{[}M_{x}\cdot\Psi^{a}\circ\Lambda^{a-1 }\circ\Psi^{a-1}\circ\Lambda^{a-2}\circ\cdots\circ\Psi^{2}\circ\Lambda^{1}( \rho)\big{]}. \tag{26}\]
Since the dependence of the probability distribution \(p_{x}(\Phi,\mathcal{T})\) on \(\Phi\) and \(\mathcal{T}\) is usually clear from the context, we simply write it as \(p_{x}\).
To characterize interactive measurements and simplify related calculations, the CJ operator of interactive measurements is a good choice. Again, let us take \(\mathcal{T}\) (see Eq. 25) for an illustration. Here the input and output systems of \(\mathcal{T}:=\{\mathcal{T}_{x}\}_{x}\) (see Eq. 25 for the definition of \(\mathcal{T}_{x}\)) are \(\otimes_{i=1}^{a}\mathcal{H}_{2i-1}\) and \(\otimes_{i=1}^{a}\mathcal{H}_{2i-2}\), and the trivial system \(\mathds{C}\) has been ignored. Thus, the CJ operator of each \(\mathcal{T}_{x}\) becomes
\[J_{x}^{\mathcal{T}}:=J^{\mathcal{T}_{x}}\] \[= \mathrm{id}_{\otimes_{i=1}^{a}\mathcal{H}_{2i-1}}\otimes \mathrm{Tr}\bigg{[}M_{x}\cdot\Lambda_{\mathcal{H}_{2a-3}^{\prime}R_{a-2} \rightarrow\mathcal{H}_{2a-2}R_{a-1}}^{a-1}\circ\Lambda_{\mathcal{H}_{2a-5}^ {\prime}R_{a-3}\rightarrow\mathcal{H}_{2a-4}R_{a-2}}^{a-2}\circ\cdots\circ \Lambda_{\mathcal{H}_{1}^{\prime}\rightarrow\mathcal{H}_{2}R_{1}}^{1}(|I \rangle\langle I|_{\otimes_{i=1}^{a}\mathcal{H}_{2i-1}\mathcal{H}_{2i-1}^{ \prime}}\rangle\bigg{]}. \tag{27}\]
A circuit illustration of \(J_{x}^{\mathcal{T}}\) is depicted in Fig. 7b. Here the measurement \(M_{x}\) is acting on systems \(\mathcal{H}_{2a-1}^{{}^{\prime}}R_{a-1}\), and the operator \(|I\rangle\langle I|_{\otimes_{i=1}^{a}\mathcal{H}_{2i-1}\mathcal{H}_{2i-1}^{ \prime}}\) is defined as
\[|I\rangle\langle I|_{\otimes_{i=1}^{a}\mathcal{H}_{2i-1}\mathcal{H}_{2i-1}^{ \prime}}:=|I\rangle\langle I|_{\mathcal{H}_{1}\mathcal{H}_{1}^{\prime}}\otimes |I\rangle\langle I|_{\mathcal{H}_{3}\mathcal{H}_{3}^{\prime}}\otimes\cdots \otimes|I\rangle\langle I|_{\mathcal{H}_{2a-1}\mathcal{H}_{2a-1}^{\prime}}\,. \tag{28}\]
Let us further denote the CJ operator of quantum circuit fragment \(\Phi\) as \(J^{\Phi}\), then probability \(p_{x}\) considered in Eq. 26 can be re-expressed as
\[p_{x}(\Phi,\mathcal{T})=J^{\Phi}\star J_{x}^{\mathcal{T}}, \tag{29}\]
where \(J_{x}^{\mathcal{T}}\) is defined in Eq. 27.
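As a sanity check of Eq. 29 (a one-line computation we include for concreteness): for \(a=1\) the fragment is simply a state \(\rho\) on \(\mathcal{H}_{1}\) and the interactive measurement reduces to an ordinary POVM \(\{M_{x}\}_{x}\), whose CJ operators are \(J_{x}^{\mathcal{T}}=M_{x}^{\mathbf{T}}\). The link product then recovers the Born rule,
\[p_{x}=J^{\Phi}\star J_{x}^{\mathcal{T}}=\mathrm{Tr}\big[\rho^{\mathbf{T}}M_{x}^{\mathbf{T}}\big]=\mathrm{Tr}\big[(M_{x}\rho)^{\mathbf{T}}\big]=\mathrm{Tr}[M_{x}\rho].\]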
Another new concept that will help us demonstrate the trade-off between incompatible interactive measurements is _eigencircuit_, which is formally defined as
**Definition I.6** (Eigencircuit).
Given an interactive measurement \(\mathcal{T}:=\{\mathcal{T}_{x}\}_{x}\), we call quantum circuit fragment \(\Phi\) an eigencircuit of \(\mathcal{T}\) if
\[J^{\Phi}\star J_{x}=1, \tag{30}\]
for some \(x\). Here \(J^{\Phi}\) and \(J_{x}\) represent the CJ operators of \(\Phi\) and \(\mathcal{T}_{x}\) respectively.
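For intuition (our example): in the degenerate case \(a=1\), where \(\mathcal{T}\) is a projective measurement \(\{|x\rangle\!\langle x|\}_{x}\) on \(\mathcal{H}_{1}\), Eq. 30 reads
\[J^{\Phi}\star J_{x}=\mathrm{Tr}\big[\rho^{\mathbf{T}}\,(|x\rangle\!\langle x|)^{\mathbf{T}}\big]=\langle x|\rho|x\rangle=1,\]
so the eigencircuits of \(\mathcal{T}\) are precisely its eigenstates \(\rho=|x\rangle\!\langle x|\), consistent with the terminology.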
After building the general framework of interactive measurements (see Def. I.5), let us now investigate some special forms of measurements for quantum dynamics and show that they are all special cases of our interactive measurements. First, consider the channel measurement, which was first introduced in Ref. [19] and named the process positive-operator-valued measure (PPOVM). Such a measuring process can be viewed as an interactive measurement by trivializing the input system of the first intervention, which is followed by a joint measurement directly. Second, we move on to the framework of testers introduced in Ref. [6]. Mathematically, testers are linear maps that take quantum causal networks as their inputs and output probability distributions. Physically, testers generalize the concept of PPOVMs. In particular, a tester consists of the preparation of an initial state, finitely many rounds of interventions, and a joint measurement at the end of the dynamical process. Analogous to the relation between PPOVMs and interactive measurements, all testers can be regarded as interactive measurements with a trivialized input system in the first intervention. More precisely, general quantum testers can be written in the form of Eq. 25 with a restricted \(\Lambda^{1}\), where \(\mathcal{H}_{1}=\mathds{C}\), i.e. \(\Lambda^{1}=\rho_{\mathcal{H}_{2}R_{1}}\) holds for some bipartite state acting on systems \(\mathcal{H}_{2}R_{1}\). To summarize, both PPOVMs and testers are special cases of the interactive measurements introduced in this work.
Different from previous works investigating quantum dynamics, in this work we are not interested in the statistical properties associated with quantum circuit fragments and interactive measurements. Instead, we care about the physical properties that can be learned from quantum circuit fragments by implementing interactive measurements, such as their causal structures, non-Markovianity, and so on. More details will be unfolded in subsequent sections, especially the investigation of causal inference in Sec. III.
### Quantum Causal Maps
In this subsection, we turn our attention to the dynamics of quantum causal maps, first introduced in Ref. [12], and further discuss how to obtain a causal map from a process tensor in open quantum systems (see Refs. [9; 10; 11] for more details about related topics). We start by writing down the system-environment dynamics described by a process tensor \(\Phi\) of the following form (see Eq. 31):
\[\Phi=\mathrm{Tr}_{E_{a}}[\mathcal{U}_{2a-1:2a-2}\circ\cdots\circ\mathcal{U}_{ 3:2}(\phi_{\mathcal{H}_{1}E_{1}})], \tag{31}\]
where each \(\mathcal{U}_{2i-1,2i-2}\) is a unitary map from systems \(\mathcal{H}_{2i-2}E_{i-1}\) to \(\mathcal{H}_{2i-1}E_{i}\) with \(2\leqslant i\leqslant a\). Assume each system \(\mathcal{H}_{k}\) is associated with the time point \(t_{k}\) (\(1\leqslant k\leqslant 2a-1\)), and we can only interact with the system-environment dynamics in two time periods - \((t_{2i-1},t_{2i})\) and \((t_{2j-1},t_{2j})\) (\(1\leqslant i<j\leqslant a-1\)). Based on the feedback from these two time periods, our goal is to identify its causal structure.
In this model, non-intervention, i.e. letting the system-environment dynamics evolve automatically, is equivalent to applying \(\mathrm{id}_{\mathcal{H}_{2k-1}\to\mathcal{H}_{2k}}\) to system \(\mathcal{H}_{2k-1}\), where \(1\leqslant k\leqslant a-1\) and \(k\neq i,j\). To gain information from the quantum circuit fragment \(\Phi\), we need to end our interactions with a measurement. Therefore, a POVM is applied to the system \(\mathcal{H}_{2j-1}\), and after the time point \(t_{2j}\) all systems will be discarded, including \(\mathcal{H}_{2j}\). Denote the state \(\phi^{{}^{\prime}}_{\mathcal{H}_{2i-1}E_{i}}\) as
\[\phi^{{}^{\prime}}_{\mathcal{H}_{2i-1}E_{i}}:=\mathcal{U}_{2i-1:2i-2}\circ \mathrm{id}_{\mathcal{H}_{2i-3}\to\mathcal{H}_{2i-2}}\circ\cdots\circ\mathcal{ U}_{3:2}\circ\mathrm{id}_{\mathcal{H}_{1}\to\mathcal{H}_{2}}(\phi_{\mathcal{H}_{1}E_{1 }}), \tag{32}\]
and define the following unitary map \(\mathcal{U}^{{}^{\prime}}_{\mathcal{H}_{2i}E_{i}\to\mathcal{H}_{2j-1}E_{j}}\),
\[\mathcal{U}^{{}^{\prime}}_{\mathcal{H}_{2i}E_{i}\to\mathcal{H}_{2j-1}E_{j}}:= \mathcal{U}_{2j-1:2j-2}\circ\mathrm{id}_{\mathcal{H}_{2j-3}\to\mathcal{H}_{2j -2}}\circ\cdots\circ\mathrm{id}_{\mathcal{H}_{2i+1}\to\mathcal{H}_{2i+2}} \circ\mathcal{U}_{2i+1:2i}. \tag{33}\]
Now, besides the time periods \((t_{2i-1},t_{2i})\) and \((t_{2j-1},t_{2j})\), the whole quantum circuit fragment turns out to be \(\Phi^{{}^{\prime}}\), which is demonstrated by the following map
\[\Phi^{{}^{\prime}}_{\mathcal{H}_{2i}\to\mathcal{H}_{2i-1}\mathcal{H}_{2j-1}}= \mathrm{Tr}_{E_{j}}[\mathcal{U}^{{}^{\prime}}_{\mathcal{H}_{2i}E_{i}\to \mathcal{H}_{2j-1}E_{j}}(\phi^{{}^{\prime}}_{\mathcal{H}_{2i-1}E_{i}})]. \tag{34}\]
Figure 8: (color online) Circuit demonstration of process positive-operator-valued measure (PPOVM) (a) and tester (b). Both of them are obtained by trivializing the input system, i.e. \(\mathcal{H}_{1}\), of our interactive measurement model \(\mathcal{T}_{x}\) (see Eq. 25). In particular, for interactive measurement \(\mathcal{T}_{x}\in\mathfrak{T}_{a}\) depicted in Fig. 7a, taking \(\Lambda^{1}\) as \(\rho_{\mathcal{H}_{2}R_{1}}\otimes\mathrm{Tr}_{\mathcal{H}_{1}}\) leads to testers. When \(a=2\), we recover the framework of PPOVMs.
Take a system-environment dynamics with four rounds of unitary evolutions for instance (see Fig. 9); that is
\[\Phi=\mathrm{Tr}_{E_{5}}[\,\mathcal{U}_{9:8}\circ\mathcal{U}_{7:6}\circ\mathcal{U }_{5:4}\circ\mathcal{U}_{3:2}(\phi_{\mathcal{H}_{1}E_{1}})]\in\mathfrak{F}_{5}. \tag{35}\]
To infer the causal models between time periods \((t_{3},t_{4})\) and \((t_{5},t_{6})\), we apply noiseless quantum channels to systems \(\mathcal{H}_{1}\) and \(\mathcal{H}_{7}\), described by \(\mathrm{id}_{\mathcal{H}_{1}\rightarrow\mathcal{H}_{2}}\) and \(\mathrm{id}_{\mathcal{H}_{7}\rightarrow\mathcal{H}_{8}}\) respectively. We then trace out all the systems that occur after time point \(t_{6}\), namely implementing \(\mathrm{Tr}_{E_{5}}\) and \(\mathrm{Tr}_{\mathcal{H}_{9}}\). Thus, the resultant quantum dynamics turns out to be
\[\Phi^{{}^{\prime}}_{\mathcal{H}_{4}\rightarrow\mathcal{H}_{3}\mathcal{H}_{5}} =\mathrm{Tr}_{E_{3}}[\,\mathcal{U}_{5:4}(\phi^{{}^{\prime}}_{\mathcal{H}_{3}E _{2}})], \tag{36}\]
where \(\phi^{{}^{\prime}}_{\mathcal{H}_{3}E_{2}}:=\mathcal{U}_{3:2}(\mathrm{id}_{ \mathcal{H}_{1}\rightarrow\mathcal{H}_{2}}(\phi_{\mathcal{H}_{1}E_{1}}))\).
Let us return to the model of general system-environment dynamics investigated in Eq. 34. To simplify our notations, we relabel the systems as
\[A :=\mathcal{H}_{2i-1}, \tag{37}\] \[B :=\mathcal{H}_{2i},\] (38) \[C :=\mathcal{H}_{2j-1},\] (39) \[E :=E_{i},\] (40) \[F :=E_{j}. \tag{41}\]
Equipped with new labels, the quantum circuit fragment \(\Phi^{{}^{\prime}}\) (see Eq. 34) can be re-expressed as
\[\Phi^{{}^{\prime}}_{B\to AC}=\mathrm{Tr}_{F}[\,\mathcal{U}^{{}^{ \prime}}_{BE\to CF}(\phi^{{}^{\prime}}_{AE})]. \tag{42}\]
The above example can be seen as a prototype of a causal map in open quantum systems. Generally, we call all the quantum circuit fragments \(\Phi_{B\to AC}\) with \(t_{A}\leqslant t_{B}\leqslant t_{C}\) quantum causal maps (see Fig. 2b of our main text), which form the set \(\mathfrak{F}_{2}\) (see Def. I.4). Such a quantum dynamics can also be viewed as a special form of superchannel with trivialized input system of the pre-processing. In particular, given a causal map \(\Phi_{B\to AC}\in\mathfrak{F}_{2}\), it can be divided into two quantum processes: pre-processing \(\Psi^{\mathrm{Pre}}_{\mathds{C}\to AE}\) and post-processing \(\Psi^{\mathrm{Post}}_{BE\to C}\), namely
\[\Phi_{B\to AC}=\Psi^{\mathrm{Post}}_{BE\to C}\circ\Psi^{\mathrm{Pre}}_{ \mathds{C}\to AE}. \tag{43}\]
Here the subscript \(\mathds{C}\to AE\) indicates that \(\Psi^{\mathrm{Pre}}_{\mathds{C}\to AE}\) is a state preparational channel. For Eq. 42, the map \(\Phi^{{}^{\prime}}_{B\to AC}\) can also be written as the composition of pre-processing \(\phi^{{}^{\prime}}_{AE}\) and post-processing \(\mathrm{Tr}_{F}\circ\mathcal{U}^{{}^{\prime}}_{BE\to CF}\). Denote the CJ operators of pre-processing and post-processing as \(J^{\mathrm{Pre}}\) and \(J^{\mathrm{Post}}\) respectively, then the CJ operator of causal map \(\Phi_{B\to AC}\) can be obtained from their link product; that is
\[J^{\Phi}_{ABC}=J^{\mathrm{Pre}}_{AE}\star J^{\mathrm{Post}}_{BCE}. \tag{44}\]
Here the CJ operator \(J^{\Phi}\) of causal map \(\Phi_{B\to AC}\), acting on systems \(ABC\), satisfies the conditions of CPTP and NS from post-processing to pre-processing. It now follows immediately that
(i) CP: \(\quad J^{\Phi}\geqslant 0\),
Figure 9: (color online) Investigating the causality of \(\Phi\) (see Eq. 35) between time periods \((t_{2i-1},t_{2i})\) and \((t_{2j-1},t_{2j})\). For this purpose, \(\mathrm{id}\) channels have been applied to systems \(\mathcal{H}_{1}\) and \(\mathcal{H}_{7}\), followed by \(\mathrm{Tr}_{E_{5}}\) and \(\mathrm{Tr}_{\mathcal{H}_{9}}\). The final quantum circuit fragment \(\Phi^{{}^{\prime}}\) belongs to the set of all causal maps, i.e. \(\mathfrak{F}_{2}\), and has been characterized by Eq. 36.
(ii) TP: \(\quad\mathrm{Tr}_{AC}[J^{\Phi}]=\mathds{1}_{B}\),

(iii) NS from post-processing to pre-processing: \(\quad\mathrm{Tr}_{C}[J^{\Phi}]=\mathrm{Tr}_{BC}[J^{\Phi}]\otimes\mathds{1}_{B}/d_{B}\).
Actually, these conditions can be further simplified to \(J^{\Phi}\geqslant 0\) and \(\mathrm{Tr}_{C}[J^{\Phi}]=\rho_{A}\otimes\mathds{1}_{B}\) for some quantum state \(\rho_{A}\).
From an experimental viewpoint, the normalized CJ operator (also called the CJ state in this work) \(J^{\Phi}/d_{B}\) of the causal map \(\Phi_{B\to AC}\) can be obtained directly by feeding a maximally entangled state to the dynamics, which is given by
\[\mathrm{id}_{B}\otimes\Phi_{B^{{}^{\prime}}\to AC}(\phi_{BB^{{}^{\prime}}}^{+}), \tag{45}\]
where \(\phi_{BB^{{}^{\prime}}}^{+}:=\left|I\rangle\!\langle I\right|_{BB^{{}^{\prime}}}/d_{B}\) is the maximally entangled state with \(B^{{}^{\prime}}\) being a replica of system \(B\). The circuit realization is demonstrated in Fig. 10.
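To make Eq. 45 and the conditions (i)-(iii) concrete, here is a small numpy sketch; the causal map used (trivial environment \(E\), a fixed marginal \(\phi_{A}\), and an amplitude-damping channel from \(B\) to \(C\)) is a toy construction of ours, chosen only so that the CJ operator can be written in closed form, and the helper `ptrace` is likewise ours:

```python
import string
import numpy as np

def ptrace(J, dims, keep):
    """Partial trace of J over the subsystems NOT listed in `keep`."""
    n = len(dims)
    letters = string.ascii_lowercase
    row, col = list(letters[:n]), list(letters[n:2 * n])
    for k in range(n):
        if k not in keep:
            col[k] = row[k]                  # repeated label -> summed by einsum
    sub = ''.join(row) + ''.join(col) + '->' + \
          ''.join(row[k] for k in keep) + ''.join(letters[n + k] for k in keep)
    out = np.einsum(sub, J.reshape(list(dims) * 2))
    d = int(np.prod([dims[k] for k in keep]))
    return out.reshape(d, d)

# Toy causal map Phi_{B->AC} with trivial environment: Phi(X) = phi_A (x) L_{B->C}(X),
# where L is an amplitude-damping channel. Feeding half of phi+_{BB'} into the
# B' port (Eq. 45) and reordering the wires to (A, B, C) yields J / d_B below.
phi_A = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)
K = [np.diag([1.0, np.sqrt(0.9)]), np.sqrt(0.1) * np.array([[0, 1], [0, 0]])]
choi_L = np.zeros((4, 4), dtype=complex)
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2), dtype=complex); E[i, j] = 1.0
        choi_L += np.kron(E, sum(A @ E @ A.conj().T for A in K))
J = np.kron(phi_A, choi_L)                   # unnormalized CJ operator on A(x)B(x)C

assert np.all(np.linalg.eigvalsh(J) > -1e-9)                         # (i)  CP
assert np.allclose(ptrace(J, (2, 2, 2), (1,)), np.eye(2))            # (ii) TP
assert np.allclose(ptrace(J, (2, 2, 2), (0, 1)),
                   np.kron(phi_A, np.eye(2)))                        # simplified NS
print("J is a valid causal-map CJ operator")
```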
The interactive measurement is essential in inferring the causal structures associated with \(\Phi_{B\to AC}\). Such a measuring process \(\mathcal{T}\), which is an element of \(\mathfrak{T}_{2}\) (see Def. I.5), consists of a quantum channel \(\Lambda\) from system \(A\) to \(BR\), a joint POVM \(M:=\{M_{x}\}_{x}\) acting on systems \(CR\), and an ancillary system \(R\) connecting them together (see Fig. 1a of our main text), leading to a set of linear maps
\[\mathcal{T}_{AC\to B\mathds{C}}(\cdot):=\Big{\{}\mathrm{Tr}_{CR}[M_{x}\cdot\Lambda_{A\to BR}(\cdot)]\Big{\}}_{x}, \tag{46}\]
where each \(\mathcal{T}_{x}(\cdot):=\mathrm{Tr}_{CR}[M_{x}\cdot\Lambda_{A\to BR}(\cdot)]\) forms a CP and trace-non-increasing (TNI) map from \(AC\) to \(B\mathds{C}\). Since the CJ operator of the positive operator-valued measure (POVM) element \(M_{x}\) is given by \(M_{x}^{\mathbf{T}_{CR}}\), the CJ operator of \(\mathcal{T}_{x}\), denoted by \(J_{x}\), can be obtained by using the link product between \(J^{\Lambda}\) and \(M_{x}^{\mathbf{T}_{CR}}\). Writing everything out explicitly, we have
\[J_{x}=M_{x}^{\mathbf{T}_{CR}}\star J^{\Lambda}. \tag{47}\]
To realize the CJ state \(J_{x}/d_{A}d_{C}\) experimentally, we employ the quantum circuit shown in Fig. 11. For any incompatible interactive measurements \(\mathcal{T}_{1}=\{J_{x}^{\mathcal{T}_{1}}\}_{x=1}^{m}\) and \(\mathcal{T}_{2}=\{J_{y}^{\mathcal{T}_{2}}\}_{y=1}^{n}\) with
\[J_{x}^{\mathcal{T}_{1}} :=M_{x}^{\mathbf{T}_{CR}}\star J^{\Lambda_{A\to BR}}, \tag{48}\] \[J_{y}^{\mathcal{T}_{2}} :=N_{y}^{\mathbf{T}_{CR}}\star J^{\Upsilon_{A\to BR}}, \tag{49}\]
the probabilities of obtaining classical outcomes \(x\) and \(y\) from the causal map \(\Phi_{B\to AC}=\Psi_{BE\to C}^{\mathrm{Post}}\circ\Psi_{\mathds{C}\to AE}^{\mathrm{Pre}}\) are given by
\[p_{x} =\mathrm{Tr}\big{[}M_{x}\cdot\Psi_{BE\to C}^{\mathrm{Post}}\circ\Lambda_{A\to BR}(\Psi_{\mathds{C}\to AE}^{\mathrm{Pre}})\big{]}, \tag{50}\] \[q_{y} =\mathrm{Tr}\big{[}N_{y}\cdot\Psi_{BE\to C}^{\mathrm{Post}}\circ\Upsilon_{A\to BR}(\Psi_{\mathds{C}\to AE}^{\mathrm{Pre}})\big{]}. \tag{51}\]
Using the language of CJ operators and link product, Eqs. 50 and 51 can also be simplified as
\[p_{x} =J^{\Phi}\star J_{x}^{\mathcal{T}_{1}}, \tag{52}\] \[q_{y} =J^{\Phi}\star J_{y}^{\mathcal{T}_{2}}, \tag{53}\]
where \(J^{\Phi}\) stands for the CJ operator of the causal map \(\Phi_{B\to AC}\). Collect \(p_{x}\) and \(q_{y}\) into two probability vectors \(\mathbf{p}\) and \(\mathbf{q}\), namely \(\mathbf{p}:=\{p_{x}\}_{x=1}^{m}\) and \(\mathbf{q}:=\{q_{y}\}_{y=1}^{n}\). In the upcoming sections, we will show how to analyze the causal structure of \(\Phi\) by using the uncertainty trade-off between \(\mathbf{p}\) and \(\mathbf{q}\), and exhibit the corresponding advantages.
## II Universal uncertainty relation for measurements with interventions
In this section, starting from the algebraic structure of the majorization lattice, we investigate the connections between universal uncertainty relations and quantum roulette, and provide the optimal bound of the universal uncertainty relation for all quantum circuit fragments, leading to the entropic uncertainty relation for measurements with interventions (viz., Theorem 1 of the main text). Specifically, in Subsec. II.1, we briefly review the concept of majorization lattice and discuss its completeness. The standard steps of finding the least upper bound (LUB) for any set and a useful tool known as the flatness process, which was originally introduced to investigate the supermodularity and subadditivity of entropies [20], are also supplied. In Subsec. II.2, we offer a short review of the historical development of the uncertainty principle, and introduce the requirements for uncertainty measures, which give rise to the majorization uncertainty relation (known as the universal uncertainty relation in the literature). In Subsec. II.3, we formulate a universal uncertainty relation for arbitrary quantum circuit fragments, showing its direct operational meaning in quantum roulette. With different uncertainty measures and different quantum dynamics, such a form leads to an infinite number of uncertainty relations. The generality and optimality of our results are also analyzed. Lemma 1 and Theorem 1 of the main text are proved in the last subsection, i.e. Subsec. II.4. Additionally, we introduce the entropic uncertainty relation with multiple interactive measurements, extending the results presented in the main text of our work.
### Mathematical Toolkit: Majorization Lattice
In this subsection, we turn our attention to the mathematical concept of _lattice_, which is crucial in constructing the optimal bound of the _universal uncertainty relation_ (see Subsec. II.2 and Refs. [21; 22; 23] for more details). Let us start with the definition of a lattice:
**Definition II.1**: **Lattice**

A partially ordered set \((S,\preceq)\) is called a lattice if, for any pair of elements \(\mathbf{x},\mathbf{y}\in S\), there exist both a greatest lower bound (GLB) \(\mathbf{x}\wedge\mathbf{y}\) and a least upper bound (LUB) \(\mathbf{x}\vee\mathbf{y}\) in \(S\). Here the GLB \(\mathbf{x}\wedge\mathbf{y}\) satisfies \(\mathbf{x}\wedge\mathbf{y}\preceq\mathbf{x}\), \(\mathbf{x}\wedge\mathbf{y}\preceq\mathbf{y}\), and \(\mathbf{z}\preceq\mathbf{x}\wedge\mathbf{y}\) for any \(\mathbf{z}\in S\) with \(\mathbf{z}\preceq\mathbf{x}\) and \(\mathbf{z}\preceq\mathbf{y}\); the LUB \(\mathbf{x}\vee\mathbf{y}\) is defined analogously. We denote such a lattice by the quadruple \((S,\preceq,\wedge,\vee)\).

**Definition II.2**: **Complete Lattice**

A lattice \((S,\preceq,\wedge,\vee)\) is complete if every subset \(T\subset S\) admits both a GLB \(\wedge T\) and a LUB \(\vee T\) in \(S\).

The partial order underlying the lattice used in this work is majorization, which compares how spread out two vectors are.

**Definition II.3**: **Majorization**

Given two \(d\)-dimensional real vectors \(\mathbf{x}:=(x_{i})_{i=1}^{d}\) and \(\mathbf{y}:=(y_{i})_{i=1}^{d}\), we say \(\mathbf{x}\) is majorized by \(\mathbf{y}\), written as \(\mathbf{x}\prec\mathbf{y}\), if

\[\sum_{i=1}^{k}x_{i}^{\downarrow}\leqslant\sum_{i=1}^{k}y_{i}^{\downarrow}\quad\text{for all}\ 1\leqslant k\leqslant d-1,\qquad\text{and}\qquad\sum_{i=1}^{d}x_{i}=\sum_{i=1}^{d}y_{i},\]

where the superscript \(\downarrow\) indicates that the components of the vector are rearranged in non-increasing order.
Let us consider a concrete example of majorization. For any 3-dimensional probability vector \(\mathbf{p}\), we see that
\[(1/3,1/3,1/3)\prec\mathbf{p}\prec(1,0,0). \tag{60}\]
To build a lattice structure on the probability simplex, we would like majorization to induce a partial order on all probability vectors. However, this is not the case: majorization only forms a preorder. It is straightforward to check that majorization satisfies the properties of reflexivity (\(\mathbf{x}\prec\mathbf{x}\) holds for any \(\mathbf{x}\)) and transitivity (\(\mathbf{x}\prec\mathbf{y}\) and \(\mathbf{y}\prec\mathbf{z}\) imply \(\mathbf{x}\prec\mathbf{z}\)). However, the property of antisymmetry is absent. Take the vectors \((1,0)\) and \((0,1)\) for instance: clearly \((1,0)\prec(0,1)\) and \((0,1)\prec(1,0)\), but \((1,0)\neq(0,1)\). To remedy this problem, we should work on the ordered set \(\mathds{P}_{n}^{d,\,\downarrow}:=\{\mathbf{x}\in\mathds{R}^{d}\,|\,x_{k}\geqslant x_{k+1}\geqslant 0,\,\forall 1\leqslant k\leqslant d-1,\,\sum_{k}x_{k}=n\}\) instead of the probability simplex. Note that when \(n=1\), the set \(\mathds{P}_{1}^{d,\,\downarrow}\) stands for the ordered probability simplex.
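A minimal Python check of Def. II.3 on the example of Eq. 60 and on the failure of antisymmetry discussed above (the helper name `majorized_by` is ours):

```python
import numpy as np

def majorized_by(x, y, atol=1e-12):
    """Return True iff x < y in the majorization sense (Def. II.3)."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    if not np.isclose(xs.sum(), ys.sum(), atol=1e-9):
        return False
    return bool(np.all(np.cumsum(xs)[:-1] <= np.cumsum(ys)[:-1] + atol))

p = np.array([0.5, 0.3, 0.2])                        # any probability vector
print(majorized_by([1/3, 1/3, 1/3], p))              # True  (Eq. 60, lower end)
print(majorized_by(p, [1.0, 0.0, 0.0]))              # True  (Eq. 60, upper end)
# Antisymmetry fails: (1,0) and (0,1) majorize each other yet are distinct
print(majorized_by([1, 0], [0, 1]), majorized_by([0, 1], [1, 0]))  # True True
```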
As proven in Ref. [25], the quadruple \((\mathds{P}_{n}^{d,\,\downarrow},\prec,\wedge,\vee)\) indeed forms a complete lattice under majorization, called the majorization lattice. The properties of the majorization lattice lead to a standard approach for finding the optimal bounds for any subset \(S\) of \(\mathds{P}_{n}^{d,\,\downarrow}\) [25]. Formally, given a subset \(S\subset\mathds{P}_{n}^{d,\,\downarrow}\), there are two steps in constructing its LUB \(\vee S\). The first step is to find the quantities \(\mathbf{b}_{S,k}\), which are defined as
\[\mathbf{b}_{S,k}:=\left(\max_{\mathbf{x}\in S}\sum_{i=1}^{k}x_{i}\right)-\sum _{i=1}^{k-1}\mathbf{b}_{S,i}, \tag{61}\]
for \(1\leqslant k\leqslant d\). Remark that the resultant vector \(\mathbf{b}_{S}:=(\mathbf{b}_{S,k})_{k}\) might not always belong to the set \(\mathds{P}_{n}^{d,\,\downarrow}\). To gain some intuition, recall the example constructed in Ref. [20]. Take \(S=\{\mathbf{x},\mathbf{y}\}\) with
\[\mathbf{x} =(0.6,0.15,0.15,0.1), \tag{62}\] \[\mathbf{y} =(0.5,0.25,0.2,0.05). \tag{63}\]
In this case, the vector
\[\mathbf{b}_{S}=(0.6,0.15,0.2,0.05), \tag{64}\]
obtained from (61) does not belong to the set \(\mathbb{P}_{1}^{d,\,\downarrow}\), since
\[\mathbf{b}_{S,2}=0.15<\mathbf{b}_{S,3}=0.2. \tag{65}\]
Actually, even if we rearrange the vector \(\mathbf{b}_{S}\) into non-increasing order
\[\mathbf{b}_{S}^{\downarrow}=(0.6,0.2,0.15,0.05), \tag{66}\]
the re-ordered vector \(\mathbf{b}_{S}^{\downarrow}\) is not the optimal upper bound, i.e. \(\mathbf{b}_{S}^{\downarrow}\neq\lor S\).
In order to achieve the optimal bound \(\lor S\) of the subset \(S\), an additional process \(\mathcal{F}\) on \(\mathbf{b}_{S}\) (not \(\mathbf{b}_{S}^{\downarrow}\)) is needed, named _flatness process_[20]:
**Definition II.4**: **Flatness Process [20]**
Let \(\mathbf{x}\in\mathds{R}_{+}^{d}\) be a non-negative \(d\)-dimensional vector, and \(j\) be the smallest integer in \(\{2,\ldots,d\}\) such that \(x_{j}>x_{j-1}\), and \(i\) be the greatest integer in \(\{1,\ldots,j-1\}\) such that \(x_{i-1}\geqslant(\sum_{k=i}^{j}x_{k})/(j-i+1):=a\). Define
\[\mathcal{T}(\mathbf{x}):=(x_{1}^{\prime},\ldots,x_{d}^{\prime})\quad\text{with}\quad x_{k}^{\prime}=\begin{cases}a&\text{for}\quad k=i,\ldots,j\\ x_{k}&\text{otherwise}.\end{cases} \tag{67}\]
and \(\mathcal{F}(\mathbf{x}):=\mathcal{T}^{d-1}(\mathbf{x})=\mathcal{T}(\mathcal{ T}^{d-2}(\mathbf{x}))\), i.e. applying \(\mathcal{T}\) on the vector \(\mathbf{x}\) successively \(d-1\) times. We call \(\mathcal{F}\) the flatness process of vector \(\mathbf{x}\). If the vector \(\mathbf{x}\) is already arranged in non-increasing order, or equivalently there does not exist an integer \(j\) in \(\{2,\ldots,d\}\) such that \(x_{j}>x_{j-1}\), then we simply have \(\mathcal{T}(\mathbf{x})=\mathbf{x}\).
Based on the definition of flatness process, we will show how to obtain the LUB for \(\mathbf{x}\) and \(\mathbf{y}\) considered in Eqs. 62 and 63. In this case, \(\mathbf{b}_{S}=(0.6,0.15,0.2,0.05)\) is not arranged in non-increasing order. Let us define \(x_{k}\) (\(k=1,2,3,4\)) as
\[x_{1} =0.6, \tag{68}\] \[x_{2} =0.15,\] (69) \[x_{3} =0.2,\] (70) \[x_{4} =0.05. \tag{71}\]
Then the smallest integer in \(\{2,3,4\}\) such that \(x_{j}>x_{j-1}\) is \(3\) as \(x_{3}=0.2>x_{2}=0.15\). Note that now the quantity \((\sum_{k=2}^{3}x_{k})/2\) is given by
\[\frac{x_{2}+x_{3}}{2}=0.175\leqslant x_{1}=0.6. \tag{72}\]
Thus, we can write \(\mathcal{T}(\mathbf{b}_{S})=(0.6,0.175,0.175,0.05)\). As \(\mathcal{T}(\mathbf{b}_{S})\) is already in non-increasing order, we have \(\mathcal{F}(\mathbf{b}_{S})=\mathcal{T}(\mathbf{b}_{S})\), which is exactly the optimal bound for \(\mathbf{x}=(0.6,0.15,0.15,0.1)\) and \(\mathbf{y}=(0.5,0.25,0.2,0.05)\), namely
\[\lor S=\mathbf{x}\vee\mathbf{y}=\mathcal{F}(\mathbf{b}_{S})=(0.6,0.175,0.175,0.05), \tag{73}\]
with
\[(0.6,0.15,0.15,0.1) \prec(0.6,0.175,0.175,0.05)\prec(0.6,0.15,0.2,0.05), \tag{74}\] \[(0.5,0.25,0.2,0.05) \prec(0.6,0.175,0.175,0.05)\prec(0.6,0.15,0.2,0.05), \tag{75}\]
and for any probability vector \(\mathbf{z}\) satisfying \(\mathbf{x}\prec\mathbf{z}\) and \(\mathbf{y}\prec\mathbf{z}\), it follows that
\[(0.6,0.175,0.175,0.05)\prec\mathbf{z}. \tag{76}\]
It is worth mentioning that the flatness process \(\mathcal{F}\) introduced in Ref. [20] is exactly the second step of formulating the optimal bound for \(S\). More precisely, \(\lor S=\mathcal{F}(\mathbf{b}_{S})\) holds in general [25].
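The two-step construction just described can be condensed into a short Python sketch that reproduces the worked example of Eqs. 62-73 (the function names are ours; indices are 0-based, with the comparison at the left boundary of Def. II.4 treated as automatically satisfied, i.e. \(x_{0}=+\infty\)):

```python
import numpy as np

def flatten_once(x):
    """One application of the averaging map T of Def. II.4 (0-based indices,
    with the x_0 of the definition treated as +infinity)."""
    x = np.array(x, dtype=float)
    asc = [j for j in range(1, len(x)) if x[j] > x[j - 1]]
    if not asc:
        return x
    j = asc[0]                                   # smallest ascent position
    for i in range(j - 1, -1, -1):               # greatest admissible i
        a = x[i:j + 1].mean()
        if i == 0 or x[i - 1] >= a:
            x[i:j + 1] = a
            return x
    return x

def lub(S):
    """LUB of a set S in the majorization lattice: Step 1 builds b_S (Eq. 61),
    Step 2 applies the flatness process F = T^(d-1)."""
    S = [np.sort(np.asarray(v, dtype=float))[::-1] for v in S]
    d = len(S[0])
    cums = np.max([np.cumsum(v) for v in S], axis=0)   # max over S of partial sums
    b = np.diff(np.concatenate(([0.0], cums)))         # b_S of Eq. 61
    for _ in range(d - 1):
        b = flatten_once(b)
    return b

x = [0.6, 0.15, 0.15, 0.1]
y = [0.5, 0.25, 0.2, 0.05]
print(lub([x, y]))                # [0.6, 0.175, 0.175, 0.05], matching Eq. 73
```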
To summarize, the standard approach in finding the optimal bounds for a subset \(S\) of \(\mathbb{P}_{n}^{d,\,\downarrow}\) contains two steps:
**Step 1.**: Formulating \(\mathbf{b}_{S}\) defined in Eq. (61),
**Step 2.**: Applying the flatness process defined in Eq. (67) to obtain \(\mathcal{F}(\mathbf{b}_{S})\),
leading to the following lemma.
**Lemma II.5**: **Least Upper Bound (LUB)**
For any subset \(S\subset\mathds{P}_{n}^{d,\,\downarrow}\), its least upper bound (LUB) \(\lor S\) is given by
\[\lor S=\mathcal{F}(\mathbf{b}_{S}), \tag{77}\]
where the vector elements of \(\mathbf{b}_{S}\) are defined in Eq. 61, and \(\mathcal{F}\) represents the flatness process introduced in Def. II.4.
Finally, we note that majorization is naturally connected with entropies. More precisely, all _Schur-concave_ functions, including the Rényi entropies, are order-reversing functions for majorization. The relations between majorization, doubly stochastic matrices (square matrices of non-negative real numbers whose rows and columns each sum to 1), and Schur-concave functions are detailed in the following lemma.
**Lemma II.6**: **Characterizations of Majorization [24]**
For two vectors \(\mathbf{x}\) and \(\mathbf{y}\), the following conditions are equivalent:
1. \(\mathbf{x}\) is majorized by \(\mathbf{y}\), i.e. \(\mathbf{x}\prec\mathbf{y}\);
2. \(\mathbf{x}=\mathbf{y}\cdot D\) for some doubly stochastic matrix \(D\);
3. \(f(\mathbf{x})\geqslant f(\mathbf{y})\) holds for any Schur-concave function \(f\).
Let us check some simple applications of Lem. II.6. First, Shannon entropy is invariant under permutations of its components since \(\mathbf{p}\prec\mathbf{p}\cdot D\) and \(\mathbf{p}\cdot D\prec\mathbf{p}\) hold for any probability vector \(\mathbf{p}\) and permutation matrix \(D\). Second, for any \(d\)-dimensional probability vector \(\mathbf{p}\), we always have \(\log d\geqslant H(\mathbf{p})\) as \((1/d,\ldots,1/d)\prec\mathbf{p}\), where \(H\) stands for the Shannon entropy.
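A quick numerical illustration of Lem. II.6 and the monotonicity of Eq. 84 below (the sampling scheme is our own toy choice): mixing a handful of permutation matrices yields a doubly stochastic \(D\), and the Shannon entropy never decreases under the induced relabeling.

```python
import numpy as np

rng = np.random.default_rng(7)

def shannon(p):
    p = np.asarray(p)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

d = 4
perms = [np.eye(d)[rng.permutation(d)] for _ in range(6)]  # permutation matrices
weights = rng.dirichlet(np.ones(len(perms)))
D = sum(w * P for w, P in zip(weights, perms))             # doubly stochastic mixture

p = rng.dirichlet(np.ones(d))                # random probability vector
q = p @ D                                    # random relabeling of p
print(shannon(q) >= shannon(p) - 1e-12)      # True: q < p, hence H(q) >= H(p)
```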
### Brief Introduction to Uncertainty Relations
If an observer is well-equipped, then estimating the position and momentum of a moving football simultaneously is not a difficult task. However, if the object of observation is replaced by a particle, then it is impossible to predict the outcomes for both position and momentum with arbitrary precision, no matter how well the observer is equipped. Quantum mechanics constrains what we can learn about the observables of a particle. Such a restriction is known as the uncertainty principle, which was first introduced by Heisenberg in Ref. [26]. Quantitatively, the fundamental trade-off between position and momentum is characterized by the following inequality [27; 28]
\[\Delta\mathbf{x}\cdot\Delta\mathbf{p}\geqslant\frac{\hbar}{2}, \tag{78}\]
where \(\mathbf{x}\) and \(\mathbf{p}\) stand for the position and momentum operators, and \(\Delta\) is the standard deviation. For bounded operators \(M\) and \(N\), Robertson gave a more general form in terms of the commutator \([M,N]\)[29]
\[\Delta M\cdot\Delta N\geqslant\frac{1}{2}|\left\langle\psi\right|[M,N]|\psi \rangle|. \tag{79}\]
From an information-theoretic perspective, it is more natural to quantify the uncertainty associated with quantum measurements in terms of entropy rather than statistical tools such as the standard deviation. Using this insight, the entropic version of the position-momentum uncertainty relation can be formulated as
\[h(\mathbf{x})+h(\mathbf{p})\geqslant\log(\pi e\hbar) \tag{80}\]
Remark that the entropic uncertainty relation of Eq. 80 was obtained by Białynicki-Birula and Mycielski in Ref. [30]. Here \(e\) is Euler's number, and \(h\) stands for the differential entropy: for a random variable \(X\) with density \(f(x)\), its differential entropy is defined as

\[h(X):=-\int_{-\infty}^{\infty}f(x)\log f(x)\,dx. \tag{81}\]
The first entropic uncertainty relation for general bounded operators was introduced by Deutsch in Ref. [31], and was later improved by Maassen and Uffink. In particular, given projective measurements \(M:=\{|u_{i}\rangle\}\) and \(N:=\{|v_{j}\rangle\}\), the Maassen-Uffink entropic uncertainty relation reads [32]
\[H(M)+H(N)\geqslant-\log c, \tag{82}\]
where \(H(M)\) represents the Shannon entropy of probability distribution obtained by implementing measurement \(M\), and \(c\) is the maximal overlap between measurements \(M\) and \(N\). More precisely, here the quantity \(c\) is defined by
\[c:=\max_{i,j}|\left\langle u_{i}|v_{j}\right\rangle|^{2}, \tag{83}\]
which depends only on the measurements (i.e. it is state-independent), and hence captures their inherent incompatibility. Technically, the quantity \(c\) is derived from the _Riesz theorem_ in functional analysis.
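For instance, for the mutually unbiased computational and Fourier bases of a qubit, Eq. 83 evaluates to \(c=1/2\), reproducing the familiar one-bit bound; a short numpy check:

```python
import numpy as np

U = np.eye(2)                                 # computational (Z) basis, columns |u_i>
V = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Fourier (X) basis, columns |v_j>

c = np.max(np.abs(U.conj().T @ V) ** 2)       # maximal overlap, Eq. 83
print(c, -np.log2(c))                         # 0.5 1.0  ->  H(M) + H(N) >= 1 bit
```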
In the ensuing decades, the quantum uncertainty associated with measurements has been quantified by a variety of entropies, and various forms of entropic uncertainty relations have been proposed [33]. However, none of these forms answers the fundamental question - which uncertainty measure is the most adequate to use? Shannon entropy, collision entropy, min-entropy, or even Tsallis entropy? To fully understand the uncertainty induced by measurements and address this question, the concept of a 'reasonable measure' of uncertainty was introduced in Ref. [21]: any reasonable measure \(f\) of quantum uncertainty should be a function only of the probability vector associated with the measurement and should satisfy monotonicity under random relabeling \(\mathfrak{R}\). That is, given a probability vector \(\mathbf{p}\), a function \(f\) is an uncertainty measure if the following condition is satisfied
\[f(\mathfrak{R}(\mathbf{p}))\geqslant f(\mathbf{p}). \tag{84}\]
In other words, the procedure of random relabeling - known as forgetting outcome labels in the classical world - makes the results more uncertain. Technically speaking, the process of random relabeling \(\mathfrak{R}\) is characterized by the convex hull of permutations [24], which implies that
\[\mathfrak{R}(\mathbf{p})=\mathbf{p}\cdot D_{\mathfrak{R}}, \tag{85}\]
holds for some doubly stochastic matrix \(D_{\mathfrak{R}}\). As a direct consequence of Lem. II.6, we know that for two probability vectors \(\mathbf{p}\) and \(\mathbf{q}\), \(\mathbf{p}\) is more uncertain than \(\mathbf{q}\) if and only if \(\mathbf{p}=\mathbf{q}\cdot D\) for some doubly stochastic matrix \(D\), or equivalently \(\mathbf{p}\prec\mathbf{q}\). Therefore, all Schur-concave functions (including the Rényi entropies) are reasonable uncertainty measures. Historically, majorization \(\prec\) was initially introduced as a tool to extend known inequalities and unify all inequalities based on convex functions. Here, in the context of uncertainty relations, majorization generates an infinite family of uncertainty relations; uncertainty relations in the form of majorization are therefore also known as _universal uncertainty relations_. A typical example is the direct-sum majorization uncertainty relation [23]. Consider the probability vectors \(\mathbf{p}:=(p_{i})_{i=1}^{m}\) and \(\mathbf{q}:=(q_{j})_{j=1}^{n}\) obtained by measuring a quantum state \(\rho\) with respect to a pair of incompatible measurements \(M\) and \(N\); their direct-sum \(\oplus\) is defined as the concatenation of the two vectors. Formally,
\[\mathbf{p}\oplus\mathbf{q}:=(p_{1},\ldots,p_{m},q_{1},\ldots,q_{n}). \tag{86}\]
For example, given probability vectors \(\mathbf{p}=(1,0)\) and \(\mathbf{q}=(1/2,1/2)\), their direct-sum \(\mathbf{p}\oplus\mathbf{q}\) is simply \((1,0,1/2,1/2)\). The goal is to find the optimal upper bound \(\mathbf{w}_{M,N}\) such that
\[\mathbf{p}\oplus\mathbf{q}\prec\mathbf{w}_{M,N}, \tag{87}\]
holds for all quantum states \(\rho\), namely \(\mathbf{w}_{M,N}\) is a vector independent of the state \(\rho\). For any Schur-concave function \(f\), the uncertainty relation of Eq. 87 leads to
\[f(\mathbf{p}\oplus\mathbf{q})\geqslant f(\mathbf{w}_{M,N}), \tag{88}\]
which includes all entropic uncertainty relations as special cases. Taking Shannon entropy for instance, we immediately obtain the entropic uncertainty relation in the form of \(H(\mathbf{p}\oplus\mathbf{q})\geqslant H(\mathbf{w}_{M,N})\). Before the end of this subsection, we make two remarks. First, the direct-sum majorization uncertainty relation is not the only form of universal uncertainty relation. There also exists a direct-product majorization uncertainty relation, which was formulated by Friedland, Gheorghiu, and Gour in Ref. [21] and by Puchała, Rudnicki, and Życzkowski in Ref. [22]. Specifically, they bound the uncertainty associated with \(\mathbf{p}\otimes\mathbf{q}:=(p_{i}\cdot q_{j})_{i,j}\). In this work, we mainly focus on the direct-sum form since it shows a provable advantage over the direct-product form (see Ref. [23] for more details). Second, all the uncertainty relations considered in this work belong to the category of preparational uncertainty relations. The noise-disturbance uncertainty relation of interactive measurements is beyond the scope of this work, and merits future investigation.
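As a rough numerical probe of Eq. 87 for ordinary (non-interactive) qubit measurements in the \(Z\) and \(X\) bases, one can maximize the sorted partial sums of \(\mathbf{p}\oplus\mathbf{q}\) over sampled pure states; the sketch below is our own illustration and yields only a sampled estimate of the bound (the exact optimal bound additionally requires the flatness process \(\mathcal{F}\) of Subsec. II.1):

```python
import numpy as np

rng = np.random.default_rng(1)
Z = np.eye(2)
X = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def probs(psi, basis):
    return np.abs(basis.conj().T @ psi) ** 2

best = np.zeros(4)
for _ in range(20000):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = v / np.linalg.norm(v)                      # random pure qubit state
    s = np.sort(np.concatenate([probs(psi, Z), probs(psi, X)]))[::-1]
    best = np.maximum(best, np.cumsum(s))            # max partial sums over states
w = np.diff(np.concatenate(([0.0], best)))           # sampled estimate of w_{M,N}
print(w)                                             # p (+) q < w for the samples
```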
### Operational Interpretation of Universal Uncertainty Relation: Quantum Roulette
Universal uncertainty relations capture the essential trade-off between incompatible measurements in terms of their probability vectors, leading to a family of uncertainty relations. Although it can outperform previous well-known results, such as the Maassen-Uffink entropic uncertainty relation (see Eq. 82) to some extent, its operational interpretation remains unknown. In this subsection, we will connect the direct-sum majorization uncertainty relation with a guessing game which we refer to as the quantum roulette.
In particular, quantum roulette is a game played between two parties - Alice and Bob. In this game, Alice is an agent that is capable of probing the quantum circuit fragment \(\Phi\) (see Def. I.4) supplied by Bob with two possible interactive measurements, \(\mathcal{T}_{1}:=\{\mathcal{T}_{1,x_{1}}\}_{x_{1}=1}^{m_{1}}\) and \(\mathcal{T}_{2}:=\{\mathcal{T}_{2,x_{2}}\}_{x_{2}=1}^{m_{2}}\) (see Def. I.5). Generally, these interactive measurements do not need to have the same number of outcomes, i.e. we may have \(m_{1}\neq m_{2}\). In each round of the game, Alice and Bob begin with a 'roulette table', whose layout consists of all tuples \((b,x_{b})\), where \(b\in\{1,2\}\) denotes Alice's choice of interactive measurement, and \(x_{b}\in\{1,\ldots,m_{b}\}\) represents the corresponding measurement outcome obtained by implementing \(\mathcal{T}_{b}\). Bob starts the game with \(k\) chips, which he can use to place bets on \(k\) of the possible tuples, and supplies Alice with any \(\Phi\in\mathfrak{F}_{a}\) of his choosing. Alice then selects some \(b\) at random and probes \(\Phi\) with interactive measurement \(\mathcal{T}_{b}\in\mathfrak{T}_{a}\). She finally announces both \(b\) and the resulting measurement outcome \(x_{b}\). Bob wins if one of his chips is on \((b,x_{b})\). We denote Bob's maximum winning probability as \(p_{\text{win},k}\), and our goal is to characterize this maximum winning probability for any quantum circuit fragment in \(\mathfrak{F}_{a}\).
To gain some intuition, let us first consider the simplest case in which Bob can only provide quantum causal map \(\Phi_{B\to AC}\in\mathfrak{F}_{2}\) (see Subsec. I.4 for more details). Meanwhile, assume the interactive measurements chosen by Alice are \(\mathcal{T}_{1}\) of Eq. 48 and \(\mathcal{T}_{2}\) of Eq. 49, where \(x_{1}=x\in\{1,\ldots,m\}\) and \(x_{2}=y\in\{1,\ldots,n\}\). In this case, the outcomes \(x\) of \(\mathcal{T}_{1}\) and \(y\) of \(\mathcal{T}_{2}\) happen with the following probabilities (see Eqs. 52 and 53)
\[\frac{1}{2}p_{x} =\frac{1}{2}J^{\Phi}\star J_{x}^{\mathcal{T}_{1}}, \tag{89}\] \[\frac{1}{2}q_{y} =\frac{1}{2}J^{\Phi}\star J_{y}^{\mathcal{T}_{2}}. \tag{90}\]
Here the coefficient \(1/2\) comes from the fact that interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) have the same probability of occurring. Now Bob's guessing strategy can be described by two sets \(\mathcal{S}_{1}\subset\{1,\ldots,m\}\) and \(\mathcal{S}_{2}\subset\{1,\ldots,n\}\) satisfying
\[|\mathcal{S}_{1}|+|\mathcal{S}_{2}|=k, \tag{92}\]
where the set \(\mathcal{S}_{b}\) (\(b\in\{1,2\}\)) collects his guesses of the form \((b,\cdot)\). For example, when Bob has \(2\) chips, \(\mathcal{S}_{1}=\{2,3\}\) means that he will put his chips on \((1,2)\) and \((1,3)\). When Bob has \(3\) chips, with \(\mathcal{S}_{1}\) and \(\mathcal{S}_{2}\) given by the sets \(\{2,3\}\) and \(\{2\}\) respectively, he will place the chips on \((1,2)\), \((1,3)\) and \((2,2)\). Given \(k\) chips and a quantum causal map \(\Phi\), Bob's maximum winning probability is characterized by
\[p_{\text{win},k}(\Phi) =\max_{|\mathcal{S}_{1}|+|\mathcal{S}_{2}|=k}\sum_{\begin{subarray} {c}x\in\mathcal{S}_{1}\subset\{1,\ldots,m\}\\ y\in\mathcal{S}_{2}\subset\{1,\ldots,n\}\end{subarray}}\left(\frac{1}{2}p_{x} +\frac{1}{2}q_{y}\right) \tag{93}\] \[=\max_{|\mathcal{S}_{1}|+|\mathcal{S}_{2}|=k}\sum_{\begin{subarray} {c}x\in\mathcal{S}_{1}\subset\{1,\ldots,m\}\\ y\in\mathcal{S}_{2}\subset\{1,\ldots,n\}\end{subarray}}\frac{1}{2}\left(J^{ \Phi}\star(J_{x}^{\mathcal{T}_{1}}+J_{y}^{\mathcal{T}_{2}})\right). \tag{94}\]
The second equation comes from Eqs. 52 and 53. Note that the largest sum of probabilities over \(k\) possible tuples is given by
\[\sum_{i=1}^{k}(\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q})_{i}^{\downarrow}, \tag{95}\]
where the superscript \(\downarrow\) indicates that the corresponding vector is arranged in non-increasing order, and the subscript \(i\) means it is the \(i\)-th element of the vector. Taking \((0.1,0.1,0.6,0.2)\) for instance, we have
\[(0.1,0.1,0.6,0.2)_{1}^{\downarrow} =0.6, \tag{96}\] \[(0.1,0.1,0.6,0.2)_{2}^{\downarrow} =0.2,\] (97) \[(0.1,0.1,0.6,0.2)_{3}^{\downarrow} =0.1,\] (98) \[(0.1,0.1,0.6,0.2)_{4}^{\downarrow} =0.1. \tag{99}\]
It now follows immediately that
\[\sum_{i=1}^{k}(\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q})_{i}^{\downarrow} =p_{\mathrm{win},k}(\Phi)=\max_{|\mathcal{S}_{1}|+|\mathcal{S}_{2}|=k}\sum_{ \begin{subarray}{c}x\in\mathcal{S}_{1}\subset\{1,\ldots,m\}\\ y\in\mathcal{S}_{2}\subset\{1,\ldots,n\}\end{subarray}}\frac{1}{2}\left(J^{ \Phi}\star(J_{x}^{\mathcal{T}_{1}}+J_{y}^{\mathcal{T}_{2}})\right). \tag{100}\]
There are two trivial cases of quantum roulette: (i) \(k=0\), which means Bob does not have any chips on hand. Thus, his winning probability is simply zero. In this case, we can write \(p_{\mathrm{win},0}(\Phi)=0\); (ii) \(k=m+n\). In this case, Bob has enough chips to select all tuples, namely \(p_{\mathrm{win},m+n}(\Phi)=1\). Our goal here is to find the fundamental limitation on winning the game for all causal maps, which is defined as
\[p_{\mathrm{win},k} :=\max_{\Phi\in\mathfrak{F}_{2}}p_{\mathrm{win},k}(\Phi) \tag{101}\] \[=\max_{|\mathcal{S}_{1}|+|\mathcal{S}_{2}|=k}\sum_{\begin{subarray}{c}x\in\mathcal{S}_{1}\subset\{1,\ldots,m\}\\ y\in\mathcal{S}_{2}\subset\{1,\ldots,n\}\end{subarray}}\frac{1}{2}\left(\max_{\Phi\in\mathfrak{F}_{2}}J^{\Phi}\star(J_{x}^{\mathcal{T}_{1}}+J_{y}^{\mathcal{T}_{2}})\right). \tag{102}\]
Thus, for any \(k\) (\(1\leqslant k\leqslant m+n\)) and any quantum causal map supplied by Bob, the probability vectors \(\mathbf{p}\) and \(\mathbf{q}\) obtained from interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) satisfy
\[\sum_{i=1}^{k}(\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q})_{i}^{ \downarrow}\leqslant p_{\mathrm{win},k},\quad 1\leqslant k\leqslant m+n, \tag{103}\]
leading to
\[\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\prec\left(p_{\mathrm{win},1},\left(p_{\mathrm{win},2}-p_{\mathrm{win},1}\right),\ldots,\left(p_{\mathrm{ win},m+n}-p_{\mathrm{win},m+n-1}\right)\right). \tag{104}\]
The above majorization inequality connects the direct-sum majorization uncertainty relation with Bob's maximum winning probability in quantum roulette, showing that the largest summation of \(k\) elements of \(\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\) is completely determined by Bob's maximum winning probability with \(k\) chips in quantum roulette. Denote the increment of Bob's maximum winning probability with \(k\) rather than \(k-1\) chips as \(w_{k}\), i.e.
\[w_{k}:=p_{\mathrm{win},k}-p_{\mathrm{win},k-1},\quad 1\leqslant k\leqslant m+n, \tag{105}\]
with \(p_{\mathrm{win},0}:=0\), and set \(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}}:=(w_{k})_{k=1}^{m+n}\), we see that
\[\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\prec\mathbf{w}_{\mathcal{T} _{1},\mathcal{T}_{2}}. \tag{106}\]
To further simplify the representation, we take the union of CJ operators \(J_{x}^{\mathcal{T}_{1}}\) and \(J_{y}^{\mathcal{T}_{2}}\), and denote it by \(\{J_{z}\}_{z}\), where
\[J_{z}=\left\{\begin{array}{cc}J_{z}^{\mathcal{T}_{1}}&1\leqslant z\leqslant m,\\ J_{z-m}^{\mathcal{T}_{2}}&m+1\leqslant z\leqslant m+n.\end{array}\right. \tag{107}\]
We adopt the convention \(J_{\mathcal{S}}\) for
\[J_{\mathcal{S}}:=\sum_{z\in\mathcal{S}}J_{z}, \tag{108}\]
where \(\mathcal{S}\subset\{1,\ldots,m+n\}\), e.g. for \(\mathcal{S}=\{2,5\}\), we immediately have \(J_{\{2,5\}}=J_{2}+J_{5}\). To be more precise, if \(m=n=3\), then we see that
\[J_{\{2,5\}}=J_{2}+J_{5}=J_{2}^{\mathcal{T}_{1}}+J_{2}^{\mathcal{T}_{2}}. \tag{109}\]
Based on CJ operators, the universal uncertainty relation in the direct-sum form can now be formulated as
**Lemma II.7.** For any quantum causal map \(\Phi\in\mathfrak{F}_{2}\), the probability vectors \(\mathbf{p}\) and \(\mathbf{q}\) obtained by measuring \(\Phi\) with respect to interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\in\mathfrak{T}_{2}\) satisfy the following trade-off
\[\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\prec\mathbf{w}_{\mathcal{T}_ {1},\mathcal{T}_{2}}=(p_{\mathrm{win},k}-p_{\mathrm{win},k-1})_{k=1}^{m+n}, \tag{110}\]
where \(p_{\mathrm{win},0}:=0\), and for \(1\leqslant k\leqslant m+n\), the quantity \(p_{\mathrm{win},k}\) is characterized by the following optimization problem.
\[p_{\mathrm{win},k}=\frac{1}{2}\max_{|\mathcal{S}|=k}\max \mathrm{Tr}[J_{\mathcal{S}}\cdot J]\] \[\mathrm{s.t.}\quad J\geqslant 0,\quad\mathrm{Tr}[\rho_{A}]=1, \quad\mathrm{Tr}_{C}[J]=\rho_{A}\otimes\mathds{1}_{B}. \tag{111}\]
In particular, the optimization problem for \(p_{\mathrm{win},k}\) is a semidefinite program (SDP), and hence can be solved efficiently in polynomial time (up to the desired accuracy) by using the ellipsoid method or an interior point method [34, 35].
Proof.: From Eq. 102, we can write
\[p_{\mathrm{win},k} =\frac{1}{2}\max_{|\mathcal{S}|=k}\max_{\Phi\in\mathfrak{F}_{2}} J^{\Phi}\star J_{\mathcal{S}} \tag{112}\] \[=\frac{1}{2}\max_{|\mathcal{S}|=k}\max_{\Phi\in\mathfrak{F}_{2}} \mathrm{Tr}\big{[}J^{\Phi}\cdot J_{\mathcal{S}}\big{]}, \tag{113}\]
where in the second equation, namely Eq. 113, we have used the fact that the set of all CJ operators associated with causal maps is closed under transposition over all of their systems. Remark that, given a CJ operator \(J_{ABC}^{\Phi}\) for some causal map \(\Phi_{B\to AC}\), the operator \((J_{ABC}^{\Phi})^{\mathbf{T}}\) is still a CJ operator for some causal map. However, this may not be the case for partial transposes, such as \(J_{ABC}^{\mathbf{T}_{B}}\). As discussed in Subsec. I.4, a quantum dynamics \(\Phi_{B\to AC}\) is a causal map if and only if its CJ operator \(J^{\Phi}\) satisfies the requirements of (i) CP: \(J^{\Phi}\geqslant 0\), (ii) TP: \(\mathrm{Tr}_{AC}[J^{\Phi}]=\mathds{1}_{B}\) and (iii) NS from \(B\to C\) to \(A\): \(\mathrm{Tr}_{C}[J^{\Phi}]=\mathrm{Tr}_{BC}[J^{\Phi}]\otimes\mathds{1}_{B}/d_{B}\). It hence follows that
\[p_{\mathrm{win},k}=\frac{1}{2}\max_{|\mathcal{S}|=k}\max \mathrm{Tr}[J_{\mathcal{S}}\cdot J]\] \[\mathrm{s.t.}\quad J\geqslant 0,\quad\mathrm{Tr}_{AC}[J]= \mathds{1}_{B},\] \[\mathrm{Tr}_{C}[J]=\mathrm{Tr}_{BC}[J]\otimes\mathds{1}_{B}/d_{B}, \tag{114}\]
which is equivalent to Eq. 111, as required.
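For concreteness, the following cvxpy sketch evaluates the optimization of Eq. 111 for a pair of toy interactive measurements that we construct purely for illustration: trivial memory \(R\), an identity intervention channel from \(A\) to \(B\), and projective \(Z\)- versus \(X\)-basis measurements on \(C\), so that \(J_{z}=|I\rangle\langle I|_{AB}\otimes M_{z}^{\mathbf{T}}\). It assumes a recent cvxpy (\(\geqslant 1.3\)) in which the `partial_trace` atom is available and `kron` accepts one non-constant argument:

```python
import itertools
import numpy as np
import cvxpy as cp

dA = dB = dC = 2
I2 = np.eye(2)
ket_I = np.eye(2).reshape(4)                     # |I> = sum_i |i>_A |i>_B
phi_AB = np.outer(ket_I, ket_I)                  # unnormalized |I><I|_{AB}

plus, minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)
povms = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0]),        # T_1: Z basis on C
         np.outer(plus, plus), np.outer(minus, minus)]    # T_2: X basis on C
J_ops = [np.kron(phi_AB, M.T) for M in povms]             # CJ operators J_z

def p_win(k):
    """Bob's maximum winning probability with k chips, via the SDP of Eq. 111."""
    best = 0.0
    for S in itertools.combinations(range(len(J_ops)), k):
        JS = sum(J_ops[z] for z in S)
        J = cp.Variable((8, 8), hermitian=True)           # CJ operator on A(x)B(x)C
        rho = cp.Variable((2, 2), hermitian=True)
        cons = [J >> 0, cp.trace(rho) == 1,
                cp.partial_trace(J, [dA, dB, dC], axis=2) == cp.kron(rho, I2)]
        val = cp.Problem(cp.Maximize(cp.real(cp.trace(JS @ J))), cons).solve()
        best = max(best, val)
    return best / 2                                       # factor 1/2 of Eq. 111

pw = [p_win(k) for k in range(1, 5)]
print(np.diff([0.0] + pw))     # the vector w_{T1,T2} of Eq. 110, before flattening
```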
In general, for different quantum causal maps \(\Phi\), we will have different probability vectors \(\mathbf{p}\) and \(\mathbf{q}\) with respect to interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\). To indicate their dependencies, we re-express them as \(\mathbf{p}(\Phi)\) and \(\mathbf{q}(\Phi)\); then the set \(\{\frac{1}{2}\mathbf{p}(\Phi)\oplus\frac{1}{2}\mathbf{q}(\Phi)\}_{\Phi}\) forms a subset of the probability simplex. Constructing the probability vector \(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}}\) is exactly the first step of finding the optimal bound for \(\{\frac{1}{2}\mathbf{p}(\Phi)\oplus\frac{1}{2}\mathbf{q}(\Phi)\}_{\Phi}\) (see Eq. 61 of Subsec. II.1), and the next step is the flatness process \(\mathcal{F}\) (see Def. II.4), which implies that
\[\vee\left\{\frac{1}{2}\mathbf{p}(\Phi)\oplus\frac{1}{2}\mathbf{q}(\Phi) \right\}_{\Phi}^{\downarrow}=\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{ T}_{2}}). \tag{115}\]
Writing the above result formally, we have the following theorem.

**Theorem II.8.** For any quantum causal map \(\Phi\in\mathfrak{F}_{2}\), the probability vectors \(\mathbf{p}\) and \(\mathbf{q}\) obtained by measuring \(\Phi\) with respect to interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\in\mathfrak{T}_{2}\) satisfy the following trade-off
\[\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\prec\mathcal{F}(\mathbf{w}_{ \mathcal{T}_{1},\mathcal{T}_{2}}), \tag{116}\]
where \(\mathcal{F}\) is the flatness process defined in Def. II.4. The vector \(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}}\) is defined as
\[\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}}:=(p_{\text{win},k}-p_{\text{win}, k-1})_{k=1}^{m+n}, \tag{117}\]
with \(p_{\text{win},0}:=0\) and \(p_{\text{win},k}\) solved by SDPs in the form of Eq. 111 for \(1\leqslant k\leqslant m+n\). Here the bound \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})\) is optimal, due to the structure of the majorization lattice: for any probability vector \(\mathbf{x}\) satisfying
\[\left\{\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\right\}_{\Phi}\prec \mathbf{x}, \tag{118}\]
we also have
\[\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})\prec\mathbf{x}. \tag{119}\]
Physically, the causal map independence of \(\mathbf{x}\) leads to a universal uncertainty relation \(\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\prec\mathbf{x}\), and \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})\) constructed here is the optimal one. Mathematically, \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})\) is the GLB (see Def. II.1) of all such \(\mathbf{x}\). In particular, taking any Schur-concave function \(f\), we have
\[f(\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q})\geqslant f(\mathcal{F}( \mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}}))\geqslant f(\mathbf{x}). \tag{120}\]
Here we would like to remark that permutations will induce an equivalence relation, denoted as \(\sim\), on the probability simplex. Given two vectors \(\mathbf{x}\) and \(\mathbf{y}\), we say that they are equivalent, written as
\[\mathbf{x}\sim\mathbf{y}, \tag{121}\]
if there exists a permutation matrix \(D\) such that
\[\mathbf{x}=\mathbf{y}\cdot D. \tag{122}\]
To illustrate \(\sim\), let us consider \((0.2,0.1,0.7)\) and \((0.7,0.2,0.1)\). Clearly, there exists a permutation matrix such that Eq. 122 holds, and hence
\[(0.2,0.1,0.7)\sim(0.7,0.2,0.1). \tag{123}\]
The equivalence class of \(\mathbf{x}\) under \(\sim\), denoted \([\mathbf{x}^{\downarrow}]\), is defined as
\[[\mathbf{x}^{\downarrow}]:=\{\mathbf{y}\,|\,\mathbf{y}\sim\mathbf{x}\}. \tag{124}\]
Thus, as a corollary of Thm. II.8, we obtain the following result about universal uncertainty relations.

**Corollary II.9.** Given a quantum causal map \(\Phi\in\mathfrak{F}_{2}\), let us denote the probability vectors obtained by measuring \(\Phi\) with respect to interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\in\mathfrak{T}_{2}\) as \(\mathbf{p}\) and \(\mathbf{q}\). Define the vector \(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}}\) as \((p_{\mathrm{win},k}-p_{\mathrm{win},k-1})_{k=1}^{m+n}\) with \(p_{\mathrm{win},0}:=0\) and \(p_{\mathrm{win},k}\) in the form of Eq. 111 for \(1\leqslant k\leqslant m+n\). Then for any \(\mathbf{p}^{{}^{\prime}}\in[\mathbf{p}^{\downarrow}]\), \(\mathbf{q}^{{}^{\prime}}\in[\mathbf{q}^{\downarrow}]\), and \(\mathbf{w}^{{}^{\prime}}\in[\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})]\), we have the following universal uncertainty relation
\[\frac{1}{2}\mathbf{p}^{{}^{\prime}}\oplus\frac{1}{2}\mathbf{q}^{{}^{\prime}} \prec\mathbf{w}^{{}^{\prime}}. \tag{125}\]
In particular, given any permutation matrices \(D_{1}\), \(D_{2}\), and \(D_{3}\), we also have
\[\frac{1}{2}(\mathbf{p}^{{}^{\prime}}\cdot D_{1})\oplus\frac{1}{2}(\mathbf{q}^ {{}^{\prime}}\cdot D_{2})\prec\mathbf{w}^{{}^{\prime}}\cdot D_{3}. \tag{126}\]
Moreover, if there exists a causal map independent probability vector \(\mathbf{x}\), such that
\[\left\{\frac{1}{2}\mathbf{p}^{{}^{\prime}}\oplus\frac{1}{2}\mathbf{q}^{{}^{ \prime}}\right\}_{\Phi}\prec\mathbf{x}, \tag{127}\]
then it is straightforward to see that
\[\mathbf{w}^{{}^{\prime}}\cdot D\prec\mathbf{x}, \tag{128}\]
holds for any permutation matrix \(D\).
We now move on to investigating the most general form of quantum roulette, where the player Bob can engineer any quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\) with finite integer \(a\). On top of that, the agent Alice will select an interactive measurement \(\mathcal{T}_{b}\) with \(b\in\{1,2,\ldots,c\}\) at random and probe \(\Phi\) with her choice of interactive measurement \(\mathcal{T}_{b}\). Without loss of generality, let us assume the quantum circuit fragment (see Def. I.4) provided by Bob is in the form of
\[\Phi=\Psi^{a}\circ\Psi^{a-1}\circ\cdots\circ\Psi^{2}(\rho)\in\mathfrak{F}_{a}. \tag{129}\]
Meanwhile, the interactive measurement \(\mathcal{T}_{b}:=\{\mathcal{T}_{b,x_{b}}\}_{x_{b}=1}^{m_{b}}\in\mathfrak{T}_{a}\) (\(b=1,2,\ldots,c\)) (see Def. I.5) consists of
\[\mathcal{T}_{b,x_{b}}(\cdot):=\mathrm{Tr}\big{[}M_{b,x_{b}}\cdot\Lambda_{b}^{ a-1}\circ\Lambda_{b}^{a-2}\circ\cdots\circ\Lambda_{b}^{1}(\cdot)\big{]}. \tag{130}\]
Here \(\{M_{b,x_{b}}\}_{x_{b}=1}^{m_{b}}\) forms a POVM for each \(b\in\{1,2,\ldots,c\}\). In this case, the layout of the roulette table consists of \(\sum_{b=1}^{c}m_{b}\) tuples \((b,x_{b})\), where \(b\in\{1,2,\ldots,c\}\) and \(x_{b}\in\{1,2,\ldots,m_{b}\}\). In particular, for Bob's quantum circuit fragment \(\Phi\), the tuple \((b,x_{b})\) happens with probability \(p_{x_{b}}(\Phi,\mathcal{T}_{b})/c\), where the quantity \(p_{x_{b}}(\Phi,\mathcal{T}_{b})\) is given by
\[p_{x_{b}}(\Phi,\mathcal{T}_{b}):=\mathrm{Tr}\big{[}M_{b,x_{b}}\cdot\Psi^{a} \circ\Lambda_{b}^{a-1}\circ\Psi^{a-1}\circ\Lambda_{b}^{a-2}\circ\cdots\circ \Psi^{2}\circ\Lambda_{b}^{1}(\rho)\big{]}. \tag{131}\]
Similar to the case of quantum causal maps, when starting with \(k\) chips, Bob's guessing strategy is characterized by \(c\) sets \(\mathcal{S}_{b}\subset\{1,\ldots,m_{b}\}\) such that
\[\sum_{b=1}^{c}|\mathcal{S}_{b}|=k, \tag{132}\]
and his maximum winning probability is given by
\[p_{\mathrm{win},k}(\Phi) =\max_{\sum_{b=1}^{c}|\mathcal{S}_{b}|=k}\sum_{x_{b}\in\mathcal{S} _{b}\subset\{1,\ldots,m_{b}\}}\frac{1}{c}\cdot p_{x_{b}}(\Phi,\mathcal{T}_{b}) \tag{133}\] \[=\max_{\sum_{b=1}^{c}|\mathcal{S}_{b}|=k}\sum_{x_{b}\in\mathcal{S} _{b}\subset\{1,\ldots,m_{b}\}}\frac{1}{c}\left(J^{\Phi}\star J_{x_{b}}^{ \mathcal{T}_{b}}\right), \tag{134}\]
where the operator \(J^{\Phi}\) and \(J_{x_{b}}^{\mathcal{T}_{b}}\) stand for the CJ operators of quantum circuit fragment \(\Phi\) and the component \(\mathcal{T}_{b,x_{b}}\) of interactive measurement \(\mathcal{T}_{b}\) respectively. To simplify our representation, we take the union of CJ operators \(J_{x_{b}}^{\mathcal{T}_{b}}\)
and denote it by \(\{J_{z}\}_{z}\), which is formally defined as
\[J_{z}:=J_{z-\sum_{j=1}^{b-1}m_{j}}^{\mathcal{T}_{b}},\quad\text{for} \ \sum_{j=1}^{b-1}m_{j}+1\leqslant z\leqslant\sum_{j=1}^{b}m_{j}, \tag{135}\]
and set \(J_{\mathcal{S}}\) as
\[J_{\mathcal{S}}:=\sum_{z\in\mathcal{S}}J_{z}, \tag{136}\]
where \(\mathcal{S}\) is a subset of \(\{1,\ldots,\sum_{b=1}^{c}m_{b}\}\). Before going further, let us consider an example of \(J_{\mathcal{S}}\). Assume \(m_{1}=m_{2}=m_{3}=3\), then \(J_{\{2,5,8\}}\) is an abbreviation of the following summation,
\[J_{\{2,5,8\}}=J_{2}+J_{5}+J_{8}=J_{2}^{\mathcal{T}_{1}}+J_{2}^{ \mathcal{T}_{2}}+J_{2}^{\mathcal{T}_{3}}. \tag{137}\]
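The bookkeeping of Eq. 135 is simply an offset computation; a small Python helper (the names are ours) recovers the pair \((b,x_{b})\) from the flattened index \(z\):

```python
import numpy as np

m = [3, 3, 3]                                  # outcome counts m_1, ..., m_c
offsets = np.concatenate(([0], np.cumsum(m)))

def split(z):
    """Map the flattened (1-based) index z of Eq. 135 to the pair (b, x_b)."""
    b = int(np.searchsorted(offsets, z, side='left'))
    return b, int(z - offsets[b - 1])

print([split(z) for z in (2, 5, 8)])           # [(1, 2), (2, 2), (3, 2)], cf. Eq. 137
```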
Thus, given quantum circuit fragment \(\Phi\), Bob's maximum winning probability, i.e. Eq. 134, can be simplified to the following form in terms of the CJ operators and link product,
\[p_{\text{win},k}(\Phi)=\frac{1}{c}\max_{|\mathcal{S}|=k}J^{\Phi} \star J_{\mathcal{S}}. \tag{138}\]
Denote the probability vector obtained by measuring \(\Phi\) with respect to interactive measurements \(\mathcal{T}_{b}\) as \(\mathbf{p}_{b}\), then the largest summation of \(k\) elements of \(\oplus_{b=1}^{c}\mathbf{p}_{b}/c\) is exactly \(p_{\text{win},k}(\Phi)\). Thus, written out explicitly, we have
\[\sum_{i=1}^{k}(\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b})_{i}^ {\downarrow}=p_{\text{win},k}(\Phi)=\frac{1}{c}\max_{|\mathcal{S}|=k}J^{\Phi} \star J_{\mathcal{S}}, \tag{139}\]
where the superscript \(\downarrow\) indicates that the corresponding vector is arranged in non-increasing order, and the subscript \(i\) means it is the \(i\)-th element of the vector. By maximizing over all quantum circuit fragments \(\Phi\in\mathfrak{F}_{a}\), we obtain the following fundamental limitation \(p_{\text{win},k}\) on Bob's winning probability, namely
\[p_{\text{win},k}:=\max_{\Phi\in\mathfrak{F}_{a}}p_{\text{win},k}( \Phi). \tag{140}\]
It hence follows that for any quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\) prepared by Bob, his winning probability with \(k\) chips is upper-bounded by \(p_{\text{win},k}\),
\[\sum_{i=1}^{k}(\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b})_{i}^ {\downarrow}=p_{\text{win},k}(\Phi)\leqslant p_{\text{win},k}, \tag{141}\]
for \(1\leqslant k\leqslant\sum_{b=1}^{c}m_{b}\). It is worth mentioning that the maximum winning probability \(p_{\text{win},k}\) for Bob is achievable. Then, by the definition of majorization (see Def. II.3), we can unify \(\sum_{b=1}^{c}m_{b}\) inequalities in the form of Eq. 141 into a single majorization inequality; that is
\[\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\prec\mathbf{w}_{ \mathcal{T}_{1},\ldots,\mathcal{T}_{c}}, \tag{142}\]
where the probability vector \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) relies only on the set of interactive measurements, i.e. \(\{\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c}\}\), and is independent of the quantum circuit fragment \(\Phi\) chosen by Bob. Formally, we define \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) as
\[\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}:=(w_{k})_{k=1}^{\sum_{b=1}^{c}m_{b}}=(w_{1},\ldots,w_{\sum_{b=1}^{c}m_{b}}), \tag{143}\]
where each component \(w_{k}\) is defined as
\[w_{k}:=p_{\text{win},k}-p_{\text{win},k-1}, \tag{144}\]
for \(1\leqslant k\leqslant\sum_{b=1}^{c}m_{b}\), and \(p_{\text{win},0}:=0\). Using the language of CJ operators, the maximum winning probability \(p_{\text{win},k}\) can be solved by using SDPs [6; 7],
\[p_{\text{win},k}=\frac{1}{c}\max_{|\mathcal{S}|=k}\max_{\Phi\in\mathfrak{F}_{a}}J^{\Phi}\star J_{\mathcal{S}}=\frac{1}{c}\max_{|\mathcal{S}|=k}\max\,\mathrm{Tr}[J(a)\cdot J_{\mathcal{S}}]\] \[\mathrm{s.t.}\quad J(a)\geqslant 0,\quad\mathrm{Tr}_{\mathcal{H}_{1}}[J(1)]=1,\] \[\mathrm{Tr}_{\mathcal{H}_{2i-1}}[J(i)]=\mathrm{Tr}_{\mathcal{H}_{2i-1}\mathcal{H}_{2i-2}}[J(i)]\otimes\frac{\mathds{1}_{\mathcal{H}_{2i-2}}}{d_{\mathcal{H}_{2i-2}}},\quad\text{for}\ 2\leqslant i\leqslant a, \tag{145}\]
where \(d_{\mathcal{H}_{2i-2}}:=\dim\mathcal{H}_{2i-2}\). From a causal perspective, the last restriction of Eq. 145 indicates that the quantum circuit fragment \(\Phi\) is NS from \(\Psi^{i}\) to \(\Psi^{i-1}\) for \(2\leqslant i\leqslant a\) with \(\Psi^{1}:=\rho\) (see Eq. 129). Now the universal uncertainty relation for multiple interactive measurements can be formulated as
For any quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\), the probability vectors \(\mathbf{p}_{b}\) obtained by measuring \(\Phi\) with respect to interactive measurements \(\mathcal{T}_{b}\in\mathfrak{T}_{a}\) (\(b\in\{1,2,\ldots,c\}\)) satisfy the following trade-off
\[\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\prec\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}=(p_{\text{win},k}-p_{\text{win},k-1})_{k=1}^{\sum_{b=1}^{c}m_{b}}, \tag{147}\]
where \(p_{\text{win},0}:=0\), and for \(1\leqslant k\leqslant\sum_{b=1}^{c}m_{b}\), the quantity \(p_{\text{win},k}\) is characterized by Eq. 145. Similar to the case of causal maps (see Eq. 111), the optimization problem for \(p_{\text{win},k}\) is a semidefinite program (SDP), and hence can be solved efficiently in polynomial time (up to the desired accuracy) by using the ellipsoid method or an interior point method [34, 35].
As discussed in Subsec. II.1, constructing \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) is the first step of formulating the optimal bound of \(\left\{\oplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\right\}_{\Phi}\); the next step is applying the flatness process \(\mathcal{F}\), which yields the following optimal universal uncertainty relation.

**Theorem II.11**.: **Optimal Universal Uncertainty Relation for Multiple Interactive Measurements**

For any quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\), the probability vectors \(\mathbf{p}_{b}\) obtained by measuring \(\Phi\) with respect to interactive measurements \(\mathcal{T}_{b}\in\mathfrak{T}_{a}\) (\(b\in\{1,2,\ldots,c\}\)) satisfy the trade-off

\[\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\prec\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}), \tag{148}\]
where \(\mathcal{F}\) is the flatness process defined in Def. II.4. The vector \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) is defined as
\[\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}:=(p_{\text{win},k}-p_{\text{win},k-1})_{k=1}^{\sum_{b=1}^{c}m_{b}}, \tag{149}\]
with \(p_{\text{win},0}:=0\) and \(p_{\text{win},k}\) is solved by SDPs in the form of Eq. 145 for \(1\leqslant k\leqslant\sum_{b=1}^{c}m_{b}\). Here the bound \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\) is optimal, due to the structure of majorization lattice, namely for any probability vector \(\mathbf{x}\) satisfying
\[\left\{\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\right\}_{\Phi}\prec \mathbf{x}, \tag{150}\]
then we also have
\[\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\prec \mathbf{x}. \tag{151}\]
Physically, the quantum circuit fragment independence of \(\mathbf{x}\) leads to a universal uncertainty relation \(\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\prec\mathbf{x}\), and \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\) constructed here is the optimal one. Mathematically, \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\) is the GLB (see Def. II.1) for all such \(\mathbf{x}\). In particular, taking any Schur-concave function \(f\), we have
\[f(\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b})\geqslant f(\mathcal{F}( \mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}))\geqslant f(\mathbf{x}). \tag{152}\]
From the equivalence relation \(\sim\) based on permutations, we know that permutations do not change the order of majorization. As an illustration, consider two equivalence classes \([\mathbf{x}]\) and \([\mathbf{y}]\). If \(\mathbf{x}\prec\mathbf{y}\), then it is straightforward to check that \(\mathbf{x}\cdot D_{1}\prec\mathbf{y}\cdot D_{2}\) holds for any permutation matrices \(D_{1}\) and \(D_{2}\). Using this insight, we obtain the following corollary.
**Corollary II.12**.: Given a quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\), let us denote the probability vectors obtained by measuring \(\Phi\) with respect to interactive measurements \(\mathcal{T}_{b}\in\mathfrak{T}_{a}\) (\(b\in\{1,2,\ldots,c\}\)) as \(\mathbf{p}_{b}\). Define the vector \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) as \((p_{\text{win},k}-p_{\text{win},k-1})_{k=1}^{\sum_{b=1}^{c}m_{b}}\) with \(p_{\text{win},0}:=0\) and \(p_{\text{win},k}\) in the form of Eq. 145 for \(1\leqslant k\leqslant\sum_{b=1}^{c}m_{b}\). Then for any \(\mathbf{p}_{b}'\in[\mathbf{p}_{b}]\) and \(\mathbf{w}'\in[\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})]\), we have the following universal uncertainty relation
\[\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}'\prec\mathbf{w}'. \tag{152}\]
In particular, given any permutation matrices \(D_{1},D_{2},\ldots,D_{c+1}\), we also have
\[\bigoplus_{b=1}^{c}\frac{1}{c}(\mathbf{p}_{b}'\cdot D_{b})\prec\mathbf{w}'\cdot D_{c+1}. \tag{153}\]
Moreover, if there exists a quantum circuit fragment independent probability vector \(\mathbf{x}\), such that
\[\left\{\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}'\right\}_{\Phi}\prec\mathbf{x}, \tag{154}\]
then it is straightforward to see that
\[\mathbf{w}^{{}^{\prime}}\cdot D\prec\mathbf{x}, \tag{155}\]
holds for any permutation matrix \(D\).
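As a computational aside (our own sketch, not from the text above), majorization relations such as Eqs. 152-155 can be verified numerically from the partial sums of the non-increasing rearrangements; sorting first makes the check insensitive to the permutation matrices \(D_{b}\).

```python
import numpy as np

# Small helper (ours) for checking majorization x ≺ y via partial sums of
# the non-increasing rearrangements, cf. Def. II.3.
def majorized_by(x, y, tol=1e-12):
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))

x, y = [0.5, 0.3, 0.2], [0.7, 0.2, 0.1]
assert majorized_by(x, y)
# Sorting makes the check permutation-invariant, as in Cor. II.12:
assert majorized_by(np.random.permutation(x), np.random.permutation(y))
print("majorization checks passed")
```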
So far we have discussed the formulation of the universal uncertainty relation and its intrinsic connection with the quantum roulette; the advantages of this formulation are just as important. One of the greatest advantages of Eq. 147 is its universality. Here we address the question of how universal our framework is, and highlight the generality of our uncertainty relations.
The uncertainty principle in quantum mechanics has revolutionized our view of the physical world. Roughly speaking, it captures the uncertainty trade-off between observables associated with a static object: quantum states. However, systems in quantum mechanics evolve. Hence, quantum dynamics with definite causal order, known as quantum circuit fragments in this work, emerge as a central ingredient in state-of-the-art quantum technologies. Our framework extends the uncertainty principle to incorporate quantum dynamics, allowing for any quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\). In particular, our framework, including the theorems (Thms. II.10 and II.11) and the corollary (Cor. II.12), does not rely on a particular choice of quantum circuit fragment. By employing our theory, one can build uncertainty relations for any quantum dynamics with definite causal order; e.g., based upon the universal uncertainty relation for quantum causal maps, i.e. \(\Phi\in\mathfrak{F}_{2}\) (see Thms. II.7 and II.8), we can derive the causal uncertainty relation, which characterizes the fundamental trade-off between causal structures, such as direct-cause and common-cause in causal inference. Besides causality, it would also be interesting to apply our framework to other physical properties of quantum dynamics, such as completely positive divisibility, non-Markovianity, and so forth, and to identify the corresponding trade-offs. This could be a good topic for future work.
Finally, we ask whether the uncertainty relation \(f(\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b})\geqslant f(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}))\) induced by some Schur-concave function \(f\) is optimal. Unfortunately, we cannot guarantee the optimality of \(f(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}))\) for a specific uncertainty measure \(f\). There might exist a quantum circuit fragment independent vector \(\mathbf{x}\) such that
\[f(\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b})\geqslant f( \mathbf{x})\geqslant f(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots, \mathcal{T}_{c}})). \tag{156}\]
Remark that Eq. 156 only holds for a specific \(f\), and is not equivalent to \(\oplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\prec\mathbf{x}\prec\mathcal{F}( \mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\). In this case, there must exist another uncertainty measure \(g\), leading to the following uncertainty relation
\[g(\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b})\geqslant g( \mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}))\geqslant g (\mathbf{x}). \tag{157}\]
This in turn implies that, as a lower bound of the uncertainty relation, the quantity \(g(\mathbf{x})\) is weaker than \(g(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}))\). From the above discussion, it is clear that there does not exist another circuit fragment independent vector \(\mathbf{x}\) that always outperforms our bound \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\). Actually, if there exists a quantum circuit fragment independent vector \(\mathbf{x}\) such that Eq. 156 holds for any uncertainty measure \(f\), i.e. any Schur-concave function, then Lem. II.6 immediately implies that
\[\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\prec\mathbf{x}\prec\mathcal{F}( \mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}), \tag{158}\]
holds for any \(\Phi\in\mathfrak{F}_{a}\). Since the vector \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\) is the LUB of the set \(\left\{\oplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\right\}_{\Phi}\), we then find that the reverse is also true, i.e. \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\prec\mathbf{x}\). Therefore, they are equivalent under permutations, namely
\[\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\sim\mathbf{ x}, \tag{159}\]
or there exists a permutation matrix \(D\) such that
\[\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})=\mathbf{x} \cdot D, \tag{160}\]
which implies
\[f(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}))=f( \mathbf{x}), \tag{161}\]
holds for any Schur-concave function \(f\). Put simply, the completeness of the majorization lattice guarantees the optimality of our bound, and there does not exist another circuit fragment independent vector that outperforms \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\).
### Lemma 1 and Theorem 1 of the Main Text: Their Proofs, Improvements, and Generalizations
In this subsection, we prove the results introduced in the main text of our work, including Lemma 1 and Theorem 1. Let us begin with Lemma 1 of the main text, namely
**Corollary II.13**.: **Lemma 1 of the Main Text**
Collect the probability distributions obtained by implementing interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) on some dynamical process \(\Phi\) as \(\mathbf{p}\) and \(\mathbf{q}\), then there exists a probability vector \(\mathbf{v}(\mathcal{T}_{1},\mathcal{T}_{2})\) such that
\[\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\prec\mathbf{v}(\mathcal{T}_{ 1},\mathcal{T}_{2}). \tag{162}\]
The vector-type bound
\[\mathbf{v}(\mathcal{T}_{1},\mathcal{T}_{2}):=\mathbf{w}_{\mathcal{T}_{1}, \mathcal{T}_{2}}, \tag{163}\]
is independent of the dynamical process \(\Phi\) being measured, and hence captures the essential incompatibility between \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\).
Proof.: By restricting Thm. II.10 to the case of two interactive measurements, i.e. \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\), we immediately obtain Eq. 162, where the upper bound under majorization is completely characterized by the winning probability of the quantum roulette.
From the algebraic structure of majorization lattice (see Subsec. II.1), we know that the upper bound \(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}}\) provided by Cor. II.13 (or originally by Thm. II.10) may not be optimal. Thanks to Thm. II.11, the optimal bound for \(\mathbf{p}/2\oplus\mathbf{q}/2\) under majorization can be obtained by further applying the flatness process \(\mathcal{F}\) (see Def. II.4); that is
**Corollary II.14**.: **Improved Lemma 1 of the Main Text**
Collect the probability distributions obtained by implementing interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) on some dynamical process \(\Phi\) as \(\mathbf{p}\) and \(\mathbf{q}\), then there exists a probability vector \(\mathbf{v}(\mathcal{T}_{1},\mathcal{T}_{2})\) such that
\[\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\prec\mathbf{v}(\mathcal{T}_{1 },\mathcal{T}_{2}). \tag{164}\]
The vector-type bound
\[\mathbf{v}(\mathcal{T}_{1},\mathcal{T}_{2}):=\mathcal{F}(\mathbf{w}_{ \mathcal{T}_{1},\mathcal{T}_{2}}), \tag{165}\]
is independent of the dynamical process \(\Phi\) being measured, and hence captures the essential incompatibility between \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\). Here \(\mathcal{F}\) is the flatness process defined in Def. II.4. Remark that, the bound \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})\) is optimal: for any probability vector \(\mathbf{x}\) satisfying
\[\left\{\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q}\right\}_{\Phi}\prec \mathbf{x}, \tag{166}\]
then we also have
\[\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})\prec\mathbf{x}. \tag{167}\]
Physically, the quantum circuit fragment independence of \(\mathbf{x}\) leads to a universal uncertainty relation \(\mathbf{p}/2\oplus\mathbf{q}/2\prec\mathbf{x}\), and \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})\) constructed here is the optimal one. Mathematically, \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})\) is the GLB (see Def. II.1) for all such \(\mathbf{x}\). In particular, taking any Schur-concave function \(f\), we have
\[f(\frac{1}{2}\mathbf{p}\oplus\frac{1}{2}\mathbf{q})\geqslant f(\mathcal{F}( \mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}}))\geqslant f(\mathbf{x}). \tag{168}\]
We now move on to demonstrating a generalized entropic uncertainty relation with multiple interactive measurements, which includes Theorem 1 of the main text as a special case. Moreover, compared with the bound \(C(\mathcal{T}_{1},\mathcal{T}_{2}):=2H(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}})-2\) offered in the main text, an improved entropic uncertainty relation is also provided. We start by writing down a direct consequence of Thm. II.10:
**Theorem II.15**.: **Entropic Uncertainty Relation for Measurements with Interventions**

Given \(c\) interactive measurements \(\mathcal{T}_{b}\in\mathfrak{T}_{a}\) (\(b\in\{1,2,\ldots,c\}\)) acting on some quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\). The entropy of their measurement outcomes, when summed, satisfies

\[\sum_{b=1}^{c}H(\mathcal{T}_{b})_{\Phi}\geqslant C(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c}):=cH(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})-c\log c, \tag{169}\]

where \(H(\mathcal{T}_{b})_{\Phi}:=H(\mathbf{p}_{b})\), with \(\mathbf{p}_{b}\) denoting the probability vector obtained by measuring \(\Phi\) with respect to the interactive measurement \(\mathcal{T}_{b}\).

Proof.: From Thm. II.10, we have

\[\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}\prec\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}. \tag{170}\]
Then, by applying Shannon entropy \(H\), it now follows immediately that
\[H(\bigoplus_{b=1}^{c}\frac{1}{c}\mathbf{p}_{b}) =-\sum_{b,x_{b}}\frac{p_{x_{b}}(\Phi,\mathcal{T}_{b})}{c}\log\frac{ p_{x_{b}}(\Phi,\mathcal{T}_{b})}{c} \tag{171}\] \[=-\frac{1}{c}\sum_{b,x_{b}}p_{x_{b}}(\Phi,\mathcal{T}_{b})\log p_{ x_{b}}(\Phi,\mathcal{T}_{b})+\frac{\sum_{b,x_{b}}p_{x_{b}}(\Phi,\mathcal{T}_{b})}{c}\log c\] (172) \[=\frac{1}{c}\sum_{b=1}^{c}H(\mathcal{T}_{b})_{\Phi}+\log c\] (173) \[\geqslant H(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}), \tag{174}\]
where the second equation comes from the fact that, for each \(b\in\{1,2,\ldots,c\}\), we have \(\sum_{x_{b}}p_{x_{b}}(\Phi,\mathcal{T}_{b})=1\), and thus \(\sum_{b}\sum_{x_{b}}p_{x_{b}}(\Phi,\mathcal{T}_{b})=c\). The third equation follows from the definition, i.e.
\[H(\mathcal{T}_{b})_{\Phi}:=H(\mathbf{p}_{b})=-\sum_{x_{b}}p_{x_{b}}(\Phi, \mathcal{T}_{b})\log p_{x_{b}}(\Phi,\mathcal{T}_{b}). \tag{175}\]
Note that throughout this supplemental material, all logarithms are base 2. Now we have
\[\sum_{b=1}^{c}H(\mathcal{T}_{b})_{\Phi}\geqslant c\left(H(\mathbf{w}_{ \mathcal{T}_{1},\ldots,\mathcal{T}_{c}})-\log c\right)=cH(\mathbf{w}_{ \mathcal{T}_{1},\ldots,\mathcal{T}_{c}})-c\log c. \tag{176}\]
By further defining
\[C(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c}):=cH(\mathbf{w}_{ \mathcal{T}_{1},\ldots,\mathcal{T}_{c}})-c\log c, \tag{177}\]
we obtain the entropic uncertainty relation of Eq. 169, as required. Remark that, as a probability vector with \(\sum_{b=1}^{c}m_{b}\) components, our bound \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) is bounded by
\[\underbrace{\left(\frac{1}{\sum_{b=1}^{c}m_{b}},\ldots,\frac{1}{\sum_{b=1}^{c}m_{b}}\right)}_{\sum_{b=1}^{c}m_{b}}\prec\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\prec(\underbrace{\frac{1}{c},\ldots,\frac{1}{c}}_{c},\underbrace{0,\ldots,0}_{\sum_{b=1}^{c}m_{b}-c}), \tag{178}\]

where the underbraces in Eq. 178 indicate the number of elements inside each vector. Hence
\[H(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\geqslant\log c, \tag{179}\]
and the bound \(C(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c})=cH(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})-c\log c\) is non-negative. The majorization upper bound of Eq. 178 is achieved, i.e. \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}=(1/c,\ldots,1/c,0,\ldots,0)\), if and only if \((\mathbf{p}_{b})_{1}^{\downarrow}=1\) for all \(b\in\{1,2,\ldots,c\}\); equivalently, the bound \(C(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c})\) is zero whenever these interactive measurements \(\{\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c}\}\) have a common eigencircuit \(\Phi\in\mathfrak{F}_{a}\) (see Def. I.6). According to the definition of \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\), we know that \(C(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c})\) is independent of the quantum circuit fragment \(\Phi\), and hence quantifies the inherent incompatibility between interactive measurements. Eq. 145 further ensures that \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) (and thus \(C(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c})\)) can be explicitly computed.
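As an illustration (ours, reusing the hypothetical qubit \(X\)/\(Z\) example from the sketch after Eq. 145), once \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) is in hand, the bound \(C\) of Eq. 169 is a one-line computation.

```python
import numpy as np

# Our own illustration: evaluating the bound of Eq. 169, using the
# hypothetical X/Z qubit example above (c = 2 measurements).
def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

w = [0.5, (1 + 1/np.sqrt(2))/2 - 0.5, 1 - (1 + 1/np.sqrt(2))/2, 0.0]
C = 2 * H(w) - 2 * np.log2(2)  # c*H(w) - c*log c with c = 2, cf. Eq. 177
print(C)   # approx 0.87: strictly positive, since X and Z share no eigenstate
```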
Thm. II.15 forms a fundamental limitation on incompatible interactive measurements in terms of Shannon entropy, covering all quantum dynamics with definite causal order (called quantum circuit fragments in this work; see Def. I.4 for more details). In fact, Eq. 169 works for quantum circuit fragments \(\Phi\in\mathfrak{F}_{a}\) with arbitrary \(a\) and an arbitrary number of interactive measurements, and hence generates an infinite number of entropic uncertainty relations. As a special case, we consider the situation where \(c=2\), i.e. a pair of interactive measurements, and straightforwardly obtain the following corollary, which is exactly Theorem 1 of the main text.
**Corollary II.16**.: **Theorem 1 of the Main Text**

Given two interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) acting on some quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\). The entropy of their measurement outcomes, when summed, satisfies
\[H(\mathcal{T}_{1})_{\Phi}+H(\mathcal{T}_{2})_{\Phi}\geqslant C(\mathcal{T}_{1}, \mathcal{T}_{2}), \tag{180}\]
where \(H(\mathcal{T}_{b})_{\Phi}:=H(\mathbf{p}_{b})\), with \(\mathbf{p}_{b}\) denoting the probability vector obtained by measuring \(\Phi\) with respect to the interactive measurement \(\mathcal{T}_{b}\) (\(b=1,2\)). The bound
\[C(\mathcal{T}_{1},\mathcal{T}_{2}):=2H(\mathbf{w}_{\mathcal{T}_{1},\mathcal{T }_{2}})-2, \tag{181}\]
which measures incompatibility between \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\), is non-negative and independent of \(\Phi\). \(C(\mathcal{T}_{1},\mathcal{T}_{2})\) can be explicitly computed and is strictly non-zero whenever these interactive measurements have no common eigencircuit.
Here we have two remarks: First, when the quantum circuit fragment \(\Phi\) degenerates to the case of a quantum channel, we can simply adopt the framework introduced in [25] to construct \(C(\mathcal{T}_{1},\mathcal{T}_{2})\), which includes a Maassen-Uffink type uncertainty relation. However, in such a case, the intervention is absent. Second, a variety of different forms of entropic uncertainty relations exist to date [33], but none of them surpasses all the others. Among them, the form derived from the universal uncertainty relation outperforms the other forms in a large number of instances. For readers unfamiliar with entropic uncertainty relations and majorization, recent works and overviews exist (e.g., Refs. [21; 23; 33]).
If the vector \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) is not arranged in non-increasing order, then the flatness process \(\mathcal{F}\) will further improve the result [20]. Recall the example of \(S=\{\mathbf{x}=(0.6,0.15,0.15,0.1),\mathbf{y}=(0.5,0.25,0.2,0.05)\}\) considered in Subsec. II.1, where
\[\mathcal{F}(\mathbf{b}_{S})=(0.6,0.175,0.175,0.05)\prec\mathbf{b}_{S}=(0.6,0.1 5,0.2,0.05). \tag{182}\]
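For concreteness, the following minimal sketch implements one standard flattening construction on the majorization lattice (we assume here that Def. II.4 follows this averaging procedure; the code is our own illustration) and reproduces Eq. 182 exactly.

```python
import numpy as np

# Sketch (ours) of a flattening construction: repeatedly locate the first
# ordering violation and replace the smallest enclosing block by its average.
def flatness(b):
    b = list(map(float, b))
    while True:
        viol = [j for j in range(len(b) - 1) if b[j] < b[j + 1]]
        if not viol:
            return b
        j = viol[0]
        i = j
        # extend the averaging window left until the result stays ordered
        while True:
            avg = sum(b[i:j + 2]) / (j + 2 - i)
            if i == 0 or b[i - 1] >= avg:
                break
            i -= 1
        b[i:j + 2] = [avg] * (j + 2 - i)

print(flatness([0.6, 0.15, 0.2, 0.05]))  # [0.6, 0.175, 0.175, 0.05], cf. Eq. 182
```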
Using this insight, we can employ Eq. 148 instead of Eq. 147 to formulate the entropic uncertainty relation. Thus, written out explicitly, we have
**Theorem II.17**.: **Improved Entropic Uncertainty Relation for Measurements with Interventions**
Given \(c\) interactive measurements \(\mathcal{T}_{b}\in\mathfrak{T}_{a}\) (\(b\in\{1,2,\ldots,c\}\)) acting on some quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\).
The entropy of their measurement outcomes, when summed, satisfies
\[\sum_{b=1}^{c}H(\mathcal{T}_{b})_{\Phi}\geqslant C(\mathcal{T}_{1},\mathcal{T }_{2},\ldots,\mathcal{T}_{c}):=cH(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1}, \ldots,\mathcal{T}_{c}}))-c\log c, \tag{183}\]
where \(H(\mathcal{T}_{b})_{\Phi}:=H(\mathbf{p}_{b})\), with \(\mathbf{p}_{b}\) denoting the probability vector obtained by measuring \(\Phi\) with respect to the interactive measurement \(\mathcal{T}_{b}\), and \(\mathcal{F}\) stands for the flatness process defined in Def. II.4. The bound \(C(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c})\), measuring the incompatibility between the interactive measurements \(\mathcal{T}_{b}\), is non-negative and independent of \(\Phi\). \(C(\mathcal{T}_{1},\mathcal{T}_{2},\ldots,\mathcal{T}_{c})\) can be explicitly computed and is strictly non-zero whenever these interactive measurements have no common eigencircuit.
Thm. II.17 follows directly by replacing \(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}\) with \(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\) in the proof of Thm. II.15. Thanks to the property of flatness process \(\mathcal{F}\), now we have
\[\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}})\prec\mathbf{ w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}, \tag{184}\]
so that
\[H(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}))\geqslant H (\mathbf{w}_{\mathcal{T}_{1},\ldots,\mathcal{T}_{c}}). \tag{185}\]
Written in full, that is
\[\sum_{b=1}^{c}H(\mathcal{T}_{b})_{\Phi}\geqslant cH(\mathcal{F}(\mathbf{w}_{ \mathcal{T}_{1},\ldots,\mathcal{T}_{c}}))-c\log c\geqslant cH(\mathbf{w}_{ \mathcal{T}_{1},\ldots,\mathcal{T}_{c}})-c\log c. \tag{186}\]
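Continuing the example of Eq. 182, a quick numerical check (our own) confirms the entropy improvement of Eq. 186 for that vector.

```python
import numpy as np

# Quick check (ours): the flattened vector of Eq. 182 is majorized by the
# original bound, so it carries at least as much Shannon entropy (Eq. 186).
def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

print(H([0.6, 0.175, 0.175, 0.05]) >= H([0.6, 0.15, 0.2, 0.05]))  # True
```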
In particular, given two interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\), we would like to show that
**Corollary II.18**.: **Improved Theorem 1 of the Main Text**
Given two interactive measurements \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\) acting on some quantum circuit fragment \(\Phi\in\mathfrak{F}_{a}\). The entropy of their measurement outcomes, when summed, satisfies
\[H(\mathcal{T}_{1})_{\Phi}+H(\mathcal{T}_{2})_{\Phi}\geqslant C(\mathcal{T}_{1 },\mathcal{T}_{2}), \tag{187}\]
where \(H(\mathcal{T}_{b})_{\Phi}:=H(\mathbf{p}_{b})\), with \(\mathbf{p}_{b}\) denoting the probability vector obtained by measuring \(\Phi\) with respect to the interactive measurement \(\mathcal{T}_{b}\) (\(b=1,2\)). The bound
\[C(\mathcal{T}_{1},\mathcal{T}_{2}):=2H(\mathcal{F}(\mathbf{w}_{\mathcal{T}_{1 },\mathcal{T}_{2}}))-2\log 2, \tag{188}\]
which measures incompatibility between \(\mathcal{T}_{1}\) and \(\mathcal{T}_{2}\), is non-negative and independent of \(\Phi\). \(C(\mathcal{T}_{1},\mathcal{T}_{2})\) can be explicitly computed and is strictly non-zero whenever these interactive measurements have no common eigencircuit.
Similar to Eq. 186, we have the following relation between Cor. II.18 and Cor. II.16 (i.e. Theorem 1 of the main text),
\[H(\mathcal{T}_{1})_{\Phi}+H(\mathcal{T}_{2})_{\Phi}\geqslant 2H(\mathcal{F}( \mathbf{w}_{\mathcal{T}_{1},\mathcal{T}_{2}}))-2\log 2\geqslant 2H(\mathbf{w}_{ \mathcal{T}_{1},\mathcal{T}_{2}})-2\log 2. \tag{189}\]
To summarize, in this subsection, Thm. II.15 introduces the entropic uncertainty relation for arbitrary quantum circuit fragments with multiple interactive measurements, which includes Cor. II.16 (namely, Theorem 1 of the main text) as a special case. By applying the flatness process, Thm. II.17 and Cor. II.18 further improve the results presented in Thm. II.15 and Cor. II.16 respectively.
## III Causal Uncertainty Relation
In this section, we extend the uncertainty principle from studying observables for quantum states to inferring causal structures of quantum dynamics, providing a lower bound for the fundamental trade-off between maximal common-cause indicator and the maximal direct-cause indicator. Such a trade-off is called causal uncertainty relation in this work. Specifically, the causal uncertainty relations in terms of both Shannon entropy and majorization are provided. We further infer the causality associated with system-environment unitary dynamics by utilizing entropic causal uncertainty relation. In Subsec. III.1, we introduce the concepts of maximal common-cause indicator and maximal direct-cause indicator, and show that it is possible to formulate a universal uncertainty relation for these two families of interactive measurements. As a special case, we obtain the entropic uncertainty relation for maximal common-cause indicators and maximal direct-cause indicators, characterizing the incompatibility between common-cause and direct-cause in quantum causal inference. We then discuss the necessary and sufficient conditions for system-environment unitary dynamics to be purely common-cause and purely direct-cause, and further detail the application of our causal uncertainty relation to inferring causality in Subsec. III.2. Finally, in Subsec. III.3, we demonstrate that the parameterized quantum circuit considered in our main text contains a coherent superpositions of common-cause and direct-cause as a special case.
### Uncertainty Relation for Common-Cause and Direct-Cause: Eq. 3 of the Main Text and Its Extension
Physicists have studied the fundamental limitations of observable pairs for quantum states, such as the position and momentum of a particle, the phase and excitation number of a harmonic oscillator, and the orthogonal components of spin angular momentum. Even with a complete description of the quantum state, it is impossible to predict the outcomes of these observable pairs. Such a restriction leads to a key principle in quantum mechanics, known as Heisenberg's uncertainty principle. However, less is known about the intrinsic constraints on the underlying physical properties of quantum dynamics. In this subsection, we give a complete characterization of the uncertainty trade-off between common-cause and direct-cause, establishing uncertainty relations for the maximal common-cause indicator and the maximal direct-cause indicator. Unlike previous studies of static objects (e.g. quantum states), here we focus on the dynamical process of quantum causal maps, i.e. \(\mathfrak{F}_{2}\) (see Subsec. I.4).
Many types of interactive measurements can be used to extract information about the dynamical properties of quantum causal maps. For the purpose of causal inference, which of them are likely to be most useful to us? And for these useful interactive measurements, how do we characterize their incompatibility and formulate the corresponding trade-off between them? To address these questions and explain our uncertainty relation for common-cause and direct-cause, let us first recall two families of interactive measurements introduced in the main text: the set of all maximal common-cause indicators \(\mathcal{M}_{\mathrm{CC}}\subset\mathfrak{T}_{2}\) and the set of all maximal direct-cause indicators \(\mathcal{M}_{\mathrm{DC}}\subset\mathfrak{T}_{2}\) (see Fig. 4 of the main text for an illustration). For consistency, we use the same notation as in Subsec. I.4; in particular, a causal map \(\Phi\in\mathfrak{F}_{2}\) is a linear map from system \(B\) to systems \(A\) and \(C\). Without loss of generality, we assume \(d_{A}=d_{B}=d_{C}=d\) in this work. Formally, the set \(\mathcal{M}_{\mathrm{CC}}\) is defined as
**Definition III.1**.: **Maximal Common-Cause Indicator**

An interactive measurement \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}):=\{\mathcal{T}_{\mathrm{CC},i}(\mathcal{U}_{1},\mathcal{U}_{2})\}_{i}\in\mathfrak{T}_{2}\) is called a maximal common-cause indicator, if

\[\mathcal{T}_{\mathrm{CC},i}(\mathcal{U}_{1},\mathcal{U}_{2})(\cdot)=\mathrm{Tr}\bigg{[}\Phi_{i}\cdot\mathcal{U}_{2}(\cdot)\otimes\frac{\mathds{1}_{B}}{d}\otimes\mathcal{U}_{1}(\cdot)\bigg{]}, \tag{190}\]
where \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\) are some local unitary channels acting on systems \(A\) and \(C\) respectively, namely \(\mathcal{U}_{b}(\cdot)=U_{b}(\cdot)U_{b}^{\dagger}\) (\(b=1,2\)), with \(\left|\Phi_{1}\right\rangle:=\left|\phi^{+}\right\rangle=\sum_{k=0}^{d-1}\left| kk\right\rangle/\sqrt{d}\) being the maximally entangled state. Measurements are done with respect to a maximally entangling basis \(\{\Phi_{i}\}_{i}\) with \(d^{2}\) possible outcomes. Denote the collection of all maximal common-cause indicators as \(\mathcal{M}_{\mathrm{CC}}\).
Remark that, in the main text of our work, we simply denote the elements of \(\mathcal{M}_{\mathrm{CC}}\) as \(\mathcal{T}_{1}\). But, to identify the dependence and causation, we keep all the subscripts and arguments of interactive measurements in this Supplemental Material. On the other hand, the maximal direct-cause indicator is defined as
**Definition III.2**.: **Maximal Direct-Cause Indicator**

An interactive measurement \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}):=\{\mathcal{T}_{\mathrm{DC},j}(\mathcal{U}_{3},\mathcal{U}_{4})\}_{j}\in\mathfrak{T}_{2}\) is called a maximal direct-cause indicator, if

\[\mathcal{T}_{\mathrm{DC},j}(\mathcal{U}_{3},\mathcal{U}_{4})(\cdot)=\mathrm{Tr}[\Phi_{j}\cdot\mathcal{U}_{4}(\cdot)\otimes\mathcal{U}_{3}(\Phi_{1})\otimes\mathrm{Tr}_{A}(\cdot)], \tag{191}\]

where \(\mathcal{U}_{3}\) and \(\mathcal{U}_{4}\) are local unitary channels acting on systems \(B\) and \(C\) respectively, namely \(\mathcal{U}_{b}(\cdot)=U_{b}(\cdot)U_{b}^{\dagger}\) (\(b=3,4\)). \(\left|\Phi_{1}\right\rangle:=\left|\phi^{+}\right\rangle=\sum_{k=0}^{d-1}\left|kk\right\rangle/\sqrt{d}\) is the maximally entangled state acting on systems \(BR\). Measurements are done with respect to a maximally entangling basis \(\{\Phi_{j}\}_{j}\) with \(d^{2}\) possible outcomes. Denote the collection of all maximal direct-cause indicators as \(\mathcal{M}_{\mathrm{DC}}\).
The circuit illustrations of Def. III.1 and Def. III.2 are given by Figs. 4a and 4b of the main text. The principal objective of this subsection is to characterize the fundamental trade-off between \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{ \mathrm{CC}}\) and \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\in\mathcal{M}_{ \mathrm{DC}}\), and formulate their uncertainties in terms of Shannon entropy. To simplify our discussion and gain some intuitions, here we mainly focus on the qubit case, where \(d_{A}=d_{B}=d_{C}=2\). The general qudit case can be obtained by a straightforward extension of the procedure.
Before stating our uncertainty relation for common-cause and direct-cause, let us first introduce some notation related to the maximal common-cause indicator and the maximal direct-cause indicator. Denote the probability vectors obtained by measuring the causal map \(\Phi\) with respect to the interactive measurements \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\) and \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\) as \(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}=(p_{i}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi})_{i}\) and \(\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}=(q_{j}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi})_{j}\), where each \(p_{i}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\) and \(q_{j}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\) is given by
\[p_{i}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi} =\mathcal{T}_{\mathrm{CC},i}(\mathcal{U}_{1},\mathcal{U}_{2})( \Phi)=J^{\Phi}\star J_{i}^{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}), \tag{192}\] \[q_{j}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi} =\mathcal{T}_{\mathrm{DC},j}(\mathcal{U}_{3},\mathcal{U}_{4})( \Phi)=J^{\Phi}\star J_{j}^{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}). \tag{193}\]
Here \(J_{i}^{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\) and \(J_{j}^{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\) are CJ operators of the measuring processes \(\mathcal{T}_{\mathrm{CC},i}(\mathcal{U}_{1},\mathcal{U}_{2})\) and \(\mathcal{T}_{\mathrm{DC},j}(\mathcal{U}_{3},\mathcal{U}_{4})\) respectively. The probability \(p_{i}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\) is a functional of unitary channels \(\mathcal{U}_{1}\), \(\mathcal{U}_{2}\), and the causal map \(\Phi\). Similarly, the probability \(q_{j}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\) is a functional of unitary channels \(\mathcal{U}_{3}\), \(\mathcal{U}_{4}\), and the causal map \(\Phi\). In the case of qubit systems, the CJ operators of \(\mathcal{T}_{\mathrm{CC},i}(\mathcal{U}_{1},\mathcal{U}_{2})\) and \(\mathcal{T}_{\mathrm{DC},j}(\mathcal{U}_{3},\mathcal{U}_{4})\) are written as
\[J_{i}^{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}) =\frac{\mathds{1}_{B}}{2}\otimes(U_{1}^{\dagger}\otimes U_{2}^{ \dagger}\left|\Phi_{i}\right\rangle\!\!\left\langle\Phi_{i}\right|U_{1} \otimes U_{2})^{\mathsf{T}},\quad i\in\{1,2,3,4\}. \tag{194}\] \[J_{j}^{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}) =\mathds{1}_{A}\otimes(U_{3}\left|\Phi_{1}\right\rangle\!\!\left\langle \Phi_{1}\right|_{BR}U_{3}^{\dagger})\star(U_{4}^{\dagger}\left|\Phi_{j} \right\rangle\!\!\left\langle\Phi_{j}\right|_{CR}U_{4})^{\mathsf{T}},\quad j\in \{1,2,3,4\}. \tag{195}\]
The coefficient of \(1/2\) in Eq. 194 comes from the fact that \(d_{B}=2\), ensuring \(\mathds{1}_{B}/2\) is a maximally mixed state on system \(B\). In Eq. 195, \(\mathds{1}_{A}\) is the CJ operator of \(\mathrm{Tr}_{A}\) (the partial trace over system \(A\)), i.e. \(J^{\mathrm{Tr}_{A}}=\mathds{1}_{A}\). For example, given a bipartite quantum state \(\rho_{AB}\), its reduced state on system \(B\) can be rewritten as \(\rho_{B}:=\mathrm{Tr}_{A}[\rho_{AB}]=\rho_{AB}\star\mathds{1}_{A}\).
Now the quantum uncertainty associated with the maximal common-cause indicator \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\) and the maximal direct-cause indicator \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\) can be quantified through the Shannon entropy of probability vectors \(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\) and \(\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\). Mathematically, they are given by
\[H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}))_{\Phi} :=H(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi})=-\sum_{i }p_{i}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\log p_{i}(\mathcal{U}_{1}, \mathcal{U}_{2})_{\Phi}, \tag{196}\] \[H(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}))_{\Phi} :=H(\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi})=-\sum_{j}q_{ j}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\log q_{j}(\mathcal{U}_{3},\mathcal{U}_{4})_{ \Phi}. \tag{197}\]
Particularly, we are interested in the minimization of their joint uncertainty over all causal maps \(\Phi\in\mathfrak{F}_{2}\), maximal common-cause indicators \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{ \mathrm{CC}}\), and maximal direct-cause indicators \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\in\mathcal{M}_{ \mathrm{DC}}\), which is
\[\mathcal{B}: =\min_{\begin{subarray}{c}\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{\mathrm{CC}}\\ \mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\in\mathcal{M}_{\mathrm{DC}}\end{subarray}}\min_{\Phi}\Big{\{}H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}))_{\Phi}+H(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}))_{\Phi}\Big{\}} \tag{198}\] \[=\min_{\mathcal{U}_{1},\mathcal{U}_{2},\mathcal{U}_{3},\mathcal{U}_{4}}\min_{\Phi}\Big{\{}H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}))_{\Phi}+H(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}))_{\Phi}\Big{\}}. \tag{199}\]
We note that the characterization of \(\mathcal{B}\) identifies the uncertainty trade-off between two families of interactive measurements, leading to a quantitative connection between different causal structures - common-cause and direct-cause - in quantum theories.
Roughly speaking, the universal uncertainty relation investigated in Subsec. II.3 is the key to solving \(\mathcal{B}\). In the first stage, we collect all the measurement data into a set \(\mathcal{Q}\), and study its algebraic properties. Formally, the set \(\mathcal{Q}\) is defined as
\[\mathcal{Q}:=\{\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\oplus \mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\}_{\mathcal{U}_{1},\ldots.\mathcal{U}_{4},\Phi}, \tag{200}\]
and we denote the ordered version of \(\mathcal{Q}\) as \(\mathcal{Q}_{1}\), namely
\[\mathcal{Q}_{1}:=\mathcal{Q}^{\downarrow}=\{\mathbf{p}(\mathcal{U}_{1}, \mathcal{U}_{2})_{\Phi}\oplus\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{ \Phi}\}_{\mathcal{U}_{1},\ldots,\mathcal{U}_{4},\Phi}^{\downarrow}, \tag{201}\]
where the down-arrow \({}^{\downarrow}\) indicates that all the elements of \(\mathcal{Q}\) are arranged in non-increasing order. In the qubit case, \(\mathcal{Q}_{1}\) forms a subset of \(\mathrm{P}_{2}^{8,\,\downarrow}\), i.e. \(\mathcal{Q}_{1}\subset\mathrm{P}_{2}^{8,\,\downarrow}\). Here we have three remarks: First, instead of investigating \(\frac{1}{2}\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\oplus\frac{1}{2}\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\), we work on \(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\oplus\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\) directly to simplify our presentation. Second, according to the definition of majorization, the least upper bound (LUB) of \(\mathcal{Q}_{1}\) is also a LUB of \(\mathcal{Q}\) itself. Thanks to the completeness of \((\mathrm{P}_{2}^{8,\,\downarrow},\prec,\wedge,\vee)\) (see Subsec. II.1), there exists a unique LUB, denoted by \(\vee\mathcal{Q}_{1}\), in \(\mathrm{P}_{2}^{8,\,\downarrow}\); that is
\[(\vee\mathcal{Q}_{1})_{k}\geqslant(\vee\mathcal{Q}_{1})_{k+1},\quad\text{for }1 \leqslant k\leqslant 7, \tag{202}\]
where \((\vee\mathcal{Q}_{1})_{k}\) denotes the \(k\)-th element of \(\vee\mathcal{Q}_{1}\). Third, the LUB of \(\mathcal{Q}\) is not unique in the probability simplex. To be more precise, for any permutation matrix \(D\), the vector \((\vee\mathcal{Q}_{1})\cdot D\) forms a LUB for the set \(\mathcal{Q}\). Another useful set is \(\mathcal{Q}_{2}\), which is derived by taking all unitary channels in the maximal common-cause indicator and the maximal direct-cause indicator to be the noiseless channel id:
\[\mathcal{Q}_{2}:=\{\mathbf{p}(\mathrm{id},\mathrm{id})_{\Phi}\oplus\mathbf{q}( \mathrm{id},\mathrm{id})_{\Phi}\}_{\Phi}^{\downarrow}. \tag{203}\]
Clearly, the newly constructed \(\mathcal{Q}_{2}\) forms a subset of \(\mathcal{Q}_{1}\), satisfying
\[\mathcal{Q}_{2}\subset\mathcal{Q}_{1}\subset\mathrm{P}_{2}^{8,\,\downarrow}. \tag{204}\]
In addition to inclusion, a key observation about the sets \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\) is given in the following lemma.
**Lemma III.3**.: Given maximal common-cause indicators \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{\mathrm{CC}}\) (see Def. III.1), and maximal direct-cause indicators \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\in\mathcal{M}_{\mathrm{DC}}\) (see Def. III.2), the largest sum of any two elements in \(\vee\mathcal{Q}_{1}\) (see Eq. 201) is achieved by only considering noiseless channels, namely
\[\sum_{k=1}^{2}(\vee\mathcal{Q}_{1})_{k}=\sum_{k=1}^{2}(\vee\mathcal{Q}_{2})_{k}, \tag{205}\]
with
\[\sum_{k=1}^{2}(\vee\mathcal{Q}_{1})_{k} :=\sum_{k=1}^{2}(\vee\{\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2}) _{\Phi}\oplus\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\}_{\mathcal{U} _{1},\ldots,\mathcal{U}_{4},\Phi}^{\downarrow})_{k}, \tag{206}\] \[\sum_{k=1}^{2}(\vee\mathcal{Q}_{2})_{k} :=\sum_{k=1}^{2}(\vee\{\mathbf{p}(\mathrm{id},\mathrm{id})_{\Phi} \oplus\mathbf{q}(\mathrm{id},\mathrm{id})_{\Phi}\}_{\Phi}^{\downarrow})_{k}. \tag{207}\]
Here the down-arrow notation \({}^{\downarrow}\) means that the components of corresponding vector are arranged in non-increasing order. The probability vectors are defined by \(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}:=(p_{i}(\mathcal{U}_{1}, \mathcal{U}_{2})_{\Phi})_{i}\) and \(\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}:=(q_{j}(\mathcal{U}_{3}, \mathcal{U}_{4})_{\Phi})_{j}\), with each \(p_{i}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\) and \(q_{j}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\) being obtained from Eqs. 192 and 193 respectively.
Proof.: Remark that for both \(\mathcal{Q}_{1}\) and \(\mathcal{Q}_{2}\), their largest element is \(1\); that is
\[(\vee\mathcal{Q}_{1})_{1}=\mathbf{b}_{\mathcal{Q}_{1},1}=(\vee\mathcal{Q}_{2}) _{1}=\mathbf{b}_{\mathcal{Q}_{2},1}=1, \tag{208}\]
where \(\mathbf{b}_{S}\) is defined in Eq. 61, with \(S=\mathcal{Q}_{b}\) (\(b=1,2\)).
If we pick two elements from \(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\) alone or from \(\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\) alone, their sum is still upper-bounded by \(1\). Thus, to investigate the largest sum of any two elements in \(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\oplus\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\), we should select one element from \(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\) and another from \(\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\). A similar situation applies to \(\mathbf{p}(\mathrm{id},\mathrm{id})_{\Phi}\) and \(\mathbf{q}(\mathrm{id},\mathrm{id})_{\Phi}\). Writing everything out explicitly, we have
\[\sum_{k=1}^{2}\mathbf{b}_{\mathcal{Q}_{1},k} \tag{209}\] \[=\max_{\mathcal{U}_{1},\ldots,\mathcal{U}_{4}}\max_{i,j}\max_{\Phi\in\mathfrak{F}_{2}}\mathrm{Tr}\Big{[}(J_{i}^{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})+J_{j}^{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}))\cdot J^{\Phi}\Big{]}\] (210) \[=\max_{\mathcal{U}_{1},\ldots,\mathcal{U}_{4}}\max_{i,j}\max_{\Phi\in\mathfrak{F}_{2}}\mathrm{Tr}\bigg{[}\Big{(}\frac{\mathds{1}_{B}}{2}\otimes(U_{1}^{\dagger}\otimes U_{2}^{\dagger}\left|\Phi_{i}\right\rangle\!\!\left\langle\Phi_{i}\right|U_{1}\otimes U_{2})^{\mathsf{T}}+\mathds{1}_{A}\otimes(U_{3}\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1}\right|_{BR}U_{3}^{\dagger})\star(U_{4}^{\dagger}\left|\Phi_{j}\right\rangle\!\!\left\langle\Phi_{j}\right|_{CR}U_{4})^{\mathsf{T}}\Big{)}\cdot J^{\Phi}\bigg{]}\] (211) \[=\max_{\mathcal{U}_{B}}\max_{i,j}\max_{\Phi\in\mathfrak{F}_{2}}\mathrm{Tr}\bigg{[}\Big{(}\frac{\mathds{1}_{B}}{2}\otimes(\left|\Phi_{i}\right\rangle\!\!\left\langle\Phi_{i}\right|)^{\mathsf{T}}+\mathds{1}_{A}\otimes(U_{B}\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1}\right|_{BR}U_{B}^{\dagger})\star(\left|\Phi_{j}\right\rangle\!\!\left\langle\Phi_{j}\right|_{CR})^{\mathsf{T}}\Big{)}\cdot J^{\Phi}\bigg{]}\] (212) \[=\max_{i,j}\max_{\Phi\in\mathfrak{F}_{2}}\mathrm{Tr}\bigg{[}\Big{(}\frac{\mathds{1}_{B}}{2}\otimes(\left|\Phi_{i}\right\rangle\!\!\left\langle\Phi_{i}\right|)^{\mathsf{T}}+\mathds{1}_{A}\otimes\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1}\right|_{BR}\star(\left|\Phi_{j}\right\rangle\!\!\left\langle\Phi_{j}\right|_{CR})^{\mathsf{T}}\Big{)}\cdot J^{\Phi}\bigg{]}\] (213) \[=\sum_{k=1}^{2}\mathbf{b}_{\mathcal{Q}_{2},k}\] (214) \[=\frac{5}{4}, \tag{215}\]
where the third equality is obtained by absorbing the unitaries into the causal map (visualized in Fig. 12), and the fourth equality is a direct consequence of \(U_{B}U_{B}^{\dagger}=\mathds{1}\). Finally, the last equality is the result of the SDP formulated in Eq. (213). This immediately implies that for the set \(\mathcal{Q}_{1}\), we have
\[\sum_{k=1}^{2}(\vee\mathcal{Q}_{1})_{k}\leqslant\sum_{k=1}^{2}\mathbf{b}_{ \mathcal{Q}_{1},k}=\sum_{k=1}^{2}\mathbf{b}_{\mathcal{Q}_{2},k}=\frac{5}{4}. \tag{216}\]
On the other hand, by taking \(\Phi=\mathrm{Tr}_{E}[\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1}\right|_{ AE}]\otimes\mathrm{id}_{B\to C}\), maximal common-cause indicators \(\mathcal{T}_{\mathrm{CC}}(\mathrm{id},\mathrm{id})\in\mathcal{M}_{\mathrm{CC}}\), and maximal direct-cause indicators \(\mathcal{T}_{\mathrm{DC}}(\mathrm{id},\mathrm{id})\in\mathcal{M}_{\mathrm{DC}}\), we obtain
\[\mathbf{p}(\mathrm{id},\mathrm{id})_{\mathrm{Tr}_{E}[\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1}\right|_{AE}]\otimes\mathrm{id}_{B\to C}} =(1/4,1/4,1/4,1/4), \tag{217}\] \[\mathbf{q}(\mathrm{id},\mathrm{id})_{\mathrm{Tr}_{E}[\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1}\right|_{AE}]\otimes\mathrm{id}_{B\to C}} =(1,0,0,0). \tag{218}\]
In this case, we have
\[(1,1/4,1/4,1/4,1/4,0,0,0)\in\mathcal{Q}_{2}\subset\mathcal{Q}_{1}, \tag{219}\]
and the largest sum of any two elements in both \(\vee\mathcal{Q}_{1}\) and \(\vee\mathcal{Q}_{2}\) are lower bounded by \(5/4\), i.e.
\[\frac{5}{4}\leqslant\sum_{k=1}^{2}(\vee\mathcal{Q}_{2})_{k}\leqslant\sum_{k=1} ^{2}(\vee\mathcal{Q}_{1})_{k}. \tag{220}\]
Figure 12: (color online) Visualization of the process transforming Eq. (211) to Eq. (212). By absorbing \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\) into the quantum causal map, we transform the uncertainty relation associated with interactive measurements, illustrated by (a1) and (b1), into the uncertainty relation for (a2) and (b2). As the optimization is taken over all causal maps, we can simply package the quantum dynamics inside the blue dashed box of (a2) and (b2), and reformulate them into (a3) and (b3). The transformation of (b3)\(\rightarrow\)(b4)\(\rightarrow\)(b5) follows directly from the properties of the Bell state. Note that from the above visualization, it is clear that Eq. (212) can be further simplified to the following semidefinite program (SDP): \(\max_{i,j}\max_{\Phi\in\mathfrak{F}_{2}}\mathrm{Tr}\big{[}\big{(}\frac{\mathds{1}_{B}}{2}\otimes(\left|\Phi_{i}\right\rangle\!\!\left\langle\Phi_{i}\right|)^{\mathsf{T}}+\mathds{1}_{A}\otimes\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1}\right|_{BR}\star(\left|\Phi_{j}\right\rangle\!\!\left\langle\Phi_{j}\right|_{CR})^{\mathsf{T}}\big{)}\cdot J^{\Phi}\big{]}\), whose numerical value is \(5/4\).
Combining Eq. 216 with Eq. 220 leads to the following equation,
\[\sum_{k=1}^{2}(\vee\mathcal{Q}_{1})_{k}=\sum_{k=1}^{2}(\vee\mathcal{Q}_{2})_{k}= \frac{5}{4}, \tag{221}\]
which is what was to be shown.
As a by-product of Eq. 221, it is now straightforward to see that
\[(\vee\mathcal{Q}_{1})_{2}=\sum_{k=1}^{2}(\vee\mathcal{Q}_{1})_{k}-(\vee \mathcal{Q}_{1})_{1}=\frac{5}{4}-1=\frac{1}{4}. \tag{222}\]
Then, it can be shown that
**Theorem III.4**.:
Given maximal common-cause indicators \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{ \mathrm{CC}}\) (see Def. III.1), and maximal direct-cause indicators \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\in\mathcal{M}_{ \mathrm{DC}}\) (see Def. III.2), the LUB \(\vee\mathcal{Q}_{1}\) (see Eq. 201) is given by
\[\vee\mathcal{Q}_{1}=(1,1/4,1/4,1/4,1/4,0,0,0). \tag{223}\]
Proof.: From the analysis of Lem. III.3, we know that the largest element of \(\vee\mathcal{Q}_{1}\) is \(1\) (i.e. \((\vee\mathcal{Q}_{1})_{1}=1\)), and the second largest element of \(\vee\mathcal{Q}_{1}\) is \(1/4\) (i.e. \((\vee\mathcal{Q}_{1})_{2}=1/4\)). Consider an eigencircuit \(\Phi\) of \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\). Without loss of generality, we assume that \(q_{1}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}=1\). For example, see Eqs. 217 and 218. In this case, it follows immediately that
\[2=\sum_{i}p_{i}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}+q_{1}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\leqslant\sum_{k=1}^{5}(\vee\mathcal{Q}_{1})_{k} \leqslant\sum_{k=1}^{8}(\vee\mathcal{Q}_{1})_{k}=2, \tag{224}\]
and hence we have
\[(\vee\mathcal{Q}_{1})_{6}=(\vee\mathcal{Q}_{1})_{7}=(\vee\mathcal{Q}_{1})_{8 }=0. \tag{225}\]
Let us now come back to \((\vee\mathcal{Q}_{1})_{k}\) (\(k=3,4,5\)). Since the elements of \(\vee\mathcal{Q}_{1}\) are arranged in non-increasing order, it follows that
\[(\vee\mathcal{Q}_{1})_{5}\leqslant(\vee\mathcal{Q}_{1})_{4}\leqslant(\vee \mathcal{Q}_{1})_{3}\leqslant(\vee\mathcal{Q}_{1})_{2}=\frac{1}{4}. \tag{226}\]
If any of them is smaller than \(1/4\), then \(\sum_{k=1}^{8}(\vee\mathcal{Q}_{1})_{k}<2\), which is a contradiction. Therefore, the LUB \(\vee\mathcal{Q}_{1}\) of \(\mathcal{Q}_{1}\) is given by Eq. 223 as required.
In the second stage, equipped with Thm. III.4, we now introduce the universal uncertainty relation for maximal common-cause indicators and maximal direct-cause indicators, which is characterized by the following theorem.
**Theorem III.5**.: **Universal Causal Uncertainty Relation**
Given a quantum causal map \(\Phi:B\to AC\) with \(d_{A}=d_{B}=d_{C}=2\), let us denote the probability vectors obtained by measuring \(\Phi\) with respect to maximal common-cause indicators \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{ \mathrm{CC}}\) (see Def. III.1) and maximal direct-cause indicators \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\in\mathcal{M}_{ \mathrm{DC}}\) (see Def. III.2) as \(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\) and \(\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\) respectively. Then for any causal map \(\Phi\in\mathfrak{F}_{2}\) and unitary channels \(\mathcal{U}_{b}\) (\(b\in\{1,2,3,4\}\)), we have the following universal causal uncertainty relation
\[\frac{1}{2}\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi}\oplus\frac{1}{ 2}\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi}\prec\frac{1}{2}\cdot( \vee\mathcal{Q}_{1})=(1/2,1/8,1/8,1/8,1/8,0,0,0). \tag{227}\]
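As a sanity check (our own), the causal map \(\Phi=\mathrm{Tr}_{E}[\left|\Phi_{1}\right\rangle\!\langle\Phi_{1}\right|_{AE}]\otimes\mathrm{id}_{B\to C}\) of Eqs. 217 and 218 saturates the bound of Eq. 227.

```python
import numpy as np

# Our own check of Eq. 227 on the example map Phi = Tr_E[Phi1] (x) id:
# (1/2)p (+) (1/2)q is majorized by (in fact, saturates) the universal bound.
def majorized_by(x, y, tol=1e-12):
    xs = np.sort(np.asarray(x, dtype=float))[::-1]
    ys = np.sort(np.asarray(y, dtype=float))[::-1]
    return bool(np.all(np.cumsum(xs) <= np.cumsum(ys) + tol))

half_p = [1/8] * 4             # Eq. 217 scaled by 1/2
half_q = [1/2, 0, 0, 0]        # Eq. 218 scaled by 1/2
bound = [1/2, 1/8, 1/8, 1/8, 1/8, 0, 0, 0]
print(majorized_by(half_p + half_q, bound))   # True
```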
In particular, by applying Schur-concave functions, we can generate an infinite family of causal uncertainty relations from Eq. 227.
Now, direct calculation implies that
\[2H(\frac{1}{2}\cdot(\vee\mathcal{Q}_{1}))-2=2. \tag{228}\]
Since the Shannon entropy \(H\) is a Schur-concave function, we hence obtain the following entropic uncertainty relation for maximal common-cause indicators and maximal direct-cause indicators
**Corollary III.6**.: **Bipartite Causal Uncertainty Relation**
Given maximal common-cause indicators \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{ \mathrm{CC}}\) (see Def. III.1) and maximal direct-cause indicators \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\in\mathcal{M}_{ \mathrm{DC}}\) (see Def. III.2) acting on some qubit quantum causal map \(\Phi\in\mathfrak{F}_{2}\). The entropy of their measurement outcomes, when summed, satisfies
\[H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}))_{\Phi}+H(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}))_{\Phi}\geqslant 2, \tag{229}\]
where the uncertainties \(H(\mathbf{p}(\mathcal{U}_{1},\mathcal{U}_{2})_{\Phi})\) and \(H(\mathbf{q}(\mathcal{U}_{3},\mathcal{U}_{4})_{\Phi})\) are defined by Eqs. 196 and 197 respectively.
In particular, for the case with \(\Phi=\mathrm{Tr}_{E}[\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1} \right|_{AE}]\otimes\mathrm{id}_{B\to C}\) and \(\mathcal{U}_{1}=\mathcal{U}_{2}=\mathcal{U}_{3}=\mathcal{U}_{4}=\mathrm{id}\), the joint uncertainty is 2, i.e.
\[H(\mathcal{T}_{\mathrm{CC}}(\mathrm{id},\mathrm{id}))_{\mathrm{Tr}_{E}[\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1}\right|_{AE}]\otimes\mathrm{id}_{B\to C}}+H(\mathcal{T}_{\mathrm{DC}}(\mathrm{id},\mathrm{id}))_{\mathrm{Tr}_{E}[\left|\Phi_{1}\right\rangle\!\!\left\langle\Phi_{1}\right|_{AE}]\otimes\mathrm{id}_{B\to C}}=2. \tag{230}\]
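The value in Eq. 230 can also be checked by direct simulation; the following sketch (our own illustration) recomputes the outcome distributions of Eqs. 217 and 218 and the joint entropy. Note that \(\max_{i}p_{i}+\max_{j}q_{j}=1/4+1=5/4\), the achievability value appearing in Eq. 215.

```python
import numpy as np

# Our own numerical check of Eq. 230 for Phi = Tr_E[Phi1_AE] (x) id_{B->C},
# with all unitaries set to the identity.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
paulis = [np.eye(2), np.array([[0., 1.], [1., 0.]]),
          np.array([[1., 0.], [0., -1.]]), np.array([[0., 1.], [-1., 0.]])]
# Bell basis {Phi_i}: (1 (x) sigma)|phi+> for sigma ranging over the Paulis.
bell = [np.outer(v, v.conj())
        for v in (np.kron(np.eye(2), s) @ phi_plus for s in paulis)]

def H(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

# Common-cause indicator (Def. III.1): A is half of Phi1_AE with E traced
# out, C = id(1/2); both are maximally mixed and uncorrelated, so AC is 1/4.
p = [np.trace(P @ (np.eye(4) / 4)).real for P in bell]                # Eq. 217

# Direct-cause indicator (Def. III.2): half of a Bell pair enters B and
# exits unchanged on C, so the CR state is Phi1 itself.
q = [np.trace(P @ np.outer(phi_plus, phi_plus)).real for P in bell]   # Eq. 218

print(H(p) + H(q))       # 2.0, matching Eq. 230
print(max(p) + max(q))   # 1.25 = 5/4, the value of Eq. 215
```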
Thus, in Eq. 229 of Cor. III.6, the lower bound 2 is optimal. In other words, here \(\mathcal{B}=2\) (see Eq. 198). All results presented in this subsection, including Lem. III.3, Thm. III.4, Thm. III.5, and Cor. III.6, for qubit systems can be extended to the case of qudit systems straightforwardly by replacing the Pauli operators appearing in the Bell measurements and Eq. (217) with Heisenberg-Weyl operators and \((1/d^{2},\ldots,1/d^{2})\) respectively. In particular, for the case with \(d_{A}=d_{B}=d_{C}=d\), we have
\[H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}))_{\Phi_{B\to AC }}+H(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}))_{\Phi_{B\to AC }}\geqslant 2\log d, \tag{231}\]
where \(\mathcal{U}_{1}\), \(\mathcal{U}_{2}\), \(\mathcal{U}_{3}\), and \(\mathcal{U}_{4}\) are now \(d\)-dimensional unitary channels.
### Necessary and Sufficient Conditions for Common-Cause and Direct-Cause
In this subsection, we give a complete characterization of when the quantum causal map \(\Phi\in\mathfrak{F}_{2}\) obtained from system-environment unitary dynamics in open quantum systems (see Eq. 42) indicates a common-cause, direct-cause, or even their mixture. The method of our causal inference is based on the entropic causal uncertainty relation, namely Eq. 229.
In particular, given a quantum causal map \(\Phi\in\mathfrak{F}_{2}\), its quantum dynamics is given by an initial pure state \(\phi_{AE}\) and a system-environment unitary evolution \(\mathcal{U}_{BE\to CF}\) (see Fig. 13 for an illustration). In practice, we do not have access to the environmental systems. Hence, \(\Phi\in\mathfrak{F}_{2}\) is characterized by
\[\Phi_{B\to AC}=\mathrm{Tr}_{F}[\mathcal{U}_{BE\to CF}(\phi_{AE})], \tag{232}\]
where \(E\) and \(F\) stand for the environment before and after the unitary evolution \(\mathcal{U}\). Meanwhile, \(A\), \(B\), and \(C\) represent the system of interest at different time points \(t_{A}\), \(t_{B}\), and \(t_{C}\) respectively, with \(\dim A=\dim B=\dim C\).
Figure 13: (color online) Quantum causal map \(\Phi_{B\to AC}\), where the initial state is \(\phi_{AE}\), and the systems \(BE\) and \(CF\) are connected through the bipartite unitary channel \(\mathcal{U}_{BE\to CF}\).
Causal understanding of quantum dynamics, especially system-environment unitary dynamics, enables us to reason about the environment and its interactions with the system of interest at multiple time points. Here, our primary goal is to discover the structural dependencies among environmental and object systems, namely \(\mathcal{U}_{BE\to CF}(\phi_{AE})\). This is a particularly challenging task: quantum tomography of \(\mathcal{U}_{BE\to CF}(\phi_{AE})\) offers a general solution, but it requires considerable experimental effort, including a large number of measurements, an extensive analysis of data, and so on. What is even worse is that we usually do not have complete access to all information in the environment. A main contribution of this work lies in quantum causal inference without tomography. To do so, let us first introduce the definitions of purely common-cause, purely direct-cause, and their mixture for \(\mathcal{U}_{BE\to CF}(\phi_{AE})\), as follows.
**Definition III.7**.: **Common-Cause**
For a system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE})\), its causal structure is purely common-cause, if the following condition holds
\[\mathcal{U}_{BE\to CF}(\phi_{AE})=\rho_{AC}\otimes\mathcal{E}_{B\to F}, \tag{233}\]
where \(\rho_{AC}\) is an entangled state acting on systems \(AC\), and \(\mathcal{E}_{B\to F}\) is a quantum channel from system \(B\) to \(F\).
In the case where the quantum dynamics consists of a pure initial state and a unitary channel, as for \(\mathcal{U}_{BE\to CF}(\phi_{AE})\), its CJ operator is a rank-1 operator, implying that the operator \(\rho_{AC}\otimes J^{\mathcal{E}}_{BF}\) is also rank-1. In other words, in Eq. 233, \(\rho_{AC}\) must be a pure state and meanwhile \(\mathcal{E}_{B\to F}\) must be a unitary channel. Thus, written out explicitly, we have the following lemma.
**Lemma III.8**.: **Common-Cause**

For a system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE})\), its causal structure is purely common-cause, if the following condition holds
\[\mathcal{U}_{BE\to CF}(\phi_{AE})=\psi_{AC}\otimes\mathcal{U}_{B\to F}, \tag{234}\]
where \(\psi_{AC}\) is a pure entangled state acting on systems \(AC\), and \(\mathcal{U}_{B\to F}\) is a unitary channel from system \(B\) to \(F\).
When measuring \(\mathcal{U}_{BE\to CF}(\phi_{AE})\), suppose an outcome (without loss of generality, we take outcome 1) of \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\otimes\mathrm{Tr}_{F}\) happens with certainty, namely

\[\mathrm{Tr}\bigg{[}\Phi_{1}\otimes\mathds{1}_{F}\cdot\mathcal{U}_{2}\circ\mathcal{U}_{BE\to CF}\left(\mathcal{U}_{1}(\phi_{AE})\otimes\frac{\mathds{1}_{B}}{d_{B}}\right)\bigg{]}=1. \tag{235}\]
Then, according to the definition of the maximal common-cause indicator \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\) (see Def. III.1), it is straightforward to check that in this case we have \(\mathcal{U}_{BE\to CF}(\phi_{AE})=\mathcal{U}_{1}^{\dagger}\otimes\mathcal{U}_{2}^{\dagger}(\phi_{AC}^{+})\otimes\mathcal{U}_{B\to F}\), where \(\phi^{+}\) is the maximally entangled state, and \(\mathcal{E}^{\dagger}\) stands for the dual channel of \(\mathcal{E}\). Conversely, if the system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) exhibits a common-cause, where \(\phi^{+}\) represents the maximally entangled state, then we can always find a maximal common-cause indicator \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{\mathrm{CC}}\) such that one measurement outcome of \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\otimes\mathrm{Tr}_{F}\) occurs with certainty. Thus, in the case of \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\), we have the following necessary and sufficient condition for the quantum dynamics to be purely common-cause; that is
**Theorem III.9**.: **Common-Cause**

For a system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) with \(\phi_{AE}^{+}\) being the maximally entangled state, its causal structure is purely common-cause, if and only if there exist unitary channels \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\) such that
\[H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\otimes\mathrm{Tr}_{F})_{\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})}=H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}))_{\mathrm{Tr}_{F}[\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})]}=0. \tag{236}\]
That is, an outcome of the maximal common-cause indicator \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{ \mathrm{CC}}\) happens with certainty for some unitary channels \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\).
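As a hypothetical illustration of Thm. III.9 (our own, not taken from the text), choose \(\mathcal{U}_{BE\to CF}\) to be the SWAP unitary, which routes \(E\) to \(C\) and \(B\) to \(F\); the dynamics is then purely common-cause, and the maximal common-cause indicator with \(\mathcal{U}_{1}=\mathcal{U}_{2}=\mathrm{id}\) produces a deterministic outcome, so the entropy in Eq. 236 vanishes.

```python
import numpy as np

# Hypothetical illustration (ours) of Thm. III.9 with U_{BE->CF} = SWAP:
# A and C end up sharing the Bell pair phi+_{AE}, so the dynamics is purely
# common-cause and the indicator with U1 = U2 = id is deterministic.
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_AC = np.outer(phi_plus, phi_plus)   # state on AC after the SWAP

paulis = [np.eye(2), np.array([[0., 1.], [1., 0.]]),
          np.array([[1., 0.], [0., -1.]]), np.array([[0., 1.], [-1., 0.]])]
bell = [np.outer(v, v.conj())
        for v in (np.kron(np.eye(2), s) @ phi_plus for s in paulis)]

p = [np.trace(P @ rho_AC).real for P in bell]
print(np.round(p, 12))   # [1. 0. 0. 0.]: zero entropy, saturating Eq. 236
```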
On the other hand, the definition of purely direct-cause for \(\mathcal{U}_{BE\to CF}(\phi_{AE})\) is given by
For a system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE})\), its causal structure is purely direct-cause, if the following condition holds
\[\mathcal{U}_{BE\to CF}(\phi_{AE})=\rho_{AF}\otimes\mathcal{E}_{B\to C}, \tag{237}\]
where \(\rho_{AF}\) is a quantum state acting on systems \(AF\), and \(\mathcal{E}_{B\to C}\) is a quantum channel from system \(B\) to \(C\). Here the channel \(\mathcal{E}\) cannot be a channel that traces out \(B\) and then prepares a fixed state on system \(C\).
It is worth mentioning that if the system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE})\) is decomposed as \(\rho_{A}\otimes\mathcal{E}_{B\to CF}\), the causal structure associated with the system-environment dynamics can also be defined as purely direct-cause. However, the unitarity of the evolution \(\mathcal{U}_{BE\to CF}\) and the dimensional restriction on the systems (i.e. \(\dim A=\dim B=\dim C\)) force the environment to be a trivial system, i.e. \(F=\mathds{C}\). Hence, the system-environment unitary dynamics can still be described by Eq. 237. Similarly to Lem. III.8, the rank-1 property of \(\mathcal{U}_{BE\to CF}(\phi_{AE})\) now implies that
**Lemma III.11**.: _Direct-Cause_
For a system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE})\), its causal structure is purely direct-cause, if the following condition holds
\[\mathcal{U}_{BE\to CF}(\phi_{AE})=\psi_{AF}\otimes\mathcal{U}_{B\to C}, \tag{238}\]
where \(\psi_{AF}\) is a pure quantum state acting on systems \(AF\), and \(\mathcal{U}_{B\to C}\) is a unitary channel from system \(B\) to \(C\).
When measuring \(\mathcal{U}_{BE\to CF}(\phi_{AE})\), if an outcome (without loss of generality, we take 1 for instance) of \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\otimes\mathrm{Tr}_ {F}\) happens with certainty, namely
\[\mathrm{Tr}[\mathbb{1}_{AF}\otimes\Phi_{1,CR}\cdot\mathcal{U}_{4}\circ \mathcal{U}_{BE\to CF}\left(\phi_{AE}\otimes\mathcal{U}_{3}(\Phi_{1,BR}) \right)]=1. \tag{239}\]
Then it is straightforward to conclude that \(\mathcal{U}_{BE\to CF}(\phi_{AE})\) can be decomposed as \(\psi_{AF}\otimes(\mathcal{U}_{4}^{\dagger}\circ\mathcal{U}_{3}^{\dagger})\), hence reveals a direct-cause. On the other hand, if system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE})\) exhibits a direct-cause, i.e. \(\mathcal{U}_{BE\to CF}(\phi_{AE})=\psi_{AF}\otimes\mathcal{U}_{B\to C}\), then we have \(H(\mathcal{T}_{\mathrm{DC}}(\mathrm{id},(\mathcal{U}_{B\to C})^{\dagger}) \otimes\mathrm{Tr}_{F})_{\mathcal{U}_{BE\to CF}(\phi_{AE})}=0\), leading to the following theorem.
**Theorem III.12**.: _Direct-Cause_
For a system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE})\), its causal structure is purely direct-cause, if and only if there exist unitary channels \(\mathcal{U}_{3}\) and \(\mathcal{U}_{4}\) such that
\[H(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\otimes\mathrm{Tr}_{F})_{\mathcal{U}_{BE\to CF}(\phi_{AE})}=H(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}))_{\mathrm{Tr}_{F}[\mathcal{U}_{BE\to CF}(\phi_{AE})]}=0. \tag{240}\]
That is, an outcome of the maximal direct-cause indicator \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\in\mathcal{M}_{ \mathrm{DC}}\) happens with certainty for some unitary channels \(\mathcal{U}_{3}\) and \(\mathcal{U}_{4}\).
After we obtain the necessary and sufficient condition for system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) to be purely common-cause, we can now infer the causal model associated with \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) without using tomography. More specifically, we would like to show that the causal uncertainty relation (see Eq. 229) can be used to infer causality. Here we take quantum dynamics with qubit systems as an example, that is \(d_{A}=d_{B}=d_{C}=2\). In particular, for \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\), we have
**Corollary III.13**.: _Non-Markovianity_
For a system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\), if there exist some unitary channels \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\), such that
\[2>H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\otimes\mathrm{ Tr}_{F})_{\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})}>0. \tag{241}\]
Then the system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) is non-Markovian.
Proof.: According to the entropic uncertainty relation presented in Cor. III.6, it follows that
\[H(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}))_{ \mathrm{Tr}_{F}[\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})]} \geqslant 2-H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}))_{ \mathrm{Tr}_{F}[\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})]} \tag{242}\] \[=2-H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}) \otimes\mathrm{Tr}_{F})_{\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})}\] (243) \[>0, \tag{244}\]
holds for arbitrary unitary channels \(\mathcal{U}_{3}\) and \(\mathcal{U}_{4}\). Hence, Thm. III.12 immediately implies that the system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) is either purely common-cause or a mixture of common-cause and direct-cause. In either case, the initial state \(\phi_{AE}^{+}\) influences the quantum dynamics from \(B\) to \(C\). Thus, the system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) is non-Markovian.
Let us further consider how to infer the mixture of common-cause and direct-cause by employing the entropic causal uncertainty relation formulated in Cor. III.6.
**Corollary III.14**.: _Mixture of Common-Cause and Direct-Cause_
For a system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\), if there exist some unitary channels \(\mathcal{U}_{b}\) (\(b\in\{1,2,3,4\}\)), such that
\[2> H(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2}) \otimes\mathrm{Tr}_{F})_{\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})}>0, \tag{245}\] \[2> H(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4}) \otimes\mathrm{Tr}_{F})_{\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})}>0. \tag{246}\]
Then the system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) is a mixture of common-cause and direct-cause.
Proof.: On the one hand, Eq. 245 implies that the causal structure of \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) cannot be purely direct-cause (see Thm. III.12). On the other hand, Eq. 246 tells us that the causal model of \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) cannot be purely common-cause (see Thm. III.9). Thus, the system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) must be a mixture of common-cause and direct-cause, which completes the proof.
From Cor. III.14, we know that by using the entropic causal uncertainty relation (see Eq. 229), the causal structure of system-environment unitary dynamics \(\mathcal{U}_{BE\to CF}(\phi_{AE}^{+})\) can be determined by implementing only two interactive measurements, namely a maximal common-cause indicator \(\mathcal{T}_{\mathrm{CC}}(\mathcal{U}_{1},\mathcal{U}_{2})\in\mathcal{M}_{ \mathrm{CC}}\) (see Def. III.1) and a maximal direct-cause indicator \(\mathcal{T}_{\mathrm{DC}}(\mathcal{U}_{3},\mathcal{U}_{4})\in\mathcal{M}_{ \mathrm{DC}}\) (see Def. III.2).
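Since both indicators are concrete interactive measurements, this two-measurement inference procedure can be simulated directly. The following numpy sketch (a minimal illustration of ours, not the authors' code) computes the outcome entropies of \(\mathcal{T}_{\mathrm{CC}}(\mathrm{id},\mathrm{id})\otimes\mathrm{Tr}_{F}\) and \(\mathcal{T}_{\mathrm{DC}}(\mathrm{id},\mathrm{id})\otimes\mathrm{Tr}_{F}\) for a given \(4\times 4\) system-environment unitary \(U_{BE\to CF}\), assuming the Bell-basis conventions of Defs. III.1 and III.2 with all \(\mathcal{U}_{b}=\mathrm{id}\); the helper names are ours.

```python
import numpy as np

I2 = np.eye(2)

def bell_projectors():
    """Rank-1 projectors onto the four Bell states of two qubits."""
    v = lambda i, j: np.kron(I2[i], I2[j])
    bells = [(v(0, 0) + v(1, 1)) / np.sqrt(2), (v(0, 0) - v(1, 1)) / np.sqrt(2),
             (v(0, 1) + v(1, 0)) / np.sqrt(2), (v(0, 1) - v(1, 0)) / np.sqrt(2)]
    return [np.outer(b, b.conj()) for b in bells]

def shannon(p):
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def H_cc(U):
    """Entropy of T_CC(id,id) x Tr_F: prepare phi+ on AE and I/2 on B,
    apply U to (B,E), then Bell-measure (A,C).  Qubit order: A, B, E."""
    rho = np.zeros((8, 8), dtype=complex)
    for i in range(2):
        for j in range(2):
            Eij = np.outer(I2[i], I2[j])
            rho += 0.5 * np.kron(np.kron(Eij, I2 / 2), Eij)
    Ufull = np.kron(I2, U)                    # U acts on the BE block
    rho = Ufull @ rho @ Ufull.conj().T        # order is now A, C, F
    probs = np.array([np.trace(np.kron(P, I2) @ rho).real
                      for P in bell_projectors()])
    return shannon(probs)

def H_dc(U):
    """Entropy of T_DC(id,id) x Tr_F: prepare phi+ on AE and phi+ on BR,
    apply U to (B,E), then Bell-measure (C,R).  Qubit order: A, B, E, R."""
    rho = np.zeros((16, 16), dtype=complex)
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    rho += 0.25 * np.kron(
                        np.kron(np.outer(I2[i], I2[j]), np.outer(I2[k], I2[l])),
                        np.kron(np.outer(I2[i], I2[j]), np.outer(I2[k], I2[l])))
    Ufull = np.kron(np.kron(I2, U), I2)       # U acts on the BE block
    rho = Ufull @ rho @ Ufull.conj().T        # order is now A, C, F, R
    SWAP = np.eye(4)[[0, 2, 1, 3]]            # swap the last two qubits
    S = np.kron(np.eye(4), SWAP)              # order becomes A, C, R, F
    rho = S @ rho @ S.T
    probs = np.array([np.trace(np.kron(I2, np.kron(P, I2)) @ rho).real
                      for P in bell_projectors()])
    return shannon(probs)

# Sanity checks: the identity dynamics is purely direct-cause (H_dc = 0),
# while the full SWAP is purely common-cause (H_cc = 0).
SWAP2 = np.eye(4)[[0, 2, 1, 3]]
print(H_cc(np.eye(4)), H_dc(np.eye(4)))   # ~ 2.0, ~ 0.0
print(H_cc(SWAP2), H_dc(SWAP2))           # ~ 0.0, ~ 2.0
```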
### Coherent Mixture of Common-Cause and Direct-Cause
Most things in nature are mixtures. Classically, two or more substances can be combined in a probabilistic way, such as the convex combination of probability density functions in statistics. Quantumly, operations or states of the systems can be controlled coherently, such as superpositions of trajectories (i.e. the quantum switch [36]) in quantum communication [37; 38; 39]. In quantum causal inference, the models of purely common-cause and purely direct-cause can also be combined in a coherent way [13]. In this subsection, we further investigate the quantum circuit (also known as a quantum causal map in this work) \(\Phi_{\alpha,\beta}\in\mathfrak{F}_{2}\) considered in the main text (see also Fig. 5 of the main text), and show that \(\Phi_{\alpha=-\pi/4,\beta=-\pi/2}\) is indeed a coherent mixture of purely common-cause and purely direct-cause.
In quantum circuit \(\Phi_{\alpha,\beta}=\mathrm{Tr}_{F}[\mathcal{U}(\alpha,\beta)(\phi_{AE}^{+})] \in\mathfrak{F}_{2}\), the unitary channel \(\mathcal{U}(\alpha,\beta)\) from systems \(BE\) to \(CF\) is given by \(\mathcal{U}(\alpha,\beta)(\cdot)=U(\alpha,\beta)(\cdot)U^{\dagger}(\alpha,\beta)\), where the unitary matrix \(U(\alpha,\beta)\) is characterized by the following form
\[U(\alpha,\beta)=e^{i\alpha/2}\begin{pmatrix}e^{-i\alpha}&0&0&0\\ 0&\cos(\beta/2)&-i\sin(\beta/2)&0\\ 0&-i\sin(\beta/2)&\cos(\beta/2)&0\\ 0&0&0&e^{-i\alpha}\end{pmatrix}. \tag{247}\]
When the parameters are taken as \(\alpha=-\pi/4\) and \(\beta=-\pi/2\), the corresponding unitary matrix turns out to be
\[U(\alpha=-\pi/4,\beta=-\pi/2)=e^{-i\pi/8}\begin{pmatrix}e^{i\pi/4}&0&0&0\\ 0&\cos(\pi/4)&i\sin(\pi/4)&0\\ 0&i\sin(\pi/4)&\cos(\pi/4)&0\\ 0&0&0&e^{i\pi/4}\end{pmatrix}. \tag{248}\]
Thus in this case, written out explicitly, the quantum circuit is given by
\[\Phi_{\alpha=-\pi/4,\beta=-\pi/2}=\mathrm{Tr}_{F}[\mathcal{U}(\alpha=-\pi/4,\beta =-\pi/2)_{BE\to CF}(\phi_{AE}^{+})]. \tag{249}\]
Meanwhile, the existence of a coherent mixture of the causal models of purely common-cause and purely direct-cause was proved by examining the quantum circuit \(\mathrm{Tr}_{F}[\mathcal{U}_{*}(\phi_{AE}^{+})]\)[13], where the unitary \(\mathcal{U}_{*}\) is defined as
\[\mathcal{U}_{*}=(\cos\pi/4)\mathrm{id}_{B\to C}\otimes\mathrm{id}_{E\to F}+(i \sin\pi/4)\cdot\mathrm{SWAP}(BE\to FC). \tag{250}\]
Note that here \(\mathcal{U}_{*}\) forms a partial SWAP from \(BE\) to \(CF\). For any input state \(\rho\), we have \(\mathcal{U}_{*}(\rho)=U_{*}\rho U_{*}^{\dagger}\), where
\[U_{*}=\begin{pmatrix}e^{i\pi/4}&0&0&0\\ 0&\cos(\pi/4)&i\sin(\pi/4)&0\\ 0&i\sin(\pi/4)&\cos(\pi/4)&0\\ 0&0&0&e^{i\pi/4}\end{pmatrix}. \tag{251}\]
Remark that for any input state \(\rho\) acting on systems \(BE\), we have
\[U_{*}\rho U_{*}^{\dagger}=U(\alpha=-\pi/4,\beta=-\pi/2)\rho U(\alpha=-\pi/4, \beta=-\pi/2)^{\dagger}. \tag{252}\]
It now follows immediately that
\[\Phi_{\alpha=-\pi/4,\beta=-\pi/2}=\mathrm{Tr}_{F}[\mathcal{U}_{*}(\phi_{AE}^ {+})]. \tag{253}\]
In other words, the quantum circuit \(\Phi_{\alpha=-\pi/4,\beta=-\pi/2}\) studied in Fig. 5(c) of the main text is indeed a coherent mixture of both purely common-cause (see Def. III.7) and purely direct-cause (see Def. III.10), which goes beyond the classical way of mixing causal models in causal inference.
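This identity is easy to check numerically. The short numpy sketch below (an illustrative verification of ours) confirms both Eq. 252 on a random two-qubit state and the global-phase relation \(U(\alpha=-\pi/4,\beta=-\pi/2)=e^{-i\pi/8}U_{*}\).

```python
import numpy as np

a, b = -np.pi / 4, -np.pi / 2
U = np.exp(1j * a / 2) * np.array(
    [[np.exp(-1j * a), 0, 0, 0],
     [0, np.cos(b / 2), -1j * np.sin(b / 2), 0],
     [0, -1j * np.sin(b / 2), np.cos(b / 2), 0],
     [0, 0, 0, np.exp(-1j * a)]])            # Eq. (247) at the chosen angles

c = np.cos(np.pi / 4)
Ustar = np.array([[np.exp(1j * np.pi / 4), 0, 0, 0],
                  [0, c, 1j * c, 0],
                  [0, 1j * c, c, 0],
                  [0, 0, 0, np.exp(1j * np.pi / 4)]])   # Eq. (251)

rho = np.random.rand(4, 4) + 1j * np.random.rand(4, 4)
rho = rho @ rho.conj().T
rho /= np.trace(rho)                          # a random two-qubit state
print(np.allclose(U @ rho @ U.conj().T, Ustar @ rho @ Ustar.conj().T))  # True
print(np.allclose(U, np.exp(-1j * np.pi / 8) * Ustar))                  # True
```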
## IV Numerical Experiments
In this section, we focus on a parameterized quantum circuit, and illustrate the uncertainty associated with it numerically. In Subsec. IV.1, we demonstrate the entropic causal uncertainty relation with respect to a special pair of maximal common-cause indicator and maximal direct-cause indicator, showing that our lower bound is tight. In Subsec. IV.2, we detail the application of our entropic causal uncertainty relation to inferring causality using only a few interactive measurements, namely one maximal common-cause indicator and one maximal direct-cause indicator.
### The Landscape of Joint Uncertainty
Instead of investigating a special bipartite unitary (see Fig. 5\(a\) of the main text), we consider the universal two-qubit unitary gates \(\mathcal{U}(\alpha,\beta,\gamma)\) (see Fig. 14 of this Supplemental Material) between system and environment in this subsection. We numerically demonstrate the optimality of the entropic uncertainty relation (see Cor. III.6) by comparing the value of
\[H(\mathcal{T}_{\mathrm{CC}}(\mathrm{id},\mathrm{id}))_{\Phi_{\alpha,\beta, \gamma}}+H(\mathcal{T}_{\mathrm{DC}}(\mathrm{id},\mathrm{id}))_{\Phi_{\alpha, \beta,\gamma}}, \tag{254}\]
with the lower bound 2. Here the maximal common-cause indicator \(\mathcal{T}_{\rm CC}({\rm id},{\rm id})\in\mathcal{M}_{\rm CC}\) (see Def. III.1) and the maximal direct-cause indicator \(\mathcal{T}_{\rm DC}({\rm id},{\rm id})\in\mathcal{M}_{\rm DC}\) (see Def. III.2) are obtained by setting \(\mathcal{U}_{b}={\rm id}\) for all \(b\in\{1,2,3,4\}\). Meanwhile, the parameterized quantum circuit \(\Phi_{\alpha,\beta,\gamma}\in\mathfrak{F}_{2}\) (Fig. 14) is given by
\[\Phi_{\alpha,\beta,\gamma}={\rm Tr}_{F}[\mathcal{U}(\alpha,\beta,\gamma)_{BE \to CF}(\phi_{AE}^{+})], \tag{255}\]
where the system \(A\) and environment \(E\) are initialized in the maximally entangled state \(\phi^{+}\). In this case, the bipartite unitary \(\mathcal{U}(\alpha,\beta,\gamma)_{BE\to CF}\) consists of three CNOT gates and five Pauli rotations. More precisely, the rotations with respect to the \(Z\) and \(Y\) axes are defined as
\[R_{Z}(\theta):=\begin{pmatrix}e^{-i\theta/2}&0\\ 0&e^{i\theta/2}\end{pmatrix},\quad R_{Y}(\theta):=\begin{pmatrix}\cos(\theta/ 2)&-\sin(\theta/2)\\ \sin(\theta/2)&\cos(\theta/2)\end{pmatrix}. \tag{256}\]
Figure 15: (color online) Comparison between the joint uncertainty \(H(\mathcal{T}_{\rm CC}({\rm id},{\rm id}))_{\Phi_{\alpha,\beta,\gamma}}+H( \mathcal{T}_{\rm DC}({\rm id},{\rm id}))_{\Phi_{\alpha,\beta,\gamma}}\) (blue) and the lower bound 2 (gray).
Here we investigate \(\mathcal{U}(\alpha,\beta,\gamma)_{BE\to CF}\) since it forms a family of universal two-qubit unitary gates, up to some local unitary pre-processing and post-processing [40]. On the one hand, the environment \(F\) will be traced out at the end, hence the post-processing on \(F\) can be ignored. On the other hand, as the initial system-environment state is prepared in the maximally entangled state \(\phi^{+}\), any unitary pre-processing on system \(E\) is equivalent to its transpose acting on system \(A\). Consequently, this family of two-qubit unitary operators is universal up to some local unitary channels on systems \(A,B\) and \(C\). The two \(R_{z}\) gates at the end are inserted merely for aesthetic purposes.
Our numerical experiments are plotted at different values of \(\gamma\), including \(\gamma=0,\pi/4,\pi/2,3\pi/4,\pi\), with \(\alpha\) and \(\beta\) running from \(-\pi\) to \(\pi\). Fig. 15 illustrates the tightness of the lower bound 2, demonstrating the optimality of our entropic causal uncertainty relation (see Eq. 229).
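Such a scan is easy to reproduce in a few lines. The sketch below (ours) reuses the `H_cc` and `H_dc` helpers from the earlier sketch; since the exact wiring of the three-CNOT circuit of Fig. 14 is not reproduced in this text, it scans the two-parameter family of Eq. 247 instead, which already exhibits the lower bound.

```python
import numpy as np

def U_ab(a, b):
    """The two-parameter system-environment unitary of Eq. (247)."""
    return np.exp(1j * a / 2) * np.array(
        [[np.exp(-1j * a), 0, 0, 0],
         [0, np.cos(b / 2), -1j * np.sin(b / 2), 0],
         [0, -1j * np.sin(b / 2), np.cos(b / 2), 0],
         [0, 0, 0, np.exp(-1j * a)]])

grid = np.linspace(-np.pi, np.pi, 41)
joint = np.array([[H_cc(U_ab(a, b)) + H_dc(U_ab(a, b)) for b in grid]
                  for a in grid])
print(joint.min())        # stays >= 2, the lower bound of Eq. (229)
```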
### Advantage in Inferring Causal Structures
In this subsection, we infer the causal model of the parameterized quantum circuit \(\Phi_{\alpha,\beta,\gamma}\) (see Fig. 14) by using our entropic causal uncertainty relation (see Eq. 229). In this case, the initial state is \(\phi^{+}\), and hence we can apply the results of Thms. III.9 and III.12 directly. To simplify the experimental process, we simply implement the maximal common-cause indicator \(\mathcal{T}_{\rm CC}({\rm id},{\rm id})\in\mathcal{M}_{\rm CC}\) (see Def. III.1) and the maximal direct-cause indicator \(\mathcal{T}_{\rm DC}({\rm id},{\rm id})\in\mathcal{M}_{\rm DC}\) (see Def. III.2), and evaluate the corresponding Shannon entropies \(H(\mathcal{T}_{\rm CC}({\rm id},{\rm id}))_{\Phi_{\alpha,\beta,\gamma}}\) and \(H(\mathcal{T}_{\rm DC}({\rm id},{\rm id}))_{\Phi_{\alpha,\beta,\gamma}}\), which are defined by Eqs. 196 and 197 respectively.
We have depicted the landscape of Shannon entropies \(H(\mathcal{T}_{\rm CC}({\rm id},{\rm id}))_{\Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}}\) and \(H(\mathcal{T}_{\rm DC}({\rm id},{\rm id}))_{\Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}}\) in Fig. 16. For parameters \(\alpha^{\star}\), \(\beta^{\star}\) and \(\gamma^{\star}\), if the numerical value of \(H(\mathcal{T}_{\rm CC}({\rm id},{\rm id}))_{\Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}}\) belongs to \((0,2)\), then the Shannon entropy associated with maximal direct-cause indicators satisfies
\[H(\mathcal{T}_{\rm DC}(\mathcal{U}_{3},\mathcal{U}_{4}))_{\Phi_{\alpha^{\star },\beta^{\star},\gamma^{\star}}}\geqslant 2-H(\mathcal{T}_{\rm CC}({\rm id},{\rm id}))_{ \Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}}>0, \tag{257}\]
for any unitary channels \(\mathcal{U}_{3}\) and \(\mathcal{U}_{4}\). Thus, Thm. III.12 implies that the causal model of quantum circuit \(\Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}\) will never be purely direct-cause. Meanwhile, for some parameters \(\alpha^{\star}\), \(\beta^{\star}\) and \(\gamma^{\star}\), if the numerical value of \(H(\mathcal{T}_{\rm DC}({\rm id},{\rm id}))_{\Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}}\) belongs to \((0,2)\), then the Shannon entropy associated with maximal common-cause indicators meets the following inequality
\[H(\mathcal{T}_{\rm CC}(\mathcal{U}_{1},\mathcal{U}_{2}))_{\Phi_{\alpha^{\star },\beta^{\star},\gamma^{\star}}}\geqslant 2-H(\mathcal{T}_{\rm DC}({\rm id},{\rm id}))_{ \Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}}>0, \tag{258}\]
for any unitary channels \(\mathcal{U}_{1}\) and \(\mathcal{U}_{2}\). Hence, Thm. III.9 implies that the causal structure of \(\Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}\) cannot be purely common-cause. At last, if the numerical values of both \(H(\mathcal{T}_{\rm CC}({\rm id},{\rm id}))_{\Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}}\) and \(H(\mathcal{T}_{\rm DC}({\rm id},{\rm id}))_{\Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}}\) belong to \((0,2)\) for the same \(\alpha^{\star}\), \(\beta^{\star}\) and \(\gamma^{\star}\), then the corresponding causality of \(\Phi_{\alpha^{\star},\beta^{\star},\gamma^{\star}}\) must be a mixture of both direct-cause and common-cause.
Figure 17: (color online) Contour lines for the uncertainty associated with the maximal direct-cause indicator \(\mathcal{T}_{\rm DC}({\rm id},{\rm id})\) and the maximal common-cause indicator \(\mathcal{T}_{\rm CC}({\rm id},{\rm id})\). Here the green dashed line stands for the cases where \(H(\mathcal{T}_{\rm DC}({\rm id},{\rm id}))_{\Phi_{\alpha,\beta,\beta}}=2\), and the yellow dashed line represents the cases where \(H(\mathcal{T}_{\rm CC}({\rm id},{\rm id}))_{\Phi_{\alpha,\beta,\beta}}=2\). The green and yellow dots are the tuples \((\alpha,\beta)\) such that \(H(\mathcal{T}_{\rm DC}({\rm id},{\rm id}))_{\Phi_{\alpha,\beta,\beta}}=0\) and \(H(\mathcal{T}_{\rm CC}({\rm id},{\rm id}))_{\Phi_{\alpha,\beta,\beta}}=0\), respectively. In these cases the corresponding causal structures are purely direct-cause and purely common-cause, respectively. Apart from the dashed lines and the dots, all other quantum circuits exhibit a mixture of both direct-cause and common-cause.
In particular, for the case \(\beta=\gamma\), we have demonstrated in Fig. 17 all the circuits whose causal structure can be inferred by implementing only two interactive measurements, where the green (dashed) line stands for the uncertainty associated with the maximal direct-cause indicator \(\mathcal{T}_{\mathrm{DC}}(\mathrm{id},\mathrm{id})\) and the yellow (dashed) line represents the uncertainty associated with the maximal common-cause indicator \(\mathcal{T}_{\mathrm{CC}}(\mathrm{id},\mathrm{id})\). The dashed lines stand for the cases when the uncertainty is equal to \(2\), where we cannot get any useful information about the causal structure of \(\Phi_{\alpha,\beta,\gamma}\). Meanwhile, the dots stand for the cases whose uncertainty is equal to \(0\); for these dots, we can make a deterministic statement about the causal structure. Taking the green (yellow) dots for instance, the corresponding causal structure of the circuit \(\Phi_{\alpha,\beta,\beta}\) is purely direct-cause (purely common-cause). Compared with the tomography of system-environment unitary dynamics, we see at least two advantages in causal inference by using the causal uncertainty relation (see Eq. 229). First, we can infer the structural dependencies among environmental and object systems without needing information from the environment. Second, the causality can sometimes be determined by using very few interactive measurements; e.g. for \(\Phi_{\alpha,\beta,\beta}\), the causal structure of almost all circuits can be determined by using only two interactive measurements, \(\mathcal{T}_{\mathrm{CC}}(\mathrm{id},\mathrm{id})\in\mathcal{M}_{\mathrm{CC}}\) (see Def. III.1) and \(\mathcal{T}_{\mathrm{DC}}(\mathrm{id},\mathrm{id})\in\mathcal{M}_{\mathrm{DC}}\) (see Def. III.2).
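The resulting decision logic can be condensed into a few lines; the sketch below is our paraphrase of Thms. III.9 and III.12 and Cor. III.14, valid for dynamics with initial state \(\phi^{+}\) and a fixed pair of indicators.

```python
def infer_causal_structure(Hcc, Hdc, tol=1e-9):
    """Classify a circuit from the two measured outcome entropies."""
    if Hcc < tol:
        return "purely common-cause"                       # Thm. III.9
    if Hdc < tol:
        return "purely direct-cause"                       # Thm. III.12
    if Hcc < 2 - tol and Hdc < 2 - tol:
        return "mixture of common-cause and direct-cause"  # Cor. III.14
    return "inconclusive with these two indicators"
```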
$p$-capacity with Bessel convolution

We define and examine nonlinear potential by Bessel convolution with Bessel kernel. We investigate removable sets with respect to Laplace-Bessel inequality. By studying the maximal and fractional maximal measure, a Wolff type inequality is proved. Finally the relation of B-$p$ capacity and B-Lipschitz mapping, and the B-$p$ capacity and weighted Hausdorff measure and the B-$p$ capacity of Cantor sets are examined.

Á. P. Horváth

2023-03-23T16:59:19Z

http://arxiv.org/abs/2303.13437v1
# \(p\)-capacity with Bessel convolution
###### Abstract.
We define and examine nonlinear potential by Bessel convolution with Bessel kernel. We investigate removable sets with respect to Laplace-Bessel inequality. By studying the maximal and fractional maximal measure, a Wolff type inequality is proved. Finally the relation of B-\(p\) capacity and B-Lipschitz mapping, and the B-\(p\) capacity and weighted Hausdorff measure and the B-\(p\) capacity of Cantor sets are examined.
Key words and phrases:nonlinear potential, Bessel convolution, Laplace-Bessel equation, Wolff inequality, weighted Hausdorff measure 2020 Mathematics Subject Classification: 31C45, 26D15, 28A78
## 1. Introduction
Classical, nonlinear, and Bessel potentials are widespread, have an extensive literature, and are widely applicable, see e.g. [17], [10], [15] and the references therein. Below we introduce and examine nonlinear potential defined by Bessel convolution with Bessel kernel.
Bessel translation was defined by Delsarte [5], and the basic investigation is due to Levitan [14]. In a series of works the authors pointed out that Bessel translation and convolution methods are effective tools for handling Bessel-type partial differential operators, see e.g. [16], [12], [18], [13]. They also proved useful for deriving a Nikol'skii-type inequality, see [4], and for giving compactness criteria in some Banach spaces, see [11].
This leads us to examine nonlinear potential and \(p\)-capacity with respect to Bessel convolution. A curious feature of the method is that the underlying space of the Bessel-\(p\) capacity is automatically weighted. Weighted nonlinear potential was studied already in the 1980s, see e.g. [1], [3]. For logarithmic potentials with external field see the monograph [21]. In our investigation the Bessel weighted space is a natural consequence of the definition of convolution, and so many of the results are very similar to the ones proved in the unweighted case.
The paper is organized as follows. After the preliminaries, in the third section, applying recent results on Bessel potential, we investigate removable sets for the Laplace-Bessel equation. In the fourth section a Wolff type inequality is proved, which is the basis of the study of the last section. This last section contains some "metric" results on Lipschitz-type mappings and on the capacity of Cantor sets. Since the Bessel translation is not a geometric similarity, and the underlying space is weighted, we have to introduce a special property (B-Lipschitz mapping) and the notion of a weighted Hausdorff measure.
## 2. Notation, preliminaries
Let \(\mathbb{R}^{n}_{+}:=\{x=(x_{1},\ldots,x_{n}):x_{i}\geq 0,\ i=1,\ldots,n\}\), and let \(\lambda\) denote the \(n\)-dimensional Lebesgue measure. \(a=a_{1},\ldots,a_{n}\) is a multiindex. For \(E\subset\mathbb{R}^{n}_{+}\), \(\mathcal{M}(E)\) stands for the set of Radon measures supported on \(E\). If \(\mu\in\mathcal{M}(E)\) for some \(E\), we set \(d\mu_{a}(x):=x^{a}d\mu(x)\). Define the Banach space \(L^{p}_{a}\) as follows.
\[\|f\|^{p}_{p,a}=\int_{\mathbb{R}^{n}_{+}}|f(x)|^{p}d\lambda_{a}(x),\]
and as usual
\[L^{p}_{a}:=L^{p}_{a}(\mathbb{R}^{n}_{+})=\{f:\|f\|_{p,a}<\infty\},\ \ \ L^{p+}_{a}:=\{f\in L^{p}_{a}:f\geq 0\}.\]
The dual index \(p^{\prime}\) is defined by \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\).
### Bessel translation
Let
\[a:=2\alpha_{1}+1,\ldots,2\alpha_{n}+1,\ \ \ \ \alpha_{i}>-\frac{1}{2},\ i=1, \ldots,n,\ \ |a|=\sum_{i=1}^{n}(2\alpha_{i}+1).\]
The Bessel translation of a function \(f\) (see e.g. [14], [16], [20]) is
\[T^{t}_{a}f(x)=T^{t_{n}}_{a_{n}}\ldots T^{t_{1}}_{a_{1}}f(x_{1},\ldots,x_{n}),\]
where
\[T^{t_{i}}_{a_{i}}f(x_{1},\ldots,x_{n})=\frac{\Gamma(\alpha_{i}+1)}{\sqrt{\pi}\,\Gamma\left(\alpha_{i}+\frac{1}{2}\right)}\int_{0}^{\pi}f(x_{1},\ldots,x_{i-1},\sqrt{x_{i}^{2}+t_{i}^{2}-2x_{i}t_{i}\cos\vartheta_{i}},x_{i+1},\ldots,x_{n})\sin^{2\alpha_{i}}\vartheta_{i}\,d\vartheta_{i}. \tag{1}\]
The translation can also be expressed as an integral with respect to a kernel function:
\[T^{t_{i}}_{a_{i}}f(x_{1},\ldots,x_{n})=\int_{0}^{\infty}K(x_{i},t_{i},z_{i})f(x_{1},\ldots,x_{i-1},z_{i},x_{i+1},\ldots,x_{n})d\lambda_{a_{i}}(z_{i}), \tag{2}\]
where
\[K(x,t,z)=\left\{\begin{array}{ll}\frac{\Gamma(\alpha+1)}{2^{2\alpha-1}\sqrt{\pi}\,\Gamma\left(\alpha+\frac{1}{2}\right)}\frac{\left[((x+t)^{2}-z^{2})(z^{2}-(x-t)^{2})\right]^{\alpha-\frac{1}{2}}}{(xtz)^{2\alpha}},&|x-t|<z<x+t,\\ 0,&\mbox{otherwise}.\end{array}\right. \tag{3}\]
Obviously
\[T^{t}_{a}f(x)=T^{x}_{a}f(t).\]
\(T_{a}\) is a positive operator, and
\[\|T^{t}_{a,x}f(x)\|_{p,a}\leq\|f\|_{p,a},\ \ 1\leq p\leq\infty, \tag{4}\]
see e.g. [14].
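For concreteness, the one-dimensional translation (1) is straightforward to evaluate numerically. The following Python sketch (ours, for illustration only) discretizes the probability measure \(d\sigma\) by a midpoint rule and illustrates the symmetry \(T^{t}f(x)=T^{x}f(t)\) stated above.

```python
import numpy as np

def bessel_translation(f, x, t, alpha, m=2000):
    """1-D Bessel translation T_a^t f(x) via the integral (1); the weights
    discretize the probability measure c(a) sin^{2 alpha}(th) dth on [0, pi]."""
    th = (np.arange(m) + 0.5) * np.pi / m          # midpoint rule
    w = np.sin(th) ** (2 * alpha)
    w /= w.sum()                                   # normalize to a probability
    arg = np.sqrt(x**2 + t**2 - 2 * x * t * np.cos(th))
    return float((w * f(arg)).sum())

f = lambda r: np.exp(-r**2)
print(bessel_translation(f, 1.0, 0.7, alpha=0.5))
print(bessel_translation(f, 0.7, 1.0, alpha=0.5))  # same value: T^t f(x) = T^x f(t)
```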
The generalized convolution with respect to the Bessel translation is
\[(f*_{a}g)(t)=\int_{\mathbb{R}^{n}_{+}}T^{t}_{a}f(x)g(x)d\lambda_{a}(x).\]
We have
\[f*_{a}g=g*_{a}f,\]
and Young's inequality holds, i.e. if \(1\leq p,q,r\leq\infty\) with \(\frac{1}{r}=\frac{1}{p}+\frac{1}{q}-1\), \(f\in L^{p}_{a}\) and \(g\in L^{q}_{a}\), then
\[\|f*_{a}g\|_{r,a}\leq\|f\|_{p,a}\|g\|_{q,a}, \tag{5}\]
see [20, (3.178)].
Subsequently, if it does not cause any confusion, \(T^{t}f(x)\) stands for \(T^{t}_{a}f(x)\). For any set \(H\subset\mathbb{R}^{n}\) we write \(H_{+}:=H\cap\mathbb{R}^{n}_{+}\). The next technical lemma will be useful in the following sections.
**Lemma 1**.: \(\mathrm{supp}T^{t}\chi_{B_{+}(0,r)}(x)=\overline{B_{+}(x,r)}\)_, \(\mathrm{supp}T^{t}\chi_{[0,r)^{n}}(x)=\times_{i=1}^{n}[x_{i}-r,x_{i}+r]_{+}=:T_ {+}(x,r)\). There is a \(c>0\) such that for all \(x\in\mathbb{R}^{n}_{+}\), \(t\in B_{+}(x,r)\)_
\[T^{t}\chi_{B_{+}(0,r)}(x)\leq c\prod_{i=1}^{n}\min\left\{1,\left(\frac{r}{x_{i }}\right)^{a_{i}}\right\}. \tag{6}\]
_There is a \(c>0\) such that for all \(x\in\mathbb{R}^{n}_{+}\), \(t\in T_{+}\left(x,\frac{r}{2}\right)\)_
\[T^{t}\chi_{[0,r)^{n}}(x)\geq c\prod_{i=1}^{n}\min\left\{1,\left(\frac{r}{x_{i }}\right)^{a_{i}}\right\}. \tag{7}\]
Proof.: The first two statements are direct consequences of the definition; for (6) see [8, p. 321].
(7): Since \(\sqrt{x_{i}^{2}+t_{i}^{2}-2x_{i}t_{i}\cos\vartheta_{i}}\leq x_{i}+t_{i}\), if \(x_{i}+t_{i}\leq r\), then
\[\int_{\{\vartheta_{i}\in[0,\pi):\sqrt{x_{i}^{2}+t_{i}^{2}-2x_{i}t_{i}\cos\vartheta_{i}}\leq r\}}1\,d\sigma_{i}(\vartheta_{i})=1,\]
where \(d\sigma_{i}:=\frac{\Gamma(\alpha_{i}+1)}{\sqrt{\pi}\,\Gamma\left(\alpha_{i}+\frac{1}{2}\right)}\sin^{2\alpha_{i}}\vartheta_{i}\,d\vartheta_{i}\) is a probability measure on \([0,\pi]\).
If \(r<x_{i}+t_{i}\leq 2r\), using (2) and recalling that \(|x_{i}-t_{i}|\leq\frac{r}{2}\), we have for the \(i\)-th factor \(I_{i}:=T^{t_{i}}_{a_{i}}\chi_{[0,r)}(x_{i})\)
\[I_{i}=\frac{c(a_{i})}{(x_{i}t_{i})^{2\alpha_{i}}}\int_{|x_{i}-t_{i}|}^{r} \left[(z_{i}^{2}-(x_{i}-t_{i})^{2})((x_{i}+t_{i})^{2}-z_{i}^{2})\right]^{ \alpha_{i}-\frac{1}{2}}z_{i}dz_{i}\]
\[\geq c(a_{i})\frac{1}{(x_{i}+t_{i})^{4\alpha_{i}}}\int_{\frac{5}{8}r}^{\frac{7}{8}r}(\cdot)\,dz_{i}=c\frac{1}{r^{4\alpha_{i}}}\,r^{4\alpha_{i}-2}\cdot r\cdot r\geq c(a_{i}).\]
If \(x_{i}+t_{i}>2r\), then \(x_{i}\sim t_{i}\) (\(f\sim g\) if there are positive constants \(A\) and \(B\) such that \(Af<g<Bf\)) and we have
\[I_{i}\geq c\int_{\frac{5}{8}r}^{r}r^{2\alpha_{i}-1}\frac{(x_{i}+t_{i})^{2 \alpha_{i}-1}}{(x_{i}t_{i})^{2\alpha_{i}}}z_{i}dz_{i}=c\left(\frac{r}{x_{i}} \right)^{2\alpha_{i}+1}.\]
**Remark 1**.: For any \(c\geq 2\sqrt{n}\)
\[T^{t}\chi_{B_{+}(0,r)}(x)\sim T^{t}\chi_{B_{+}(0,cr)}(x),\]
for all \(x\in\mathbb{R}^{n}_{+}\), \(t\in B_{+}(x,r)\).
Indeed, in view of (2) if \(H\subset S\subset\mathbb{R}^{n}_{+}\), then \(T^{t}\chi_{H}(x)\leq T^{t}\chi_{S}(x)\) for any \(x,t\). Together with Lemma 1 it implies that \(c_{1}T^{t}\chi_{B_{+}(0,r)}(x)\leq T^{t}\chi_{B_{+}(0,cr)}(x)\leq c_{2}T^{t} \chi_{B_{+}(0,r)}(x)\).
### Radially decreasing kernels and B-\(p\) capacity
**Definition 1**.: _Let \(g\) be a non-negative lower semi-continuous, non-increasing function on \(\mathbb{R}_{+}\) for which_
\[\int_{0}^{1}g(t)t^{n+|a|-1}dt<\infty. \tag{8}\]
_Then \(\kappa:=g(|x|)\) is a radially decreasing kernel on \(\mathbb{R}^{n}\)._
The B-\(p\) capacity with respect to \(\kappa\) is as follows.
**Definition 2**.: _Let \(E\subset\mathbb{R}_{+}^{n}\), \(1\leq p<\infty\)._
\[C_{p,\kappa}(E):=\inf\{\|f\|_{p,a}^{p}:f\in L_{a}^{p+},\ \kappa*_{a}f(x)\geq 1,\ \forall x \in E\}.\]
**Remark 2**.: (1) Definition 1 is a special case of [2, Definition 2.3.3]. Thus all the standard properties proved in [2, Chapter 2.3] are valid.
(2) Notice that by the definition of Bessel translation if \(\kappa\) is a radially decreasing kernel, then \(T^{t}\kappa(x)\leq g(|x-t|)\). Thus if \(f\geq 0\), \(\kappa*_{a}f(x)\leq\kappa*h(x)\), where \(h(x)=f(x)x^{a}\chi_{\mathbb{R}_{+}^{n}}(x)\) and \(*\) stands for the standard convolution.
(3) Let \(K\subset\mathbb{R}_{+}^{n}\) be compact, \(1<p<\infty\). An equivalent form of Definition 2 is
\[C_{p,\kappa}^{\frac{1}{p}}(K)=\sup\{\mu_{a}(K):\mu\in\mathcal{M}(K),\ \|\kappa*_{a}\mu\|_{p^{\prime},a}\leq 1\},\]
where \(\mathcal{M}(K)\) is the set of (positive) measures on \(K\), see [2, Theorem 2.5.1].
(4) As usual, the definitions above can be extended to any subsets of \(\mathbb{R}_{+}^{n}\) as it follows. If \(O\subset\mathbb{R}_{+}^{n}\) is open, then \(C_{p,\kappa}(O):=\sup\{C_{p,\kappa}(K):K\subset O,\ K\ \mbox{is compact}\}\) and if \(E\subset\mathbb{R}_{+}^{n}\) is arbitrary, then \(C_{p,\kappa}(E):=\inf\{C_{p,\kappa}(O):E\subset O,\ O\ \mbox{is open}\}\).
(5) \(C_{p,\kappa}\) is monotone and \(\sigma\)-subadditive (cf. [2, Propositions 2.3.4 and 2.3.6]).
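The dual form in (3) of Remark 2 also yields a simple numerical lower bound on the capacity of a point: plug in the trial measure \(\mu=\delta_{y}\). The sketch below (ours; one-dimensional, \(n=1\), \(|a|=2\alpha+1\), reusing `bessel_translation` from the sketch in Section 2, and truncating the norm integral at a finite radius) illustrates this.

```python
import numpy as np

def capacity_lower_bound(g, y, alpha, p, rmax=30.0, m=3000):
    """Lower bound on C_{p,kappa}({y})^{1/p} via the dual form with mu = delta_y:
    mu_a({y}) / || kappa *_a delta_y ||_{p',a}, where kappa = g(|.|)."""
    pprime = p / (p - 1)
    xs = (np.arange(m) + 0.5) * rmax / m
    Tyk = np.array([bessel_translation(g, x, y, alpha) for x in xs])
    w = y ** (2 * alpha + 1)                  # mu_a({y}) = y^a
    norm = (np.sum((w * Tyk) ** pprime * xs ** (2 * alpha + 1))
            * rmax / m) ** (1 / pprime)
    return w / norm

print(capacity_lower_bound(lambda r: np.exp(-r), y=1.0, alpha=0.5, p=2.0))
```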
**Proposition 1**.: _Let \(1<p<\infty\). If \(\kappa\) is a radially decreasing kernel, then_
(1) if \(\|\kappa\|_{p^{\prime},a}<\infty\), then \(C_{p,\kappa}(\{y\})>0\) for all \(y\in\mbox{int}\mathbb{R}_{+}^{n}\),
(2) if \(\int_{\mathbb{R}_{+}^{n}\setminus B(0,1)}\kappa^{p^{\prime}}d\lambda_{a}=\infty\), then \(C_{p,\kappa}(E)=0\) for all \(E\subset\mathbb{R}_{+}^{n}\),
(3) if \(\int_{\mathbb{R}_{+}^{n}\setminus B(0,1)}\kappa^{p^{\prime}}d\lambda_{a}<\infty\), \(E\) is measurable and \(C_{p,\kappa}(E)=0\), then \(\lambda_{a}(E)=0\).
Proof.: (1) Let \(\delta_{y}\) be the Dirac measure concentrated at \(y\). According to Remark 2 and (5),
\[C_{p,\kappa}^{\frac{1}{p}}(\{y\})=\sup\left\{\frac{\mu_{a}(\{y\})}{\|\kappa*_{a}\mu\|_{p^{\prime},a}}:\mu\in\mathcal{M}(\{y\})\right\}\geq\frac{y^{a}}{\|\kappa*_{a}\delta_{y}\|_{p^{\prime},a}}\geq\frac{1}{\|\kappa\|_{p^{\prime},a}}>0.\]
(2) It is enough to show that \(C_{p,\kappa}(B_{+}(0,r))=0\) for all \(r>0\). Let \(\mu\in\mathcal{M}(B_{+}(0,r))\). In view of (1), \(T^{t}\kappa(x)\geq g(|x+t|)\), thus
\[\|\kappa*_{a}\mu\|_{p^{\prime},a}\geq\mu(B_{+}(0,r))\left(\int_{\mathbb{R}_{ +}^{n}}g(r+|t|)^{p^{\prime}}t^{a}dt\right)^{\frac{1}{p^{\prime}}}\]
\[\geq c\left(\int_{\mathbb{R}_{+}^{n}\setminus B(0,2r)}g(|t|)^{p^{\prime}}t^{a} dt\right)^{\frac{1}{p^{\prime}}}=\infty.\]
The last inequality is equivalent to the assumption, and according to Remark 2, it proves the statement.
(3) It is enough to show that \(\lambda_{a}(E\cap B_{+}(0,r))=0\) for all \(r>0\). Let \(F=E\cap B_{+}(0,r)\) and \(f\in L^{p+}_{a}\) such that \(\kappa*_{a}f(x)\geq 1\) on \(F\). Then by Fubini's theorem
\[\lambda_{a}(F)\leq\int_{F}\kappa*_{a}f(x)x^{a}dx=\int_{\mathbb{R}^{n}_{+}} \chi_{F}(x)\kappa*_{a}f(x)x^{a}dx=\int_{\mathbb{R}^{n}_{+}}\kappa*_{a}\chi_{F}( t)f(t)t^{a}dt\]
\[\leq\|f\|_{p,a}\|\kappa*_{a}\chi_{F}\|_{p^{\prime},a}\leq\|f\|_{p,a}\|\kappa*_{ a}\chi_{B_{+}(0,r)}\|_{p^{\prime},a}.\]
We estimate the second factor.
\[\|\kappa*_{a}\chi_{B_{+}(0,r)}\|_{p^{\prime},a}\]
\[\leq\left(\int_{B_{+}(0,2r)}(\kappa*_{a}\chi_{B_{+}(0,r)}(t))^{p^{\prime}}t^{ a}dt\right)^{\frac{1}{p^{\prime}}}+\left(\int_{\mathbb{R}^{n}_{+}\setminus B_{+} (0,2r)}(\cdot)\right)^{\frac{1}{p^{\prime}}}=I+II.\]
If \(|t|>2r\), \(\kappa(x)\leq\kappa\left(\frac{t}{2}\right)\) while by Lemma 1\(T^{t}\chi_{B_{+}(0,r)}(x)\leq cr^{|a|}\frac{1}{x^{a}}\) on \(|x-t|<r\). Thus by the assumption we have
\[II=\left(\int_{\mathbb{R}^{n}_{+}\setminus B_{+}(0,2r)}\left(\int_{\mathbb{R }^{n}_{+}}T^{t}\chi_{B_{+}(0,r)}(x)\kappa(x)x^{a}dx\right)^{p^{\prime}}t^{a} dt\right)^{\frac{1}{p^{\prime}}}\]
\[\leq c\left(\int_{\mathbb{R}^{n}_{+}\setminus B_{+}(0,2r)}\kappa^{p^{\prime}}\left(\frac{t}{2}\right)t^{a}dt\right)^{\frac{1}{p^{\prime}}}<c.\]
In the first integral \(|t|<2r\) and \(|x-t|<r\), so the convolution can be estimated as
\[\kappa*_{a}\chi_{B_{+}(0,r)}(t)\leq\int_{B_{+}(0,3r)}g(|x|)\,x^{a}dx\leq c,\]
where in spherical coordinates the last inequality is just (8). Thus, \(I\) is also bounded by a constant. Taking infimum over appropriate functions \(f\), we have that \(\lambda_{a}(F)\leq cC_{p,\kappa}(F)\), which implies the statement.
**Remark 3**.: Of course, the nonlinear potential with Bessel convolution is \(V^{\mu}_{\kappa,p}=\kappa*_{a}(\kappa*_{a}\mu)^{p^{\prime}-1}\). Subsequently, we focus on capacity.
### Bessel and Riesz kernels
The modified Bessel function of the second kind, \(K_{\alpha}\) is defined as follows.
\[i^{-\alpha}J_{\alpha}(ix)=\sum_{k=0}^{\infty}\frac{1}{k!\Gamma(k+\alpha+1)} \left(\frac{x}{2}\right)^{2k+\alpha},\]
where \(J_{\alpha}\) is the Bessel function, and
\[K_{\alpha}(x)=\frac{\pi}{2}\frac{i^{\alpha}J_{-\alpha}(ix)-i^{-\alpha}J_{ \alpha}(ix)}{\sin\alpha\pi}.\]
Considering \(r>0\), around zero
\[K_{\alpha}(r)\sim\left\{\begin{array}{ll}-\ln\frac{r}{2}-c,&\mbox{if }\alpha=0\\ C(\alpha)r^{-\alpha},&\mbox{if }\alpha>0,\end{array}\right. \tag{9}\]
and around infinity
\[K_{\alpha}(r)\sim\frac{c}{\sqrt{r}}e^{-r}. \tag{10}\]
The Bessel kernel is
\[G_{a,\nu}(x):=\frac{2^{\frac{n-|a|-\nu}{2}+1}}{\Gamma\left(\frac{\nu}{2}\right)\prod_{i=1}^{n}\Gamma(\alpha_{i}+1)}\frac{K_{\frac{n+|a|-\nu}{2}}(|x|)}{|x|^{\frac{n+|a|-\nu}{2}}}. \tag{11}\]
Below we also need the Riesz kernel:
\[I_{\beta}(x)=\frac{c(\beta)}{|x|^{n-\beta}},\ \ x\in\mathbb{R}^{n}. \tag{12}\]
In the last section we use Bessel kernel rather than the Riesz kernel, because its behavior at infinity allows wider function classes. On the other hand, around the origin the Riesz kernel, \(I_{\nu-|a|}(x)\), behaves similarly to the Bessel kernel and is simpler, thus it proved to be a useful tool for computations.
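The small-\(r\) comparability just described, cf. (9), can be checked directly. The sketch below (ours) evaluates the two radial profiles up to their normalizing constants, which play no role in the comparison, using `scipy.special.kv` for \(K_{\alpha}\).

```python
import numpy as np
from scipy.special import kv

def G_profile(r, n, abs_a, nu):
    """Radial profile of the Bessel kernel (11), up to its constant:
    K_{(n+|a|-nu)/2}(r) / r^{(n+|a|-nu)/2}."""
    m = (n + abs_a - nu) / 2.0
    return kv(m, r) / r ** m

def I_profile(r, n, abs_a, nu):
    """Riesz kernel I_{nu-|a|} of (12), up to c(beta): exponent n + |a| - nu."""
    return r ** (nu - abs_a - n)

r = np.array([1e-3, 1e-2, 1e-1])
ratio = G_profile(r, n=1, abs_a=2.0, nu=1.0) / I_profile(r, n=1, abs_a=2.0, nu=1.0)
print(ratio)   # roughly constant near the origin, as (9) predicts
```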
Below we examine the B-\(p\) capacity defined by the generalized convolution with the Bessel kernel: \(C_{p,a,\nu}(E):=C_{p,G_{a,\nu}}(E)\). In view of (9), (10) and (11), \(G_{a,\nu}(x)\in L_{a}^{1}\) if and only if \(\nu>0\). On the other hand, according to Proposition 1, the B-\(p\) capacity is nontrivial if and only if \(1<p<\frac{n+|a|}{\nu}\) or \(1=p=\frac{n+|a|}{\nu}\). Thus subsequently we investigate \(C_{p,a,\nu}\) for \(1<p<\infty\), that is
\[0<\nu<\frac{n+|a|}{p}. \tag{13}\]
## 3. The Laplace-Bessel operator
B-elliptic equations have been investigated by several authors. For instance, fundamental solutions are given, see e.g. [12] and [13]; harmonic analysis associated with the Bessel operator is examined, see e.g. [16]; and mean-value theorems are proved, see [19]. Here we give a simple application of B-\(p\) capacity.
We begin this section by introducing some additional notation. According to (13) and (4), if \(g\in L_{a}^{p}\), then \(G_{a,\nu}*_{a}g\in L_{a}^{p}\); moreover, by [18, Lemma 4.3 (3)]
\[\|G_{a,\nu}*_{a}g\|_{p,a}\leq\|g\|_{p,a}. \tag{14}\]
Thus we define the next Banach space.
\[L_{a,\nu}^{p}:=L_{a,\nu}^{p}(\mathbb{R}_{+}^{n})=\{f:f=G_{a,\nu}*_{a}g;\ g\in L _{a}^{p}\},\ \ \ \|f\|_{p,a,\nu}:=\|g\|_{p,a}.\]
Let
\[B_{\alpha,x}:=\frac{\partial^{2}}{\partial x^{2}}+\frac{2\alpha+1}{x}\frac{ \partial}{\partial x}\]
be the Bessel operator. The Laplace-Bessel operator is defined as
\[\Delta_{a}=\sum_{i=1}^{n}B_{\alpha_{i},x_{i}}.\]
With this notation we define the Sobolev space \(W_{p,a}^{m}\) with \(m\in\mathbb{N}\) as it follows.
\[W_{p,a}^{m}:=\{f\in L_{a}^{p}:\Delta_{a}^{k}f\in L_{a}^{p},\ k=1,\ldots,m\}, \ \ \ \|f\|_{W_{p,a}^{m}}=\sum_{k=0}^{m}\|\Delta_{a}^{k}f\|_{p,a}.\]
**Notation**.: We need the "even" functions from the Schwartz class in \(\mathbb{R}^{n}_{+}\).
\[\mathcal{S}_{e}:=\left\{f\in C^{\infty}(\mathbb{R}^{n}_{+}):\left.\frac{\partial^ {2k+1}f}{\partial x_{i}^{2k+1}}\right|_{x_{i}=0}=0,\ k\in\mathbb{N};\right.\]
\[\left.\sup_{x\in\mathbb{R}^{n}_{+}}\left|x^{\alpha}D^{\beta}f(x)\right|<\infty \ \forall\alpha,\beta\in\mathbb{N}^{n}\right\}.\]
The next lemma describes the relation of Bessel potential and Sobolev spaces above.
**Lemma 2**.: _Let \(m\) be a positive integer. Then_
\[W^{m}_{p,a}=L^{p}_{a,2m}.\]
Proof.: Let us notice first that if we define \(\|f\|_{\tilde{W}^{m}_{p,a}}=\sum_{k=0}^{m}\|(I-\Delta_{a})^{k}f\|_{p,a}\), then \(\|f\|_{\tilde{W}^{m}_{p,a}}\sim\|f\|_{W^{m}_{p,a}}\). According to [18, Lemma 4.3] if \(g\in L^{p}_{a}\), \(1\leq p\leq\infty\) and \(k\in\mathbb{N}\),
\[(I-\Delta_{a})^{k}(G_{a,\nu}*_{a}g)=G_{a,\nu-2k}*_{a}g,\ \ \text{ and }G_{a,0}*_{a}g=g. \tag{15}\]
Comparing this with (14), we have that if \(f\in L^{p}_{a,2m}\) and \(k\leq m\), then
\[\|(I-\Delta_{a})^{k}f\|_{p,a}=\|G_{a,2m-2k}*_{a}g\|_{p,a}\leq\|g\|_{p,a}=\|f\|_{p,a,2m}.\]
On the other hand, taking into consideration that \(\mathcal{S}_{e}\) is a dense subset of \(W^{m}_{p,a}\), let \(f\in\mathcal{S}_{e}\). According to [18, Theorem 4.5], for \(1\leq p\leq\infty\) and \(k\in\mathbb{N}\),
\[G_{a,\nu+2k}*_{a}(I-\Delta_{a})^{k}f=G_{a,\nu}*_{a}f. \tag{16}\]
Thus by (15) and (16)
\[f=G_{a,0}*_{a}f=G_{a,2m}*_{a}(I-\Delta_{a})^{m}f=G_{a,2m}*_{a}g,\]
where \(g=(I-\Delta_{a})^{m}f\in L^{p}_{a}\). So \(f\in L^{p}_{a,2m}\), and
\[\|f\|_{p,a,2m}\leq\|f\|_{W^{m}_{p,a}}.\]
**Definition 3**.: _Let \(K\subset\mathbb{R}^{n}_{+}\) be compact. \(\mathcal{S}\) is the Schwartz class restricted to \(\mathbb{R}^{n}_{+}\)._
\(N_{p,a,\nu}(K):=\inf\{\|f\|_{p,a,\nu}^{p}:f=G_{a,\nu}*_{a}g\in\mathcal{S},\ f\equiv 1\) _in a neighborhood of \(K\}\)._
**Remark 4**.: (1) If \(p=2\) with standard convolution, \(N\) is the spectral measure defined by Deny, see [6].
(2) Of course, \(N_{p,a,\nu}\) can be extended as above, and \(C_{p,a,\nu}(E)\leq N_{p,a,\nu}(E)\).
**Notation**.: Let us introduce the inner product for measurable functions
\[\langle f,g\rangle_{a}:=\int_{\mathbb{R}^{n}_{+}}fgd\lambda_{a}.\]
We use the same notation for the action of a distribution.
**Theorem 1**.: _Let \(1<p<\infty\), \(K\subset\mathrm{int}\mathbb{R}^{n}_{+}\) compact, and \(L=\sum_{k=0}^{m}a_{k}(I-\Delta_{a})^{k}\), \(a_{k}\in\mathbb{R}\), \(k=1,\ldots,m\) be defined in a bounded open neighborhood \(O\) of \(K\) such that \(\overline{O}\subset\mathrm{int}\mathbb{R}^{n}_{+}\). Let \(u\in L^{p}_{a}(O\backslash K)\) be a solution to \(Lu=0\) in \(O\backslash K\). If \(N_{p^{\prime},a,2m}(K)=0\), \(u\) can be extended to \(\tilde{u}\in L^{p}_{a}(O)\) such that \(L\tilde{u}=0\) in \(O\) in the weak sense._
**Remark 5**.: (1) Let \(1<p<\infty\), \(\nu>0\). If \(N_{p,a,\nu}(K)=0\), then \(\lambda(K)=0\).
Indeed, let \(\varepsilon>0\) be arbitrary and \(f=G_{a,\nu}*_{a}g\) such that \(\|g\|_{p,a}\leq\varepsilon\), \(f\equiv 1\) on \(K\). By (14)
\[\varepsilon^{p}\geq\|G_{a,\nu}*_{a}g\|_{p,a}^{p}\geq\int_{K}|G_{a,\nu}*_{a}g|^{p}d\lambda_{a}\geq\lambda_{a}(K).\]
Thus \(\lambda_{a}(K)=0\) and so \(\lambda(K)=0\).
(2) If \(f\in C_{0}^{\infty}(\mathrm{int}\mathbb{R}_{+}^{n})\) and \(g\in C_{0}^{\infty}(\mathrm{int}\mathbb{R}_{+}^{n})^{\prime}\), that is in the dual space, then
\[\langle Lf,g\rangle_{a}=\langle f,Lg\rangle_{a}. \tag{17}\]
In one dimension for \(L=B_{\alpha}\) it is [16, (2.4)]. Similarly to this case, if \(f\in C_{0}^{\infty}(\mathrm{int}\mathbb{R}_{+}^{n})\) and \(g\) is smooth enough, then integration by parts implies the result. For general elements of the dual space we extend \(Lg\) by the formula above, that is, \((Lg)f:=\langle Lf,g\rangle_{a}\).
Proof.: (of Theorem 1) Let \(O\) be a bounded open neighborhood of \(K\) in \(\mathbb{R}_{+}^{n}\) and \(u\in L_{a}^{p}(O\setminus K)\) for which \(Lu=0\) in \(O\setminus K\). Since \(N_{p^{\prime},a,2m}(K)=0\), for all \(\varepsilon>0\) there is a \(\varphi=G_{a,2m}*_{a}f\in\mathcal{S}\) such that \(\varphi\equiv 1\) on a neighborhood \(U\subset O\) of \(K\) and \(\|f\|_{p^{\prime},a}\leq\varepsilon\). Let \(g\in C_{0}^{\infty}(O)\). Then \((1-\varphi)g\in C_{0}^{\infty}(O\setminus K)\). In view of (1) of Remark 5, \(u\) is a.e. defined, so it can be handled as a distribution. Since \((1-\varphi)g\in C_{0}^{\infty}(O\setminus K)\), by the assumption
\[\langle u,L(1-\varphi)g\rangle_{a}=0.\]
This implies that
\[|\langle u,Lg\rangle_{a}|=|\langle u,L\varphi g\rangle_{a}|\leq\|u\|_{p,a,O} \|L\varphi g\|_{p^{\prime},a}\leq c\|u\|_{p,a,O}\|\varphi g\|_{W_{p^{\prime},a }^{m}},\]
where \(c=c(L)\) depends on the coefficients of \(L\).
Applying Lemma 2 and the inversion of Bessel potential (for formulae see e.g. [7, Theorem 1]) we have
\[|\langle u,Lg\rangle_{a}|\leq c\|u\|_{p,a,O}\|\varphi g\|_{p^{\prime},a,2m}\leq c\|g\|_{\infty}\|f\|_{p^{\prime},a},\]
where \(c\) depends on \(u\) and \(L\). Since \(\varepsilon\) was arbitrary, \(\langle u,Lg\rangle_{a}=0\) for all \(g\in C_{0}^{\infty}(O)\), so in view of (17), \(u\) is a weak solution on \(O\).
The fundamental solution for the Laplace-Bessel operator, that is
\[\Delta_{a}E=\delta_{a},\ \ \text{where}\ \langle\delta_{a},\varphi\rangle_{a}= \varphi(0),\ \ \varphi\in\mathcal{S}_{e},\]
is
\[E(x)=\left\{\begin{array}{ll}c(n,a)\ln|x|,&n+|a|=2,\\ c(n,a)|x|^{2-n-|a|},&n+|a|>2,\end{array}\right. \tag{18}\]
see e.g. [20, Theorem 93 (page 324)].
**Corollary 1**.: _With the notation above, let \(L=\Delta_{a}\), that is \(\Delta_{a}u=0\) on \(O\setminus K\), and \(u\in L_{a}^{p}(O\setminus K)\). Let \(2\leq\nu<\frac{n+|a|}{p^{\prime}}\). Then \(u\) can be extended to \(\tilde{u}\in L_{a}^{p}(O)\) such that \(L\tilde{u}=0\) in \(O\) in weak sense if and only if \(C_{p^{\prime},a,\nu}(K)=0\)._
Proof.: If \(C_{p^{\prime},a,\nu}(K)>0\), there is a nonzero measure \(\mu\in\mathcal{M}(K)\) such that \(G_{a,\nu}*_{a}\mu\in L_{a}^{p}(\mathbb{R}_{+}^{n})\). In view of (18), \(E*_{a}\mu\in L_{a,loc}^{p}(\mathbb{R}_{+}^{n})\). Since \(E*_{a}\mu\) is a solution to \(\Delta_{a}u=0\) in \(O\setminus K\), \(K\) is not removable.
On the other hand, we prove that if \(C_{p^{\prime},a,\nu}(K)=0\), then \(N_{p^{\prime},a,2m}(K)=0\) too. According to Definition 2, for an \(\varepsilon>0\) there is a nonnegative function \(f\in L^{p^{\prime}}_{a}(\mathbb{R}_{+}^{n})\) such that \(G_{a,\nu}*_{a}f\geq 1\) on a neighborhood of \(K\) and \(\|f\|_{p^{\prime},a}^{p^{\prime}}\leq\varepsilon\).
Define a function \(h\in C^{\infty}(\mathbb{R}_{+})\), \(0\leq h\leq 1\), with \(h(t)=0\) for \(t\in\left[0,\frac{1}{2}\right]\) and \(h(t)=1\) if \(t\geq 1\). Taking into consideration that \(G_{a,\nu}*_{a}f\geq 0\), we get \(\varepsilon\geq\int_{\mathbb{R}^{n}_{+}}(G_{a,\nu}*_{a}f)^{p^{\prime}}x^{a}dx\geq\int_{G_{a,\nu}*_{a}f\geq\frac{1}{2}}\frac{1}{2^{p^{\prime}}}x^{a}dx\). Thus \(\int_{\mathbb{R}^{n}_{+}}(h(G_{a,\nu}*_{a}f))^{p^{\prime}}x^{a}dx\leq\int_{G_{a,\nu}*_{a}f\geq\frac{1}{2}}x^{a}dx\leq 2^{p^{\prime}}\varepsilon\).
Noticing that \(h(G_{a,\nu}*_{a}f)\) fulfils the requirements of Definition 3, since \(\varepsilon\) is arbitrary, the statement is proved.
## 4. Maximal measure and a Wolff type inequality
The Bessel maximal function was introduced and examined e.g. in [8], see also the references therein. The boundedness of the maximal operator in some Morrey spaces is studied and applied to prove a Hardy-Littlewood-Sobolev type theorem in [9]. The maximal measure presented below has proved useful in formulating a Wolff type inequality, which is the main tool of the next section. Wolff type inequalities can be applied in different situations, for instance in martingale theory, see [3], or in deducing trace inequalities and characterizing trace measures via Wolff's inequality, see e.g. [24], [25], [23] and the references therein.
Below we define the maximal measure with respect to Bessel convolution.
**Definition 4**.: \[M_{a}\mu(x):=\sup_{r>0}\frac{1}{\lambda_{a}(B_{+}(0,r))}\chi_{B_{+}(0,r)}*_{a} \mu(x).\]
Since
\[\lambda_{a}(B_{+}(0,r))=cr^{n+|a|},\]
we define the fractional maximal measure as
\[M_{a,d}\mu(x):=\sup_{r>0}\frac{\chi_{B_{+}(0,r)}*_{a}\mu(x)}{r^{n+|a|-d}}, \tag{19}\]
and the truncated one as
\[M_{a,d,b}\mu(x):=\sup_{0<r\leq b}\frac{\chi_{B_{+}(0,r)}*_{a}\mu(x)}{r^{n+|a|-d}}. \tag{20}\]
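For an atomic measure on the half line, the truncated fractional maximal measure (20) can be approximated by discretizing the supremum over \(r\). A minimal sketch (ours; \(n=1\), \(|a|=2\alpha+1\), reusing `bessel_translation` from the sketch in Section 2):

```python
import numpy as np

def frac_maximal(atoms, weights, x, alpha, d, b, nr=60):
    """Truncated fractional maximal measure (20) at x for mu = sum_j w_j delta_{y_j},
    using chi_{B_+(0,r)} *_a mu (x) = sum_j w_j y_j^{2 alpha + 1} T^{y_j} chi(x)."""
    n_plus_a = 2 * alpha + 2
    best = 0.0
    for r in np.geomspace(1e-3, b, nr):       # discretized sup over 0 < r <= b
        chi = lambda s, r=r: (s <= r).astype(float)
        conv = sum(w * y ** (2 * alpha + 1) * bessel_translation(chi, x, y, alpha)
                   for y, w in zip(atoms, weights))
        best = max(best, conv / r ** (n_plus_a - d))
    return best

print(frac_maximal([0.5, 1.5], [1.0, 2.0], x=1.0, alpha=0.5, d=1.0, b=1.0))
```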
**Lemma 3**.: \[I_{\beta}*_{a}\mu(x)=c\int_{0}^{\infty}\frac{\chi_{B_{+}(0,r)}*_{a}\mu(x)}{r^ {n-\beta}}\frac{dr}{r}.\]
\[I_{\beta}\chi_{B_{+}(0,\delta)}*_{a}\mu(x)=c\int_{0}^{\delta}\frac{\chi_{B_{+ }(0,r)}*_{a}\mu(x)}{r^{n-\beta}}\frac{dr}{r}+c\frac{\chi_{B_{+}(0,\delta)}*_{a }\mu(x)}{\delta^{n-\beta}}.\]
Proof.: Let \(d\Theta_{x,a}(z)=T^{x}\mu(z)z^{a}dz\). Changing the order of integration we get
\[\int_{0}^{\delta}\frac{\chi_{B_{+}(0,r)}*_{a}\mu(x)}{r^{n-\beta}}\frac{dr}{r} =\int_{0}^{\delta}\frac{1}{r^{n-\beta+1}}\int_{B_{+}(0,r)}1d\Theta_{x,a}(z)dr\]
\[=\int_{B_{+}(0,\delta)}\int_{|z|}^{\delta}\frac{1}{r^{n-\beta+1}}drd\Theta_{x,a}(z)=c\int_{B_{+}(0,\delta)}I_{\beta}(z)d\Theta_{x,a}(z)-c\int_{B_{+}(0, \delta)}\frac{1}{\delta^{n-\beta}}d\Theta_{x,a}(z)\]
\[=cI_{\beta}\chi_{B_{+}(0,\delta)}*_{a}\mu(x)-c\frac{\chi_{B_{+}(0,\delta)}*_{ a}\mu(x)}{\delta^{n-\beta}}.\]
**Notation.**
\[I_{\beta}^{\delta}*_{a}\mu(x):=c\int_{0}^{\delta}\frac{\chi_{B_{+}(0,r)}*_{a} \mu(x)}{r^{n-\beta}}\frac{dr}{r}.\]
**Theorem 2**.: _Let \(1\leq p<\infty\), \(0<\nu<n+|a|\), and let \(\delta>0\) be arbitrary. Then for every positive measure \(\mu\) there are constants \(c=c(n,a,\delta)\) (not necessarily the same at each occurrence) such that_
\[\|I_{\nu-|a|}*_{a}\mu\|_{p,a}\leq c\|M_{a,\nu}\mu\|_{p,a} \tag{21}\]
_and_
\[\|I_{\nu-|a|}^{\delta}*_{a}\mu\|_{p,a}\leq c\|M_{a,\nu,\delta}\mu\|_{p,a}. \tag{22}\]
**Notation.**
\[H_{s}^{\mu}:=\{x:I_{\nu-|a|}*_{a}\mu(x)>s\},\ \ K_{s}^{\mu}:=\{x:M_{a,\nu} \mu(x)>s\}.\]
\[{}^{1}H_{s}^{\mu}:=\{x:I_{\nu-|a|}^{1}*_{a}\mu(x)>s\},\ \ \ {}^{1}K_{s}^{\mu}:=\{x:M_{a,\nu,1} \mu(x)>s\}.\]
**Lemma 4**.: _There is a \(\varrho>1\) and a \(b>0\), such that for all \(s>0\) and \(\varepsilon\in(0,1]\),_
\[\lambda_{a}(H_{\varrho s}^{\mu})\leq b\varepsilon^{\frac{n+|a|}{n+|a|-\nu}} \lambda_{a}(H_{s}^{\mu})+\lambda_{a}(K_{\varepsilon s}^{\mu}). \tag{23}\]
_Similarly_
\[\lambda_{a}({}^{1}H_{\varrho s}^{\mu})\leq b\varepsilon^{\frac{n+|a|}{n+|a|- \nu}}\lambda_{a}({}^{1}H_{s}^{\mu})+\lambda_{a}({}^{1}K_{\varepsilon s}^{\mu}). \tag{24}\]
Proof.: By lower semicontinuity we can take Whitney's decomposition of \(H_{s}^{\mu}\), i.e. \(H_{s}^{\mu}=\cup_{i=1}^{\infty}Q_{i}\), where the \(Q_{i}\)-s are dyadic cubes, \(\mathrm{int}Q_{i}\cap\mathrm{int}Q_{j}=\emptyset\) if \(i\neq j\) and \(\mathrm{diam}Q_{i}\leq\mathrm{dist}(Q_{i},(H_{s}^{\mu})^{c})\leq 4\mathrm{diam}Q_{i}\). (Dyadic cubes are cubes with side length \(2^{-k}\), \(k\in\mathbb{Z}\), whose vertices belong to the lattice \(\{m2^{-k}:m\in\mathbb{Z}^{n}\}\). For Whitney's decomposition see [22, page 16, Theorem 3].) In addition, to prove (24), if \(\mathrm{diam}Q_{i}\geq\frac{1}{8}\), then we decompose it into subcubes whose diameter is between \(\frac{1}{16}\) and \(\frac{1}{8}\), and we consider this new sequence of cubes.
Let \(Q\) be an element of this decomposition. Let \(x\in Q\) be arbitrary, denote the center of \(Q\) by \(x_{c}\) and let \(d:=\mathrm{diam}Q\). Let \(G:=B(x_{c},6d)\), \(B=B(x,8d)\), that is \(Q\subset G\subset B\). Let \(\mu=\mu_{1}+\mu_{2}\), where \(\mu_{1}=\mu|_{G}\).
At first we deal with \(I_{\nu-|a|}*_{a}\mu_{2}\). To this end we estimate \(T^{t}\chi_{B_{+}(0,r)}(x)\). We may assume that \(r>\frac{11}{2}d\), otherwise \(\mathrm{supp}\mu_{2}\cap B_{+}(x,r)=\emptyset\). Let \(x_{1}\in(H_{s}^{\mu})^{c}\) such that \(\mathrm{dist}(x_{1},Q)<4d\). Then by (6) if \(t\in B_{+}(x,r)\),
\[T^{t}\chi_{B_{+}(0,r)}(x)\leq c\prod_{i=1}^{n}\min\left\{1,\left(\frac{r}{x_{ i}}\right)^{a_{i}}\right\}\leq c\prod_{i=1}^{n}\min\left\{1,\left(\frac{r}{x_{1,i}} \right)^{a_{i}}\right\}=:P.\]
Indeed, it is obvious if \(x_{1,i}\leq x_{i}\). If \(x_{1,i},x_{i}<r\), then the minimum is \(1\) in both cases. If \(x_{i}<r\leq x_{1,i}\), then \(\min\left\{1,\frac{r}{x_{i}}\right\}=1\), thus we have to show that \(1\leq c\frac{r}{x_{1,i}}\). Since \(x_{1,i}<x_{i}+5d\), \(\frac{r}{x_{1,i}}>\frac{r}{x_{i}+5d}>\frac{r}{r+5d}\geq\frac{\frac{11}{2}d}{\frac{11}{2}d+5d}=\frac{11}{21}\).
If \(\frac{11}{2}d<r<x_{i}<x_{1,i}\), then, since \(5d<\frac{10}{11}r\), \(\frac{r}{x_{1,i}}>\frac{r}{x_{i}+\frac{10}{11}r}>\frac{11}{21}\cdot\frac{r}{x_{i}}\).
In view of (7) if \(t\in T_{+}(x_{1},2r)\)
\[P\leq cT^{t}\chi_{[0,4r)^{n}}(x_{1}).\]
Thus
\[\chi_{B_{+}(x,r)}*_{a}\mu_{2}\leq c\chi_{[0,4r)^{n}}*_{a}\mu_{2}(x_{1})\leq c \chi_{B_{+}(0,4\sqrt{n}r)}*_{a}\mu_{2}(x_{1}).\]
Recalling that \(x_{1}\in(H_{s}^{\mu})^{c}\), we have
\[I_{\nu-|a|}*_{a}\mu_{2}(x)\leq c\int_{\frac{11}{2}d}^{\infty}\frac{\chi_{B_{+}(0,4\sqrt{n}r)}*_{a}\mu_{2}(x_{1})}{(4\sqrt{n}r)^{n+|a|-\nu}}\frac{dr}{r}=cI_{\nu-|a|}*_{a}\mu_{2}(x_{1})\leq cs.\]
Now we choose \(\varrho\) so that \(I_{\nu-|a|}\ast_{a}\mu_{2}(x)\leq\frac{\varrho s}{2}\), which implies that
\[H^{\mu}_{\varrho s}\cap Q\subset H^{\mu_{1}}_{\frac{\varrho s}{2}}\cap Q. \tag{25}\]
If the diameter of \(Q\) was originally less than \(\frac{1}{8}\), then the whole construction is contained in a ball of radius less than one, so the same chain of ideas leads to
\[{}^{1}H^{\mu}_{\varrho s}\cap Q\subset\ ^{1}H^{\mu_{1}}_{\frac{\varrho s}{2}} \cap Q. \tag{26}\]
If there is no \(x_{1}\in({}^{1}H^{\mu}_{s})^{c}\) such that \(\mathrm{dist}(Q,x_{1})\leq 4d\), then \(\mathrm{diam}Q>\frac{1}{16}\). Let \(x_{0}\in Q\cap({}^{1}K_{\varepsilon s})^{c}\). Then, recalling that \(r>\frac{11}{2}d\), we have
\[I^{1}_{\nu-|a|}\ast_{a}\mu_{2}(x_{0})=c\int_{\frac{11}{32}}^{1}\frac{\chi_{B_ {+}(0,r)}\ast_{a}\mu_{2}(x_{0})}{r^{n+|a|-\nu+1}}dr\]
\[\leq\frac{32}{11}\int_{0}^{1}\frac{\chi_{B_{+}(0,r)}\ast_{a}\mu_{2}(x_{0})}{r^{n+|a|-\nu}}dr\leq cM_{a,\nu,1}\mu(x_{0})\leq c\varepsilon s.\]
Thus if \(Q\cap({}^{1}K_{\varepsilon s})^{c}\neq\emptyset\), we can choose \(\varrho\) again so that (26) is satisfied.
Let \(x_{0}\in Q\cap(K_{\varepsilon s})^{c}\) again. According to [8, Theorem 2 (c)]
\[\lambda_{a}\left(Q\cap H^{\mu_{1}}_{\frac{\varrho s}{2}}\right)\leq c\left( \frac{1}{\varrho s}\int_{\mathbb{R}^{n}_{+}}1d\mu_{1,a}(t)\right)^{\frac{n+|a |}{n+|a|-\nu}}=c\left(\frac{1}{\varrho s}\int_{G}1d\mu_{a}(t)\right)^{\frac{n+| a|}{n+|a|-\nu}}\]
\[\leq c\left(\frac{1}{\varrho s}\int_{B}1d\mu_{a}(t)\right)^{\frac{n+|a|}{n+|a |-\nu}}=(\ast).\]
In view of Lemma 1, \(B=\mathrm{supp}\,T^{t}\chi_{B_{+}(0,8d)}(x_{0})\subset T_{+}(x_{0},8d)\). Thus by (7)
\[(\ast) \leq c\left(\frac{1}{\varrho s}\prod_{i=1}^{n}\max\left\{1,\left(\frac{x_{0,i}}{16d}\right)^{a_{i}}\right\}\int_{\mathbb{R}^{n}_{+}}T^{t}\chi_{[0,16d)^{n}}(x_{0})d\mu_{a}(t)\right)^{\frac{n+|a|}{n+|a|-\nu}} \tag{27}\] \[\leq c\left(\frac{M_{a,\nu}\mu(x_{0})}{\varrho s}\right)^{\frac{n+|a|}{n+|a|-\nu}}\left(\prod_{i=1}^{n}\max\left\{1,\left(\frac{x_{0,i}}{16d}\right)^{a_{i}}\right\}\right)^{\frac{n+|a|}{n+|a|-\nu}}d^{n+|a|}.\]
Taking into consideration that \(\lambda_{a}(Q)\sim\prod_{i=1}^{n}x_{c,i}^{a_{i}}\left(\frac{d}{\sqrt{n}} \right)^{n}\sim\prod_{i=1}^{n}\left(\frac{x_{c,i}}{d}\right)^{a_{i}}d^{n+|a|}\) and recalling that \(|x_{0,i}-x_{c,i}|<d\), we have
\[\left(\prod_{i=1}^{n}\max\left\{1,\left(\frac{x_{0,i}}{16d}\right)^{a_{i}}\right\}\right)^{\frac{n+|a|}{n+|a|-\nu}}d^{n+|a|}\leq c\lambda_{a}(Q)\left(\prod_{i=1}^{n}\left(\frac{x_{c,i}}{d}\right)^{a_{i}}\right)^{\frac{\nu}{n+|a|-\nu}}\leq c\lambda_{a}(Q).\]
Finally as \(x_{0}\in(K_{\varepsilon s})^{c}\),
\[\lambda_{a}\left(Q\cap H^{\mu_{1}}_{\frac{\varrho s}{2}}\right)\leq b\varepsilon ^{\frac{n+|a|}{n+|a|-\nu}}\lambda_{a}(Q). \tag{28}\]
Since \(\mathrm{diam}Q\leq\frac{1}{8}\), similarly we have
\[\lambda_{a}\left(Q\cap\ ^{1}H^{\mu_{1}}_{\frac{\varrho s}{2}}\right)\leq b \varepsilon^{\frac{n+|a|}{n+|a|-\nu}}\lambda_{a}(Q), \tag{29}\]
cf. (27).
That is, if \(Q\cap(K_{\varepsilon s})^{c}\neq\emptyset\) or \(Q\cap({}^{1}K_{\varepsilon s})^{c}\neq\emptyset\), then (28) or (29), respectively, is fulfilled; otherwise \(Q\subset K^{\mu}_{\varepsilon s}\).
Recalling (25) or (26) and adding over all \(Q\in\{Q_{i}\}\), we obtain the required result.
Proof.: (of Theorem 2) (21) and (22) can be derived in the same way from (23) and (24). Let us prove the second one, say. Let \(\delta=1\). Integrating (24) and changing the variables, we have
\[\frac{1}{\varrho^{p}}\int_{0}^{\varrho R}\lambda_{a}({}^{1}H_{u}^{\mu})u^{p-1}du\leq b\varepsilon^{\frac{n+|a|}{n+|a|-\nu}}\int_{0}^{R}\lambda_{a}({}^{1}H_{u}^{\mu})u^{p-1}du+\frac{1}{\varepsilon^{p}}\int_{0}^{\varepsilon R}\lambda_{a}({}^{1}K_{u}^{\mu})u^{p-1}du.\]
Supposing that \(\text{supp}\mu\) is bounded, all the integrals are finite. We choose \(\varepsilon\) small enough so that \(b\varepsilon^{\frac{n+|a|}{n+|a|-\nu}}\leq\frac{1}{2\varrho^{p}}\). Then we have
\[\frac{1}{\varrho^{p}}\int_{0}^{\varrho R}\lambda_{a}({}^{1}H_{u}^{\mu})u^{p-1 }du\leq\frac{2}{\varepsilon^{p}}\int_{0}^{\varepsilon R}\lambda_{a}({}^{1}K_{ u}^{\mu})u^{p-1}du.\]
Letting \(R\to\infty\),
\[\|I_{\nu-|a|}^{1}*_{a}\mu\|_{p,a}\leq\frac{\varrho}{\varepsilon}\|M_{a,\nu,1}\mu\|_{p,a},\]
cf. e.g. [20, (1.46)]. If \(\mathrm{supp}\mu\) is not compact, then let \(\mu_{n}:=\mu|_{B_{+}(0,n)}\). Since \(\|M_{a,\nu,1}\mu_{n}\|_{p,a}\leq\|M_{a,\nu,1}\mu\|_{p,a}\), we can apply the monotone convergence theorem.
**Corollary 2**.: _With the assumptions of Theorem 2 we have_
\[\|I_{\nu-|a|}^{\delta}*_{a}\mu\|_{p,a}\leq c\|G_{a,\nu}*_{a}\mu\|_{p,a}\leq c\| M_{a,\nu,\delta}\mu\|_{p,a}. \tag{30}\]
Proof.: (9) and (10) ensure the first inequality and
\[\|G_{a,\nu}*_{a}\mu\|_{p,a}\leq\|I_{\nu-|a|}\chi_{B_{+}(0,\delta)}*_{a}\mu\|_ {p,a}+\|e^{-\frac{|\cdot|}{2}}*_{a}\mu\|_{p,a}\]
\[\leq\|I_{\nu-|a|}^{\delta}*_{a}\mu\|_{p,a}+\|M_{a,\nu,\delta}\mu\|_{p,a}+\|e^ {-\frac{|\cdot|}{2}}*_{a}\mu\|_{p,a}.\]
Observe that
\[\frac{e^{-\frac{|\cdot|}{2}}*_{a}\chi_{B_{+}(0,r)}(x)}{r^{n+|a|}}\]
\[=\frac{1}{r^{n+|a|}}\int_{B_{+}(0,r)}\int_{[0,\pi)^{n}}e^{-\frac{1}{2}\sqrt{ \sum_{i=1}^{n}x_{i}^{2}+t_{i}^{2}-2x_{i}t_{i}\cos\vartheta_{i}}}d\sigma^{a}( \vartheta)d\lambda_{a}(t)\]
\[\geq\frac{1}{r^{n+|a|}}\int_{B_{+}(0,r)}e^{-\frac{1}{2}|x+t|}d\lambda_{a}(t) \geq ce^{-\frac{r}{2}}e^{-\frac{|x|}{2}}.\]
Thus
\[e^{-\frac{|x|}{2}}\leq ce^{\frac{r}{2}}\frac{e^{-\frac{|\cdot|}{2}}*_{a}\chi_ {B_{+}(0,r)}(x)}{r^{n+|a|}},\]
and so if \(r\leq\delta\),
\[e^{-\frac{|\cdot|}{2}}*_{a}\mu(x)\leq c(r)\,e^{-\frac{|\cdot|}{2}}*_{a}\frac{\chi_{B_{+}(0,r)}}{r^{n+|a|}}*_{a}\mu(x)\leq c(r,\nu)\,e^{-\frac{|\cdot|}{2}}*_{a}M_{a,\nu,\delta}\mu(x).\]
According to (5)
\[\|e^{-\frac{|\cdot|}{2}}*_{a}\mu\|_{p,a}\leq\|e^{-\frac{|\cdot|}{2}}\|_{1,a} \|M_{a,\nu,\delta}\mu\|_{p,a},\]
which, together with Theorem 2, implies the statement.
**Notation.** Denote by
\[b_{k}(x):=2^{k(n+|a|-\nu)}\chi_{B_{+}(0,2^{-k})}*_{a}\mu(x)\]
and by
\[c_{k}(x):=2^{k(n+|a|-p\nu)}\chi_{B_{+}(0,2^{-k})}*_{a}\mu(x).\]
The corresponding Wolff-function is
\[W^{\mu}_{a,\nu,p}(x):=\int_{0}^{1}\left(\frac{\chi_{B_{+}(0,r)}*_{a}\mu(x)}{r^{n+ |a|-p\nu}}\right)^{p^{\prime}-1}\frac{dr}{r}.\]
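The Wolff function itself is just as easy to discretize dyadically, in line with the sequence \(\{c_{k}\}\) above. A sketch under the same one-dimensional assumptions as before (ours, reusing `bessel_translation` and the atomic-measure convention of the previous sketch):

```python
import numpy as np

def wolff_function(atoms, weights, x, alpha, nu, p, kmax=12):
    """Dyadic discretization of W^mu_{a,nu,p}(x) ~ sum_k c_k(x)^{p'-1},
    with c_k(x) = 2^{k(n+|a|-p nu)} chi_{B_+(0,2^-k)} *_a mu (x); n = 1."""
    pprime = p / (p - 1)
    n_plus_a = 2 * alpha + 2
    total = 0.0
    for k in range(kmax):
        r = 2.0 ** (-k)
        chi = lambda s, r=r: (s <= r).astype(float)
        conv = sum(w * y ** (2 * alpha + 1) * bessel_translation(chi, x, y, alpha)
                   for y, w in zip(atoms, weights))
        total += (2 ** (k * (n_plus_a - p * nu)) * conv) ** (pprime - 1)
    return total

print(wolff_function([0.5, 1.5], [1.0, 2.0], x=1.0, alpha=0.5, nu=1.0, p=2.0))
```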
**Remark 6**.: In view of Lemma 3 and (20) we can observe that
\[I_{\nu-|a|}*_{a}\mu(x)\sim\|\{b_{k}(x)\}_{-\infty}^{\infty}\|_{l^{1}},\ \ I^{1}_{\nu-|a|}*_{a}\mu(x)\sim\|\{b_{k}(x)\}_{0}^{\infty}\|_{l^{1}}, \tag{31}\]
and
\[M_{a,\nu,1}\mu(x)\sim\|\{b_{k}(x)\}_{0}^{\infty}\|_{l^{\infty}}. \tag{32}\]
We are in a position to prove the next Wolff type inequality.
**Theorem 3**.: _Let \(1<p,q<\infty\). Then_
\[\|G_{a,\nu}*_{a}\mu\|_{p^{\prime},a}\sim\|\|\{b_{k}(x)\}_{0}^{\infty}\|_{l^{q} }\|_{p^{\prime},a}\]
\[\sim\left(\int_{\mathbb{R}^{n}_{+}}\|\{c_{k}(x)\}_{0}^{\infty}\|_{l^{p^{\prime }-1}}^{p^{\prime}-1}d\mu_{a}(x)\right)^{\frac{1}{p^{\prime}}}\sim\left(\int_{ \mathbb{R}^{n}_{+}}W^{\mu}_{a,\nu,p}(x)d\mu_{a}(x)\right)^{\frac{1}{p^{\prime }}}.\]
Proof.: Corollary 2, (31) and (32) ensure that
\[\|G_{a,\nu}*_{a}\mu\|_{p^{\prime},a}\leq c\|M_{a,\nu,1}\mu\|_{p^{\prime},a}\leq c\|\|\{b_{k}(x)\}_{0}^{\infty}\|_{l^{\infty}}\|_{p^{\prime},a}\]
\[\leq\|\|\{b_{k}(x)\}_{0}^{\infty}\|_{l^{q}}\|_{p^{\prime},a}\leq\|\|\{b_{k}(x)\}_{0}^{\infty}\|_{l^{1}}\|_{p^{\prime},a}\leq c\|I_{\nu-|a|}^{2}*_{a}\mu\|_{p^{\prime},a}\leq c\|G_{a,\nu}*_{a}\mu\|_{p^{\prime},a}.\]
To prove the Wolff type inequality we have
\[\|\|\{b_{k}(x)\}_{0}^{\infty}\|_{l^{p^{\prime}}}\|_{p^{\prime},a}^{p^{\prime} }=\int_{\mathbb{R}^{n}_{+}}\sum_{k=0}^{\infty}b_{k}(t)^{p^{\prime}}d\lambda_{ a}(t)\]
\[=\sum_{k=0}^{\infty}2^{k(n+|a|-\nu)p^{\prime}}\int_{\mathbb{R}^{n}_{+}}\chi_{ B_{+}(0,2^{-k})}*_{a}\mu(t)(\chi_{B_{+}(0,2^{-k})}*_{a}\mu)^{p^{\prime}-1}(t)d \lambda_{a}(t)\]
\[=\sum_{k=0}^{\infty}2^{k(n+|a|-\nu)p^{\prime}}I_{k}.\]
\[I_{k}=\int_{\mathbb{R}^{n}_{+}}\int_{\mathbb{R}^{n}_{+}}T^{x}\chi_{B_{+}(0,2^{ -k})}(t)d\mu_{a}(x)(\chi_{B_{+}(0,2^{-k})}*_{a}\mu)^{p^{\prime}-1}(t)d\lambda_ {a}(t)\]
\[=\int_{\mathbb{R}^{n}_{+}}\int_{\mathbb{R}^{n}_{+}}\int_{\mathbb{R}^{n}_{+}}K( x,t,z)\chi_{B_{+}(0,2^{-k})}(z))d\lambda_{a}(z)d\mu_{a}(x)(\chi_{B_{+}(0,2^{-k})}*_{a} \mu)^{p^{\prime}-1}(t)d\lambda_{a}(t)\]
\[=\int_{\mathbb{R}^{n}_{+}}\int_{B_{+}(0,\frac{1}{2^{k}})}\int_{\mathbb{R}^{n}_ {+}}K(x,t,z)(\chi_{B_{+}(0,2^{-k})}*_{a}\mu)^{p^{\prime}-1}(t)d\lambda_{a}(t)d \lambda_{a}(z))d\mu_{a}(x).\]
Since \(0<z_{i}<2^{-k}\), recalling that \(t_{i}\in(|x_{i}-z_{i}|,x_{i}+z_{i})\),
\((\chi_{B_{+}(0,2^{-k})}*_{a}\mu)(t)\sim(\chi_{B_{+}(0,2^{-k})}*_{a}\mu)(x)\). As \(K\) is a reproducing kernel
\[I_{k}\sim\lambda_{a}(B_{+}(0,2^{-k}))\int_{\mathbb{R}^{n}_{+}}(\chi_{B_{+}(0, 2^{-k})}*_{a}\mu)^{p^{\prime}-1}(x)d\mu_{a}(x).\]
Thus recalling that \(\lambda_{a}(B_{+}(0,2^{-k}))\sim 2^{-k(n+|a|)}\) by Fubini's theorem again
\[\|\|\{b_{k}(x)\}_{0}^{\infty}\|_{l^{p^{\prime}}}\|_{p^{\prime},a}^{p^{\prime}}\sim\int_{\mathbb{R}^{n}_{+}}\sum_{k=0}^{\infty}c_{k}(x)^{p^{\prime}-1}d\mu_{a}(x).\]
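The exponent bookkeeping behind the last step (our check): since \(\lambda_{a}(B_{+}(0,2^{-k}))\sim 2^{-k(n+|a|)}\) and \(p\nu(p^{\prime}-1)=\nu p^{\prime}\),
\[2^{k(n+|a|-\nu)p^{\prime}}\cdot 2^{-k(n+|a|)}=2^{k\left((n+|a|)(p^{\prime}-1)-\nu p^{\prime}\right)}=2^{k(n+|a|-p\nu)(p^{\prime}-1)},\]
so each summand is exactly \(c_{k}(x)^{p^{\prime}-1}\) with \(c_{k}\) as in the Notation above.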
**Remark 7**.: In view of Corollary 2, instead of \(M_{a,\nu,1}\mu\) we can consider \(M_{a,\nu,\delta}\mu\) with the corresponding sequence \(\{b_{k}^{\delta}(x)\}\).
## 5. Metric properties
Applying the results of the previous section, below we investigate some "metric" properties of the B-\(p\) capacity. Since the Bessel translation is not a geometric congruence, we need a special "Lipschitz" condition. It is also necessary to introduce the notion of a "weighted Hausdorff measure" in order to examine Cantor-type sets.
At the beginning of this section let us recall that for \(1<p<\infty\) the B-\(p\) capacity of \(K\subset\mathbb{R}_{+}^{n}\) is
\[C^{\frac{1}{p}}_{a,\nu,p}(K):=\sup\{\mu_{a}(K):\mu\in\mathcal{M}(K),\ \|G_{a,\nu}*_{a}\mu\|_{p,a}\leq 1\}, \tag{33}\]
and the B-\(p\) capacity is nontrivial if \(1<p<\frac{n+|a|}{\nu}\).
### B-Lipschitz mappings
The next Lipschitz-type property corresponds to the Bessel translation.
**Definition 5**.: _Let \(\Phi:\mathbb{R}_{+}^{n}\to\mathbb{R}_{+}^{n}\). Consider \(z(x,t,\vartheta)=(z_{1}(x,t,\vartheta),\ldots,z_{n}(x,t,\vartheta))\), where \(z_{k}(x,t,\vartheta)=x_{k}-t_{k}\cos\vartheta_{k}+it_{k}\sin\vartheta_{k}\). Let \(\Psi:\mathbb{C}_{+}^{n}\to\mathbb{C}_{+}^{n}\) be such that \(\Psi(z)_{k}(x,t,\vartheta)=\Phi(x)_{k}-\Phi(t)_{k}\cos\vartheta_{k}+i\Phi(t)_{k}\sin\vartheta_{k}\). We say that \(\Phi\) fulfils the B-Lipschitz condition with B-Lipschitz constant \(L\) if for a.e. \(\vartheta\in[0,\pi)^{n}\)_
\[|\Psi(z)(\vartheta)|\leq L|z(\vartheta)|. \tag{34}\]
**Remark 8**.: Of course, linear mappings possess the B-Lipschitz property.
For \(\vartheta=0\), (34) gives back the standard Lipschitz condition, and for \(\vartheta_{k}=\pi\), \(k=1,\ldots,n\), (34) means that \(|\Phi(x)+\Phi(t)|\leq L|x+t|\).
**Example.** Let \(f:\mathbb{R}_{+}\to\mathbb{R}_{+}\) be a Lipschitz function. Let \(K\subset\text{int}\mathbb{R}_{+}^{n}\) be compact and \(\Phi:K\to\mathbb{R}_{+}^{n}\), \(\Phi(x)=f(|x|)x\). On \(K\), \(\Phi\) also fulfils the Lipschitz condition, with constant \(\tilde{L}(K)\), and \(\frac{\Phi(x)_{k}}{x_{k}}\leq M(K)\). Let
\[G(\vartheta):=L^{2}|z(\vartheta)|^{2}-|\Psi(z)(\vartheta)|^{2}\]
\[=\sum_{k=1}^{n}L^{2}(x_{k}^{2}+t_{k}^{2})-\Phi(x)_{k}^{2}-\Phi(t)_{k}^{2}-2 \sum_{k=1}^{n}\cos\vartheta_{k}(L^{2}x_{k}t_{k}-\Phi(x)_{k}\Phi(t)_{k}).\]
If \(L>M(K)\), we have
\[\frac{\partial G}{\partial\vartheta_{k}}=0\ \ \text{iff}\ \vartheta_{k}=0\ \text{or}\ \vartheta_{k}=\pi.\]
It can be readily seen that the Hessian is positive (negative) definite if \(\vartheta=0\) (\(\vartheta_{k}=\pi\), \(k=1,\ldots,n\)), respectively. Thus the Lipschitz property of \(\Phi(x)\) implies that the B-Lipschitz condition is fulfilled for all \(\vartheta\in[0,\pi)^{n}\).
**Theorem 4**.: _Let \(\nu>0\), \(1<p<\frac{n+|a|}{\nu}\). Let \(E\subset\mathbb{R}_{+}^{n}\) and let \(\Phi:E\to\mathbb{R}_{+}^{n}\) be a B-Lipschitz mapping with B-Lipschitz constant \(L\). Then_
\[C_{p,a,\nu}(\Phi(E))\leq c\ C_{p,a,\nu}(E),\]
_where \(c\) depends only on \(n,p,a,L\)._
Proof.: By standard arguments it is enough to prove the statement for compact sets \(K\subset\mathbb{R}^{n}_{+}\). Let \(\mu\in\mathcal{M}(\Phi(K))\). Then by [2, Lemma 5.2.2] there is a \(\mu_{\Phi}\in\mathcal{M}(K)\) such that
\[\int_{\Phi(K)}W^{\mu}_{a,\nu,p}(y)d\mu_{a}(y)=\int_{K}W^{\mu}_{a,\nu,p}(\Phi(x)) \Phi^{a}(x)d\mu_{\Phi}(x)\]
\[=\int_{K}\int_{0}^{1}\left(\frac{\int_{K}T^{\Phi(u)}\chi_{B_{+}(0,r)}(\Phi(x)) \Phi^{a}(u)d\mu_{\Phi}(u)}{r^{n+|a|-\nu p}}\right)^{p^{\prime}-1}\frac{dr}{r} \Phi^{a}(x)d\mu_{\Phi}(x)\]
\[\geq\int_{K}\int_{0}^{1}\left(\frac{\chi_{B_{+}(0,\frac{r}{L})}*_{a}\mu_{\Phi}(x)}{r^{n+|a|-\nu p}}\right)^{p^{\prime}-1}\frac{dr}{r}\Phi^{a}(x)d\mu_{\Phi}(x)\]
\[=c\int_{K}\int_{0}^{\frac{1}{L}}\left(\frac{\chi_{B_{+}(0,r)}*_{a}\mu_{\Phi} (x)}{r^{n+|a|-\nu p}}\right)^{p^{\prime}-1}\frac{dr}{r}\Phi^{a}(x)d\mu_{\Phi} (x),\]
where \(c=c(n,p,a,L)\).
This implies that
\[\int_{\Phi(K)}W^{\mu}_{a,\nu,p}(y)d\mu_{a}(y)>c\int_{K}W^{\mu_{\Phi}}_{a,\nu,p }(x)\Phi^{a}(x)d\mu_{\Phi}(x).\]
Indeed, if \(L\leq 1\) we obtain the inequality above immediately; if \(L>1\) we have to consider Remark 7 with \(\delta=\frac{1}{L}\), which leads again to the inequality above. According to Theorem 3 this proves the statement, cf. the definition above.
### Coverings
In the next subsections, coverings in the Bessel-weighted space are introduced. Since the Bessel convolution lives in a weighted space, the B-\(p\) capacity of a set also depends on the location of the set. As capacity is closely connected with Hausdorff measure, in the next subsection we extend this notion to weighted spaces as well.
**Notation**.: Let \(K\subset\mathbb{R}^{n}_{+}\) compact.
(1) Let \(\mathcal{A}(r)\) be the minimal number of balls of radius \(r\) required to cover \(K\).
(2)
\[m^{a}:=m^{a}(u,r,K)=\max\{x^{a}:x\in\overline{B(u,r)}\cap K\}.\]
\[\mathcal{B}(r):=\inf\{\sum_{j=1}^{\mathcal{A}(r)}m_{j}^{a}:K\subset\cup_{j=1} ^{\mathcal{A}(r)}B(u_{j},r)\text{ is a minimal covering}\}.\]
**Remark 9**.: (1) \(\mathcal{A}(r)\) is obviously decreasing.
(2) Let \(\cup_{j=1}^{\mathcal{A}(r)}B(u_{j},r)\) be a minimal covering of \(K\). Then there is a constant \(C_{n}\) such that any point of \(K\) belongs to at most \(C_{n}\) balls. Indeed, let \(C_{n}=C_{n}(q)\) be the minimal number of balls of radius \(q\leq\frac{1}{2}\) which cover the unit ball. Suppose that there is a point \(x\in K\) which belongs to \(C_{n}(q)+1\) balls. Then \(B\left(x,\frac{r}{q}\right)\) contains all these balls and it can be covered by \(C_{n}(q)\) balls of radius \(r\), which contradicts the minimality, cf. e.g. [2, page 145].
(3) Let \(r_{k}:=\frac{1}{2^{k}}\), \(r_{k+1}\leq r\leq r_{k}\) and \(\frac{1}{4}\leq q:=\frac{1}{2^{k+2}r}\leq\frac{1}{2}\). Let \(\{B(v_{j},r)\}_{j=1}^{\mathcal{A}(r)}\) and \(\{B(u_{i},r_{k+2})\}_{i=1}^{\mathcal{A}(r_{k+2})}\) be minimal coverings of \(K\) with the corresponding points \(\{m_{j}\}_{j=1}^{\mathcal{A}(r)}\) and \(\{M_{i}\}_{i=1}^{\mathcal{A}(r_{k+2})}\), respectively. Any \(m_{j}\) belongs to a ball, \(B(u_{i},r_{k+2})\),
and so \(m_{j}^{a}\leq M_{i}^{a}\). Since at most \(C_{n}\left(\frac{1}{q+2}\right)\) maximum points \((m_{j})\) can be in the same ball, \(\sum_{j=1}^{\mathcal{A}(r)}m_{j}^{a}\leq C_{n}\left(\frac{1}{q+2}\right)\sum_{j =1}^{\mathcal{A}(r_{k+2})}M_{j}^{a}\), thus \(\mathcal{B}(r)\leq C_{n}\left(\frac{1}{q+2}\right)\mathcal{B}(r_{k+2})\). Repeating the chain of ideas with \(r_{k-1}\) and \(q_{1}=2^{k-1}r\), we have \(\frac{1}{C(n,q_{1})}\mathcal{B}(r_{k-1})\leq\mathcal{B}(r)\).
**Theorem 5**.: _As above, let \(\nu>0\), \(1<p<\frac{n+|a|}{\nu}\). Then_
\[C_{a,\nu,p}^{\frac{1}{p}}(K)\leq c\left(\int_{0}^{1}\left(\mathcal{B}(r)r^{n-p \nu}\right)^{1-p^{\prime}}\frac{dr}{r}\right)^{1-p},\]
_where \(c=c(n,a)\)._
Proof.: Let \(\mu\in\mathcal{M}(K)\) and \(r_{k}=\frac{1}{2^{k}}\), as above. According to Corollary 2 and (31) we have
\[\|G_{a,\nu}\ast_{a}\mu\|_{p^{\prime},a}^{p^{\prime}}\geq\|I_{\nu-|a|}^{1}\ast_ {a}\mu\|_{p^{\prime},a}^{p^{\prime}}\]
\[\geq c\int_{\mathbb{R}_{+}^{n}}\left(\sum_{k=0}^{\infty}2^{k(n+|a|-\nu)}\chi_{ B_{+}(0,r_{k})}\ast_{a}\mu(x)\right)^{p^{\prime}}d\lambda_{a}(x)\]
\[\geq c\sum_{k=0}^{\infty}\frac{1}{r_{k}^{p^{\prime}(n+|a|-\nu)}}\int_{ \mathbb{R}_{+}^{n}}\left(\chi_{B_{+}(0,r_{k})}\ast_{a}\mu(x)\right)^{p^{\prime }}d\lambda_{a}(x),\]
where the last inequality follows from the monotone convergence theorem. Let \(K\subset\cup_{j=1}^{\mathcal{A}(r_{k+1})}B(u_{j},r_{k+1})\) be a minimal covering. Recalling that \(\chi_{B_{+}(0,r_{k})}\ast_{a}\mu(x)=\int_{\mathbb{R}_{+}^{n}}T^{t}\chi_{B_{+}(0,r_{k})}(x)d\mu_{a}(t)\), note that for \(t\in K\) we have \(T^{t}\chi_{B_{+}(0,r_{k})}(x)=0\) unless \(x\in U_{k}\), where \(U_{k}\) is the neighbourhood of \(K\) of radius \(r_{k}\). Noticing that \(\cup_{j=1}^{\mathcal{A}(r_{k+1})}B(u_{j},r_{k+1})\subset U_{k}\), in view of Remark 9 we have
\[\int_{\mathbb{R}_{+}^{n}}\left(\chi_{B_{+}(0,r_{k})}\ast_{a}\mu(x)\right)^{p^{ \prime}}d\lambda_{a}(x)\]
\[\geq\frac{1}{C_{n}}\sum_{j=1}^{\mathcal{A}(r_{k+1})}\int_{B(u_{j},r_{k+1})} \left(\int_{\mathbb{R}_{+}^{n}}T^{t}\chi_{B_{+}(0,r_{k})}(x)d\mu_{a}(t)\right) ^{p^{\prime}}d\lambda_{a}(x)=:(\ast\ast).\]
Considering again the support of \(T^{t}\chi_{B_{+}(0,r_{k})}\), by Hölder's inequality we have
\[(\ast\ast)\geq\frac{1}{C_{n}}\sum_{j=1}^{\mathcal{A}(r_{k+1})}\lambda_{a}(B(u _{j},r_{k+1}))^{-\frac{p^{\prime}}{p}}\]
\[\times\left(\int_{B(u_{j},r_{k+1})}\int_{B_{+}(x,r_{k})}T^{t}\chi_{B_{+}(0,r_{k})}(x)d\mu_{a}(t)d\lambda_{a}(x)\right)^{p^{\prime}}=:(***).\]
Since \(x\in B\left(u_{j},r_{k+1}\right)\), \(B(x,r_{k})\supset B(u_{j},r_{k+1})\). As \(T^{t}\chi_{B_{+}(0,r_{k})}(x)\) is continuous (actually it belongs to the Lip(\(\frac{1}{2}\)) class) on \(B_{+}\left(x,r_{k}\right)\), \(T^{t}\chi_{B_{+}(0,r_{k})}(x)\geq cT^{t}\chi_{B(0,r_{k})}(u_{j})\). According to Lemma 1, if \(t\in T_{+}\left(u_{j},\frac{r_{k}}{4\sqrt{n}}\right)=:T_{k,j}\), then \(T^{t}\chi_{B(0,r_{k})}(u_{j})\geq c\prod_{i=1}^{n}\min\left\{1,\left(\frac{r_{k }}{u_{j,i}}\right)^{a_{i}}\right\}\). Thus we have
\[(***)\]
\[\geq\frac{c}{C_{n}}\sum_{j=1}^{\mathcal{A}(r_{k+1})}\lambda_{a}(B\left(u_{j},r_{k+ 1}\right))^{-\frac{p^{\prime}}{p}}\mu_{a}^{p^{\prime}}(B\left(u_{j},r_{k+1} \right))\]
\[\times\left(\int_{T_{k,j}}\prod_{i=1}^{n}\min\left\{1,\left(\frac{r_{k}}{u_{j, i}}\right)^{a_{i}}\right\}d\lambda_{a}(x)\right)^{p^{\prime}}.\]
Estimating the last integral we have
\[\int_{T_{k,j}}\prod_{i=1}^{n}\min\left\{1,\left(\frac{r_{k}}{u_{j,i}}\right)^{ a_{i}}\right\}d\lambda_{a}(x)\]
\[\geq\prod_{i=1}^{n}\int_{u_{j,i}-\frac{r_{k+1}}{4\sqrt{n}}}^{u_{j,i}+\frac{r_{ k+1}}{4\sqrt{n}}}\min\left\{1,\left(\frac{r_{k}}{u_{j,i}}\right)^{a_{i}} \right\}x_{i}^{a_{i}}dx_{i}=\prod_{i=1}^{n}I_{j,i}.\]
If \(r_{k}>u_{j,i}\),
\[I_{j,i}\geq c\int_{0}^{\frac{r_{k+1}}{4\sqrt{n}}}x_{i}^{a_{i}}dx_{i}\geq cr_{ k+1}^{a_{i}+1}.\]
If \(r_{k}\leq u_{j,i}\),
\[I_{j,i}\geq c\int_{u_{j,i}-\frac{r_{k+1}}{4\sqrt{n}}}^{u_{j,i}+\frac{r_{k+1}}{ 4\sqrt{n}}}r_{k+1}^{a_{i}}dx_{i}\geq cr_{k+1}^{a_{i}+1}.\]
So we have
\[(***)\geq c\sum_{j=1}^{\mathcal{A}(r_{k+1})}r_{k}^{n(1-p^{\prime})}m_{j}^{a(1 -p^{\prime})}\mu_{a}^{p^{\prime}}(B(u_{j},r_{k+1}))r_{k}^{(n+|a|)p^{\prime}}\]
\[=cr_{k}^{n+|a|p^{\prime}}\sum_{j=1}^{\mathcal{A}(r_{k+1})}m_{j}^{a(1-p^{ \prime})}\mu_{a}^{p^{\prime}}(B(u_{j},r_{k+1})).\]
By Hölder's inequality
\[\mu_{a}(K)\leq\left(\sum_{j=1}^{\mathcal{A}(r_{k+1})}\mu_{a}(B(u_{j},r_{k+1} ))^{p^{\prime}}m_{j}^{a(1-p^{\prime})}\right)^{\frac{1}{p^{\prime}}}\left( \sum_{j=1}^{\mathcal{A}(r_{k+1})}m_{j}^{a}\right)^{\frac{1}{p}}.\]
Thus
\[\left\|G_{a,\nu}*_{a}\mu\right\|_{p^{\prime},a}^{p^{\prime}}\geq c\mu_{a}(K)^ {p^{\prime}}\sum_{k=0}^{\infty}\left(r_{k}^{n-\nu p}\mathcal{B}(r_{k+1}) \right)^{1-p^{\prime}}.\]
Taking into consideration Remark 9 we have
\[\left\|G_{a,\nu}*_{a}\mu\right\|_{p^{\prime},a}^{p^{\prime}}\geq c\mu_{a}(K)^ {p^{\prime}}\int_{0}^{1}\left(\mathcal{B}(r)r^{n-p\nu}\right)^{1-p^{\prime}} \frac{dr}{r}.\]
Comparing it with (33) the proof is finished.
### Hausdorff measure with Bessel external field
**Definition 6**.: _Let \(h\) be an increasing function on \(\mathbb{R}_{+}\) with \(h(0)=0\). Let \(E\subset\mathbb{R}_{+}^{n}\), \(\varrho>0\)._
\[\Lambda_{h,a}^{\varrho}(E):=\inf\{\sum_{i=1}^{\infty}x_{i,r_{i}}^{a}h(r_{i}):E \subset\cup_{i=1}^{\infty}B(x_{i},r_{i}),\ x_{i}\in\mathbb{R}_{+}^{n},\ r_{i} \leq\varrho\}, \tag{35}\]
_where \(x_{i,r_{i}}^{a}:=\max\{t^{a}:t\in\overline{B(x_{i},r_{i})}\}\). Since \(\Lambda_{h,a}^{\varrho}(E)\) is a decreasing function of \(\varrho\), we can define the (finite or infinite) \(a\)-Hausdorff measure of \(E\) as_
\[\Lambda_{h,a}(E)=\lim_{\varrho\to 0}\Lambda_{h,a}^{\varrho}(E). \tag{36}\]
**Remark 10**.: (1) \(h(x,r):=x_{r}^{a}h(r)\) is an increasing function of \(r\), but it depends on \(x\) as well, that is the \(a\)-Hausdorff measure of \(E\) depends also on the location of \(E\).
(2) If \(K\subset\mathrm{int}\mathbb{R}_{+}^{n}\) is compact, then there are constants \(c_{i}=c_{i}(a,K)\), \(i=1,2\) such that \(c_{1}\Lambda_{h}(K)\leq\Lambda_{h,a}(K)\leq c_{2}\Lambda_{h}(K)\).
(3) As in the standard case, \(\Lambda_{h,a}^{\infty}(E)=0\) if and only if \(\Lambda_{h,a}(E)=0\). Of course, for all \(\varrho>0\), \(\Lambda_{h,a}^{\infty}(E)\leq\Lambda_{h,a}^{\varrho}(E)\), and so \(\Lambda_{h,a}^{\infty}(E)\leq\Lambda_{h,a}(E)\). Conversely, by standard arguments, if \(\Lambda_{h,a}(E)>0\), then there is a constant \(c\) such that \(\Lambda_{h,a}(E)>c>0\). Let \(\varrho\) be so small that \(\Lambda_{h,a}^{\varrho}(E)>c\). Then for every \(\varrho\)-covering of \(E\), \(\cup_{i=1}^{\infty}B(x_{i},r_{i})\), we have \(\sum_{i=1}^{\infty}x_{i,r_{i}}^{a}h(r_{i})>c\). If \(\cup_{j=1}^{\infty}B(u_{j},r_{j})\) is not a \(\varrho\)-covering of \(E\), there exists an \(r_{l}>\varrho\), and because \(u_{j}\in\mathbb{R}_{+}^{n}\), \(\sum_{j=1}^{\infty}u_{j,r_{j}}^{a}h(r_{j})>u_{l,r_{l}}^{a}h(\varrho)>c(n)\varrho^{|a|}h(\varrho)\). Thus \(\Lambda_{h,a}^{\infty}(E)>\min\{c,c(n)\varrho^{|a|}h(\varrho)\}>0\).
Let us denote by \(\mathcal{Q}_{k}\) the set of the dyadic cubes in \(\mathbb{R}_{+}^{n}\) with edge length \(\frac{1}{2^{k}}\), \(k\in\mathbb{Z}\).
**Theorem 6**.: _With the notation above, let \(h\) be an increasing function, \(E\subset\mathbb{R}_{+}^{n}\) and \(\mu\in\mathcal{M}(E)\) such that for all balls \(\mu(B(x,r))\leq h(r)\). Then_
\[\mu_{a}(E)\leq\Lambda_{h,a}^{\infty}(E).\]
_Let \(h\) be an increasing function, \(E\subset Q\in\mathcal{Q}_{k}\). Then there is a constant \(c\) depending only on \(n\), \(k\) and \(a\) and a measure \(\mu\in\mathcal{M}(E)\) satisfying that \(\mu(B(x,r))\leq h(r)\) for all balls, such that_
\[\Lambda_{h,a}^{\infty}(E)\leq c\mu_{a}(E).\]
Proof.: Obviously if \(E\subset\cup_{i=1}^{\infty}B(x_{i},r_{i})\), then
\[\mu_{a}(E)\leq\sum_{i=1}^{\infty}\mu_{a}(B(x_{i},r_{i}))\leq\sum_{i=1}^{ \infty}x_{i,r_{i}}^{a}h(r_{i}).\]
The first part of the second statement is proved in [2, page 137]: namely, there are measures \(\mu_{l}\) such that \(\mathrm{supp}\mu_{l}=\{\cup_{j}Q_{j}:Q_{j}\in\mathcal{Q}_{l},\ Q_{j}\cap E\neq\emptyset\}\) and \(\mu_{l}(Q_{i})\leq h(r_{i})\) for all \(Q_{i}\in\mathcal{Q}_{i}\), \(i=0,\ldots,l\), where \(r_{i}=\frac{1}{2^{i}}\). Moreover \(\mu_{l}\) has constant density on each \(Q_{j}\in\mathcal{Q}_{l}\). Finally \(\mu\) is defined as a weak accumulation point of \(\{\mu_{l}\}\). Then \(\mathrm{supp}\mu=E\) and \(\mu(Q_{k})\leq 3^{n}h(r_{k})\) for all \(Q_{k}\in\mathcal{Q}_{k}\), \(k\in\mathbb{N}\). It is also pointed out that \(E\) has a disjoint covering with dyadic cubes, \(E\subset\cup_{j}Q_{j}\), such that \(Q_{j}\in\mathcal{Q}_{n_{j}}\) with \(n_{j}\leq l\)
and \(\mu_{l}(Q_{j})=h(r_{n_{j}})\), \(j=1,2,\dots\). Thus \(\mu_{l}(Q)=\sum_{j}h(r_{n_{j}})\). Then, with \(m_{n_{j}}^{a}=\max_{x\in\overline{Q}_{j}}x^{a}\), we have
\[\mu_{a,l}(Q)=\int_{Q}x^{a}d\mu_{l}(x)\geq c\sum_{j,n_{j}\leq l}m_{n_{j}}^{a}h(r_{n_{j}})\geq c\inf\sum_{i}m_{n_{i}}^{a}h(r_{n_{i}}),\]
where \(c=c(n,a,k)\) and the infimum is taken over all finite or denumerable coverings of \(E\). So
\[\mu_{a}(Q)=\mu_{a}(E)\geq c\inf\sum_{i}m_{n_{i}}^{a}h(r_{n_{i}}).\]
Taking into consideration that a \(Q_{i}\in\mathcal{Q}_{i}\) can be covered by \(c(n)\) balls of radius \(r_{i}\),
\[\Lambda_{h,a}^{\infty}(E)\leq c(n)\inf\sum_{i}x_{i,r_{i}}^{a}h(r_{i})\leq c\inf \sum_{i}m_{n_{i}}^{a}h(r_{n_{i}})\leq c\mu_{a}(E),\]
where \(c=c(n,a,k)\).
### Capacity of Cantor sets with Bessel external field
Let \(L:=\{l_{k}\}_{k=0}^{\infty}\) be a decreasing sequence such that \(0<2l_{k+1}<l_{k}\) for \(k\in\mathbb{N}\). Let \(C_{0}\) be a closed interval of length \(l_{0}\). \(C_{1}\) is obtained by removing an open interval of length \(l_{0}-2l_{1}\) in the middle of \(C_{0}\), etc., \(C_{k}\) consists of \(2^{k}\) closed intervals of length \(l_{k}\). Let \(C_{k}^{n}:=C_{k}\times\cdots\times C_{k}\), the Cartesian product of \(n\) copies of \(C_{k}\). Let \(C_{L}:=\cap_{k=0}^{\infty}C_{k}^{n}\). \(C_{L}=C_{L}(n,Q)\), where \(Q=C_{0}\times\cdots\times C_{0}\), the cube which contains \(C_{L}\).
**Notation**.: Let \(C_{k}^{n}=C_{k}^{n}(Q,L)=\cup_{i=1}^{2^{nk}}q_{k,i}\) as above, where \(q_{k,i}\) are the closed cubes in \(C_{k}^{n}\) of edge length \(l_{k}\). Let \(v_{k,i}^{a}:=\max_{x\in q_{k,i}}x^{a}\). Let us denote by
\[h_{L}(l_{k}):=h_{Q,L,a}(l_{k})=\frac{1}{\sum_{i=1}^{2^{nk}}v_{k,i}^{a}}. \tag{37}\]
Obviously, \(h_{L}(l_{k})>h_{L}(l_{k+1})\). Let \(h_{L}(r):=h_{Q,L,a}(r)\) be an increasing function on \([0,\infty)\) with \(h_{L}(0)=0\) such that \(h_{L}(l_{k})\) is given by (37).
**Theorem 7**.: _Let \(0<p\nu<n+|a|\), \(C_{L}(n,Q)\), \(h_{L}=h_{Q,L,a}\) as above. Then \(C_{a,\nu,p}(C_{L}(n,Q))>0\) if and only if_
\[\int_{0}^{1}\left(\frac{h_{L}(r)}{r^{n-p\nu}}\right)^{p^{\prime}-1}\frac{dr}{r }<\infty.\]
Proof.: With the notation above \(C_{L}\) can be covered by \(2^{kn}\) balls of radius \(l_{k}\frac{\sqrt{n}}{2}\), \(\mathcal{A}(r)\leq 2^{kn}\), and if \(l_{k+1}\frac{\sqrt{n}}{2}\leq r\leq l_{k}\frac{\sqrt{n}}{2}\),
\[\mathcal{B}(r)\leq\frac{1}{h_{L}(l_{k})}.\]
Comparing with Theorem 5 it shows that \(C_{a,\nu,p}(C_{L})=0\) if \(\int_{0}^{1}\left(\frac{h_{L}(r)}{r^{n-p\nu}}\right)^{p^{\prime}-1}\frac{dr}{r}\) diverges.
On the other hand, considering \(h_{L}\) let us construct the measure \(\mu_{L}\) ensured by Theorem 6. In view of Lemma 1
\[\chi_{B_{+}(0,r)}*_{a}\mu_{L}(x)\leq cr^{|a|}\mu_{L}(B(x,r))\leq cr^{|a|}h_{L}( r).\]
According to Theorem 3
\[\|G_{a,\nu}*_{a}\mu_{L}\|_{p^{\prime},a}^{p^{\prime}}\leq c\int_{\mathbb{R}^{n}_{+}}\int_{0}^{1}\left(\frac{\chi_{B_{+}(0,r)}*_{a}\mu_{L}(x)}{r^{n+|a|-p\nu}}\right)^{p^{\prime}-1}\frac{dr}{r}d\mu_{L,a}(x)\]
\[\leq c\int_{\mathbb{R}^{n}_{+}}\int_{0}^{1}\left(\frac{r^{|a|}h_{L}(r)}{r^{n+|a|-p\nu}}\right)^{p^{\prime}-1}\frac{dr}{r}d\mu_{L,a}(x).\]
In view of (33),
\[C_{a,\nu,p}^{\frac{1}{p}}(C_{L})\geq\frac{\mu_{L}(C_{L})}{\|G_{a,\nu}*_{a}\mu_{L}\|_{p^{\prime},a}}\geq c\frac{\mu_{L}(C_{L})^{1-\frac{1}{p^{\prime}}}}{I(h_{L})^{\frac{1}{p^{\prime}}}},\]
where \(I(h_{L}):=\int_{0}^{1}\left(\frac{h_{L}(r)}{r^{n-p\nu}}\right)^{p^{\prime}-1}\frac{dr}{r}\), which proves the converse statement.
**Theorem 8**.: _With the notation above and supposing that \(l_{0}=1\) we have that \(C_{a,\nu,p}(C_{L})>0\) if and only if_
\[\sum_{k=0}^{\infty}\left(l_{k}^{n-p\nu}\sum_{i=1}^{2^{nk}}v_{k,i}^{a}\right)^ {1-p^{\prime}}<\infty.\]
Proof.: First we observe that
\[\frac{1}{h_{L}(l_{k+1})}=\sum_{i=1}^{2^{n(k+1)}}v_{k+1,i}^{a}=\sum_{j=1}^{2^{ nk}}\sum_{i:v_{k+1,i}\in q_{j}}v_{k+1,i}^{a}\leq 2^{n}\sum_{i=1}^{2^{nk}}v_{k,i}^{a}= 2^{n}\frac{1}{h_{L}(l_{k})}. \tag{38}\]
In view of (38)
\[I(h_{L})=\sum_{k=0}^{\infty}\int_{l_{k+1}}^{l_{k}}\left(\frac{h_{L}(r)}{r^{n- p\nu}}\right)^{p^{\prime}-1}\frac{dr}{r}\leq\sum_{k=0}^{\infty}h_{L}^{p^{ \prime}-1}(l_{k})\int_{l_{k+1}}^{l_{k}}\left(\frac{1}{r^{n-p\nu}}\right)^{p^{ \prime}-1}\frac{dr}{r}\]
\[\leq c\sum_{k=0}^{\infty}h_{L}^{p^{\prime}-1}(l_{k})l_{k+1}^{(p\nu-n)(p^{ \prime}-1)}\leq c2^{n}\sum_{k=0}^{\infty}h_{L}^{p^{\prime}-1}(l_{k+1})l_{k+1}^ {(p\nu-n)(p^{\prime}-1)}\]
\[\leq c2^{n}\sum_{k=0}^{\infty}h_{L}^{p^{\prime}-1}(l_{k})l_{k}^{(p\nu-n)(p^{ \prime}-1)}.\]
On the other hand
\[I(h_{L})\geq c\sum_{k=0}^{\infty}h_{L}^{p^{\prime}-1}(l_{k+1})\left(l_{k+1}^{( p\nu-n)(p^{\prime}-1)}-l_{k}^{(p\nu-n)(p^{\prime}-1)}\right)\]
\[\geq c\sum_{k=0}^{\infty}h_{L}^{p^{\prime}-1}(l_{k+1})l_{k}^{(p\nu-n)(p^{ \prime}-1)}\geq\frac{c}{2^{n}}\sum_{k=0}^{\infty}h_{L}^{p^{\prime}-1}(l_{k})l _{k}^{(p\nu-n)(p^{\prime}-1)},\]
where in the last but one inequality we used that \(2l_{k+1}<l_{k}\), and then (38).
**Corollary 3**.: _With the notation above \(C_{a,\nu,p}(C_{L})>0\) if and only if_
\[\sum_{k=0}^{\infty}\left(l_{k}^{n-p\nu}2^{nk}\right)^{1-p^{\prime}}<\infty.\]
Proof.: Let \(S_{k}:=\frac{1}{2^{nk}}\sum_{i=1}^{2^{nk}}v_{k,i}^{a}\). If \(C_{L}\subset\text{int}\mathbb{R}_{+}^{n}\), then \(S_{k}\) obviously can be estimated by constants from above and from below. If \(C_{L}\subset[0,1]^{n}\), then
\[1\geq S_{k}\geq\frac{1}{2^{nk}}\sum_{i:v_{k,i}\in[1-l_{1},1]^{n}}v_{k,i}^{a} \geq\frac{1}{2^{n}}(1-l_{1})^{|a|}.\]
The computation is similar if \(C_{L}\subset Q\neq[0,1]^{n}\), but \(C_{L}\cap\partial\mathbb{R}_{+}^{n}\neq\emptyset\).
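For illustration, a worked special case (our computation, assuming \(p\nu<n\)): for the self-similar choice \(l_{k}=2^{-sk}\) with \(s\geq 1\),
\[\sum_{k=0}^{\infty}\left(l_{k}^{n-p\nu}2^{nk}\right)^{1-p^{\prime}}=\sum_{k=0}^{\infty}2^{k\left(n-s(n-p\nu)\right)(1-p^{\prime})},\]
and since \(1-p^{\prime}<0\), this geometric series converges if and only if \(n-s(n-p\nu)>0\). Hence \(C_{a,\nu,p}(C_{L})>0\) exactly when \(s<\frac{n}{n-p\nu}\).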
**Proposition 2**.: (1) _Let \(K\subset\mathbb{R}_{+}^{n}\) be an arbitrary bounded set and \(\varrho>0\). If \(\liminf_{r\to 0}\frac{h(r)}{r^{n}}=0\), then \(\Lambda_{h,a}^{\varrho}(K)=0\). If \(\liminf_{r\to 0}\frac{h(r)}{r^{n}}>0\), then there is a function \(\tilde{h}(r)\), increasing and \(\tilde{h}(0)=0\) such that \(\frac{\tilde{h}(r)}{r^{n}}\) is decreasing and \(\Lambda_{h,a}^{\varrho}(K)\sim\Lambda_{\tilde{h},a}^{\varrho}(K)\)._
(2) _Let \(L:=\{l_{k}\}\) such that \(2l_{k+1}<l_{k}\) and \(C_{L}\) be the corresponding Cantor set. Let \(h(r)\) be increasing on \([0,\infty)\), \(h(0)=0\). If \(\liminf_{k\to\infty}\frac{h(l_{k})}{h_{L}(l_{k})}>0\), then \(\Lambda_{h,a}(C_{L})>0\)._
(3) _With the notation above, there is a constant \(c=c(a,Q,n)\) such that_
\[\Lambda_{h,a}(C_{L})\leq c\liminf_{k\to\infty}\frac{h(l_{k})}{h_{L}(l_{k})}.\]
Proof.: (1) For any bounded set, \(\Lambda_{h,a}^{\varrho}(K)\leq c(a,K)\Lambda_{h}^{\varrho}(K)\), thus [2, Proposition 5.1.8 (a)] implies the first statement. To prove the second statement define \(\tilde{h}(r)\) by
\[\frac{\tilde{h}(r)}{r^{b}}:=\inf_{0<t\leq r}\frac{h(t)}{t^{b}}.\]
If \(\liminf_{r\to 0}\frac{h(r)}{r^{b}}>0\), then \(\tilde{h}(r)>0\) for all \(r>0\), and \(\frac{\tilde{h}(r)}{r^{b}}\) is decreasing. Repeating the chain of ideas of [2], for an arbitrary \(\varepsilon>0\) we choose a \(t\in[r,R]\) such that \(\frac{h(t)}{t^{b}}\leq(1+\varepsilon)\frac{\tilde{h}(R)}{R^{b}}\leq(1+\varepsilon)\frac{\tilde{h}(t)}{t^{b}}\). Thus for all \(\varepsilon>0\), \(\tilde{h}(r)\leq h(r)\leq h(t)\leq(1+\varepsilon)\tilde{h}(R)\left(\frac{t}{R}\right)^{b}\leq(1+\varepsilon)\tilde{h}(R)\), because \(b\) must be positive. That is, \(\tilde{h}\) is increasing.
Since \(\tilde{h}(r)\leq h(r)\), it is enough to show that \(\Lambda_{h,a}(K)\leq c\Lambda_{\tilde{h},a}(K)\). To prove this we assume that \(b=n\). Let \(K\subset\cup_{i}B(x_{i},r_{i})\), \(r_{i}\leq\varrho\), be a covering such that \(\sum_{i}x_{i,r_{i}}^{a}\tilde{h}(r_{i})<\Lambda_{\tilde{h},a}^{\varrho}(K)+\varepsilon\), where \(\varepsilon>0\) is arbitrary. All \(B(x_{i},r_{i})\) can be covered by \(c(n)\left(\frac{r_{i}}{r}\right)^{n}\) balls of radius \(r\leq r_{i}\), \(B(x_{i},r_{i})\subset\cup_{j}B(x_{i,j},r)\). Taking into account that
\[c(n)\sum_{i=1}^{\infty}\frac{h(r)}{r^{n}}r_{i}^{n}\frac{1}{c(n)}\left(\frac{r }{r_{i}}\right)^{n}\sum_{j=1}^{c(n)\left(\frac{r_{i}}{r}\right)^{n}}x_{i,j,r}^ {a}\leq c(n)\sum_{i=1}^{\infty}\frac{h(r)}{r^{n}}r_{i}^{n}x_{i,r_{i}}^{a},\]
we have
\[\Lambda_{h,a}(K)\leq c(n)\sum_{i=1}^{\infty}\inf_{0<r\leq r_{i}}\frac{h(r)}{r ^{n}}r_{i}^{n}x_{i,r_{i}}^{a}\leq c(n)\sum_{i=1}^{\infty}x_{i,r_{i}}^{a}\tilde{ h}(r_{i}),\]
which implies the inequality by choice of the covering.
(2) There is a measure \(c(n)\mu_{L}\) such that \(c(n)\mu_{L}(B(x,r))\leq h(r)\) for all \(r\leq r_{0}\), cf. [2, Theorem 5.3.1]. Thus Theorem 6 ensures the result.
(3) \(C_{L}\subset Q\) can be covered by \(c(n)2^{kn}\) balls of radius \(l_{k}\), \(B(x_{i},l_{k})\). As \(x_{i,l_{k}}^{a}\leq cv_{i,k}^{a}\)
if \(\varrho\geq l_{k}\) (\(c=c(Q,n,a)\)),
\[\Lambda^{\varrho}_{h,a}(C_{L})\leq cc(n)\sum_{i=1}^{2^{kn}}v^{a}_{i,k}h(l_{k})=cc(n)\frac{h(l_{k})}{h_{L}(l_{k})},\]
which implies the statement.
Another corollary of Theorem 8 is given below.
**Corollary 4**.: (a) _Let \(h(r)\) be increasing on \([0,\infty)\), \(h(0)=0\). Let \(0<p\nu\leq n\). If_
\[\int_{0}^{1}\left(\frac{h(r)}{r^{n-p\nu}}\right)^{p^{\prime}-1}\frac{dr}{r}=\infty,\]
_then there exists a compact set \(K\subset\mathbb{R}_{+}^{n}\) such that \(\Lambda_{h,a}(K)>0\) and \(C_{a,\nu,p}(K)=0\)._
(b) _With the notation above, if_
\[\liminf_{r\to 0}\frac{h(r)}{r^{n-p\nu}}=0, \tag{39}\]
_then there exists a compact set \(K\subset\mathbb{R}_{+}^{n}\) such that \(\Lambda_{h,a}(K)=0\) and \(C_{a,\nu,p}(K)>0\)._
Proof.: Comparing Theorem 8 with [2, Theorem 5.3.2], it can be seen that if \(0<p\nu\leq n\), then \(C_{a,\nu,p}(C_{L})>0\) if and only if \(C_{\nu,p}(C_{L})>0\). If \(C_{L}\subset[0,1]^{n}\), \(\Lambda_{h}(C_{L})\geq\Lambda_{h,a}(C_{L})\geq\Lambda_{h,a}(C_{L}^{1})\geq(1-l_{1})^{|a|}\Lambda_{h}(C_{L}^{1})\geq c(l_{1},a,n)\Lambda_{h}(C_{L})\), where \(C_{L}^{1}\) is the Cantor set associated with \(L^{1}:=\{l_{i}\}_{i=1}^{\infty}\) and located in \([1-l_{1},1]^{n}\). According to [2, Theorem 5.4.2], the assumptions of part (a) ensure that there is a Cantor set \(C_{L}\) with \(\Lambda_{h}(C_{L})>0\) and \(C_{\nu,p}(C_{L})=0\); this example fulfils the requirements of part (a). According to [2, Theorem 5.4.1], the assumptions of part (b) imply that there is a Cantor set \(C_{L}\) with \(\Lambda_{h}(C_{L})=0\) and \(C_{\nu,p}(C_{L})>0\); this example fulfils the requirements of part (b).
**Construction.** Comparing Theorem 7, Proposition 2 and Corollary 4, it is natural to look for a sequence \(L=\{l_{k}\}_{k=0}^{\infty}\) with a corresponding Cantor set such that \(h(l_{k})\sim h_{L}(l_{k})\). If \(\frac{h(r)}{r^{n-|a|}}\) is decreasing, it is not difficult to construct a sequence which generates a Cantor set and fulfils \(h(l_{k})=h_{L}(l_{k})\).
Indeed, let \(Q=[0,1]^{n}\). We can assume that \(h(1)=1=l_{0}\). The right endpoints of the intervals of \(C_{1}\) are \(l\) and \(1\). Thus all the coordinates of \(v_{1,i}(l)\) are \(l\) or \(1\), and so \(u_{1}(l):=\sum_{i=1}^{2^{n}}v_{1,i}^{a}(l)\) is increasing in \(l\), and \(f_{1}(l):=\frac{1}{u_{1}(l)}\) is decreasing and positive, with \(f_{1}(0)=1\) and \(f_{1}(1)=\frac{1}{2^{n}}\). So \(l_{1}\) is defined by \(h(l_{1})=f_{1}(l_{1})\).
\(u_{2}(l)=u_{1}(l_{1})+s_{2}(l)=\sum_{i=1}^{2^{2n}}v_{2,i}^{a}(l_{1},l)\), where \(s_{2}(l)\) is an increasing function of \(l\), since the coordinates of \(v_{2,i}\) contain the right endpoints of \(C_{2}=C_{2}(l_{1},l)\). Moreover \(u_{2}(l_{1})=2^{n}u_{1}(l_{1})\), because the right endpoints \(l;l_{1};1-l_{1}+l;1\) become \(l_{1};l_{1};1;1\). Thus \(f_{2}(l):=\frac{1}{u_{2}(l)}\) is decreasing, \(f_{2}(l_{1})=\frac{1}{2^{n}}f_{1}(l_{1})\), \(f_{2}(0)>0\), so \(l_{2}\) is defined by \(h(l_{2})=f_{2}(l_{2})\). In general, \(u_{k}(l)=u_{k-1}(l_{k-1})+s_{k}(l)\), \(f_{k}(l):=\frac{1}{u_{k}(l)}\) is decreasing, \(f_{k}(l_{k-1})=\frac{1}{2^{n}}f_{k-1}(l_{k-1})=\frac{1}{2^{n}}h(l_{k-1})\), \(f_{k}(0)>0\), which defines \(l_{k}\). \(L:=\{l_{k}\}\) is obviously decreasing and \(h(l_{k-1})\leq 2^{n}h(l_{k})\). It remains to prove that \(L\) defines a Cantor set. We have to show that
\[f_{k}\left(\frac{l_{k-1}}{2}\right)\leq h\left(\frac{l_{k-1}}{2}\right).\]
Consider \(u_{k}\left(\frac{l}{2}\right)=u_{k-1}(l_{k-1})+s_{k}\left(\frac{l}{2}\right)\), and the members of \(s_{k}\left(\frac{l}{2}\right)\) consist of products of terms \(\left(d+\frac{l}{2}\right)^{a_{i}}\), where \(d\geq 0\). Thus \(u_{k}\left(\frac{l}{2}\right)\geq\frac{1}{2^{|a|}}u_{k}(l)\). So
\[\frac{1}{2^{|a|}}f_{k}\left(\frac{l_{k-1}}{2}\right)\leq f_{k}(l_{k-1})=\frac{1 }{2^{n}}f_{k-1}(l_{k-1})=\frac{1}{2^{n}}h(l_{k-1})\leq\frac{1}{2^{|a|}}h\left( \frac{l_{k-1}}{2}\right),\]
where in the last inequality we used the assumption.
Notice that together with Theorem 5 this leads to the construction of a Cantor-type set in \([0,1]^{n}\) with "prescribed" B-\(p\) capacity. More precisely, if \(0<|a|<p\nu\), let \(h(r)=r^{n-b}\), where \(0<p\nu-b<p\nu-|a|\). Then \(0<C_{a,\nu,p}(C_{L})<c(n,a)p(p\nu-b)\), where \(h(l_{k})=h_{L}(l_{k})\).
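As a quick consistency check (our remark): for \(h(r)=r^{n-b}\),
\[\frac{h(r)}{r^{n-|a|}}=r^{|a|-b},\]
which is decreasing precisely when \(b>|a|\); the restriction \(0<p\nu-b<p\nu-|a|\), i.e. \(|a|<b<p\nu\), therefore guarantees the monotonicity hypothesis used in the construction above.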
|
2306.16391
|
Uncovering Software-Based Power Side-Channel Attacks on Apple M1/M2
Systems
|
Traditionally, power side-channel analysis requires physical access to the
target device, as well as specialized devices to measure the power consumption
with enough precision. Recently research has shown that on x86 platforms,
on-chip power meter capabilities exposed to a software interface might be used
for power side-channel attacks without physical access. In this paper, we show
that such software-based power side-channel attack is also applicable on Apple
silicon (e.g., M1/M2 platforms), exploiting the System Management Controller
(SMC) and its power-related keys, which provides access to the on-chip power
meters through a software interface to user space software. We observed
data-dependent power consumption reporting from such SMC keys and analyzed the
correlations between the power consumption and the processed data. Our work
also demonstrated how an unprivileged user mode application successfully
recovers bytes from an AES encryption key from a cryptographic service
supported by a kernel mode driver in MacOS. We have also studied the
feasibility of performing frequency throttling side-channel attack on Apple
silicon. Furthermore, we discuss the impact of software-based power
side-channels in the industry, possible countermeasures, and the overall
implications of software interfaces for modern on-chip power management
systems.
|
Nikhil Chawla, Chen Liu, Abhishek Chakraborty, Igor Chervatyuk, Ke Sun, Thais Moreira Hamasaki, Henrique Kawakami
|
2023-06-28T17:36:16Z
|
http://arxiv.org/abs/2306.16391v2
|
# The Power of Telemetry: Uncovering Software-Based Side-Channel Attacks on Apple M1/M2 Systems
###### Abstract
Power analysis is a class of side-channel attacks, where power consumption data is used to infer sensitive information and extract secrets from a system. Traditionally, such attacks required physical access to the target, as well as specialized devices to measure the power consumption with enough precision.
The PLATYPUS attack [1] has shown that on-chip power meter capabilities exposed to a software interface might form a new class of power side-channel attacks. This paper presents a software-based power side-channel attack on Apple Silicon M1/M2 platforms, exploiting the System Management Controller (SMC) and its power-related keys, which provides access to the on-chip power meters through a software interface to user space software.
We observed data-dependent power consumption reporting from such keys and analyzed the correlations between the power consumption and the processed data. Our work also demonstrated how an unprivileged user mode application successfully recovers bytes from an AES encryption key from a cryptographic service supported by a kernel mode driver in MacOS. Furthermore, we discuss the impact of software-based power side-channels in the industry, possible countermeasures, and the overall implications of software interfaces for modern on-chip power management systems.
## I Introduction
It is well-known that CMOS circuits, when processing data, generate data-dependent power consumption. This behavior has been misused by attackers to perform power analysis attacks, extracting sensitive data, such as secret keys or passwords, from the target system. In traditional power side-channel attacks, the attacker typically requires physical access to the system to measure power consumption information.
Software-based power side-channel attacks represent a new class of power side-channel attacks that can be performed by a software attacker leveraging on-chip power meter capabilities provided by the hardware, without requiring physical access to the system. One example of such attacks is the PLATYPUS attack [1], which leverages the Running Average Power Limit (RAPL) energy counters available on x86 CPUs to extract sensitive information.
In this paper, we demonstrate a software-based power side-channel attack targeting ARM-based Apple M1/M2 systems. We observed that the System Management Controller (SMC) on Apple silicon exposes power meter capabilities to software. A set of such metrics are also accessible to user mode application in macOS. With experiments, we confirmed that a subset of those metrics is correlated with the data processed by the CPU. Furthermore, we show that cryptographic operations, such as AES [2], performed by privileged software (e.g., kernel software) become vulnerable to software-based power side-channel attacks, by exploiting the unprivileged access to sensor data exposed by SMC.
This attack shows that software-based power side-channel attacks are an industry-wide problem spanning across various architectures, and show that it is not trivial to develop a suitable mitigation plan.
Our key contributions are:
* To the best of our knowledge, this is the first work that presents comprehensive software-based power side-channel analysis on ARM-based Apple silicon.
* We show that specific power meter reports provided by the Apple SMCs are data dependent. Moreover, the IOKit library exposes SMC sensor data to a user mode application, allowing an unprivileged attacker to infer secret keys from kernel mode.
* We discuss the impact of software-based power side-channels on the industry as a whole, the implication of software interfaces to modern on-chip power management systems, and possible countermeasure techniques to mitigate the risks of keeping such software interfaces available to unprivileged user mode application.
The remainder of this paper starts by presenting background and related work (Section II). In Section III, we present the details of our research on the SMC power meters reporting and data dependency finding, as well as the exploitation of the power side-channel. We then discuss possible countermeasure techniques in Section IV and Section V concludes our paper.
**Responsible Disclosure:** We responsibly disclosed our findings to Apple Inc. on November 29th, 2022. Following up on Apple's request, we provided two different Proof-of-Concept (PoC) versions in January 2023 and March 2023, respectively. Apple acknowledged the findings, validated the PoC on June 27, 2023, and does not propose to take further actions to mitigate the issue.
## II Background & Related Work
In this section, we provide background on Apple System Management Controller, software-based power side-channels, and different methods of power analysis.
### _Power meter reporting on Apple platforms_
The System Management Controller (SMC) is a co-processor responsible for power and thermal management available in multiple Apple platforms since the legacy x86-based Apple systems [3].
The SMC exposes various sensor data including temperature, voltage and power meters, battery status, fan status, and other power-related functions on the system. The sensor data is accessible to user mode software as a key-value pair, where the key is a 4-character alphanumeric string [4] and the value can be retrieved by calling the IOConnectCallStructMethod function in the IOKit [5] library built into macOS.
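A minimal sketch of reading one such key from user space (our illustration, not Apple documentation): the heavy lifting is delegated to the open-source _smc-fuzzer_ tool [13] used later in this paper; the binary path, the `-k`/`-r` read flags, and the output format assumed here are based on that tool's lineage and are assumptions, not an official interface.

```python
# Hedged sketch: wraps the (assumed) smc-fuzzer CLI to read one SMC key.
import re
import subprocess

def read_smc_key(key: str, smc_binary: str = "./smc") -> float:
    """Read a single SMC key (e.g. 'PHPC') and return its value as a float."""
    out = subprocess.run([smc_binary, "-k", key, "-r"],
                         capture_output=True, text=True, check=True).stdout
    # Expect something like "  PHPC  [flt ]  12.345"; grab the trailing number.
    match = re.search(r"(-?\d+(?:\.\d+)?)\s*$", out.strip())
    if match is None:
        raise ValueError(f"could not parse SMC output: {out!r}")
    return float(match.group(1))

if __name__ == "__main__":
    print(read_smc_key("PHPC"))
```

The later sketches in this paper assume this helper is saved as `smc_reader.py`.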
### _Software-based power side-channel attacks_
The PLATYPUS attack [1] has shown that the Running Average Power Limit (RAPL) energy reporting interfaces, which are available on Intel and AMD processors, expose data-dependent power consumption information that can be used to perform power side-channel attack by a software attacker.
Furthermore, recent works [6, 7] have discovered that CPUs, when hit certain reactive limits (e.g, power or current limits), will throttle to run at lower frequencies, which are correlated with energy consumption of the chip and hence correlated with data processed by the CPU.
Concurrent to this paper, Taneja _et al._[8] discusses thermal limit induced frequency throttling side-channel on various integrated and discrete GPUs, and also mentions software-based power side-channels on ARM-based CPUs, including Apple M1/M2 systems. Compared to [8], our work conducts a more comprehensive study on the power telemetry side channel on Apple M1/M2 platforms.
### _Simple, Differential, and Correlation Power Analysis_
Power analysis can be classified according to the statistical analysis applied on the data collected [9].
Simple power analysis (SPA) is a technique that supports the identification of patterns in power consumption traces during the execution of multiple operations. With aid of those patterns, one can identify the power consumption of specific instructions or operations in a single execution.
On the other hand, differential power analysis (DPA) is a more advanced technique that involves statistically analyzing multiple power traces with various input data to infer data-dependencies in the power consumption patterns.
Correlation power analysis (CPA) is also a statistical type of attack that uses the Pearson correlation coefficient. The goal is to correlate the power consumption reading and the input values being processed. This technique is powerful because it enhances even the small variations in the power consumption and helps by increasing the Signal-to-Noise Ratio (SNR). For this reason, CPA is considered a more sophisticated attack.
## III SMC power telemetry attacks
In previous works, energy consumption reported by RAPL interfaces on x86 processors are shown to correlate with both the instruction type and the data operand [1, 10]. In this section, we provide detailed descriptions of the procedures used to identify data-dependent SMC key values and evaluate the feasibility of performing software-based side-channel attacking using the keys.
### _Experimental Setup_
We used an Apple Mac Mini M1 and an Apple MacBook Air M2 for all the experiments described in this section. Table I summarizes the specification of the systems.
### _Threat model_
In this work, we assume the attacker is a user mode software and the victim is a kernel mode driver holding a secret. The victim may provide a service to user mode software. The attacker might use these services to call/invoke kernel routines that operate on the secret data. The attacker, as a user mode application, has no direct access to the secret which belongs to the kernel mode driver, but to the SMC key-value pairs.
### _Identifying workload-dependent SMC keys_
Since official SMC key definitions for Apple M1/M2 systems are not publicly available, the initial phase of the research involved identifying, among the extensive collection of available keys, the specific SMC keys that exhibit a correlation with power consumption. Based on the established naming convention adopted in x86-based Mac systems, it has been observed that SMC keys associated with power-related functionality commonly begin with the capital letter "P" [11][12]. By leveraging this information, we were able to significantly narrow down the pool of candidate keys to approximately 30.
To identify SMC keys that exhibit correlation with workload, we conducted an experiment using the open source tool _smc-fuzzer_[13]. This experiment involved comparing key values under both idle and busy system conditions. Through this comparative analysis, our objective was to derive the subset of SMC keys that demonstrate correlation with workload.
Figure 1 (a) lists a set of the SMC keys on the Apple M2 system that start with the letter "P", along with their corresponding values when the system is in an idle state. Subsequently, in Figure 1 (b), we show the measurements of the same SMC keys during the execution of the _stress-ng_[14] workload, which involves performing matrix operations on all available cores. By conducting a side-by-side comparison of the results, we were able to identify certain SMC keys that exhibit variations in their values correlated with the workload. These specific SMC keys, highlighted in red, signify the impact of the workload on their respective values. This observation highlights the potential relationship between these specific SMC keys and the energy consumption during different workload scenarios.
For Apple M1, we replicated the experiment, and the results of the comparison are shown in Figure 2. Table II summarizes the SMC keys related to the workload on the M1 and M2 systems, respectively.
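The screening step can be sketched as follows (our illustration; the candidate subset and the crude 10% change threshold are assumptions, and `read_smc_key` is the hypothetical helper sketched in Section II):

```python
# Compare candidate "P*" SMC keys between an idle and a busy system.
import time

from smc_reader import read_smc_key  # hypothetical module from the earlier sketch

CANDIDATES = ["PDTR", "PHPC", "PHPS", "PMVC", "PSTR"]  # subset, for illustration

def sample(keys, n=5, pause=1.0):
    """Average n readings per key, spaced ~1 s apart (the apparent SMC refresh)."""
    means = {}
    for k in keys:
        vals = []
        for _ in range(n):
            vals.append(read_smc_key(k))
            time.sleep(pause)
        means[k] = sum(vals) / n
    return means

idle = sample(CANDIDATES)
input("Start the stressor (e.g. stress-ng matrix on all cores), then press Enter...")
busy = sample(CANDIDATES)
for k in CANDIDATES:
    changed = abs(busy[k] - idle[k]) > 0.1 * max(abs(idle[k]), 1e-9)
    print(f"{k}: idle={idle[k]:.3f} busy={busy[k]:.3f}"
          f" -> {'workload-dependent' if changed else 'stable'}")
```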
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Mac & \#Pcore & \#Ecore & OS & Max. Freq (GHz) \\ \hline M1 Mini & 4 & 4 & macOS 12.5 & 3.2 \\ M2 Air & 4 & 4 & macOS 13.0 & 3.5 \\ \hline \end{tabular}
\end{table} TABLE I: Specification of tested devices
### _Identifying data-dependent SMC keys_
Subsequently, we identify the data-dependent SMC keys from the previous list of workload-dependent keys. To accomplish this, we design and implement a Proof-of-Concept (PoC) that involves repetitively executing the same workload while accepting and processing distinct input data for a fixed number of iterations. During each iteration, different input data is provided to the workload. We measure and log multiple data points of each selected SMC key for each input, generating a so-called _trace_.
In order to enhance the energy consumption correlation with the input provided, we opt to replicate the workload and execute it simultaneously on three P-cores. This amplification allows us to capture a more pronounced correlation between the reported power consumption and the specific data inputs being processed.
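A trace-collection loop in this spirit might look as follows (our sketch: the busy-loop `victim` merely stands in for the replicated AES workload, and the 1.2 s delay reflects the roughly one-second SMC refresh we observed):

```python
# Collect one SMC sample per random plaintext while data-dependent work runs.
import threading
import time

import numpy as np

from smc_reader import read_smc_key  # hypothetical module from the earlier sketch

def victim(pt: np.ndarray, stop: threading.Event) -> None:
    # Placeholder for the real victim: keeps touching the plaintext so the
    # package power draw depends on the processed data.
    x = pt.copy()
    while not stop.is_set():
        x ^= pt

def collect_traces(n_traces: int, key_name: str = "PHPC", n_threads: int = 3):
    rng = np.random.default_rng(0)
    plaintexts = rng.integers(0, 256, size=(n_traces, 16), dtype=np.uint8)
    samples = np.zeros(n_traces)
    for i, pt in enumerate(plaintexts):
        stop = threading.Event()
        workers = [threading.Thread(target=victim, args=(pt, stop))
                   for _ in range(n_threads)]
        for w in workers:
            w.start()
        time.sleep(1.2)                 # wait out one ~1 s SMC refresh interval
        samples[i] = read_smc_key(key_name)
        stop.set()
        for w in workers:
            w.join()
    return plaintexts, samples
```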
We then apply the Test Vector Leakage Assessment (TVLA) [15] to validate whether any of the pre-selected SMC key value traces show data-dependency. TVLA utilizes Welch's t-test to assess side-channel leakage of cryptographic schemes. A Welch's t-test compares two datasets, A and B, by computing a statistic score, the _t-score_. An absolute _t-score_ greater than 4.5 indicates that the two datasets are statistically distinguishable with 99.999% confidence.
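Concretely, the fixed-vs-fixed check reduces to a few lines (our sketch; `traces_a`/`traces_b` are arrays of SMC samples for two fixed plaintexts, and the synthetic data at the bottom only demonstrates the mechanics):

```python
# Welch's t-test as the TVLA distinguisher described above.
import numpy as np
from scipy.stats import ttest_ind

def tvla_distinguishable(traces_a, traces_b, threshold=4.5):
    t_score, _ = ttest_ind(traces_a, traces_b, equal_var=False)  # Welch's t-test
    print(f"t-score = {t_score:.2f}")
    return abs(t_score) > threshold   # distinguishable with ~99.999% confidence

rng = np.random.default_rng(1)
a = rng.normal(10.00, 0.5, 10_000)    # e.g. samples for the all-zeros plaintext
b = rng.normal(10.05, 0.5, 10_000)    # e.g. samples for the all-ones plaintext
print("leaky" if tvla_distinguishable(a, b) else "no evidence")
```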
For this experiment, we selected the implementation of AES-128 encryption from _AES-Intrinsics_[16] as the testing workload, which utilizes the ARMv8 equivalent to the cryptographic extension AES instructions (e.g., AESE and AESMC). We observed that the SMC key values are updated approximately every one second. Therefore, the victim AES encryption for the same plaintext is repeatedly executed for more than one second. We collected 10k SMC key value traces corresponding to the encryption of each of the three chosen plaintexts - (_All_0s_, _All_1s_, and _Random_) - with a fixed key. We then applied TVLA analysis between all possible pairs of those chosen plaintexts. Table III shows the TVLA results on the Apple M2 system for the selected SMC keys between the different and same plaintexts for the AES workload. Different colors mean:
* True positive: two traces with different plaintexts are distinguishable.
* True negative: two traces with the same plaintext are non-distinguishable.
* False positive: two traces with the same plaintext are distinguishable.
* False negative: two traces with different plaintexts are non-distinguishable.
Out of the selected SMC keys, one key, namely _PHPC_, stands out by demonstrating true positive and true negative results consistently. Remarkably, _PHPC_ exhibits no instances of false positive or false negative correlations. This finding strongly suggests that _PHPC_ has the most robust and reliable correlation with the data.
In the case of _PDTR_, _PMVC_, and _PSTR_, these SMC keys exhibit a combination of true positive and true negative results for a majority of the pairs. However, they also display several instances of false positive or false negative correlations. This indicates a somewhat weaker data-correlation compared to _PHPC_. Conversely, the SMC key _PHPS_ primarily generates false negative correlations for most of the pairs and does not yield any true positive correlations, suggesting a limited data-correlation.
As a result of our analysis, we have confirmed the data-dependency for all the selected SMC keys except for _PHPS_ on the Apple M2 system. Additionally, we conducted TVLA analysis on the collected traces of the _PHPC_ key values collected on the Apple M1 platform, affirming a similar data-dependency pattern for the _PHPC_ key on this system as well.
### _AES encryption key extraction_
Following the confirmation of data-dependency in the SMC key values, our research assesses the feasibility of extracting secrets from a victim program. In a manner similar to the previous experiment, our focus is on an AES-128 encryption implementation obtained from _AES-Intrinsics_[16]. The victim
\begin{table}
\begin{tabular}{|c|l|l|} \hline & Mac Mini M1 & MacBook Air M2 \\ \hline SMC keys & PDTR, PHPC, PHPS, PMVR, PPMR, PSTR & PDTR, PHPC, PHPS, PMVC, PSTR \\ \hline \end{tabular}
\end{table} TABLE II: Workload-dependent SMC keys whose values are correlated with the _stress-ng_ workload
Fig. 1: Apple M2 power related SMC key values comparison when the system is (a) idle and (b) running stressor code
Fig. 2: Apple M1 power related SMC key values (partial) comparison when the system is (a) idle and (b) running stressor code
program repeatedly carries out encryption operations using a secret key that remains inaccessible to the attacker. However, in contrast to providing a fixed plaintext as in the previous experiment, the attacker process now injects random plaintext inputs into the victim program. During this process, the attacker records the plaintext, the generated ciphertext, and the corresponding SMC key values once the encryption operation is completed. It is important to note that, to minimize noise, the AES-128 encryption takes place in user mode for the purpose of this PoC.
In this experiment, we collect one million traces on the M2 system and 350k traces on the M1 system, respectively. We then perform Correlation Power Analysis (CPA) to analyze the collected SMC key value traces. CPA involves computing correlations between the power side-channel traces and a hypothetical power model derived from the Hamming Weight (HW) or Hamming Distance (HD) of intermediate states. In our CPA approach, the power traces used for analysis correspond to the SMC key value traces we collected. The hypothetical power models employed are similar to those used in traditional CPA. These models incorporate the HW or HD of intermediate states, which are:
* **Rd0-HW**: HW after the first _AddRoundKey_ operation to recover initial round key
* **Rd10-HW**: HW before the last round _SubBytes_ operation to recover round #10 key
* **Rd10-HD**: HD between last round input and ciphertext to recover round #10 key.
For each targeted key byte, the hypothetical power model is constructed using HW/HD of intermediate round state for all possible key guesses. We calculate the correlation coefficient between the SMC key value trace and the hypothetical power model, then rank all key guesses in decreasing order of correlation coefficient's magnitude. The outcome of the CPA test is the rank of the correct key byte with a value of 1 indicating recovery of the secret key byte. The average rank across all key bytes is measured by _Guessing Entropy_ (GE). Lower GE indicates lower ranks across all key bytes and \(GE=0\) indicates recovery of all key bytes.
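For the _Rd0-HW_ model the per-byte attack is compact (our sketch: `plaintexts` is an `(N, 16)` array, `traces` the matching SMC samples, and `true_key` the known test key used only for evaluation; ranks are 0-based here so that \(GE=0\) means full recovery, whereas the tables in this section report 1-based ranks):

```python
# CPA with the Rd0-HW model: leakage hypothesis is HW(pt_byte XOR guess).
import numpy as np

HW = np.array([bin(v).count("1") for v in range(256)], dtype=np.float64)

def byte_rank(plaintexts, traces, byte_idx, true_byte):
    t = traces - traces.mean()
    corrs = np.empty(256)
    for guess in range(256):
        model = HW[plaintexts[:, byte_idx] ^ guess]
        model -= model.mean()
        # Pearson correlation magnitude between the model and the traces.
        corrs[guess] = abs(model @ t) / (np.linalg.norm(model) * np.linalg.norm(t))
    order = np.argsort(-corrs)                      # guesses by decreasing |corr|
    return int(np.where(order == true_byte)[0][0])  # 0 == byte recovered

def guessing_entropy(plaintexts, traces, true_key):
    ranks = [byte_rank(plaintexts, traces, i, true_key[i]) for i in range(16)]
    return ranks, float(np.mean(ranks))             # GE == 0 iff all bytes recovered
```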
Table IV summarizes the final ranks of secret key bytes and the Guessing Entropy after applying CPA on the collected SMC key value traces with _First round Hamming Weight_ (Rd0-HW) power model on the M2/M1 system. Key bytes that are correctly recovered (rank = 1) are marked as red, while key bytes that are almost recovered (rank < 10) are marked as yellow.
On the Macbook Air M2 system, comparing key ranks across SMC key value traces, we observe that CPA on _PHPC_ key value traces recovers the most secret key bytes (6 out of 16), with 6 further bytes almost recovered. CPA on _PDTR_ and _PMVC_ key value traces shows low ranks for multiple key bytes (10 out of 16 key bytes for _PDTR_, 7 out of 16 for _PMVC_). The _PSTR_ key traces, on the other hand, recover none of the key bytes. On the Apple M1 Mini system, we repeated the test for the _PHPC_ key and observed that 2 out of the 16 bytes can be recovered, while 6 other bytes have ranks lower than 10.
Fig. 3 shows the trend in GE for different numbers of _PHPC_ key value traces and different power models, for both M1 and M2 systems. The GE curve for the M1 system is shorter due to the smaller number of collected _PHPC_ key value traces (350k traces). A consistent, converging trend with an increasing number of collected SMC key value traces indicates that as more traces are gathered, recovering all key bytes becomes feasible. There are distinct convergence patterns among the different power models. Notably, the _Rd0-HW_ power model exhibited the fastest convergence rate, indicating its effectiveness in recovering the key bytes. The _Rd10-HW_ power model also converged, albeit at a considerably slower rate, while the _Rd10-HD_ power model exhibited minimal convergence, suggesting limited effectiveness. This further supports the suitability and reliability of the _Rd0-HW_ model for extracting information from the SMC key value traces on Apple M1/M2 platforms.
### _Targeting a kernel module_
To provide insight of the feasibility of the attack with a more realistic threat model, we implement a kernel module that serves as an encryption engine to encrypt the user provided plaintexts using AES-128. The secret key is hosted in the kernel-only memory. The attacker is a user mode application, with read access to the SMC key values through the interface provided by the IOKit framework. To implement this kernel module, we leveraged the kernel extensions support in the
\begin{table}
\begin{tabular}{c|c c c c c} & PHPC & PDTR & PMVC & PSTR & PHPC (M1) \\ \hline
0 & 7 & **1** & 3 & 211 & 9 \\
1 & 7 & 7 & 3 & 22 & 19 \\
2 & **1** & 5 & 3 & 188 & 4 \\
3 & 11 & 11 & 12 & 189 & 12 \\
4 & 5 & **1** & **1** & 151 & **1** \\
5 & 4 & 15 & 12 & 223 & 31 \\
6 & 4 & 6 & 14 & 113 & 16 \\
7 & 13 & 8 & 12 & 39 & 5 \\
8 & **1** & 15 & 17 & 201 & 9 \\
9 & 37 & 16 & 22 & 101 & 18 \\
10 & **1** & 5 & 11 & 214 & 7 \\
11 & **1** & 2 & 2 & 117 & 2 \\
12 & **1** & 2 & **1** & 146 & **1** \\
13 & 4 & 12 & 13 & 184 & 36 \\
14 & **1** & 9 & 8 & 18 & 25 \\
15 & 26 & 24 & 14 & 137 & 50 \\ \hline GE & 31.0 & 41.6 & 42.8 & 109.3 & 40.9 \\ \end{tabular}
\end{table} TABLE IV: Rank of the recovered secret key byte of AES applying CPA on collected SMC key value traces with Round 0 HW power model on Macbook Air M2 and M1 (PHPC only)
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c|c c c|c c c} Key & \multicolumn{3}{c|}{PHPC} & \multicolumn{3}{c|}{PDTR} & \multicolumn{3}{c|}{PHPS} & \multicolumn{3}{c|}{PMVC} & \multicolumn{3}{c}{PSTR} \\ \hline Plaintext & All 0s & All 1s & Random & All 0s & All 1s & Random & All 0s & All 1s & Random & All 0s & All 1s & Random & All 0s & All 1s & Random \\ \hline All 0s\({}^{*}\) & -0.18 & 20.94 & 11.49 & 8.73 & 29.58 & 25.05 & 0.87 & 2.14 & 20.2 & 9.49 & 32.16 & 27.33 & 21.30 & 12.96 & 27.41 \\ All 1s\({}^{*}\) & -21.09 & 0.09 & -8.87 & -20.42 & 0.02 & -4.49 & -1.97 & -0.69 & 0.84 & -22.45 & 0.09 & -5.15 & 9.13 & 0.37 & 15.16 \\ Random\({}^{*}\) & -11.60 & 9.28 & 0.43 & -15.28 & 5.01 & 0.55 & -0.53 & 0.74 & 0.61 & -17.54 & 4.92 & -0.23 & 5.99 & -15.03 & -0.24 \\ \end{tabular}
\end{table} TABLE III: TVLA result on the Apple M2 system for selected SMC keys between different plaintexts for the AES workload.
Xcode IDE for iOS developers [17]. This kernel extension implements the AES-128 encryption from _AES-Intrinsics_ using a device driver. The device driver accepts the plaintext from a user mode application, repeatedly encrypts the plaintext for a fixed number of iterations, and writes the ciphertext to a buffer, which can be read by the user mode application. In this implementation, a single thread executing on a P-core invokes the device driver for encrypting the plaintext.
Next, we applied TVLA on collected SMC key traces to check whether the SMC keys identified in the previous test still exhibit data-dependency. We followed steps similar to those outlined in section III-D to collect SMC key traces corresponding to the encryption of different plaintext inputs obtained from the device driver. Table V summarizes the _t-scores_ evaluated for SMC key traces corresponding to encryption of _All_zero_, _All_one_, and _Random_ plaintext inputs with a fixed key from the device driver on the Macbook Air M2 system. We followed the same color-coding convention as outlined in section III-C to interpret the TVLA test results. We observe a similar trend, meaning that _PHPC_ key traces exhibit the strongest correlation, followed by _PDTR_, _PMVC_ and _PSTR_ key traces, and that _PHPS_ shows the lowest correlation. These results once more confirm our findings regarding data-dependency for most SMC keys, except _PHPS_.
We conducted a CPA test on the collected SMC key value traces that exhibited data correlation on the TVLA test. The objective was to recover the AES encryption key from the device driver. We collected a dataset of one million traces each for each of the SMC keys, _PHPC_, _PDTR_, _PMVC_ and _PSTR_. We then applied CPA with the hypothetical power models considered in section III-D and obtained the GE metric.
Fig. 4 illustrates the GE trend against the one million traces, when CPA was applied on _PHPC_ SMC key value traces using different explored power models. We observe the converging trend, indicating a reduction in the rank of correct key bytes with an increasing number of collected traces. Among the power models tested, the _Rd0-HW_ power model demonstrated the strongest correlation with the collected _PHPC_ key traces, as indicated by the fastest convergence of the GE metric. On the other hand, the _Rd10-HD_ power model did not exhibit any convergence.
Additionally, we observed that the GE metric converged approximately two times slower when CPA was applied to the SMC key traces collected from the kernel module implementation of the AES victim, compared to the user mode AES application (Fig. 3). This can be attributed to the decreased SNR of the collected traces due to system call invocations and the lower number of victim threads.
The experimental findings from the CPA test indicate that an unprivileged attacker can compromise the confidentiality of assets protected by the kernel.
## IV Discussion & Possible Countermeasures
To address the vulnerabilities highlighted by the PLATYPUS attack [1], both Intel and AMD have taken measures to mitigate the risk. Specifically, they have removed user space software access to the RAPL (Running Average Power Limit) energy counters from the Linux kernel drivers [18, 19]. Additionally, Intel has introduced a RAPL filtering mechanism, which introduces random energy noise into the energy reporting interface and adjusts the update interval [20]. These measures aim to decrease the SNR of observable power consumption differences, making it more challenging to extract sensitive information through similar power side-channel attacks.
Considering the issues discussed in our paper, we believe similar countermeasures can be applied to address the vulnerabilities discussed in this paper. By implementing mechanisms to reduce the SNR of power consumption differences and introducing random energy noise, the effectiveness of this type of power side-channel attacks can be significantly reduced.
\begin{table}
\begin{tabular}{l|c c c|c c c|c c c|c c c|c c c} Key & \multicolumn{3}{c|}{PHPC} & \multicolumn{3}{c|}{PDTR} & \multicolumn{3}{c|}{PHPS} & \multicolumn{3}{c|}{PMVC} & \multicolumn{3}{c}{PSTR} \\ \hline Plaintext & All 0s & All 1s & Random & All 0s & All 1s & Random & All 0s & All 1s & Random & All 0s & All 1s & Random & All 0s & All 1s & Random \\ \hline All 0s & 2.78 & 19.28 & 9.41 & 13.84 & 41.52 & 43.01 & 2.72 & 3.60 & 6.51 & 15.13 & 45.33 & 47.10 & 40.66 & 18.45 & 37.50 \\ All 1s & -17.91 & -0.76 & -11.12 & -30.27 & -2.16 & -0.26 & -3.99 & -3.12 & -0.11 & -32.93 & -2.44 & -0.48 & -18.45 & 16.6 & 20.01 \\ Random & -6.77 & 10.14 & -0.04 & -30.84 & -0.73 & -0.99 & -3.85 & 0.11 & 0.03 & -33.30 & -0.56 & -1.03 & 0.73 & -20.70 & -2.16 \\ \end{tabular}
\end{table} TABLE V: TVLA analysis on selected SMC key traces corresponding to encryption of different plaintext inputs from AES device driver on the Macbook Air M2 system
Fig. 4: GE trend against number of collected _PHPC_ SMC key value traces for CPA targeting AES kernel module on the Macbook Air M2 system
Fig. 3: GE trend against collected _PHPC_ SMC key value traces for CPA targeting user space AES encryption on Apple M1 Mini and M2 Air system
However, as of the publication of this paper, we are not aware of any specific plans from Apple to provide mitigations against the vulnerabilities discussed in our research. It is important that system vendors and developers consider the potential security implications of this class of power side channel attacks and implement appropriate safeguards to protect sensitive information.
This work, along with numerous other studies conducted in recent years, highlights the widespread nature of power-meter side-channel attacks across various CPU architectures. The findings underscore the importance of acknowledging this issue and prompt all vendors in the industry to recognize the significance of software-based power side-channel attacks. It is crucial for vendors to proactively explore architectural solutions and implement preventive measures to mitigate the risks associated with newer power side-channel attacks.
By raising awareness about the prevalence and potential impact of exposing power meter reports to unprivileged software, we aim to encourage industry-wide collaboration and collective efforts towards enhancing the security and privacy of CPU architectures. It is imperative that vendors prioritize the development and implementation of robust security mechanisms that effectively address the vulnerabilities exposed by these attacks. This proactive approach will contribute to the creation of more secure and resilient systems, safeguarding sensitive information from potential exploitation through software-based power side-channel attacks.
## V Conclusion
This paper has explored the vulnerabilities arising from software-based power side-channel attacks and their implications across CPU architectures. Our experiments and analysis have revealed the data dependency of the power consumption reported by the selected System Management Controller (SMC) key values on ARM-based Apple Silicon M1/M2 systems. Furthermore, we demonstrated that a user space attacker can exploit the unprivileged access to the SMC key values to extract secrets from the kernel device driver. Our research underscores the need for industry-wide awareness and proactive exploration of architectural solutions to prevent software-based power side-channel attacks. It is crucial for vendors to prioritize security measures and collaborate in order to enhance the resilience of CPU architectures against such vulnerabilities.
|
2307.09198
|
Charged $AdS$ Black Holes in $4D$ Einstein--Gauss--Bonnet Massive
Gravity
|
We investigate Einstein--Gauss--Bonnet--Maxwell massive gravity in $4D$ AdS
background and find an exact black hole solution. The horizon structure of the
black holes is studied. Treating the cosmological constant as pressure and
Gauss-Bonnet coupling parameters, and massive gravity parameters as variables,
we derive the first law of black hole thermodynamics. To study the global
stability of the black holes we compute the Gibbs free energy. The local
stability of the black hole is also studied through specific heat. We analyze
the effects of graviton mass and Gauss-Bonnet coupling parameters on the phase
transition of the black holes. Finally, the effects of graviton mass and
massive gravity parameters on the Joule-Thomson expansion of the black hole are
studied.
|
Prosenjit Paul, Sudhaker Upadhyay, Dharm Veer Singh
|
2023-07-17T17:30:28Z
|
http://arxiv.org/abs/2307.09198v1
|
# Charged \(AdS\) Black Holes in \(4D\) Einstein-Gauss-Bonnet Massive Gravity
###### Abstract
We investigate Einstein-Gauss-Bonnet-Maxwell massive gravity in \(4D\) AdS background and find an exact black hole solution. The horizon structure of the black holes is studied. Treating the cosmological constant as pressure and the Gauss-Bonnet coupling and massive gravity parameters as variables, we derive the first law of black hole thermodynamics. To study the global stability of the black holes we compute the Gibbs free energy. The local stability of the black hole is also studied through the specific heat. We analyze the effects of the graviton mass and Gauss-Bonnet coupling parameters on the phase transition of the black holes. Finally, the effects of the graviton mass and massive gravity parameters on the Joule-Thomson expansion of the black hole are studied.
+
Footnote †: Corresponding author
## I Introduction
General Relativity (GR) is a theory of gravitation that helps us understand gravitational waves, gravitational lensing, the effect of gravity on time known as gravitational time dilation, and black holes. Although GR is not a complete theory of quantum gravity, this simplest theory of gravity describes various astrophysical phenomena. To complete GR, people have tried to modify it in various ways, for instance, by adding higher-order terms to the Einstein-Hilbert action, but a complete theory is still missing. Some examples of higher-order gravity theories are scalar-tensor theories [1; 2; 3; 4; 5; 6], Lovelock gravity [7; 8; 9], regular black holes [10; 11; 12; 13] and brane world cosmology [14; 15; 16].
Lovelock theory [7; 8] is a special higher-order theory of gravity in \(4D\) spacetime that preserves diffeomorphism invariance, metricity and second-order equations of motion. From Lovelock's theories of gravity, Gauss-Bonnet gravity can be obtained in higher dimensions [17]. The Gauss-Bonnet term does not contribute to the dynamics of the theory in four dimensions but contributes to the dynamics when the dimension of spacetime is greater than four. Recently, Glavan and Lin [18] found a solution of the Einstein-Gauss-Bonnet field equations in four dimensions by rescaling the Gauss-Bonnet coupling parameter \(\alpha\) as \(\alpha/(D-4)\). The charged AdS solution of the Einstein-Gauss-Bonnet theory was found in Ref. [19]. For a complete discussion of \(4D\) Gauss-Bonnet gravity, see Refs. [20]. Some other static spherically symmetric black hole solutions, together with their thermodynamics and phase transitions, in \(4D\) or higher-dimensional Einstein-Gauss-Bonnet gravity have been studied in Refs. [21; 22; 23; 24; 25; 26; 27; 28]. Einstein-Gauss-Bonnet black hole solutions in nonlinear electrodynamics are studied in Refs. [29; 30; 31; 32; 33; 34; 35; 36; 37].
Another way of modifying GR is by adding a mass to the graviton. According to GR, the graviton is a massless spin-2 particle, but one may ask whether a self-consistent theory of massive gravity is possible or not. In fact, people have tried to answer this question by modifying the Einstein-Hilbert action so that it describes a massive graviton. The recent observation of gravitational waves by LIGO puts an upper limit on the graviton mass, \(m\leq 1.2\times 10^{-22}\)eV [38]. A theory of massive gravity was first constructed by Fierz and Pauli in 1939 [39; 40]. In a curved background, this theory encounters ghost instabilities [41]. A new nonlinear massive gravity theory proposed by de Rham, Gabadadze and Tolley (dRGT) [42; 43] avoids the ghost problem. Charged black holes in Gauss-Bonnet massive gravity were studied in [44]. Some other spherically symmetric black holes in massive gravity and their thermodynamics have also been studied [45; 46; 47; 48; 49; 50].
In our theory, we consider the anti-de-Sitter (AdS) background, i.e. we add a negative cosmological constant. The negative cosmological constant is a crucial ingredient in the AdS/CFT correspondence, a duality between a theory of quantum gravity in (AdS) space and a conformal field theory (CFT) in one lower dimension. The AdS/CFT correspondence allows for the study of strongly coupled field theories using classical gravity, which is a useful tool for investigating non-perturbative phenomena that cannot be understood through standard perturbative methods. One significant consequence of the negative cosmological constant is that it leads to the presence of a holographic screen at the AdS boundary, which encodes the bulk geometry's information. This holographic principle means that the number of degrees of freedom in the AdS space is proportional to the area of the holographic screen, rather than the volume, as in ordinary theories. The AdS/CFT correspondence, therefore, implies that the degrees of freedom in the AdS space are
equivalent to those of the boundary CFT. The negative cosmological constant also plays a critical role in AdS black hole physics. Black holes in AdS space can have a positive specific heat, which is impossible in flat space. This phenomenon is related to the AdS space's boundary conditions, which force the black hole to lose energy and mass through the AdS boundary, leading to a reduction in temperature. This behavior is known as the Hawking-Page phase transition [51], where the black hole is in thermal equilibrium with a thermal AdS space. The AdS/CFT correspondence allows for the study of the thermodynamic behaviour of black holes using the corresponding CFT, providing insights into the nature of black hole thermodynamics.
Recently, researchers have considered the cosmological constant as a variable parameter and linked it to the thermodynamic pressure, which is conjugate to the thermodynamic volume [52; 53; 54; 55; 56]. This approach has resulted in an extended phase space, where the black hole mass is regarded as the enthalpy, instead of the internal energy [53]. Many studies have explored the thermodynamics and phase transitions of black holes in this extended phase space, revealing new phenomena such as P-V criticality in various black holes spacetime [57; 58].
In AdS spacetime, the black hole mass is naturally treated as the enthalpy, leading to the consideration of Joule-Thomson expansion for the black hole. This expansion investigates isenthalpic curves, which are constant mass curves. Previous investigations of the Joule-Thomson expansion for charged AdS [59] and Kerr AdS [60] black holes have been conducted within the framework of Einstein gravity. However, extended theories of Einstein gravity introduce new physical degrees of freedom, raising questions about their role and physical impact on the Joule-Thomson expansion. In this paper, we investigate how the presence of massive gravity modifies the Joule-Thomson expansion of the charged AdS black hole in Gauss-Bonnet gravity, inspired by recent progress in understanding massive gravity. The approach taken here is relevant not only for charged AdS black holes in Einstein-Gauss-Bonnet massive gravity but also for those in other alternative theories of gravity where additional gravitational modes emerge. For instance, the Joule-Thomson expansion may be examined for charged AdS black holes in teleparallel \(f(T)\) gravity [61; 62], with \(T\) representing torsion, or in \(f(R)\) gravity [63] with a nonlinear electrodynamics field.
Since the charged AdS Einstein-Gauss-Bonnet theory in massive gravity has not been studied yet, in this paper we investigate charged Einstein-Gauss-Bonnet massive gravity and find an exact solution in \(4D\) AdS space. We also discuss the horizon structure of the charged AdS black hole in \(4D\) Einstein-Gauss-Bonnet massive gravity. Moreover, we discuss the thermal properties of this black hole. To be more precise, we compute the entropy and temperature that satisfy the first law
of black hole thermodynamics. In AdS space, the mass of the black hole is treated as the enthalpy. Furthermore, we analyze the stability and the Van der Waals-like phase transition of the black holes. We also study the Joule-Thomson expansion of the charged AdS black hole in \(4D\) Einstein-Gauss-Bonnet massive gravity, where the constant mass curves are known as isenthalpic curves. Finally, we investigate the effects of the graviton mass and the massive gravity parameters on the Joule-Thomson expansion of charged AdS black holes in \(4D\) Einstein-Gauss-Bonnet massive gravity.
The paper is organized as follows. In section II, we discuss the action describing Einstein-Gauss-Bonnet-Maxwell massive gravity in \(4D\) AdS space and the corresponding field equations. Here, we find the exact black hole solution. The effects of the graviton mass on the horizon structure of the black hole are also depicted. In section III, we study the first law of black hole thermodynamics and the effects of the graviton mass on the Hawking temperature. To investigate the global stability of the black holes, we compute the Gibbs free energy. Next, in section V, the effects of the graviton mass on the local stability of the black hole are studied. The Van der Waals-like phase transition of the black hole is analyzed in section VII. We numerically investigate the effects of the graviton mass, the charge of the black hole and the Gauss-Bonnet coupling parameter on the critical parameters (namely, the critical volume, critical pressure and critical temperature) of the black hole. The effects of the critical parameters on the phase transition of the black hole are also studied. We investigate the Joule-Thomson expansion of the black hole in section VIII. Here, we analyze the effects of the graviton mass and massive gravity parameters on the constant mass curves and the inverse curve. Finally, we compute the Joule-Thomson thermodynamic coefficient as a function of the black hole horizon radius.
## II Einstein-Gauss-Bonnet massive gravity in 4D
The action for Einstein-Maxwell-Gauss-Bonnet massive gravity with a negative cosmological constant in \(D\) dimensions is given by
\[S=\frac{1}{16\pi}\int d^{D}x\sqrt{-g}\Bigg{[}R-2\Lambda+\alpha\mathcal{G}-F_{ \mu\nu}F^{\mu\nu}+m^{2}\sum_{i}c_{i}\mathcal{U}_{i}(g,h)\Bigg{]}, \tag{1}\]
where \(g\) is the determinant of the metric \(g_{\mu\nu}\), \(R\) is the Ricci scalar, \(\alpha\) is the Gauss-Bonnet coupling parameter, \(\mathcal{G}=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^{2}\) is the Gauss-Bonnet term, \(R_{\mu\nu\rho\sigma}\) is the Riemann tensor, \(R_{\mu\nu}\) is the Ricci tensor and \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) is the Maxwell tensor. Apart from that, \(m\) is a parameter related to the graviton mass, \(h_{\alpha\nu}\) is a fixed symmetric tensor usually called the reference metric,
\(c_{i}\;(i=1,2,3,4)\) are constants1[64] and \(\mathcal{U}_{i}(g,h)\) are symmetric polynomials of the eigenvalues of the matrix \(\mathcal{K}_{\nu}^{\mu}=\sqrt{g^{\mu\alpha}h_{\alpha\nu}}\), given by
Footnote 1: In order to have a self-consistent massive gravity theory, the coupling parameters \(c_{i}\) might be required to be negative if the squared mass of the graviton is positive. However, in the AdS spacetime, the coupling parameters \(c_{i}\) can still take the positive values. This is because the fluctuations of the fields with the negative squared masses in the AdS spacetime could still be stable if their squared masses obey the corresponding Breitenlohner–Freedman bounds.
\[\begin{aligned}\mathcal{U}_{1}&=\big[\mathcal{K}\big],\\ \mathcal{U}_{2}&=\big[\mathcal{K}\big]^{2}-\big[\mathcal{K}^{2}\big],\\ \mathcal{U}_{3}&=\big[\mathcal{K}\big]^{3}-3\big[\mathcal{K}\big]\big[\mathcal{K}^{2}\big]+2\big[\mathcal{K}^{3}\big],\\ \mathcal{U}_{4}&=\big[\mathcal{K}\big]^{4}-6\big[\mathcal{K}^{2}\big]\big[\mathcal{K}\big]^{2}+8\big[\mathcal{K}^{3}\big]\big[\mathcal{K}\big]+3\big[\mathcal{K}^{2}\big]^{2}-6\big[\mathcal{K}^{4}\big],\end{aligned} \tag{2}\]
where the brackets \([...]\) denote the trace of the matrix \(\mathcal{K}_{\nu}^{\mu}\). In \(D=4\) dimensions the Gauss-Bonnet term does not contribute to the dynamics, so we rescale the Gauss-Bonnet coupling parameter \(\alpha\to\alpha/(D-4)\)[18]; therefore, the action takes the following form:
\[S=\frac{1}{16\pi}\int d^{D}x\sqrt{-g}\left[R-2\Lambda+\frac{\alpha}{D-4} \mathcal{G}-F_{\mu\nu}F^{\mu\nu}+m^{2}\sum_{i}c_{i}\mathcal{U}_{i}(g,h)\right]. \tag{3}\]
Now, we consider a static and spherically symmetric solution of the form
\[ds^{2}=-e^{2A(r)}dt^{2}+e^{2B(r)}dr^{2}+r^{2}d\Omega_{D-2}^{2}, \tag{4}\]
and following the Ref. [65] we take the reference metric as
\[h_{\mu\nu}=diag\big{(}0,0,c^{2},c^{2}\sin^{2}\theta\big{)}, \tag{5}\]
where \(c\) is a dimensionless positive constant. The reference metric \(h_{\mu\nu}\) is a rank two symmetric tensor. Physically, \(h_{\mu\nu}\) corresponds to the background metric around which fluctuations take the Fierz-Pauli form. By using equations (2) and (5), we obtain
\[\begin{aligned}\mathcal{U}_{1}&=\frac{(D-2)c}{r},\\ \mathcal{U}_{2}&=\frac{(D-2)(D-3)c^{2}}{r^{2}},\\ \mathcal{U}_{3}&=\frac{(D-2)(D-3)(D-4)c^{3}}{r^{3}},\\ \mathcal{U}_{4}&=\frac{(D-2)(D-3)(D-4)(D-5)c^{4}}{r^{4}}.\end{aligned} \tag{6}\]
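These traces are easy to verify symbolically at \(D=4\). The SymPy sketch below builds \(\mathcal{K}=\sqrt{g^{-1}h}\) for the metric (4) with the reference metric (5); since \(g^{-1}h\) is diagonal, its matrix square root is the entrywise one, and \(\mathcal{U}_{1}\), \(\mathcal{U}_{2}\), \(\mathcal{U}_{3}\) reduce to the expressions above (with \(\mathcal{U}_{3}\) vanishing at \(D=4\) because of its \((D-4)\) factor).

```python
# Sketch verifying Eq. (6) at D = 4 with SymPy.
import sympy as sp

r, c, theta, A, B = sp.symbols("r c theta A B", positive=True)
g = sp.diag(-sp.exp(2*A), sp.exp(2*B), r**2, r**2 * sp.sin(theta)**2)
h = sp.diag(0, 0, c**2, c**2 * sp.sin(theta)**2)

# g^{-1} h is diagonal, so the entrywise sqrt is the matrix square root.
K = (g.inv() * h).applyfunc(sp.sqrt)
tr = lambda M: M.trace()

U1 = tr(K)
U2 = tr(K)**2 - tr(K**2)
U3 = tr(K)**3 - 3*tr(K)*tr(K**2) + 2*tr(K**3)

print(sp.simplify(U1))   # 2*c/r        = (D-2)c/r at D=4
print(sp.simplify(U2))   # 2*c**2/r**2  = (D-2)(D-3)c^2/r^2 at D=4
print(sp.simplify(U3))   # 0, since the (D-4) factor vanishes at D=4
```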
Substituting the metric and the electrostatic potential in action (3), the first integral exists [19]
\[\phi^{\prime}(r)=-\frac{Q}{r^{D-2}}e^{A+B}, \tag{7}\]
and taking the limit \(D\to 4\) and using the relation \(\Lambda=-3/l^{2}\) we obtain
\[S=\frac{\Sigma_{2}}{16\pi}\int dtdr2e^{A+B}\bigg{[}r^{3}\psi\Big{(}1+\alpha\psi \Big{)}+\frac{r^{3}}{l^{2}}+\frac{Q^{2}}{r}+m^{2}\Big{\{}\frac{c_{1}cr^{2}}{2}+ c_{2}c^{2}r\Big{\}}\bigg{]}^{\prime}, \tag{8}\]
where prime denotes differentiation with respect to r, \(\Sigma_{2}=\frac{2\pi^{\frac{3}{2}}}{\Gamma\Big{(}1+\frac{1}{2}\Big{)}}\) and \(\psi=r^{-2}\Big{(}1-e^{-2B}\Big{)}\) with
\[e^{A+B}=1. \tag{9}\]
If we choose \(m=0\) or \(c=0\), then equation (8) reduces to the action in massless gravity. Now, using the action (8), we obtain the solution as
\[\psi\Big{(}1+\alpha\psi\Big{)}+\frac{1}{l^{2}}+\frac{Q^{2}}{r^{4}}+\frac{m^{2} }{r^{3}}\Big{\{}\frac{c_{1}cr^{2}}{2}+c_{2}c^{2}r\Big{\}}-\frac{8\pi M}{\Sigma _{2}r^{3}}=0, \tag{10}\]
where \(M\) is the integration constant related to the mass of the black hole. Therefore, the exact solution is
\[e^{2A}=e^{-2B}=1+\frac{r^{2}}{2\alpha}\Bigg{[}1\pm\sqrt{1+4\alpha\bigg{\{} \frac{2M}{r^{3}}-\frac{Q^{2}}{r^{4}}-\frac{1}{l^{2}}-\frac{m^{2}}{2r^{2}}\Big{(} cc_{1}r+2c^{2}c_{2}\Big{)}\bigg{\}}}\Bigg{]}. \tag{11}\]
The negative branch corresponds to the \(4D\) charged AdS EGB massive black hole, whereas the positive branch does not lead to a physically meaningful solution because the positive sign in front of the mass term indicates graviton instabilities; hence we take only the negative branch of equation (11). In the chargeless limit, our solution reduces to the black hole solution obtained in Ref. [50]
\[e^{2A}=e^{-2B}=1+\frac{r^{2}}{2\alpha}\Bigg{[}1-\sqrt{1+4\alpha\bigg{\{} \frac{2M}{r^{3}}-\frac{1}{l^{2}}-\frac{m^{2}}{2r^{2}}\Big{(}cc_{1}r+2c^{2}c_{2 }\Big{)}\bigg{\}}}\Bigg{]}. \tag{12}\]
In the limit \(\alpha\to 0\), equation (11) reduces to the charged AdS black hole in massive gravity [66]
\[e^{2A}=e^{-2B}=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}+\frac{r^{2}}{l^{2}}+\frac{m^ {2}}{2}\big{(}cc_{1}r+2c^{2}c_{2}\big{)}. \tag{13}\]
Also, in the massless limit, the above equation reduces to the Reissner-Nordstrom AdS solution
\[e^{2A}=e^{-2B}=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}+\frac{r^{2}}{l^{2}}. \tag{14}\]
Now, we apply the massless limit to equation (11) and obtain the charged AdS black hole in \(4D\) Einstein-Gauss-Bonnet gravity [19] as
\[e^{2A}=e^{-2B}=1+\frac{r^{2}}{2\alpha}\Bigg{[}1\pm\sqrt{1+4\alpha\bigg{\{} \frac{2M}{r^{3}}-\frac{Q^{2}}{r^{4}}-\frac{1}{l^{2}}\bigg{\}}}\Bigg{]}. \tag{15}\]
To find the position of the event horizon of charged AdS Black Hole in \(4D\) Einstein-Gauss-Bonnet gravity, we set equation (15) equal to zero and obtain [19]
\[1-\frac{2M}{r}+\frac{Q^{2}+\alpha}{r^{2}}+\frac{r^{2}}{l^{2}}=0. \tag{16}\]
In the absence of cosmological constant, we obtain
\[r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}-\alpha}. \tag{17}\]
For Einstein-Gauss-Bonnet massive gravity with a nonvanishing cosmological constant the expression for \(r_{+}\) is complicated, so we do not present it here. From equation (17), we can say that the black hole solution in \(4D\) EGB massless gravity exists if and only if \(M>M_{*}\), with \(M_{*}^{2}=Q^{2}+\alpha\). We will loosely follow the condition \(M>M_{*}\) for charged AdS Einstein-Gauss-Bonnet massive gravity black holes. In Fig. 1, we plot the metric function (negative branch) of charged AdS Einstein-Gauss-Bonnet massive gravity black holes for different values of \(\alpha\) and \(M\). From Fig. 1 (a) and Fig. 1 (b), it is clear that the black hole has two horizons; as the value of the graviton mass increases, the position of the outer horizon increases. The position of the event horizon increases as the massive gravity parameters increase. In Table I, the two roots of the metric function (11) are estimated; the position of the horizon slowly decreases as the Gauss-Bonnet coupling parameter increases. In Fig. 1 (c) and Fig. 1 (d), we plot the metric function, and it is clear that there is no horizon and no black hole solution.
\begin{table}
\begin{tabular}{|l|l|l|} \hline \multicolumn{3}{|c|}{**Fig. 1(a)**} \\ \hline \(m\) & \(r_{-}\) & \(r_{+}\) \\ \hline
0.0 & 1.1118 & 2.4489 \\ \hline
0.5 & 1.1337 & 2.4867 \\ \hline
1.0 & 1.2061 & 2.6419 \\ \hline
1.5 & 1.3470 & 3.1853 \\ \hline \multicolumn{3}{|c|}{**Fig. 1(b)**} \\ \hline \(m\) & \(r_{-}\) & \(r_{+}\) \\ \hline
0.0 & 1.1597 & 2.4165 \\ \hline
0.5 & 1.1833 & 2.4532 \\ \hline
1.0 & 1.2609 & 2.6055 \\ \hline
1.5 & 1.4091 & 3.1500 \\ \hline \end{tabular}
\end{table}
Table 1:
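Root estimates of this kind are straightforward to reproduce numerically. The sketch below brackets the sign changes of the negative branch of the metric function (11) and refines them with a root finder; the parameter values are illustrative choices, not the exact ones behind Table 1.

```python
# Numerical root-finding sketch for the horizons of Eq. (11), negative branch.
import numpy as np
from scipy.optimize import brentq

def f(r, M=2.0, Q=1.0, alpha=0.1, l=2.0, m=1.0, c=1.0, c1=-1.0, c2=1.0):
    inner = 1 + 4*alpha*(2*M/r**3 - Q**2/r**4 - 1/l**2
                         - m**2/(2*r**2)*(c*c1*r + 2*c**2*c2))
    return 1 + r**2/(2*alpha)*(1 - np.sqrt(inner))   # NaN where undefined

# Scan for sign changes, then refine each bracket with brentq.
rs = np.linspace(0.05, 10.0, 4000)
vals = np.array([f(r) for r in rs])
roots = [brentq(f, rs[i], rs[i+1])
         for i in range(len(rs)-1) if vals[i]*vals[i+1] < 0]
print("horizon radii r_-, r_+ ≈", roots)
```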
In Fig. 2, we plot the metric function (negative branch) of charged AdS Einstein-Gauss-Bonnet massive gravity in \(4D\) for different values of the charge. From the figure, it is clear that the position of the outer horizon is smallest for the massless case. As we increase the charge and keep the graviton mass fixed, the position of the outer horizon decreases, but it still remains greater than that of the massless one, which is represented by the solid green line.
## III Black hole thermodynamics
In this section, we study the thermodynamics of charged AdS black holes in \(4D\) Einstein-Gauss-Bonnet massive gravity. The physical mass of the black holes can be obtained from the
metric function (11) by setting \(\left.e^{-2B}\right|_{r=r_{+}}=0\) as
\[M=\frac{1}{2r_{+}}\bigg{[}\frac{r_{+}^{4}}{l^{2}}+r_{+}^{2}+Q^{2}+\alpha+m^{2}r_ {+}^{2}\Big{(}\frac{cc_{1}r_{+}}{2}+c^{2}c_{2}\Big{)}\bigg{]}. \tag{18}\]
The Hawking temperature of the black holes can be obtained from the relation
\[T_{H}=\frac{f^{\prime}(r_{+})}{4\pi}. \tag{19}\]
For the metric (11), Hawking temperature reads
\[T_{H}=\frac{3r_{+}^{4}+l^{2}\big{(}r_{+}^{2}-Q^{2}-\alpha+m^{2}r_{+}^{2}(cc_{1 }r_{+}+c^{2}c_{2})\big{)}}{4\pi r_{+}l^{2}(r_{+}^{2}+2\alpha)}. \tag{20}\]
In the massless limit, the Hawking temperature of charged AdS black holes in Einstein-Gauss-Bonnet \(4D\) massive gravity reduces to the Hawking temperature [19] of charged AdS black holes in Einstein-Gauss-Bonnet \(4D\) gravity as
\[T_{H}=\frac{3r_{+}^{4}+l^{2}\big{(}r_{+}^{2}-Q^{2}-\alpha\big{)}}{4\pi r_{+}l ^{2}(r_{+}^{2}+2\alpha)}. \tag{21}\]
If we take the limit \(\alpha\to 0\), the Hawking temperature (21) reduces to the Hawking temperature of Reissner-Nordstrom AdS black holes as
\[T_{H}=\frac{3r_{+}^{4}+l^{2}\big{(}r_{+}^{2}-Q^{2}\big{)}}{4\pi r_{+}^{3}l^{2 }}. \tag{22}\]
If we further take the chargeless limit, then the above equation reduces to the Hawking temperature of Schwarzschild AdS black holes.
In Fig. 3 and Fig. 4, we plot the Hawking temperature of the
black holes with respect to \(r_{+}\) for different values of \(\alpha\) and charge. In Fig. 3 (a) and Fig. 3 (b), the Hawking temperature is plotted for different values of \(\alpha\). From the figure, it is clear that for a critical value of the horizon radius (say, \(r_{+}^{min}\)) the Hawking temperature is zero, and if we increase the horizon radius from \(r_{+}^{min}\) then the Hawking temperature increases. Increasing the horizon radius further beyond \(r_{+}^{min}\) leads the Hawking temperature to attain a local maximum at a particular value of \(r_{+}\) (say,
Figure 4: \(m=0.0\) denoted by solid green line, \(m=1.5\) denoted by dash black line, \(m=4.0\) denoted by dash red line, \(m=5.5\) denoted by dash dot blue line and \(m=10.0\) denoted by dash dot gold line with \(l=2\), \(c=1\) and \(c_{2}=1\).
Figure 3: \(m=0\) denoted by solid green line, \(m=1.5\) denoted by dash black line, \(m=4.0\) denoted by dash red line, \(m=5.5\) denoted by dash dot blue line and \(m=10.0\) denoted by dash dot gold line with \(l=2\), \(c=1\) and \(c_{2}=1\).
\(r_{+}^{b}\)), and the local maxima slowly disappear as the graviton mass decreases. The Hawking temperature attains a minimum for a particular value of the horizon radius (say, \(r_{+}^{a}\), with \(r_{+}^{a}>r_{+}^{b}>r_{+}^{min}\)), and the minima also slowly disappear if we decrease the graviton mass. After attaining the minimum, if we further increase the horizon radius, the Hawking temperature increases again. In Fig. 4 (a) and Fig. 4 (b), we plot the Hawking temperature with respect to the black hole horizon for different values of charge. From the figure, it is clear that the behavior is the same as in Fig. 3, but the effect of increasing the charge is that the local maxima and the minima now disappear. To observe the local maxima and the minima, we have to increase the graviton mass further, as shown by the gold dashed dot line in Fig. 4.
In Fig. 5, the effects of charge on the Hawking temperature are shown. The inclusion of charge slowly decreases the positions of the local maxima and minima.
To find the entropy of the black hole, we use the relation \(dM=T_{H}dS\). Using the Hawking temperature and mass of the black hole, we obtain the entropy as
\[S=\pi r_{+}^{2}+4\pi\alpha\ln(r_{+})+S_{0}, \tag{23}\]
where \(S_{0}\) is an integration constant. From the above equation, one can say that the inclusion of electric charge has no effect on the entropy of the black hole. To derive the first law of black hole thermodynamics, we treat the massive gravity parameters \(c_{1}\) and \(c_{2}\) as thermodynamic variables, and the corresponding potentials are \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). Apart from that, the potential corresponding to the Einstein-Gauss-Bonnet parameter \(\alpha\) is \(\mathcal{A}\). The thermodynamic pressure is defined as \(P=3/(8\pi l^{2})\). Therefore, the first law of black hole thermodynamics in the extended phase space takes the following form:
\[dM=T_{H}dS+\Phi dQ+VdP+\mathcal{A}d\alpha+\mathcal{C}_{1}dc_{1}+\mathcal{C}_{2 }dc_{2}. \tag{24}\]
Now, from the first law of black hole thermodynamics, one can find the potentials and the volume as
\[\Phi = \left(\frac{\partial M}{\partial Q}\right)_{S,P,\alpha,c_{1},c_{2 }}=\frac{Q}{r_{+}}, \tag{25}\] \[V = \left(\frac{\partial M}{\partial P}\right)_{S,Q,\alpha,c_{1},c_{2 }}=\frac{4}{3}\pi r_{+}^{3},\] (26) \[\mathcal{A} = \left(\frac{\partial M}{\partial\alpha}\right)_{S,Q,P,c_{1},c_{2 }}=\frac{1}{2r_{+}},\] (27) \[\mathcal{C}_{1} = \left(\frac{\partial M}{\partial c_{1}}\right)_{S,Q,P,\alpha,c_{2 }}=\frac{cm^{2}r_{+}^{2}}{4},\] (28) \[\mathcal{C}_{2} = \left(\frac{\partial M}{\partial c_{2}}\right)_{S,Q,P,\alpha,c_{1 }}=\frac{c^{2}m^{2}r_{+}}{2}. \tag{29}\]
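The conjugate quantities (25)-(29) can be checked by symbolic differentiation: writing the mass (18) with \(1/l^{2}=8\pi P/3\) and differentiating with respect to each variable reproduces them, as in the SymPy sketch below.

```python
# SymPy sketch checking Eqs. (25)-(29) by differentiating the mass (18).
import sympy as sp

r, Q, a, m, c, c1, c2, P = sp.symbols("r_+ Q alpha m c c_1 c_2 P",
                                      positive=True)
M = (r**4 * 8*sp.pi*P/3 + r**2 + Q**2 + a
     + m**2 * r**2 * (c*c1*r/2 + c**2*c2)) / (2*r)

print(sp.simplify(sp.diff(M, Q)))    # Phi  = Q/r_+
print(sp.simplify(sp.diff(M, P)))    # V    = 4*pi*r_+**3/3
print(sp.simplify(sp.diff(M, a)))    # A    = 1/(2 r_+)
print(sp.simplify(sp.diff(M, c1)))   # C_1  = c m^2 r_+^2 / 4
print(sp.simplify(sp.diff(M, c2)))   # C_2  = c^2 m^2 r_+ / 2
```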
## IV Global stability: Gibbs free energy
To study the global stability of the black holes we find Gibbs free energy
\[G=M-T_{H}S-Q\Phi. \tag{30}\]
Using equations (18), (20), (23) and (24), we obtain
\[G = \frac{2r_{+}^{4}+l^{2}\big{(}2r_{+}^{2}+2Q^{2}+2\alpha+m^{2}r_{+}^ {2}(cc_{1}r_{+}+2c^{2}c_{2})\big{)}}{4r_{+}l^{2}} \tag{31}\] \[- \frac{\big{(}3r_{+}^{4}+l^{2}\left(r_{+}^{2}-Q^{2}-\alpha+m^{2}r_ {+}^{2}(cc_{1}r_{+}+c^{2}c_{2})\right)\big{)}\left(4\pi\alpha\ln(r_{+})+\pi r_ {+}^{2}\right)}{4\pi r_{+}l^{2}(r_{+}^{2}+2\alpha)}-\frac{Q^{2}}{r_{+}}.\]
In Fig. 5 and Fig. 6, we plot the Gibbs free energy for different values of \(\alpha\) and of the charge of the black hole. From Fig. 5, it is clear that the Gibbs free energy is zero for two critical values of the horizon radius (namely, \(r_{+}^{c}\) and \(r_{+}^{d}\), with \(r_{+}^{c}>r_{+}^{d}\)). The Gibbs free energy is positive between \(r_{+}^{d}\) and \(r_{+}^{c}\). The positive part of the Gibbs free energy increases as the graviton mass increases and decreases as the graviton mass decreases, attaining its smallest value in the massless limit. If we further increase the horizon radius, \(r_{+}>r_{+}^{c}\), the Gibbs free energy becomes negative. In Fig. 6, we plot the effects of charge on the Gibbs free energy. Keeping the graviton mass small, \(m\leq 2\), if we increase the charge of the black hole then the Gibbs free energy is completely negative (Fig. 6(a) and 6(b)). The solid cyan points represent \(r_{+}^{d}\) and the solid black points represent \(r_{+}^{c}\). Finally, one can say that the positive part of the Gibbs free energy slowly disappears due to the inclusion of charge for small graviton mass \(m\leq 2\). As the graviton mass increases beyond \(m=2\), the behavior is similar to that in Fig. 5.
## V Local stability: heat capacity
In this section, we study the local thermodynamic stability of the black hole by computing its specific heat. The local stability of Einstein-Gauss-Bonnet \(4D\) AdS black holes was studied in Ref. [21]. The specific heat of black holes in nonlinear electrodynamics was investigated in Refs. [32]-[34]. The local thermodynamic stability of the black holes can be analyzed from the sign of the specific heat: if the heat capacity \(C_{\Phi}<0\) then the black holes are thermodynamically unstable, and if \(C_{\Phi}>0\) then the black holes are thermodynamically stable. The heat capacity of the black holes is defined as
\[C_{\Phi}=T_{H}\Bigg{(}\frac{dS}{dT_{H}}\Bigg{)}_{\Phi}. \tag{32}\]
Using equations (20) and (23), this reads
\[C_{\Phi}=\frac{2\pi(r_{+}^{2}+2\alpha)^{2}\Bigg{(}3r_{+}^{4}+\bigg{(}r_{+}^{2}-Q^ {2}-\alpha+m^{2}r_{+}^{2}\big{(}cc_{1}r_{+}+c^{2}c_{2}\big{)}\bigg{)}l^{2}\Bigg{)}} {\Big{(}-(c^{2}c_{2}m^{2}+1)r_{+}^{4}+4cm^{2}c_{1}\alpha r_{+}^{3}+(2\alpha c^{2 }c_{2}m^{2}+3Q^{2}+5\alpha)r_{+}^{2}+2Q^{2}\alpha+2\alpha^{2}\Big{)}l^{2}+3r_{+ }^{6}+18\alpha r_{+}^{4}}. \tag{33}\]
In Fig. 7, we plot the specific heat for two different values of the Einstein-Gauss-Bonnet coupling parameter; it is discontinuous at some critical value of the horizon radius for larger values of the graviton mass (\(m>1\)), which indicates that a second-order phase transition occurs for charged AdS black holes in \(4D\) Einstein-Gauss-Bonnet massive gravity. For smaller values of the graviton mass (\(m=1\)), no divergences occur, but the smaller-size black holes are thermodynamically unstable as the specific heat is negative. As the size of the black hole increases, a phase transition occurs, i.e., the specific heat of the black hole changes from a negative value to a positive value. If we take the graviton mass to be zero, then similar kinds of phenomena occur.
Figure 5: Plot of Gibbs free energy vs. horizon radius with different values of graviton mass (\(m=0,1,2,3,4\)) with the fixed value of Gauss-Bonnet coupling (\(\alpha=0.1,0.3,0.5,0.8\)) and \(l=2\), \(c=1\), \(c_{1}=-1\) and \(c_{2}=1\).
In Ref. [50], the specific heat of the EGB massive gravity black hole was studied with \(Q=0\). In the chargeless case \(Q=0\), two diverging points appear at two critical values of the horizon radius, which separate three regions, i.e., two second-order phase transitions occur for such a black hole. Between the two diverging points the specific heat is negative, which indicates that the black hole is thermodynamically unstable in this region. The inclusion of charge removes one diverging point and we are left with only one, i.e., in the case of the charged black hole only one second-order phase transition occurs. For \(m=0\), the behavior of the specific heat is the same for \(Q=0\) and \(Q\neq 0\).
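The divergence points of \(C_{\Phi}\) are roots of the denominator of Eq. (33), which can be located numerically. In the sketch below the fixed parameters follow the caption of Fig. 7 (\(Q=1\), \(l=3\), \(c=1\), \(c_{1}=-1\), \(c_{2}=1\)); the value \(\alpha=0.5\) is an illustrative assumption.

```python
# Sketch locating second-order phase-transition points: sign changes of the
# denominator of the heat capacity C_Phi in Eq. (33).
import numpy as np

def denom(r, m, Q=1.0, l=3.0, alpha=0.5, c=1.0, c1=-1.0, c2=1.0):
    poly = (-(c**2*c2*m**2 + 1)*r**4 + 4*c*m**2*c1*alpha*r**3
            + (2*alpha*c**2*c2*m**2 + 3*Q**2 + 5*alpha)*r**2
            + 2*Q**2*alpha + 2*alpha**2)
    return poly*l**2 + 3*r**6 + 18*alpha*r**4

for m in (0.0, 1.0, 2.0, 3.0):
    rs = np.linspace(0.1, 10, 2000)
    v = denom(rs, m)
    hits = rs[:-1][v[:-1]*v[1:] < 0]   # brackets containing divergences
    print(f"m={m}: denominator sign changes near r_+ ≈ {np.round(hits, 2)}")
```

Consistent with the discussion above, for small graviton mass the denominator stays positive (no divergence), while for larger \(m\) a divergence appears.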
## VI Dynamic stability: quasinormal modes
One of the methods to study the dynamic stability of the black holes is studying the nature of quasinormal modes (QNM) which are characterized by complex numbers. If the imaginary part of
Figure 6: Plot of heat capacity vs. horizon radius with different values of graviton mass (\(m=0,1,2,3,4\)) with the fixed value of Gauss-Bonnet coupling (\(\alpha=0.1,0.3,0.5,0.8\)) and \(l=2\), \(c=1\), \(c_{1}=-1\) and \(c_{2}=1\).
the QNM is positive, the black hole is unstable; however, if it is negative, the black hole is stable.
We compute the QNMs and quasinormal frequencies (QNFs) of the above black hole using scalar field perturbations. We consider a scalar field \(\Phi\) in the background of the black hole (11). The equation for these perturbations takes the form
\[\frac{1}{\sqrt{-g}}\partial_{\mu}\left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\right) \Phi=0, \tag{34}\]
where \(g^{\mu\nu}\) are the metric components of the metric (4). The mode decomposition of the scalar perturbation in terms of spherical harmonics is given by
\[\Phi=\frac{1}{r}\sum_{lm}e^{i\omega t}\phi_{lm}Y_{l}^{m}(\theta,\phi), \tag{35}\]
where \(l\), \(m\), \(Y_{l}^{m}\), and \(\omega\) are, respectively, the angular quantum number, the magnetic quantum number, the spherical harmonics, and the oscillating frequency of the scalar field. Substituting the value of \(\Phi\) into Eq. (34) and using the tortoise coordinate \(dr^{*}=dr/e^{2A}\), we get the Schrödinger-like form,
\[\left(\frac{d^{2}}{dr^{*^{2}}}+\omega^{2}-V(r^{*})\right)\phi=0, \tag{36}\]
where \(V(r^{*})\) is the effective potential and has the form
\[V(r^{*})=e^{2A}\left(\frac{\left(e^{2A}\right)^{\prime}}{r}+\frac{l(l+1)}{r^{2}}\right), \tag{37}\]
Figure 7: \(m=0.0\) denoted by solid green line, \(m=1.0\) denoted by dash black line, \(m=2.0\) denoted by dash red line, \(m=3.0\) denoted by dash dot blue line and \(m=4.0\) denoted by dash dot gold line with \(Q=1\), \(l=3\), \(c=1\), \(c_{1}=-1\) and \(c_{2}=1\).
where \(l\) is the harmonic index. To find the QNFs one has to impose boundary conditions at the event horizon and at infinity. These boundary conditions can be written as
\[\phi(r_{\star})\to e^{i\omega r_{\star}},\qquad\qquad r_{\star} \rightarrow-\infty, \tag{38}\] \[\phi(r_{\star})\to e^{-i\omega r_{\star}},\qquad\qquad r_{ \star}\rightarrow\infty, \tag{39}\]
where the \(+\) sign corresponds to ingoing waves at the horizon and the \(-\) sign corresponds to outgoing waves at infinity. The frequencies corresponding to the QNMs are given by \(\omega=\omega_{R}+i\omega_{I}\), where \(\omega_{R}\) and \(\omega_{I}\) are the oscillating and damping components of the frequency. We use the WKB approximation to find the QNMs and QNFs of the obtained black hole solution (11). The WKB formula has the form [67; 68; 69]
\[i\frac{\omega^{2}-V_{0}}{\sqrt{-2V_{0}^{\prime\prime}}}=n+\frac{1}{2}. \tag{40}\]
where \(V_{0}\) is the height of the barrier and \(V_{0}^{\prime\prime}\) is the second derivative of the potential with respect to the tortoise coordinate, evaluated at the peak. The numerical values of the QNMs and QNFs for different values of the graviton mass are depicted in Tab. 2.
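As a quick check of Eq. (40), one can rearrange it as \(\omega^{2}=V_{0}-i\,(n+\tfrac{1}{2})\sqrt{-2V_{0}^{\prime\prime}}\) and test it on the Pöschl-Teller potential, whose quasinormal frequencies are known in closed form. The sketch below does exactly this with finite differences; it is an illustrative test case, not the black-hole potential (37), and Table 2 was obtained with higher-order WKB corrections [67; 68; 69], so the numbers differ.

```python
# Sketch of the first-order WKB formula (40) on the Poschl-Teller potential
# V(r_*) = V0 / cosh^2(r_*/b); its exact QNFs are
#   omega = sqrt(V0 - 1/(4 b^2)) - i (n + 1/2)/b.
import numpy as np

V0, b, n = 2.0, 1.0, 1
V = lambda x: V0 / np.cosh(x / b)**2

# Peak at x = 0; second derivative by central differences (= -2*V0/b**2).
h = 1e-4
V0pp = (V(h) - 2*V(0.0) + V(-h)) / h**2

omega2 = V0 - 1j*(n + 0.5)*np.sqrt(-2*V0pp)      # Eq. (40) rearranged
omega_wkb = np.sqrt(omega2)
omega_exact = np.sqrt(V0 - 1/(4*b**2)) - 1j*(n + 0.5)/b
print("WKB:  ", omega_wkb)
print("exact:", omega_exact)
```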
From Table 2, we can see clearly that the imaginary part of the QNMs of the obtained black hole solution (11) is negative, so the black hole solution is stable.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \hline & \(\alpha=0.1\) & \(\alpha=0.2\) & \(\alpha=0.3\) \\ \hline \(m\) & \(\omega=\omega_{R}+i\omega_{I}\) & \(\omega=\omega_{R}+i\omega_{I}\) & \(\omega=\omega_{R}+i\omega_{I}\) \\ \hline
1 & 0.0036 - 0.0036 \(i\) & 0.0011 - 0.001 \(i\) & 0.0012 - 0.0012 \(i\) \\
2 & 0.0082 - 0.0091 \(i\) & 0.0036 - 0.0039 \(i\) & 0.0036 - 0.0038 \(i\) \\
3 & 0.0065 - 0.0074 \(i\) & 0.0042 - 0.0047 \(i\) & 0.0054 - 0.0058 \(i\) \\
4 & 0.0052 - 0.0060 \(i\) & 0.0056 - 0.0062 \(i\) & 0.0034 - 0.0036 \(i\) \\
5 & 0.0065 - 0.0071 \(i\) & 0.0043 - 0.0046 \(i\) & 0.0052 - 0.0055 \(i\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: The numerical values of QNMs with different values of graviton mass (\(m\)) and Gauss-Bonnet coupling (\(\alpha\)) with a fixed value of \(M=1,Q=1,c=1,c_{1}=-1,c_{2}=1,n=1\), and \(l=10\).
## VII Van der Waals-like phase transition
In this section, we study the phase transition of charged AdS black holes in \(4D\) Einstein-Gauss-Bonnet massive gravity. The phase transition of black holes in massive gravity is studied in Refs. [50; 70]. The Hawking-Page phase transition for static and rotating \(4D\) Gauss-Bonnet black holes is studied in Refs. [71; 22]. The phase transition of the charged AdS black hole in Einstein-Gauss-Bonnet massless gravity is studied in Ref. [21]. The phase transition of the charged AdS \(4D\) Einstein-Gauss-Bonnet black hole in nonlinear electrodynamics is also studied [32]. The Van der Waals equation of state for real fluids is given by
\[P=\frac{T}{v-b}-\frac{a}{v^{2}}, \tag{41}\]
where \(v\) is the specific volume of the fluid, \(a\) represents the interaction between the molecules of the fluid and \(b\) describes the non-zero size of the molecules. Now, from the Hawking temperature (20), we obtain
\[P=\frac{T}{v}+\frac{8T\alpha}{v^{3}}+\frac{2Q^{2}}{v^{4}\pi}- \frac{1}{2v^{2}\pi}+\frac{2\alpha}{v^{4}\pi}-\frac{c^{2}c_{2}m^{2}}{2v^{2}\pi} -\frac{cc_{1}m^{2}}{4v\pi}, \tag{42}\]
where specific volume \(v\) is defined by [72]
\[v=\frac{6V}{A}\approx 2r_{+}, \tag{43}\]
where \(A\) is the horizon area of the black hole. To obtain the critical points, we use the following conditions:
\[\left(\frac{\partial P}{\partial v}\right)_{T_{c},v_{c}}=\left( \frac{\partial^{2}P}{\partial v^{2}}\right)_{T_{c},v_{c}}=0. \tag{44}\]
Using equations (42) and (44), we obtain the condition for critical volume as
\[-2(c^{2}c_{2}m^{2}+1)v_{c}^{4}+24\alpha cc_{1}m^{2}v_{c}^{3}+48( \alpha c^{2}c_{2}m^{2}+Q^{2}+2\alpha)v_{c}^{2}+384Q^{2}\alpha+384\alpha^{2}=0. \tag{45}\]
Equation (45) cannot be solved analytically; we solve it numerically and estimate the critical points, as shown in the tables below.
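The numerical procedure is simple to reproduce: solve the quartic (45) for the positive real root \(v_{c}\), obtain \(T_{c}\) from \(\left(\partial P/\partial v\right)=0\), and substitute into (42) for \(P_{c}\). The sketch below uses the parameters of Table 5 and reproduces, e.g., \(v_{c}\approx 2.269\), \(P_{c}\approx 0.013\), \(T_{c}\approx 0.078\) for \(\alpha=0.1\).

```python
# Sketch reproducing the critical points in Table 5: solve the quartic (45)
# with numpy.roots, then get T_c from dP/dv = 0 and P_c from Eq. (42).
import numpy as np

def critical_point(alpha, Q=0.1, m=0.2, c=1.0, c1=-2.0, c2=0.75):
    k = c**2 * c2 * m**2
    coeffs = [-2*(k + 1),                       # v^4
              24*alpha*c*c1*m**2,               # v^3
              48*(alpha*k + Q**2 + 2*alpha),    # v^2
              0.0,                              # v^1
              384*Q**2*alpha + 384*alpha**2]    # v^0
    roots = np.roots(coeffs)
    vc = max(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)
    num = ((1 + k)/(np.pi*vc**3) + c*c1*m**2/(4*np.pi*vc**2)
           - 8*(Q**2 + alpha)/(np.pi*vc**5))
    Tc = num / (1/vc**2 + 24*alpha/vc**4)
    Pc = (Tc/vc + 8*Tc*alpha/vc**3 + 2*(Q**2 + alpha)/(np.pi*vc**4)
          - (1 + k)/(2*np.pi*vc**2) - c*c1*m**2/(4*np.pi*vc))
    return vc, Pc, Tc

for alpha in (0.1, 0.2, 0.3, 0.4, 0.5):
    vc, Pc, Tc = critical_point(alpha)
    print(f"alpha={alpha}: v_c={vc:.4f}, P_c={Pc:.4f}, T_c={Tc:.4f}")
```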
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline \(\alpha\) & \(v_{c}\) & \(P_{c}\) & \(T_{c}\) & \(\rho_{c}\) \\ \hline \hline
0.0 & 0.4827 & 0.3516 & 0.4464 & 0.3801 \\
0.1 & 2.2693 & 0.0132 & 0.0778 & 0.3834 \\
0.2 & 3.1460 & 0.0068 & 0.0543 & 0.3939 \\
0.3 & 3.8142 & 0.0046 & 0.0437 & 0.4014 \\
0.4 & 4.3728 & 0.0035 & 0.0373 & 0.4103 \\
0.5 & 4.8610 & 0.0028 & 0.0330 & 0.4124 \\ \hline \end{tabular}
\end{table}
Table 5: Values of critical volume (\(v_{c}\)), critical pressure (\(P_{c}\)), critical temperature (\(T_{c}\)) and \(\rho_{c}=P_{c}v_{c}/T_{c}\) for different Gauss–Bonnet coupling parameter with \(Q=0.1\), \(m=0.2\), \(c=1\)\(c_{1}=-2\) and \(c_{2}=0.75\).
In tables III, IV and V, we numerically solve equation (45) for different values of the graviton mass, charge and Einstein-Gauss-Bonnet coupling parameter, and estimate the values of the critical volume (\(v_{c}\)), critical pressure (\(P_{c}\)), critical temperature (\(T_{c}\)) and \(\rho_{c}\). From table III, we can say that as the graviton mass increases from zero, the critical volume (\(v_{c}\)) and critical temperature (\(T_{c}\)) decrease, while the critical pressure (\(P_{c}\)) and \(\rho_{c}\) increase. The effects of the black hole charge on the critical parameters are shown in table IV, keeping the graviton mass fixed. As the charge of the black hole increases, the critical volume (\(v_{c}\)) and \(\rho_{c}\) increase; however, the critical pressure and critical temperature decrease. The effects of the Gauss-Bonnet coupling parameter (\(\alpha\)) on the critical parameters are shown in table V. In Fig. 8, we plot the Hawking temperature for different values of the Gauss-Bonnet coupling parameter and charge with \(P<P_{c}\), \(P=P_{c}\) and \(P>P_{c}\).
In Fig. 8(a) and 8(b), the Hawking temperature is depicted for different values of \(\alpha\), keeping the pressure fixed. When the pressure is less than the critical pressure (\(P_{c}\)), the curve has two turning points (one of them a maximum and the other a minimum). For pressure equal to the critical pressure (\(P_{c}\)), the two turning points merge into an inflection point, and when \(P>P_{c}\) the curve does not attain any turning points. The effects of charge on the Hawking temperature are shown in Fig. 8(c) and 8(d). The inclusion of charge basically reduces the positions of the local maxima and minima when \(P<P_{c}\). Furthermore, if we increase the charge, then the positions of the local maxima and minima decrease by a significant amount (Fig. 8(d)). The rest of the behaviour is similar to Fig. 8(a) and Fig. 8(b). The behaviour of the Hawking temperature of the chargeless black hole for \(P\leq P_{c}\) and \(P>P_{c}\) is shown in Ref. [50]. The Hawking temperature of the black hole with \(Q=0\) attains local maxima and minima when \(P<P_{c}\); in the chargeless case the positions of the local maxima and minima are higher than for the charged black hole. In Fig. 9, we plot the Gibbs free energy vs. temperature for different values of the Gauss-Bonnet coupling parameter and black hole charge with \(P<P_{c}\), \(P=P_{c}\) and \(P>P_{c}\). In Fig. 9(a) and 9(b), the Gibbs free energy for different values of the Gauss-Bonnet coupling parameter is depicted. When the pressure is less than the critical pressure (\(P_{c}\)), the Gibbs free energy shows swallow tail (triangular shape) behavior, which indicates that the system undergoes a first-order phase transition, i.e., below the critical pressure a transition between a small black hole (SBH) and a large black hole (LBH) occurs. The Gibbs free energy of the LBH is smaller compared to that of the SBH. At the point of intersection of the curve (\(P<P_{c}\)), where the first-order phase transition occurs, the entropy of the system is discontinuous, as the entropy depends on the horizon radius of the black hole and the radii of the SBH and LBH are different. For \(P=P_{c}\), the swallow tail behavior disappears, at which point a second-order phase transition occurs. For \(P>P_{c}\), the swallow tail behavior completely disappears
and no phase transition occurs. A similar kind of behavior is shown in Fig. 9(c) and Fig. 9(d) for different values of charge.
## VIII Joule-Thomson expansion
In this section, we discuss the effects of massive gravity on the Joule-Thomson expansion of charged AdS black holes in \(4D\) Einstein-Gauss-Bonnet massive gravity. The Joule-Thomson expansion of charged AdS black holes was first studied in Ref. [59]. After that, the Joule-Thomson expansion of \(D\)-dimensional black holes was studied in Ref. [73]. Using numerical investigation, the Joule-Thomson expansion of Kerr-AdS and Kerr-Newman-AdS black holes was also studied [60; 74]. The Joule-Thomson expansion of the charged AdS black hole in \(4D\) Einstein massive gravity is discussed in Ref. [66]. The Joule-Thomson expansion of the charged AdS \(4D\) Einstein massive gravity black hole in Maxwell and Born-Infeld electrodynamics is discussed in Ref. [21; 75]. The Joule-Thomson thermodynamic coefficient is given by
\[\mu_{J}=\left(\frac{\partial T}{\partial P}\right)_{M}=\frac{1}{C_{P}}\bigg{[}T \Big{(}\frac{\partial V}{\partial T}\Big{)}_{P}-V\bigg{]}=\frac{(\partial T/ \partial r_{+})_{M}}{(\partial P/\partial r_{+})_{M}}. \tag{46}\]
The Joule-Thomson effect is an isenthalpic process, which means that the enthalpy remains constant during the process. In the Joule-Thomson process the pressure always decreases, but the temperature can increase or decrease; thus the Joule-Thomson thermodynamic coefficient (\(\mu\)) can be negative or positive. When \(\mu>0\) the Joule-Thomson expansion corresponds to the cooling region of the isenthalpic or constant mass curve, and \(\mu<0\) corresponds to the heating region. The Joule-Thomson thermodynamic coefficient vanishes for a particular value of temperature, known as the inverse temperature (\(T_{i}\)); the corresponding pressure is known as the inverse pressure (\(P_{i}\)). The cooling and heating regions are separated by the set of points (\(P_{i}\), \(T_{i}\)), and the curve formed by these points is known as the inverse curve. The region above the inverse curve is the cooling region and the region below the inverse curve is the heating region. At the inverse temperature the sign of the Joule-Thomson coefficient changes, \(\mu_{J}(T_{i})=0\). From the above equation, we obtain the inverse temperature
\[T_{i}=V\bigg{(}\frac{\partial T}{\partial V}\bigg{)}_{P}=\frac{r_{+}}{3}\bigg{(} \frac{\partial T}{\partial r_{+}}\bigg{)}_{P}. \tag{47}\]
From equation (18) and using \(P=3/(8\pi l^{2})\), we obtain the pressure in terms of the black hole mass
\[P=\frac{3}{4\pi r_{+}^{2}}\left[\frac{M}{r_{+}}-\frac{Q^{2}}{2r_{+}^{2}}-\frac {1}{2}-\frac{\alpha}{2r_{+}^{2}}-\frac{m^{2}c^{2}c_{2}}{2}-\frac{m^{2}cc_{1}r _{+}}{4}\right]. \tag{48}\]
Now, from the Hawking temperature and using the relation \(P=3/(8\pi l^{2})\), we obtain
\[T=\frac{1}{4\pi r_{+}(r_{+}^{2}+2\alpha)}\left[8\pi Pr_{+}^{4}-Q^{2}+r_{+}^{2}- \alpha+c^{2}c_{2}m^{2}r_{+}^{2}+m^{2}cc_{1}r_{+}^{3}\right]. \tag{49}\]
Using equations (48) and (49), the constant mass curves can be obtained. In Fig. 10-Fig. 14 we plot the constant mass curves. In Fig. 10, constant mass and inverse curves are shown for different values of the black hole mass. The left region of the inverse curve represents cooling and the
right region represents heating. In Fig. 11 and Fig. 12, constant mass curves are shown for different values of the Gauss-Bonnet coupling parameter and the charge of the black hole. The effects of the parameters \(c_{1}\) and \(c_{2}\) are shown in Fig. 13 - Fig. 15.
We use equations (47) and (49) and obtain the inverse
Figure 11: \(m=0\) denoted by solid black line, \(m=1\) denoted by red dash line, \(m=2\) denoted by blue dash dot line and \(m=3\) denoted by green dash dot line with \(M=5\), \(Q=1\), \(c=1\), \(c_{1}=-1\) and \(c_{2}=1\).
Figure 12: \(M=20\), \(\alpha=0.5\) and \(c=1\). Left panel: \(m=0\) denoted by the solid black line, \(m=3\) denoted by a red dash line, \(m=5\) denoted by a blue dash-dot line and \(m=7\) denoted by an orange dash-dot line. Right panel: \(m=0\) denoted by the solid black line, \(m=1\) denoted by a red dash line, \(m=2\) denoted by a blue dash-dot line and \(m=3\) denoted by an orange dash-dot line.
pressure as
\[P_{i}=\frac{6Q^{2}r_{+}^{2}-4r_{+}^{4}+8Q^{2}\alpha+2\alpha r_{+}^{2}+8\alpha^{2}-4c^ {2}c_{2}m^{2}r_{+}^{4}-3cc_{1}m^{2}r_{+}^{5}-4\alpha c^{2}c_{2}m^{2}r_{+}^{2}-2 \alpha cc_{1}m^{2}r_{+}^{3}}{16\pi r_{+}^{6}}. \tag{50}\]
Now, using the relations (49) and (50), we obtain
\[T_{i}=\frac{-m^{2}cc_{1}r_{+}^{3}+(-2c^{2}c_{2}m^{2}-2)r_{+}^{2}+4Q^{2}+4\alpha }{8\pi r_{+}^{3}}. \tag{51}\]
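The inverse curve is obtained parametrically by sweeping the horizon radius through Eqs. (50) and (51). A minimal sketch, with parameter values following the figure captions (\(Q=1\), \(c=1\), \(c_{1}=-1\), \(c_{2}=1\); \(\alpha=0.5\) is an illustrative assumption):

```python
# Sketch of the inversion curve: evaluate the inverse pressure (50) and
# inverse temperature (51) parametrically in the horizon radius.
import numpy as np

Q, c, c1, c2, alpha = 1.0, 1.0, -1.0, 1.0, 0.5

def P_i(r, m):
    return (6*Q**2*r**2 - 4*r**4 + 8*Q**2*alpha + 2*alpha*r**2 + 8*alpha**2
            - 4*c**2*c2*m**2*r**4 - 3*c*c1*m**2*r**5
            - 4*alpha*c**2*c2*m**2*r**2
            - 2*alpha*c*c1*m**2*r**3) / (16*np.pi*r**6)

def T_i(r, m):
    return (-m**2*c*c1*r**3 + (-2*c**2*c2*m**2 - 2)*r**2
            + 4*Q**2 + 4*alpha) / (8*np.pi*r**3)

r = np.linspace(0.8, 6.0, 200)
for m in (0.0, 1.0, 2.0):
    Ps, Ts = P_i(r, m), T_i(r, m)
    keep = Ps > 0                      # physical branch with positive pressure
    print(f"m={m}: inverse curve spans P_i in "
          f"[{Ps[keep].min():.4f}, {Ps[keep].max():.4f}]")
```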
Finally, we derive the Joule-Thomson thermodynamic coefficient using equations (46), (48) and (49):
\[\mu_{J}=\frac{8r_{+}^{3}\Big{(}6Mr_{+}^{3}-6Q^{2}r_{+}^{2}-r_{+}^{4}-4Q^{2} \alpha-4\alpha r_{+}^{2}-4\alpha^{2}-c^{2}c_{2}m^{2}r_{+}^{4}+2\alpha c^{2}c_ {2}m^{2}r_{+}^{2}+\alpha cc_{1}m^{2}r_{+}^{3}\Big{)}}{3(r_{+}^{2}+2\alpha)^{2 }\Big{(}12Mr_{+}-8Q^{2}-4r_{+}^{2}-8\alpha-4c^{2}c_{2}m^{2}r_{+}^{2}-m^{2}cc_{ 1}r_{+}^{3}\Big{)}}. \tag{52}\]
In Fig. 19 and Fig. 18, the Joule-Thomson thermodynamic coefficient is plotted; \(\mu<0\) represents the heating phase and \(\mu>0\) the cooling phase.
Figure 13: \(M=2\), \(Q=1\), \(\alpha=0.5\) and \(c=1\). Left panel: \(c_{2}=0.1\) denoted by an orange dash line, \(c_{2}=1\) denoted by a red dash line, \(c_{2}=-1\) denoted by a green dash-dot line, and \(c_{2}=-2\) denoted by a blue dash-dot line. Right panel: \(c_{1}=1\) denoted by a red dash line, \(c_{1}=2\) denoted by an orange dash line, \(c_{1}=-1\) denoted by a green dash-dot line and \(c_{2}=-2\) denoted by a blue dash-dot line.
## IX Conclusions
In this work, we have found an exact solution of Einstein-Gauss-Bonnet massive gravity with charge in \(4D\) AdS space, and the horizon structure of the black holes was discussed. The physical mass (enthalpy) and Hawking temperature of the black holes were computed. Treating the cosmological
Figure 14: \(m=0\) denoted by solid black line, \(m=0.5\) denoted by dash red line, \(m=1\) denoted by dash orange line, \(m=1.5\) denoted by blue dash dot line and \(m=2\) denoted by dash dot green line with \(M=2\), \(Q=1\), \(\alpha=0.5\) and \(c=1\).
Figure 15: \(m=0\) denoted by solid black line, \(m=0.5\) denoted by dash red line, \(m=1\) denoted by dash orange line, \(m=1.5\) denoted by blue dash dot line and \(m=2\) denoted by dash dot green line with \(M=2\), \(Q=1\), \(\alpha=0.5\) and \(c=1\).
constant as the pressure, we derive the first law of black hole thermodynamics. To check global stability, the Gibbs free energy of the black hole was computed. For the local stability of the black holes we estimated the specific heat. Here, we found that as the mass of the graviton increases, one divergent point appears. For \(m=0,1\), the specific heat of the black hole goes from a negative value (unstable phase) to a positive one (stable phase), and when \(m>1\) a second-order phase transition occurs. Furthermore, we investigated the Van der Waals-like phase transition of the black holes. The effects of the graviton mass, the charge of the black hole and the Gauss-Bonnet coupling parameter on the critical points were also studied. As the mass of the graviton increases, the critical pressure also increases, and the critical temperature shows the opposite behavior. When the charge and the Gauss-Bonnet coupling parameter of the black hole increase, we found that both the critical pressure and temperature decrease. From the \(G-T\) plot, we observed that a swallow tail appears below the critical point, which indicates a first-order phase transition, and at the critical point a second-order phase transition occurs. Finally, the Joule-Thomson expansion of the charged AdS black hole in \(4D\) Einstein-Gauss-Bonnet massive gravity was studied. The effects of the Gauss-Bonnet coupling parameter and the massive gravity parameters on the constant mass and inverse curves are shown in Fig. 16.
Figure 16: \(Q=1\), \(c=1\), \(c_{1}=-1\) and \(c_{2}=1\). Left panel: \(m=0\) denoted by the solid black line, \(m=1\) denoted by red dash line, \(m=2\) denoted by orange dash line, \(m=3\) denoted by blue dash-dot line, \(m=4\) denoted by green dash-dot line and \(m=5\) denoted by a gold dash-dot line. Right panel: \(\alpha=0\) denoted by a solid black line, \(\alpha=0.1\) denoted by red dash line, \(\alpha=0.2\) denoted by orange dash line, \(\alpha=0.3\) denoted by blue dash-dot line, \(\alpha=0.4\) denoted by green dash-dot line and \(\alpha=0.5\) denoted by a gold dash-dot line.
The Joule-Thomson thermodynamic coefficient as a function of the horizon radius is also depicted.
## Acknowledgement
D.V.S. thanks University Grant Commission for the Start-Up Grant No. 30-600/2021(BSR)/1630.
## Data Availability Statement
No Data is associated with the manuscript.
|
2306.06182
|
The effect of approximate coarsest-level solves on the convergence of
multigrid V-cycle methods
|
The multigrid V-cycle method is a popular method for solving systems of
linear equations. It computes an approximate solution by using smoothing on
fine levels and solving a system of linear equations on the coarsest level.
Solving on the coarsest level depends on the size and difficulty of the
problem. If the size permits, it is typical to use a direct method based on LU
or Cholesky decomposition. In settings with large coarsest-level problems,
approximate solvers such as iterative Krylov subspace methods, or direct
methods based on low-rank approximation, are often used. The accuracy of the
coarsest-level solver is typically determined based on the experience of the
users with the concrete problems and methods.
In this paper we present an approach to analyzing the effects of approximate
coarsest-level solves on the convergence of the V-cycle method for symmetric
positive definite problems. Using these results, we derive coarsest-level
stopping criterion through which we may control the difference between the
approximation computed by a V-cycle method with approximate coarsest-level
solver and the approximation which would be computed if the coarsest-level
problems were solved exactly. The coarsest-level stopping criterion may thus be
set up such that the V-cycle method converges to a chosen finest-level accuracy
in (nearly) the same number of V-cycle iterations as the V-cycle method with
exact coarsest-level solver. We also utilize the theoretical results to discuss
how the convergence of the V-cycle method may be affected by the choice of a
tolerance in a coarsest-level stopping criterion based on the relative residual
norm.
|
Petr Vacek, Erin Carson, Kirk M. Soodhalter
|
2023-06-09T18:18:25Z
|
http://arxiv.org/abs/2306.06182v3
|
# The effect of approximate coarsest-level solves on the convergence of multigrid V-cycle methods+
###### Abstract
The multigrid V-cycle method is a popular method for solving systems of linear equations. It computes an approximate solution by using smoothing on fine levels and solving a system of linear equations on the coarsest level. Solving on the coarsest level depends on the size and difficulty of the problem. If the size permits, it is typical to use a direct method based on LU or Cholesky decomposition. In settings with large coarsest-level problems, approximate solvers such as iterative Krylov subspace methods, or direct methods based on low-rank approximation, are often used. The accuracy of the coarsest-level solver is typically determined based on the experience of the users with the concrete problems and methods.
In this paper we present an approach to analyzing the effects of approximate coarsest-level solves on the convergence of the V-cycle method for symmetric positive definite problems. Using this approach we discuss how the convergence of the V-cycle method may be affected by (1) the choice of the tolerance in a stopping criterion based on the relative residual norm for an iterative coarsest-level solver or (2) by the choices of the low-rank threshold parameter and finite precision arithmetic for a block low-rank direct coarsest-level solver. Furthermore we present new coarsest-level stopping criteria tailored to the multigrid method and suggest a heuristic strategy for their effective use in practical computations.
multigrid method, V-cycle method, coarse level solvers, stopping criteria, iterative methods, approximate solvers
65F10, 65N55, 65N22, 65F50
## 1 Introduction
Multigrid methods [3, 4, 24, 8] are frequently used when solving systems of linear equations, and can be applied either as standalone solvers or as preconditioners for iterative methods. There are two types of multigrid: _geometric_, wherein the hierarchy of systems is obtained by discretizations of an infinite-dimensional problem on a sequence of nested meshes; and _algebraic_, wherein the coarse systems are assembled based on the algebraic properties of the matrix. Within each cycle, the approximation is computed using smoothing on fine levels and solving a system of linear equations on the coarsest level. Smoothing on the fine levels is typically done via a few iterations of a stationary iterative method. The particular solver used for the problem on the coarsest level depends on its size and difficulty. If the size of the problem permits, it is typical to use a direct solver based on LU or Cholesky decomposition.
In this text, we focus on settings where the problem on the coarsest level is large and the use of direct solvers based on LU or Cholesky decomposition may be ineffective and sometimes impossible to realize. Such settings may arise, for example, when using geometric multigrid methods to solve problems on complicated domains. The mesh associated with the coarsest level must resolve the domain with certain accuracy. This can yield a large number of degrees of freedom. One possible solution to this issue is to solve the coarsest-level problem using algebraic multigrid, which can introduce additional coarse levels that are not related to the geometry of the problem.
Another setting where large coarsest-level problems may be present is when we use multigrid methods on parallel computers. In parallel computing, the degrees of freedom are assigned to different processors or accelerators. The computation is done in parallel on the individual processors and the results are communicated between them. A challenge for effective parallel implementation of multigrid methods is that the amount of computation on coarse levels decreases at a faster rate than the amount of communication; see e.g., the discussion in the introduction of [5]. One possible solution is to treat this issue by agglomerating the degrees of freedom associated with coarse levels to a smaller number of processors or by using communication-avoiding methods; see e.g., [27].
In this paper, we instead consider treating the still large-scale coarsest-level problem by solving _inexactly_. Frequently used solvers for large scale coarsest-level problems include Krylov subspace methods and direct approximate solvers; see, e.g., [11], where the author considers the preconditioned conjugate gradient method, or [5], where the authors study the use of a block low-rank (BLR) low precision direct solver. These solvers approximate the coarsest-level solution to an accuracy which is determined by the choice of a stopping criterion or affected by the choice of the low-rank threshold and finite precision. These parameters are often chosen in practice based on the experience of the user with concrete problems and methods, with the goal of balancing the cost of the coarsest-level solve and the total number of V-cycles required for convergence. In Subsection 2.1 we present motivating numerical experiments, which illustrate how the choice of the accuracy of the coarsest-level solver may affect the convergence of the multigrid V-cycle method.
A general analysis of the effects of the accuracy of the coarsest-level solver on the convergence behaviour of multilevel methods is, to our knowledge, not present in the literature. Multigrid methods are typically analyzed under the assumption that the problem on the coarsest level is solved exactly; see, e.g., [30, 28]. An algebraic analysis of perturbed two-grid methods and its application to the analysis of other multigrid schemes with approximate coarsest-level solvers can be found in [18, 29]. The authors, however, assume that the action of the solver on the coarsest level can be expressed using a symmetric positive definite matrix. This is not true for frequently used solvers, e.g., for a Krylov subspace method stopped using a relative residual stopping criterion. A more general setting is considered in the paper [14], which presents the first analysis of mixed precision multigrid solvers. The authors assume that the action of the solver on the coarsest level can be expressed using a non-singular matrix.
In this paper, we propose an approach to algebraically analyze the effect of approximate coarsest-level solves in the multigrid V-cycle method for symmetric positive definite (SPD) problems. The main methodology of our approach is to view the inexact V-cycle (inV-cycle) method as a perturbation of the exact V-cycle (exV-cycle) method in the following sense. We express the error of the approximation computed by one V-cycle with an approximate coarsest-level solver as the error of the approximation computed by one V-cycle with an exact coarsest-level solver minus the difference of the two approximations. We show that the difference can be expressed as a matrix times the error of the coarsest-level solver. The matrix describes how the error from the coarsest level is propagated to the finest level. The analysis is done assuming exact arithmetic computations, aside from the computation of the coarsest level solutions. The model is agnostic about what coarsest-level solver is used; we only assume that the error on the coarsest level satisfies certain assumptions.
The paper is organized as follows. In Section 2 we establish the notation, state the V-cycle method, and present motivating numerical experiments which illustrate that the choice of the accuracy of the coarsest-level solver can significantly affect the convergence of the V-cycle method. In Section 3 we present an analysis of the V-cycle method with an approximate coarsest-level solver. The results are applied to describe the possible effects of the choice of the tolerance in a coarsest-level relative residual stopping criterion in Section 4. Section 5 contains an analogous discussion for the choice of the low-rank threshold parameter and finite precision arithmetic for the BLR coarsest-level solvers. New stopping criteria tailored to multigrid methods are derived in Section 6. Section 7 contains a heuristic strategy for the effective choice of the accuracy of the coarsest-level solver. Finally, we present a series of numerical experiments illustrating the obtained results in Section 8. The text closes with conclusions and a discussion of open problems in Section 9.
## 2 Notation and motivating experiments
We study the multigrid V-cycle method for finding an approximate solution of the following problem. Given an SPD matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\) and a right-hand side vector \(\mathbf{b}\in\mathbb{R}^{n}\), find the vector \(\mathbf{x}\in\mathbb{R}^{n}\) such that
\[\mathbf{A}\mathbf{x}=\mathbf{b}. \tag{1}\]
We consider a hierarchy of \(J+1\) levels numbered from zero to \(J\). Each level contains a system matrix \(\mathbf{A}_{j}\in\mathbb{R}^{n_{j}\times n_{j}}\), with \(\mathbf{A}_{J}=\mathbf{A}\). Information is transferred between the \((j-1)\)th level and the \(j\)th level using a full-rank prolongation matrix \(\mathbf{P}_{j}\in\mathbb{R}^{n_{j}\times n_{j-1}}\) and its transpose, respectively. We assume that the system matrices and the prolongation matrices satisfy the so-called _Galerkin condition_, i.e.,
\[\mathbf{A}_{j-1}=\mathbf{P}_{j}^{\top}\mathbf{A}_{j}\mathbf{P}_{j},\quad j=1,\ldots,J. \tag{2}\]
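For readers who wish to experiment, a minimal MATLAB sketch of assembling such a Galerkin hierarchy might look as follows; the finest-level matrix `Afine`, the number of levels \(J+1\), and the prolongations `P{2},...,P{J+1}` are assumed to be given (e.g., exported from a FE package), and cell indices are shifted by one since MATLAB arrays start at 1 while the levels are numbered from 0.

```
% Sketch: assemble the Galerkin hierarchy A_{j-1} = P_j' * A_j * P_j,
% assuming Afine and the prolongations P{2},...,P{J+1} are given.
A = cell(J+1,1);     % A{j+1} stores the system matrix on level j
A{J+1} = Afine;      % finest level J
for j = J:-1:1
    A{j} = P{j+1}' * A{j+1} * P{j+1};   % Galerkin condition (2)
end
```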
We use the notation \(\mathbf{A}_{0:j}\) for the sequence of matrices \(\mathbf{A}_{0},\ldots,\mathbf{A}_{j}\). Let \(\|\cdot\|\) denote the Euclidean vector norm and let \(\|\cdot\|_{\mathbf{A}_{j}}=\|\mathbf{A}_{j}^{\frac{1}{2}}\cdot\|\) denote the \(\mathbf{A}_{j}\) vector norm, also called the energy norm. We use the same notation for the matrix norms generated by the associated vector norms. By \(\|\cdot\|_{F}\) we denote the matrix Frobenius norm. Let \(\mathbf{I}_{j}\in\mathbb{R}^{n_{j}\times n_{j}}\) denote the identity matrix on the \(j\)th level.
We assume that the pre- and post- smoothing on levels \(j=1,\ldots,J\) can be expressed in the form
\[\mathbf{v}_{j}=\mathbf{v}_{j}+\mathbf{M}_{j}(\mathbf{f}_{j}-\mathbf{A}_{j} \mathbf{v}_{j})\quad\text{and}\quad\mathbf{v}_{j}=\mathbf{v}_{j}+\mathbf{N}_{ j}(\mathbf{f}_{j}-\mathbf{A}_{j}\mathbf{v}_{j}),\]
respectively, where \(\mathbf{v}_{j}\) and \(\mathbf{f}_{j}\) are an approximation and a right-hand side on the \(j\)th level and \(\mathbf{M}_{j}\in\mathbb{R}^{n_{j}\times n_{j}}\) and \(\mathbf{N}_{j}\in\mathbb{R}^{n_{j}\times n_{j}}\) are non-singular matrices satisfying
\[\|\mathbf{I}_{j}-\mathbf{M}_{j}\mathbf{A}_{j}\|_{\mathbf{A}_{j}}<1\quad\text{ and}\quad\|\mathbf{I}_{j}-\mathbf{N}_{j}\mathbf{A}_{j}\|_{\mathbf{A}_{j}}<1. \tag{3}\]
This assumption yields monotone convergence of the smoothers as standalone solvers in the \(\mathbf{A}_{j}\)-norms. Frequently used smoothers, e.g., a few iterations of a classic stationary iterative method such as damped Jacobi or Gauss-Seidel, typically satisfy these assumptions; see, e.g., the discussion in [30, p. 293] or [28]. We also consider multilevel schemes, where either pre- or post- smoothing is not used, i.e., where formally either \(\mathbf{M}_{j}\), \(j=1,\ldots,J\) or \(\mathbf{N}_{j}\), \(j=1,\ldots,J\), are zero matrices.
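For concreteness, a few sweeps of forward Gauss-Seidel give a smoother of this form with \(\mathbf{N}_{j}\) acting as the inverse of the lower triangular part of \(\mathbf{A}_{j}\). A minimal MATLAB sketch (the function name and signature are ours, and the triangular solve stands in for the action of \(\mathbf{N}_{j}\)):

```
% Sketch: nsweeps smoothing steps of the form v = v + N*(f - A*v) with
% forward Gauss-Seidel, i.e., N acting as inv(tril(A)); the inverse is
% never formed, a triangular solve is used instead.
function v = gs_smooth(A, f, v, nsweeps)
L = tril(A);                      % lower triangular part incl. diagonal
for k = 1:nsweeps
    v = v + L \ (f - A*v);
end
end
```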
Given an approximation \(\mathbf{x}^{\text{prev}}\) to the solution \(\mathbf{x}\), the approximation after one iteration of the V-cycle method is computed by calling Algorithm 1 as (see, e.g., [24, pp. 47-48])
\[\mathbf{x}^{\text{new}}=\mathbf{V}(\mathbf{A}_{0:J},\mathbf{M}_{1:J}, \mathbf{N}_{1:J},\mathbf{P}_{1:J},\mathbf{b},\mathbf{x}^{\text{prev}},J).\]
We distinguish between the _exV-cycle method_ and the _inV-cycle method_ based on whether the coarsest-level problem is solved exactly or not.
```
if \(j\neq 0\) then
 \(\mathbf{v}_{j}^{[1]}=\mathbf{v}_{j}^{[0]}+\mathbf{M}_{j}(\mathbf{f}_{j}-\mathbf{A}_{j}\mathbf{v}_{j}^{[0]})\)   {pre-smoothing}
 \(\mathbf{f}_{j-1}=\mathbf{P}_{j}^{\top}(\mathbf{f}_{j}-\mathbf{A}_{j}\mathbf{v}_{j}^{[1]})\)   {restriction}
 \(\mathbf{v}_{j-1}^{[3]}=\mathbf{V}(\mathbf{A}_{0:j-1},\mathbf{M}_{1:j-1},\mathbf{N}_{1:j-1},\mathbf{P}_{1:j-1},\mathbf{f}_{j-1},\,\mathbf{0},\,j-1)\)
 \(\mathbf{v}_{j}^{[2]}=\mathbf{v}_{j}^{[1]}+\mathbf{P}_{j}\mathbf{v}_{j-1}^{[3]}\)   {prolongation and correction}
 \(\mathbf{v}_{j}^{[3]}=\mathbf{v}_{j}^{[2]}+\mathbf{N}_{j}(\mathbf{f}_{j}-\mathbf{A}_{j}\mathbf{v}_{j}^{[2]})\)   {post-smoothing}
 return \(\mathbf{v}_{j}^{[3]}\)
end if
Find (approximate) solution \(\mathbf{v}_{0,\mathrm{in}}\) of the problem
 find \(\mathbf{v}_{0}:\quad\mathbf{A}_{0}\mathbf{v}_{0}=\mathbf{f}_{0}\).
return \(\mathbf{v}_{0,\mathrm{in}}\)
```
**Algorithm 1** V-cycle scheme, \(\mathbf{V}(\mathbf{A}_{0:j},\mathbf{M}_{1:j},\mathbf{N}_{1:j},\mathbf{P}_{1:j},\mathbf{f}_{j},\,\mathbf{v}_{j}^{[0]},j)\).
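A direct MATLAB transcription of Algorithm 1 may be sketched as follows; the coarsest-level solver is passed as a function handle `solve0`, so that the exV-cycle (e.g., `solve0 = @(A0,f0) A0\f0`) and inV-cycle variants are covered by the same code. The smoothers are assumed to be given as explicit matrices \(\mathbf{M}_{j}\), \(\mathbf{N}_{j}\), which is practical only for illustration.

```
% Sketch of Algorithm 1; cell index j+1 corresponds to level j.
function v = vcycle(A, M, N, P, f, v, j, solve0)
if j ~= 0
    v  = v + M{j+1} * (f - A{j+1}*v);                     % pre-smoothing
    fc = P{j+1}' * (f - A{j+1}*v);                        % restriction
    vc = vcycle(A, M, N, P, fc, zeros(size(fc)), j-1, solve0);
    v  = v + P{j+1} * vc;                        % prolongation, correction
    v  = v + N{j+1} * (f - A{j+1}*v);                     % post-smoothing
else
    v  = solve0(A{1}, f);               % (approximate) coarsest-level solve
end
end
```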
### Motivating experiments
We illustrate the relevance of the forthcoming analysis with numerical experiments, which demonstrate how the choice of the accuracy of the coarsest-level solve affects the convergence of the V-cycle method.
We consider the Poisson equation on a unit square domain with manufactured solution
\[u(x,y)=10\sin(2\pi x)\sin(\pi y)\exp((x-3/4)^{2}+(y-3/4)^{2}).\]
This model problem is considered, e.g., in [2]. The problem is discretized using the Galerkin finite element (FE) method with continuous piecewise affine functions on a hierarchy of nested triangulations obtained from the initial triangulation by uniform refinement. We consider the V-cycle method to find an approximate solution of the discrete problem on the finest level. Pre-smoothing is not used; post-smoothing on the fine levels is accomplished via 3 iterations of the Gauss-Seidel method. We use the standard prolongation matrices associated with the finite element spaces. The restriction matrices are transposes of the prolongation matrices.
The matrices are generated in the FE software FEniCS (version 2019.1.0) [1, 13]. In FEniCS the stiffness matrix is assembled using all nodes of the mesh. The homogeneous Dirichlet boundary condition is then applied by setting to zero all non-diagonal elements in rows and columns which correspond to nodes on the boundary and setting to zero the corresponding elements in the right-hand side vector. We modify the stiffness matrices, the prolongation matrices and the right-hand side vector so that the Galerkin condition (2) is satisfied. The computation is done in MATLAB 2023a. The codes for all experiments presented in this paper can be found at [https://github.com/vacek-petr/inVcycle](https://github.com/vacek-petr/inVcycle).
We will first compare the convergence behavior of V-cycle methods with iterative coarsest-level solvers with different choices of the coarsest-level accuracy. We consider three variants of the coarsest-level solver: the conjugate gradient method
(CG) [9], the minimal residual method (MINRES) [19], and for reference also the MATLAB backslash operator. Within CG and MINRES we use a relative residual stopping criterion, i.e., for a chosen tolerance \(\tau\in(0,1)\) the solver is stopped when \(\|\mathbf{f}_{0}-\mathbf{A}_{0}\mathbf{v}_{0,\mathrm{in}}\|/\|\mathbf{f}_{0}\| \leq\tau\). We consider various choices of the tolerance \(\tau=2^{-i}\), \(i=1,\ldots,20\). We consider the V-cycle method with 8 or 2 levels. We run the V-cycle methods starting with a zero initial approximation and stop when the relative \(\mathbf{A}\)-norm of the error is (approximately) lower than \(10^{-11}\), i.e., \(\|\mathbf{x}-\mathbf{x}_{\mathrm{in}}^{(n)}\|_{\mathbf{A}}/\|\mathbf{x}\|_{ \mathbf{A}}\leq 10^{-11}\). To approximate the relative \(\mathbf{A}\)-norm of the error on the finest level we compute the solution using the MATLAB backslash operator.
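For illustration, coarsest-level solves of this kind can be realized with MATLAB's built-in pcg and minres, whose tolerance argument is exactly the relative residual tolerance \(\tau\). A sketch, assuming the coarsest-level data `A0`, `f0` are at hand:

```
% Sketch: coarsest-level solve stopped by the relative residual criterion.
tau   = 2^-10;                       % one of the tolerances tau = 2^(-i)
maxit = size(A0,1);
v0_cg = pcg(A0, f0, tau, maxit);     % CG variant (no preconditioner)
v0_mr = minres(A0, f0, tau, maxit);  % MINRES variant
v0_ex = A0 \ f0;                     % reference: MATLAB backslash
```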
The results are summarized in Figure 1. The variants with the MATLAB backslash operator required 13 and 7 V-cycle iterations to reach the desired accuracy for the settings with 8 and 2 levels, respectively. For the variants with CG and MINRES and high tolerances (\(\tau=2^{-i}\), \(i=1,2\)) we see a significant delay in the rate of convergence in comparison to the variant with MATLAB backslash. If a stricter tolerance is imposed, this delay becomes smaller and smaller and eventually there is a tolerance (highlighted in the table) for which the method converges in the same number of V-cycles as the method with MATLAB backslash. Lowering this tolerance further is not beneficial, since it does not reduce the number of V-cycles but requires more computational work on the coarsest level. The highlighted tolerance differs for the variants with CG and MINRES and also for the settings with 8 and 2 levels. We see that it decreases with increasing size of the problem on the coarsest level.

Figure 1: V-cycle method with iterative coarsest-level solvers. The coarsest-level solvers are stopped using a relative residual stopping criterion with various tolerances \(\tau\). The green color highlights the variants with the largest tolerance \(\tau\) for which the method converges in the same number of V-cycles as the corresponding variant with MATLAB backslash on the coarsest level.
Even though CG and MINRES were stopped using the same relative residual stopping criterion, the variants of the V-cycle method with MINRES with a high tolerance required many more V-cycle iterations than CG. This indicates that the Euclidean norm of the relative residual on the coarsest level is perhaps not the most important quantity affecting the convergence behavior of the V-cycle method.
Further, we consider an analogous experiment with the V-cycle method with the BLR direct approximate coarsest-level solver. The accuracy of the BLR solver is affected by the choice of the low-rank threshold parameter; see [10]. We thus consider various choices of the low-rank threshold parameter \(\epsilon\), \(\epsilon=2^{-i},i=1,\ldots,24\). We consider the V-cycle methods with 6, 5, or 4 levels. We use the MATLAB implementation of the BLR solver provided alongside the paper [10]. In particular, we utilize the variant with the UFC factorization algorithm, global scaled threshold, intermediate recompressions, and double precision arithmetic; for details see [10]. We set the block size for the coarsest-level matrices as 39, 79 and 53 for the V-cycle methods with 6, 5, or 4 levels, respectively. We again run all the variants of the V-cycle method starting with a zero initial approximate solution and stop when the relative \(\mathbf{A}\)-norm of the error is (approximately) lower than \(10^{-11}\), or when the number of V-cycle iterations is greater than or equal to 30.
The results are summarized in Figure 2. For comparison, the V-cycle method with MATLAB backslash required 11, 10 and 9 V-cycles to reach the accuracy for the settings with 6, 5 and 4 levels, respectively. We see that the variants with high choices of the parameter \(\epsilon\) do not reach the required accuracy in the first 30 V-cycles, or there is a significant delay in the rate of convergence in comparison to the corresponding variants with the MATLAB backslash operator. Lowering the parameter \(\epsilon\) yields a smaller and smaller delay and eventually there is a parameter (highlighted in the table) for which the variant converges in the same number of V-cycles as the variant with the MATLAB backslash solver. We see that these highlighted threshold parameters decrease with increasing size of the problem on the coarsest level. The results are analogous to the results of the previous experiments, where the accuracy is determined by the relative tolerance.
These experiments demonstrate that the choice of the coarsest-level solver accuracy can significantly affect the convergence behavior of the V-cycle method and the overall amount of work that has to be done. This relationship is not yet well understood. This leads us to pose the following questions, which drive the work in this paper.
1. Can we analytically describe how the accuracy of the solver on the coarsest level affects the convergence behavior of the V-cycle method?
2. Can we define coarsest-level stopping criteria that would yield a computed V-cycle approximation "close" to the V-cycle approximation which would be obtained by solving the coarsest-level problem exactly, and at the same time, do the least amount of work necessary on the coarsest level?
## 3 Convergence analysis of the inV-cycle method
We start by stating a few results and assumptions on the convergence of the exV-cycle method. Let \(\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}\) be an approximation computed by one iteration of the exV-cycle method starting with an approximation \(\mathbf{x}^{\mathrm{prev}}\). The error of the approximation \(\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}\) can be written as the error of the previous approximation \(\mathbf{x}^{\mathrm{prev}}\) times the error propagation matrix \(\mathbf{E}\), i.e.,
\[\mathbf{x}-\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}=\mathbf{E}(\mathbf{x}- \mathbf{x}^{\mathrm{prev}}).\]
An expression for the error propagation matrix \(\mathbf{E}\) using the system matrices and matrices corresponding to the smoothers and prolongation matrices can be found, e.g., in [24, Theorem 2.4.1]. We assume that the error propagation matrix \(\mathbf{E}\) corresponds to an operator which is a contraction with respect to the \(\mathbf{A}\)-norm, i.e., \(\|\mathbf{E}\|_{\mathbf{A}}<1\). Proofs of this property for geometric multigrid methods can be found, e.g., in [28], [30]. The contraction property implies that each iteration of the exV-cycle method reduces the \(\mathbf{A}\)-norm of the error by at least a factor \(\|\mathbf{E}\|_{\mathbf{A}}\), i.e.,
\[\|\mathbf{x}-\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}\|_{\mathbf{A}}\leq\| \mathbf{E}\|_{\mathbf{A}}\|\mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{\mathbf{A }}\quad\forall\mathbf{x}^{\mathrm{prev}}.\]
We remark that this is a worst-case scenario analysis. The actual rate of convergence depends on the right-hand side and the current approximation and cannot be accurately described by a one-number characteristic.
In contrast to the exV-cycle method, the error of the approximation computed after one iteration of the inV-cycle method cannot, in general, be written as an error propagation matrix times the previous error. This is due to the fact that we consider a general solver on the coarsest level, whose application may not be expressible as a matrix times a vector. To obtain insight into the convergence behavior of the inV-cycle method, we view it as a perturbation of the exV-cycle method.

Figure 2: V-cycle method with BLR direct coarsest-level solver with various choices of the low-rank threshold parameter \(\epsilon\). "x" means that the method did not reach the required accuracy in the first \(30\) V-cycles. The green color highlights the variants with the largest low-rank threshold parameter \(\epsilon\) for which the method converges in the same number of V-cycles as the corresponding variant with MATLAB backslash on the coarsest level.
Let \(\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}\) denote the approximation computed after one iteration of the inV-cycle method starting with \(\mathbf{x}^{\mathrm{prev}}\). The error of the inV-cycle approximation can be written as the error of the approximation \(\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}\) computed after one iteration of the exV-cycle method starting with the same \(\mathbf{x}^{\mathrm{prev}}\) plus the difference of the two approximations, i.e.,
\[\mathbf{x}-\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}=\mathbf{x}-\mathbf{x}_{ \mathrm{ex}}^{\mathrm{new}}+\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x }_{\mathrm{in}}^{\mathrm{new}}=\mathbf{E}(\mathbf{x}-\mathbf{x}^{\mathrm{prev}})+ \mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}.\]
Taking \(\mathbf{A}\)-norms on the left and right sides, using the triangle inequality and the norm of \(\mathbf{E}\) yields
\[\|\mathbf{x}-\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}\|_{\mathbf{A}}\leq\| \mathbf{E}\|_{\mathbf{A}}\|\mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{\mathbf{A }}+\|\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{ \mathrm{new}}\|_{\mathbf{A}}. \tag{10}\]
We turn our focus to the difference \(\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}\). When applying one step of the inV-cycle method or one step of the exV-cycle method, all intermediate results \(\mathbf{v}_{j}^{[1]}\), \(j=1,\ldots,J\), \(\mathbf{f}_{j}\), \(j=0,\ldots,J\) are the same until the coarsest level is reached. In the exV-cycle method, the exact solution \(\mathbf{v}_{0}\) of the problem on the coarsest level is used, while in the inV-cycle method its computed approximation \(\mathbf{v}_{0,\mathrm{in}}\) is used. Writing down the difference \(\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{\mathrm{ new}}\) using the individual steps in Algorithm 1 yields (the subscripts "ex" and "in" indicate that the term corresponds to the exV-cycle method and the inV-cycle method, respectively)
\[\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^ {\mathrm{new}} =\mathbf{v}_{J,\mathrm{ex}}^{[3]}-\mathbf{v}_{J,\mathrm{in}}^{[3]}\] \[=\mathbf{v}_{J,\mathrm{ex}}^{[2]}+\mathbf{N}_{J}(\mathbf{f}_{J}- \mathbf{A}_{J}\mathbf{v}_{J,\mathrm{ex}}^{[2]})-(\mathbf{v}_{J,\mathrm{in}}^{ [2]}+\mathbf{N}_{J}(\mathbf{f}_{J}-\mathbf{A}_{J}\mathbf{v}_{J,\mathrm{in}}^{ [2]}))\] \[=(\mathbf{I}_{J}-\mathbf{N}_{J}\mathbf{A}_{J})(\mathbf{v}_{J, \mathrm{ex}}^{[2]}-\mathbf{v}_{J,\mathrm{in}}^{[2]})\] \[=(\mathbf{I}_{J}-\mathbf{N}_{J}\mathbf{A}_{J})(\mathbf{v}_{J}^{[1] }+\mathbf{P}_{J}\mathbf{v}_{J-1,\mathrm{ex}}^{[3]}-(\mathbf{v}_{J}^{[1]}+ \mathbf{P}_{J}\mathbf{v}_{J-1,\mathrm{in}}^{[3]}))\] \[=(\mathbf{I}_{J}-\mathbf{N}_{J}\mathbf{A}_{J})\mathbf{P}_{J}( \mathbf{v}_{J-1,\mathrm{ex}}^{[3]}-\mathbf{v}_{J-1,\mathrm{in}}^{[3]})\] \[=(\mathbf{I}_{J}-\mathbf{N}_{J}\mathbf{A}_{J})\mathbf{P}_{J}\ldots (\mathbf{I}_{1}-\mathbf{N}_{1}\mathbf{A}_{1})\mathbf{P}_{1}(\mathbf{v}_{0}- \mathbf{v}_{0,\mathrm{in}}).\]
Denoting by \(\mathbf{S}\) the matrix
\[\mathbf{S}=(\mathbf{I}_{J}-\mathbf{N}_{J}\mathbf{A}_{J})\mathbf{P}_{J}\ldots (\mathbf{I}_{1}-\mathbf{N}_{1}\mathbf{A}_{1})\mathbf{P}_{1}\in\mathbb{R}^{n_{J }\times n_{0}} \tag{11}\]
gives
\[\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{\mathrm{ new}}=\mathbf{S}(\mathbf{v}_{0}-\mathbf{v}_{0,\mathrm{in}}). \tag{12}\]
We have expressed the difference of the inV-cycle and exV-cycle approximation as a matrix \(\mathbf{S}\) times the error of the coarsest-level solver. The matrix \(\mathbf{S}\) describes how the error is propagated to the finest level. Let \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\) denote the norm of \(\mathbf{S}\) generated by the vector norms \(\|\cdot\|_{\mathbf{A}_{0}}\) and \(\|\cdot\|_{\mathbf{A}}\), i.e.,
\[\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}=\max_{\mathbf{v}\in\mathbb{R}^{n_{ 0}},\mathbf{v}\neq\mathbf{0}}\frac{\|\mathbf{S}\mathbf{v}\|_{\mathbf{A}}}{\| \mathbf{v}\|_{\mathbf{A}_{0}}}.\]
The relation (12) implies
\[\|\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{\mathrm{ new}}\|_{\mathbf{A}}\leq\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\|\mathbf{v}_{0}- \mathbf{v}_{0,\mathrm{in}}\|_{\mathbf{A}_{0}}. \tag{13}\]
Let \(\gamma\) be a constant such that the \(\mathbf{A}_{0}\)-norm of the error of the coarsest-level solver is less than \(\gamma\) times the \(\mathbf{A}\)-norm of the error of the previous approximation on the finest level, i.e.,
\[\|\mathbf{v}_{0}-\mathbf{v}_{0,\mathrm{in}}\|_{\mathbf{A}_{0}}\leq\gamma\| \mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{\mathbf{A}}. \tag{10}\]
Combining (13) and (10) yields an estimate on the \(\mathbf{A}\)-norm of the relative difference of the exV-cycle and inV-cycle approximations after one V-cycle iteration
\[\frac{\|\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{ \mathrm{new}}\|_{\mathbf{A}}}{\|\mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{ \mathbf{A}}}\leq\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\gamma. \tag{11}\]
Returning back to the estimate of the \(\mathbf{A}\)-norm of the error of the inV-cycle approximation, using (10) and (11) we have
\[\|\mathbf{x}-\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}\|_{\mathbf{A}}\leq\left( \|\mathbf{E}\|_{\mathbf{A}}+\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\gamma \right)\|\mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{\mathbf{A}}. \tag{12}\]
Assuming that the error of the coarsest-level solver satisfies estimate (10) with \(\gamma\) such that
\[\|\mathbf{E}\|_{\mathbf{A}}+\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\gamma<1,\]
the inV-cycle method converges and we have a bound on its convergence rate in terms of the bound on the rate of convergence of the exV-cycle method and \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\gamma\).
We derive a bound on the norm \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\). Denoting by \(\mathbf{S}_{j}\), \(j=2,\ldots,J-1\), the matrix
\[\mathbf{S}_{j}=(\mathbf{I}_{j}-\mathbf{N}_{j}\mathbf{A}_{j})\mathbf{P}_{j} \ldots(\mathbf{I}_{1}-\mathbf{N}_{1}\mathbf{A}_{1})\mathbf{P}_{1}\in\mathbb{R} ^{n_{j}\times n_{0}},\]
and using the definition of \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\) leads to
\[\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}} =\max_{\mathbf{v}\in\mathbb{R}^{n_{0}},\mathbf{v}\neq \mathbf{0}}\frac{\|(\mathbf{I}_{J}-\mathbf{N}_{J}\mathbf{A}_{J})\mathbf{P}_{J} \mathbf{S}_{J-1}\mathbf{v}\|_{\mathbf{A}}}{\|\mathbf{v}\|_{\mathbf{A}_{0}}}\] \[=\max_{\mathbf{v}\in\mathbb{R}^{n_{0}},\mathbf{v}\neq\mathbf{0}} \frac{\|(\mathbf{I}_{J}-\mathbf{N}_{J}\mathbf{A}_{J})\mathbf{P}_{J}\mathbf{S}_ {J-1}\mathbf{v}\|_{\mathbf{A}}}{\|\mathbf{P}_{J}\mathbf{S}_{J-1}\mathbf{v}\|_{ \mathbf{A}}}\frac{\|\mathbf{P}_{J}\mathbf{S}_{J-1}\mathbf{v}\|_{\mathbf{A}}}{ \|\mathbf{v}\|_{\mathbf{A}_{0}}}\] \[\leq\max_{\mathbf{v}\in\mathbb{R}^{n_{0}},\mathbf{v}\neq \mathbf{0}}\|\mathbf{I}_{J}-\mathbf{N}_{J}\mathbf{A}_{J}\|_{\mathbf{A}}\frac{ \|\mathbf{P}_{J}\mathbf{S}_{J-1}\mathbf{v}\|_{\mathbf{A}}}{\|\mathbf{v}\|_{ \mathbf{A}_{0}}} \tag{13}\] \[=\|\mathbf{I}_{J}-\mathbf{N}_{J}\mathbf{A}_{J}\|_{\mathbf{A}}\max _{\mathbf{v}\in\mathbb{R}^{n_{0}},\mathbf{v}\neq\mathbf{0}}\frac{\|\mathbf{S}_ {J-1}\mathbf{v}\|_{\mathbf{A}_{J-1}}}{\|\mathbf{v}\|_{\mathbf{A}_{0}}}\] \[\leq\prod_{j=1}^{J}\|\mathbf{I}_{j}-\mathbf{N}_{j}\mathbf{A}_{j} \|_{\mathbf{A}_{j}}\max_{\mathbf{v}\in\mathbb{R}^{n_{0}},\mathbf{v}\neq \mathbf{0}}\frac{\|\mathbf{I}_{0}\mathbf{v}\|_{\mathbf{A}_{0}}}{\|\mathbf{v}\| _{\mathbf{A}_{0}}}\] \[=\prod_{j=1}^{J}\|\mathbf{I}_{j}-\mathbf{N}_{j}\mathbf{A}_{j}\|_{ \mathbf{A}_{j}},\]
where we have used the Galerkin condition (2) to obtain (13). The monotone convergence of the post-smoothers (3) in the \(\mathbf{A}_{j}\)-norms implies that \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}<1\). If post-smoothing is not used, i.e., \(\mathbf{N}_{j}=\mathbf{0}\), then \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}=1\).
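For moderate problem sizes, \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\) can also be evaluated numerically: forming \(\mathbf{S}\) explicitly, \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}^{2}\) is the largest eigenvalue of the generalized problem \(\mathbf{S}^{\top}\mathbf{A}\mathbf{S}\mathbf{w}=\lambda\mathbf{A}_{0}\mathbf{w}\). A MATLAB sketch, reusing the cell arrays `A`, `N`, `P` from the V-cycle sketch above (all assumed given):

```
% Sketch: form S = (I - N_J A_J) P_J ... (I - N_1 A_1) P_1 explicitly and
% estimate ||S||_{A0,A} via a generalized eigenvalue problem.
S = (speye(size(A{2},1)) - N{2}*A{2}) * P{2};        % factor for j = 1
for j = 2:J
    S = (speye(size(A{j+1},1)) - N{j+1}*A{j+1}) * (P{j+1} * S);
end
normS = sqrt(eigs(S'*A{J+1}*S, A{1}, 1, 'largestabs'));
```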
We summarize the results in the following theorem.
**Theorem 3.1**: _Let \(\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}\) be the approximation of \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\) computed after one iteration of the exV-cycle method with error propagation matrix \(\mathbf{E}\), \(\|\mathbf{E}\|_{\mathbf{A}}<1\), starting with an approximation \(\mathbf{x}^{\mathrm{prev}}\). Let \(\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}\) be an approximation of \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\) computed after one iteration of the inV-cycle method starting with the same approximation, and assume the error of the coarsest-level solver \(\mathbf{v}_{0}-\mathbf{v}_{0,\mathrm{in}}\) satisfies_
\[\|\mathbf{v}_{0}-\mathbf{v}_{0,\mathrm{in}}\|_{\mathbf{A}_{0}}\leq\gamma\| \mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{\mathbf{A}}, \tag{10}\]
_for some constant \(\gamma\). Then the following estimate on the \(\mathbf{A}\)-norm of the relative difference of the exV-cycle and inV-cycle approximations after one V-cycle iteration holds:_
\[\frac{\|\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{ \mathrm{new}}\|_{\mathbf{A}}}{\|\mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{ \mathbf{A}}}\leq\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\gamma, \tag{11}\]
_where \(\mathbf{S}\) is the matrix defined in (11) satisfying \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\leq 1\). Moreover,_
\[\|\mathbf{x}-\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}\|_{\mathbf{A}}\leq\left( \|\mathbf{E}\|_{\mathbf{A}}+\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\gamma \right)\|\mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{\mathbf{A}},\]
_and if the error of the coarsest-level solver satisfies (10) with \(\gamma\) such that_
\[\|\mathbf{E}\|_{\mathbf{A}}+\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\gamma<1, \tag{12}\]
_the inV-cycle method converges._
A multigrid method is said to be uniformly convergent if there exists a bound on the rate of convergence which is independent of the number of levels and of the size of the problem on the coarsest level; see e.g., [28, 30]. If we assume that the exV-cycle method converges uniformly and the error of the coarsest-level solver in the inV-cycle method satisfies (10) with \(\gamma\) such that (12) holds and \(\gamma\) is independent of the number of levels and the size of the problem on the coarsest level, the bound in Theorem 3.1 and the fact that \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}<1\) yield that the inV-cycle method converges uniformly.
Let us focus on the difference between the approximation \(\mathbf{x}_{\mathrm{in}}^{(n)}\) computed after \(n\) iterations of the inV-cycle method and the approximation \(\mathbf{x}_{\mathrm{ex}}^{(n)}\) computed after \(n\) iterations of the exV-cycle method, both from the same initial approximation \(\mathbf{x}^{(0)}\). The following development is inspired by [26, Section 4], where the authors derive a bound on the Euclidean norm of the difference between the residuals computed by the exact and inexact Richardson methods. Let \(\mathbf{g}^{(k)}\), \(k=1,\ldots,n\), denote the difference between the approximation computed by one iteration of the exV-cycle method starting with \(\mathbf{x}_{\mathrm{in}}^{(k-1)}\) and \(\mathbf{x}_{\mathrm{in}}^{(k)}\). Then
\[\mathbf{x}-\mathbf{x}_{\mathrm{in}}^{(k)}=\mathbf{E}(\mathbf{x}-\mathbf{x}_{ \mathrm{in}}^{(k-1)})+\mathbf{g}^{(k)}.\]
The difference \(\mathbf{x}_{\text{ex}}^{(n)}-\mathbf{x}_{\text{in}}^{(n)}\) can be rewritten using the terms \(\mathbf{g}^{(k)}\) as
\[\mathbf{x}_{\text{ex}}^{(n)}-\mathbf{x}_{\text{in}}^{(n)} =(\mathbf{x}-\mathbf{x}_{\text{in}}^{(n)})-(\mathbf{x}-\mathbf{x}_ {\text{ex}}^{(n)})\] \[=\mathbf{E}(\mathbf{x}-\mathbf{x}_{\text{in}}^{(n-1)})+\mathbf{g} ^{(n)}-\mathbf{E}^{n}(\mathbf{x}-\mathbf{x}^{(0)})\] \[=\mathbf{E}(\mathbf{E}(\mathbf{x}-\mathbf{x}_{\text{in}}^{(n-2)} )+\mathbf{g}^{(n-1)})+\mathbf{g}^{(n)}-\mathbf{E}^{n}(\mathbf{x}-\mathbf{x}^{( 0)})\] \[=\mathbf{E}^{2}(\mathbf{x}-\mathbf{x}_{\text{in}}^{(n-2)})+ \mathbf{E}\mathbf{g}^{(n-1)}+\mathbf{g}^{(n)}-\mathbf{E}^{n}(\mathbf{x}- \mathbf{x}^{(0)})\] \[=\mathbf{E}^{n}(\mathbf{x}-\mathbf{x}^{(0)})+\sum_{k=1}^{n} \mathbf{E}^{n-k}\mathbf{g}^{(k)}-\mathbf{E}^{n}(\mathbf{x}-\mathbf{x}^{(0)})\] \[=\sum_{k=1}^{n}\mathbf{E}^{n-k}\mathbf{g}^{(k)}.\]
Taking the \(\mathbf{A}\)-norm of both sides, using the triangle inequality and the multiplicativity of the matrix norm \(\|\cdot\|_{\mathbf{A}}\) we obtain
\[\|\mathbf{x}_{\text{ex}}^{(n)}-\mathbf{x}_{\text{in}}^{(n)}\|_{\mathbf{A}}=\Big\|\sum_{k=1}^{n}\mathbf{E}^{n-k}\mathbf{g}^{(k)}\Big\|_{\mathbf{A}}\leq\sum_{k=1}^{n}\|\mathbf{E}^{n-k}\mathbf{g}^{(k)}\|_{\mathbf{A}}\leq\sum_{k=1}^{n}\|\mathbf{E}\|_{\mathbf{A}}^{n-k}\|\mathbf{g}^{(k)}\|_{\mathbf{A}}.\]
Since \(\mathbf{g}^{(k)}\) is the difference between the approximation computed by one iteration of the exV-cycle scheme starting with \(\mathbf{x}_{\text{in}}^{(k-1)}\) and \(\mathbf{x}_{\text{in}}^{(k)}\), it can be written as the product of the matrix \(\mathbf{S}\) and the error of the solver on the coarsest level in the \(k\)th iteration of the inV-cycle method, i.e.,
\[\mathbf{g}^{(k)}=\mathbf{S}(\mathbf{v}_{0}^{(k)}-\mathbf{v}_{0,\text{in}}^{(k) }).\]
Substituting this into the bound above and using the norm of \(\mathbf{S}\) leads to
\[\|\mathbf{x}_{\text{ex}}^{(n)}-\mathbf{x}_{\text{in}}^{(n)}\|_{\mathbf{A}} \leq\sum_{k=1}^{n}\|\mathbf{E}\|_{\mathbf{A}}^{n-k}\|\mathbf{S}\|_{\mathbf{A}_ {0},\mathbf{A}}\|\mathbf{v}_{0}^{(k)}-\mathbf{v}_{0,\text{in}}^{(k)}\|_{ \mathbf{A}_{0}}.\]
This bound provides information on how the accuracy of the solver on the coarsest level during the individual solves affects the \(\mathbf{A}\)-norm of the difference of the approximations \(\mathbf{x}_{\text{in}}^{(n)}\) and \(\mathbf{x}_{\text{ex}}^{(n)}\).
Using the assumption (10) with \(\mathbf{x}^{\mathrm{prev}}=\mathbf{x}_{\mathrm{in}}^{(k-1)}\) and applying the bound (12) recursively, we have
\[\|\mathbf{x}_{\text{ex}}^{(n)}-\mathbf{x}_{\text{in}}^{(n)}\|_{ \mathbf{A}} \leq\sum_{k=1}^{n}\|\mathbf{E}\|_{\mathbf{A}}^{n-k}\gamma\|\mathbf{ S}\|_{\mathbf{A}_{0},\mathbf{A}}\|\mathbf{x}-\mathbf{x}_{\text{in}}^{(k-1)}\|_{ \mathbf{A}}\] \[\leq\sum_{k=1}^{n}\|\mathbf{E}\|_{\mathbf{A}}^{n-k}\gamma\| \mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}(\|\mathbf{E}\|_{\mathbf{A}}+\gamma\| \mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}})^{k-1}\|\mathbf{x}-\mathbf{x}^{(0)} \|_{\mathbf{A}}\] \[=\left(\gamma\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\sum_{k=1} ^{n}(\|\mathbf{E}\|_{\mathbf{A}}+\gamma\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A} })^{k-1}\|\mathbf{E}\|_{\mathbf{A}}^{n-k}\right)\|\mathbf{x}-\mathbf{x}^{(0)} \|_{\mathbf{A}}.\]
Utilizing the relation \(a^{n}-b^{n}=(a-b)\sum_{k=1}^{n}a^{k-1}b^{n-k}\) for \(a=\|\mathbf{E}\|_{\mathbf{A}}+\gamma\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\) and \(b=\|\mathbf{E}\|_{\mathbf{A}}\) leads to
\[\|\mathbf{x}_{\text{ex}}^{(n)}-\mathbf{x}_{\text{in}}^{(n)}\|_{\mathbf{A}}\leq \left[\left(\|\mathbf{E}\|_{\mathbf{A}}+\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A} }\gamma\right)^{n}-\|\mathbf{E}\|_{\mathbf{A}}^{n}\right]\|\mathbf{x}-\mathbf{x}^ {(0)}\|_{\mathbf{A}}.\]
We thus have an estimate on the distance in the \(\mathbf{A}\)-norm between the approximations \(\mathbf{x}_{\mathrm{in}}^{(n)}\) and \(\mathbf{x}_{\mathrm{ex}}^{(n)}\) in terms of \(\|\mathbf{E}\|_{\mathbf{A}}\), \(\gamma\), \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\) and \(n\). We summarize this result in the following theorem.
**Theorem 3.2**: _Let \(\mathbf{x}_{\mathrm{ex}}^{(n)}\) be the approximation of \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\) computed after \(n\) iterations of the exV-cycle method with error propagation matrix \(\mathbf{E}\), \(\|\mathbf{E}\|_{\mathbf{A}}<1\), starting with an approximation \(\mathbf{x}^{(0)}\). Let \(\mathbf{x}_{\mathrm{in}}^{(n)}\) be an approximation of \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\) computed after \(n\) iterations of the inV-cycle method, starting with the same approximation, and assume the errors of the coarsest-level solver \(\mathbf{v}_{0}^{(k)}-\mathbf{v}_{0,\mathrm{in}}^{(k)}\) satisfy_
\[\|\mathbf{v}_{0}^{(k)}-\mathbf{v}_{0,\mathrm{in}}^{(k)}\|_{\mathbf{A}_{0}} \leq\gamma\|\mathbf{x}-\mathbf{x}_{\mathrm{in}}^{(k-1)}\|_{\mathbf{A}},\quad k =1,\ldots,n, \tag{10}\]
_for a constant \(\gamma\). Then the following estimate on the \(\mathbf{A}\)-norm of the difference of \(\mathbf{x}_{\mathrm{ex}}^{(n)}\) and \(\mathbf{x}_{\mathrm{in}}^{(n)}\) holds:_
\[\|\mathbf{x}_{\mathrm{ex}}^{(n)}-\mathbf{x}_{\mathrm{in}}^{(n)}\|_{\mathbf{A} }\leq\left[\left(\|\mathbf{E}\|_{\mathbf{A}}+\|\mathbf{S}\|_{\mathbf{A}_{0}, \mathbf{A}}\gamma\right)^{n}-\|\mathbf{E}\|_{\mathbf{A}}^{n}\right]\|\mathbf{ x}-\mathbf{x}^{(0)}\|_{\mathbf{A}},\]
_where \(\mathbf{S}\) is the matrix defined in (11) and \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\leq 1\)._
## 4 Effects of the choice of the tolerance in relative residual stopping criterion
Stopping an iterative coarsest-level solver based on the size of the relative residual is frequently done both in the literature and in practice. One chooses a tolerance \(\tau\in(0,1)\) and stops the solver when
\[\frac{\|\mathbf{f}_{0}-\mathbf{A}_{0}\mathbf{v}_{0,\mathrm{in}}\|}{\|\mathbf{ f}_{0}\|}\leq\tau. \tag{11}\]
In this section we use the results from Section 3 to analyze the effect of the choice of the tolerance on the convergence of the inV-cycle method. We show that if inequality (11) holds then inequality (10) holds with a certain \(\gamma\) depending on the tolerance \(\tau\), and consequently we may use the results developed in the previous section.
We start by showing that the Euclidean norm of the right-hand side on the coarsest level can be bounded by the Euclidean norm of the residual of the previous approximation on the finest level. Rewriting \(\mathbf{f}_{0}\) using the individual steps in Algorithm 1, we have (note that \(\mathbf{v}_{j}^{[0]}=\mathbf{0}\), \(j=1,\ldots,J-1\))
\[\mathbf{f}_{0} =\mathbf{P}_{1}^{\top}(\mathbf{f}_{1}-\mathbf{A}_{1}\mathbf{v}_{1}^{[1]})=\mathbf{P}_{1}^{\top}(\mathbf{f}_{1}-\mathbf{A}_{1}(\mathbf{v}_{1}^{[0]}+\mathbf{M}_{1}(\mathbf{f}_{1}-\mathbf{A}_{1}\mathbf{v}_{1}^{[0]}))) \tag{12}\] \[=\mathbf{P}_{1}^{\top}(\mathbf{I}_{1}-\mathbf{A}_{1}\mathbf{M}_{1})\mathbf{f}_{1}=\prod_{j=1}^{J-1}\mathbf{P}_{j}^{\top}(\mathbf{I}_{j}-\mathbf{A}_{j}\mathbf{M}_{j})\mathbf{f}_{J-1}.\]
The vector \(\mathbf{f}_{J-1}\) can be expressed as
\[\mathbf{f}_{J-1} =\mathbf{P}_{J}^{\top}(\mathbf{b}-\mathbf{A}\mathbf{v}_{J}^{[1]} )=\mathbf{P}_{J}^{\top}(\mathbf{b}-\mathbf{A}(\mathbf{x}^{\mathrm{prev}}+ \mathbf{M}_{J}(\mathbf{b}-\mathbf{A}\mathbf{x}^{\mathrm{prev}}))) \tag{13}\] \[=\mathbf{P}_{J}^{\top}(\mathbf{I}_{J}-\mathbf{A}\mathbf{M}_{J})( \mathbf{b}-\mathbf{A}\mathbf{x}^{\mathrm{prev}}).\]
Denoting by \(\mathbf{T}\) the matrix
\[\mathbf{T}=\prod_{j=1}^{J}\mathbf{P}_{j}^{\top}(\mathbf{I}_{j}-\mathbf{A}_{j} \mathbf{M}_{j}),\]
and combining (12) with the expression for \(\mathbf{f}_{J-1}\) above, we have \(\mathbf{f}_{0}=\mathbf{T}\left(\mathbf{b}-\mathbf{A}\mathbf{x}^{\mathrm{prev}}\right)\). The matrix \(\mathbf{T}\) describes how the residual from the finest level is propagated to the coarsest level. Based on this relation, we can estimate the Euclidean norm of \(\mathbf{f}_{0}\) as
\[\|\mathbf{f}_{0}\|\leq\|\mathbf{T}\|\|\mathbf{b}-\mathbf{A}\mathbf{x}^{ \mathrm{prev}}\|. \tag{13}\]
The norm of \(\mathbf{T}\) can be bounded as
\[\|\mathbf{T}\|\leq\prod_{j=1}^{J}\|\mathbf{P}_{j}^{\top}\|\|\mathbf{I}_{j}- \mathbf{A}_{j}\mathbf{M}_{j}\|,\]
by a procedure analogous to that used in bounding \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\); see Section 3.
Using (13) and the estimates
\[\|\mathbf{b}-\mathbf{A}\mathbf{x}^{\mathrm{prev}}\| \leq \|\mathbf{A}\|^{\frac{1}{2}}\|\mathbf{x}-\mathbf{x}^{\mathrm{prev }}\|_{\mathbf{A}}, \tag{14}\] \[\|\mathbf{A}_{0}^{-1}\|^{-\frac{1}{2}}\|\mathbf{v}_{0}-\mathbf{v }_{0,\mathrm{in}}\|_{\mathbf{A}_{0}} \leq \|\mathbf{f}_{0}-\mathbf{A}_{0}\mathbf{v}_{0,\mathrm{in}}\|,\]
to bound the terms in (11), we obtain
\[\|\mathbf{v}_{0}-\mathbf{v}_{0,\mathrm{in}}\|_{\mathbf{A}_{0}}\leq\tau\| \mathbf{T}\|\|\mathbf{A}\|^{\frac{1}{2}}\|\mathbf{A}_{0}^{-1}\|^{\frac{1}{2}} \|\mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{\mathbf{A}}, \tag{15}\]
i.e., the inequality (10) holds with \(\gamma=\tau\|\mathbf{T}\|\|\mathbf{A}\|^{\frac{1}{2}}\|\mathbf{A}_{0}^{-1}\|^{\frac{1}{2}}\). Using Theorems 3.1 and 3.2 we have an answer to the question of how the choice of the tolerance in the relative residual stopping criterion for the coarsest-level solver affects the convergence of the V-cycle method.
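In practice, the quantities entering this \(\gamma\) can be estimated numerically, as is done in the experiments in Section 8 using the MATLAB functions eigs and normest. A sketch, assuming \(\mathbf{T}\) has been formed explicitly, analogously to \(\mathbf{S}\) in Section 3:

```
% Sketch: evaluate gamma = tau * ||T|| * ||A||^(1/2) * ||A0^(-1)||^(1/2).
normA     = eigs(A{J+1}, 1, 'largestabs');      % ||A||, A is SPD
normA0inv = 1 / eigs(A{1}, 1, 'smallestabs');   % ||A0^(-1)|| = 1/lambda_min(A0)
gamma     = tau * normest(T) * sqrt(normA * normA0inv);
```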
We note that since (15) was derived using the estimates (14), which may be a large overestimate, the resulting estimates may be loose and the actual quantities much smaller. We carry out numerical experiments investigating the accuracy of the estimates for the methods used in the motivating numerical experiment in Section 8.
## 5 Effect of the choice of parameters for the BLR solver
Solving the coarsest-level problem using a BLR direct solver, potentially executed in low precision, was studied, e.g., in [5]. To run the BLR solver, the user has to choose a low-rank threshold parameter \(\epsilon\) and a finite precision arithmetic. These parameters, which affect the solver accuracy, are chosen in the first V-cycle iteration, where the approximate factorization \(\mathbf{A}_{0}\approx\widetilde{\mathbf{L}}_{0}\widetilde{\mathbf{U}}_{0}\) is computed together with the coarsest-level approximate solution. In subsequent V-cycles, the coarsest-level approximations are computed by reusing the pre-computed factorization.
Higham and Mary [10] proved the following backward stability result for the BLR solver. Let \(u\) be the unit roundoff, \(p\) the number of blocks in a row of \(\mathbf{A}_{0}\), \(\xi_{p}\) a constant depending on \(p\), and \(c\) a constant depending on \(p\), the block size, and the so-called BLR rank of the LU factors of \(\mathbf{A}_{0}\). For any right-hand side \(\mathbf{f}_{0}\), the approximation \(\mathbf{v}_{0,\mathrm{in}}\) computed by the BLR solver satisfies
\[(\mathbf{A}_{0}+\Delta\mathbf{A}_{0})\mathbf{v}_{0,\mathrm{in}} = \mathbf{f}_{0}+\Delta\mathbf{f}_{0}, \tag{5.1}\]
\[\|\Delta\mathbf{A}_{0}\| \leq \left(\xi_{p}\epsilon+pu\right)\|\mathbf{A}_{0}\|_{F}+cu\|\widetilde{\mathbf{L}}_{0}\|_{F}\|\widetilde{\mathbf{U}}_{0}\|_{F}+\mathcal{O}(u\epsilon+u^{2}),\tag{5.2}\]
\[\|\Delta\mathbf{f}_{0}\| \leq pu(\|\mathbf{f}_{0}\|+\|\widetilde{\mathbf{L}}_{0}\|_{F}\|\widetilde{\mathbf{U}}_{0}\|_{F}\|\mathbf{v}_{0,\mathrm{in}}\|)+\mathcal{O}(u^{2}). \tag{5.3}\]
We use this result to derive an estimate on the Euclidean norm of the relative residual, which we later use to discuss how the choice of the low-rank threshold parameter \(\epsilon\) and the finite precision arithmetic may affect the convergence of the V-cycle method.
Using (5.1) and (5.3), the Euclidean norm of the residual can be estimated as
\[\begin{split}\|\mathbf{f}_{0}-\mathbf{A}_{0}\mathbf{v}_{0,\mathrm{in }}\|&=\|\Delta\mathbf{f}_{0}-\Delta\mathbf{A}_{0}\mathbf{v}_{0, \mathrm{in}}\|\\ &\leq\|\Delta\mathbf{f}_{0}\|+\|\Delta\mathbf{A}_{0}\|\|\mathbf{v }_{0,\mathrm{in}}\|\\ &\leq pu(\|\mathbf{f}_{0}\|+\|\widetilde{\mathbf{L}}_{0}\|_{F}\| \widetilde{\mathbf{U}}_{0}\|_{F}\|\mathbf{v}_{0,\mathrm{in}}\|)+\mathcal{O}(u ^{2})+\|\Delta\mathbf{A}_{0}\|\|\mathbf{v}_{0,\mathrm{in}}\|\\ &=\left(\|\Delta\mathbf{A}_{0}\|+pu\|\widetilde{\mathbf{L}}_{0}\| _{F}\|\widetilde{\mathbf{U}}_{0}\|_{F}\right)\|\mathbf{v}_{0,\mathrm{in}}\|+ pu\|\mathbf{f}_{0}\|+\mathcal{O}(u^{2}).\end{split} \tag{5.4}\]
Combining (5.4) with the estimate on the norm of \(\mathbf{v}_{0,\mathrm{in}}\),
\[\begin{split}\|\mathbf{v}_{0,\mathrm{in}}\|&\leq\| \mathbf{v}_{0,\mathrm{in}}-\mathbf{v}_{0}\|+\|\mathbf{v}_{0}\|\\ &\leq\|\mathbf{A}_{0}^{-1}\|\|\mathbf{f}_{0}-\mathbf{A}_{0} \mathbf{v}_{0,\mathrm{in}}\|+\|\mathbf{A}_{0}^{-1}\|\|\mathbf{f}_{0}\|,\end{split}\]
and rearranging the terms yields
\[\frac{\|\mathbf{f}_{0}-\mathbf{A}_{0}\mathbf{v}_{0,\mathrm{in}}\|}{\|\mathbf{ f}_{0}\|}\leq\frac{\|\Delta\mathbf{A}_{0}\|\|\mathbf{A}_{0}^{-1}\|+pu\| \widetilde{\mathbf{L}}_{0}\|_{F}\|\widetilde{\mathbf{U}}_{0}\|_{F}\|\mathbf{A} _{0}^{-1}\|+pu}{1-(\|\Delta\mathbf{A}_{0}\|\|\mathbf{A}_{0}^{-1}\|+pu\| \widetilde{\mathbf{L}}_{0}\|_{F}\|\widetilde{\mathbf{U}}_{0}\|_{F}\|\mathbf{A} _{0}^{-1}\|)}+\mathcal{O}(u^{2}).\]
Denoting \(\zeta=\|\Delta\mathbf{A}_{0}\|\|\mathbf{A}_{0}^{-1}\|+pu\|\widetilde{\mathbf{ L}}_{0}\|_{F}\|\widetilde{\mathbf{U}}_{0}\|_{F}\|\mathbf{A}_{0}^{-1}\|\) and using
\[\frac{\zeta+pu}{1-\zeta}=pu+(1+pu)\zeta+\mathcal{O}(\zeta^{2})\]
leads to
\[\begin{split}\frac{\|\mathbf{f}_{0}-\mathbf{A}_{0}\mathbf{v}_{0, \mathrm{in}}\|}{\|\mathbf{f}_{0}\|}\leq& pu+(1+pu)\left(\|\Delta \mathbf{A}_{0}\|\|\mathbf{A}_{0}^{-1}\|+pu\|\widetilde{\mathbf{L}}_{0}\|_{F}\| \widetilde{\mathbf{U}}_{0}\|_{F}\|\mathbf{A}_{0}^{-1}\|\right)\\ &+\mathcal{O}\left(\left(\|\Delta\mathbf{A}_{0}\|\|\mathbf{A}_{0}^{ -1}\|+pu\|\widetilde{\mathbf{L}}_{0}\|_{F}\|\widetilde{\mathbf{U}}_{0}\|_{F}\| \mathbf{A}_{0}^{-1}\|\right)^{2}\right)+\mathcal{O}(u^{2}).\end{split}\]
Utilizing the estimate (5.2) yields an estimate on the Euclidean norm of the relative residual,
\[\begin{split}\frac{\|\mathbf{f}_{0}-\mathbf{A}_{0}\mathbf{v}_{0, \mathrm{in}}\|}{\|\mathbf{f}_{0}\|}\leq&\underbrace{(\xi_{p} \epsilon+pu)\|\mathbf{A}_{0}\|_{F}\|\mathbf{A}_{0}^{-1}\|+(3cu+pu)\|\widetilde{ \mathbf{L}}_{0}\|_{F}\|\widetilde{\mathbf{U}}_{0}\|_{F}\|\mathbf{A}_{0}^{-1} \|+pu}_{\nu}\\ &+\mathcal{O}(u^{2}+\epsilon u+\epsilon^{2}).\end{split} \tag{5.5}\]
Having the estimate \(\nu\) on the upper bound of the Euclidean norm of the relative residual, we know (from the previous section) that the inequality (10) is satisfied, with
\[\gamma=\nu\|\mathbf{T}\|\|\mathbf{A}\|^{\frac{1}{2}}\|\mathbf{A}_{0}^{-1}\|^{ \frac{1}{2}}.\]
Theorems 3.1 and 3.2 thus provide an answer to the question of how the choice of \(\epsilon\) and \(u\) for the coarsest-level solver may affect the convergence of the V-cycle method. These results may be used to guide the choice of \(\epsilon\) and \(u\) in concrete settings. We note, however, that the resulting estimates are worst-case estimates. The actual value of the estimated quantities might be much smaller. We run numerical experiments studying the accuracy of the estimates for the methods used in the motivating experiment in Section 8.
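To give an idea of how such an estimate may be evaluated, a MATLAB sketch follows. The constants \(\xi_{p}\) and \(c\) from [10] are problem-dependent and are taken here only as loudly-labeled placeholders; the factors `Lt`, `Ut` are assumed to be returned by the BLR factorization, and `T`, `normA` are as in the sketch in Section 4.

```
% Sketch: first-order evaluation of nu from (5.5) and the resulting gamma.
% PLACEHOLDERS: xi_p and c depend on p, the block size, and the BLR ranks
% (see [10]); here they are set naively for illustration only.
u  = 2^-53;                          % unit roundoff, double precision
p  = ceil(size(A0,1) / blocksize);   % number of blocks per row
xi_p = p;  c = p;                    % placeholder constants
nA0inv = 1 / eigs(A0, 1, 'smallestabs');
nu = (xi_p*epsilon + p*u) * norm(A0,'fro') * nA0inv ...
   + (3*c*u + p*u) * norm(Lt,'fro') * norm(Ut,'fro') * nA0inv + p*u;
gamma = nu * normest(T) * sqrt(normA * nA0inv);
```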
## 6 Coarsest-level stopping criteria tailored to multigrid methods
In Section 4, we used the results from Section 3 to show how the choice of the tolerance in the relative residual stopping criterion may affect the convergence of the V-cycle method. In the derivation, we used the estimates (14), which may result in a large overestimate. The resulting estimate also contains the term \(\|\mathbf{T}\|\), which may be difficult to evaluate.
In this section we derive coarsest-level stopping criteria for which we are able to derive an estimate (10) with a \(\gamma\) that does not depend on the term \(\|\mathbf{T}\|\), or whose derivation does not rely on the estimates (14). First, in Subsection 6.1, we present a residual-based stopping criterion. Further, we state the CG algorithm, present an expression for the norms of the errors in CG, and reference a few computable upper bounds on the \(\mathbf{A}\)-norm of the error in Subsection 6.2. The multilevel error estimator due to Rüde [20, 21] is stated in Subsection 6.3. In Subsection 6.4, we combine these results and present a coarsest-level stopping criterion for CG in the inV-cycle method, in the context of FE discretizations of elliptic PDEs.
### Residual-based criterion
Given a tolerance \(\widetilde{\tau}\), assume that the approximation on the coarsest level satisfies
\[\frac{\|\mathbf{f}_{0}-\mathbf{A}_{0}\mathbf{v}_{0,\mathrm{in}}\|}{\|\mathbf{ b}-\mathbf{A}\mathbf{x}^{\mathrm{prev}}\|}\leq\widetilde{\tau}. \tag{21}\]
Using the estimates (14), we can show that (10) holds with \(\gamma=\widetilde{\tau}\|\mathbf{A}\|^{\frac{1}{2}}\|\mathbf{A}_{0}^{-1}\|^{\frac{1}{2}}\). The main advantage over the relative residual stopping criterion is that the resulting estimate does not include the term \(\|\mathbf{T}\|\). However, the resulting estimate may still be loose, since we have used (14). We examine the accuracy of the estimate in the numerical experiments in Section 8.
### CG and the upper bound on the \(\mathbf{A}\)-norm of the error
The conjugate gradient method (Algorithm 2) is one of the most popular methods for solving systems of linear equations with SPD matrices; see the original paper [9] and, e.g., [12]. We consider it as the coarsest-level solver in the V-cycle method. In the derivation of its stopping criteria, we make use of the following expression for the norms of the errors. Let \(\mathbf{x}_{\mathrm{CG}}^{(m)}\) be an approximation of the solution \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\) computed after \(m\) iterations of CG with initial guess \(\mathbf{x}_{\mathrm{CG}}^{(0)}\), and \(\mathbf{x}_{\mathrm{CG}}^{(m)}\neq\mathbf{x}\). Then (the subscript CG highlights that the residuals are those from Algorithm 2)
\[\|\mathbf{x}-\mathbf{x}_{\mathrm{CG}}^{(0)}\|_{\mathbf{A}}^{2}=\sum_{i=0}^{m- 1}\alpha^{(i)}\|\mathbf{r}_{\mathrm{CG}}^{(i)}\|^{2}+\|\mathbf{x}-\mathbf{x}_ {\mathrm{CG}}^{(m)}\|_{\mathbf{A}}^{2}. \tag{22}\]
The formula has already been shown in the original paper [9, Theorem 6:1, equation (22)]; see also the discussion in [12, Section 5.6.1]. It holds also during finite precision computations (up to a small insignificant inaccuracy) for numerically computed quantities; see [22, 23].
For the derivation of the stopping criteria below we need a computable upper bound \(\eta_{\mathrm{CG}}^{(m)}\) on the \(\mathbf{A}\)-norm of the error, i.e.,
\[\|\mathbf{x}-\mathbf{x}_{\mathrm{CG}}^{(m)}\|_{\mathbf{A}}\leq\eta_{\mathrm{ CG}}^{(m)}.\]
There is an extensive literature on upper bounds on the \(\mathbf{A}\)-norm of the error in CG; see, e.g., [7] and the references therein, as well as [6, 16, 15, 17]. Most of these
estimates are derived based on the interpretation of CG as a procedure for computing a Gauss quadrature approximation to a Riemann-Stieltjes integral. In our development below, we may use any computable upper bound \(\eta_{\text{CG}}^{(m)}\); the accuracy of the resulting convergence estimates of the V-cycle method will, however, be affected by the accuracy of the bound \(\eta_{\text{CG}}^{(m)}\).
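The simplest example of a computable upper bound, used here only for illustration (it is typically much weaker than the Gauss quadrature-based bounds cited above), follows from \(\|\mathbf{x}-\mathbf{x}_{\mathrm{CG}}^{(m)}\|_{\mathbf{A}}^{2}=(\mathbf{r}_{\mathrm{CG}}^{(m)})^{\top}\mathbf{A}^{-1}\mathbf{r}_{\mathrm{CG}}^{(m)}\): given any lower bound \(0<\mu\leq\lambda_{\min}(\mathbf{A})\), one may take

\[\eta_{\mathrm{CG}}^{(m)}=\frac{\|\mathbf{r}_{\mathrm{CG}}^{(m)}\|}{\mu^{1/2}}.\]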
```
\(\mathbf{r}^{(0)}=\mathbf{b}-\mathbf{A}\mathbf{x}^{(0)}\)
\(\mathbf{p}^{(0)}=\mathbf{r}^{(0)}\)
for \(i=1,2,\dots,\) until stopping criterion is satisfied do
 \(\alpha^{(i-1)}=\frac{\|\mathbf{r}^{(i-1)}\|^{2}}{(\mathbf{p}^{(i-1)})^{\top}\mathbf{A}\mathbf{p}^{(i-1)}}\)
 \(\mathbf{x}^{(i)}=\mathbf{x}^{(i-1)}+\alpha^{(i-1)}\mathbf{p}^{(i-1)}\)
 \(\mathbf{r}^{(i)}=\mathbf{r}^{(i-1)}-\alpha^{(i-1)}\mathbf{A}\mathbf{p}^{(i-1)}\)
 \(\delta^{(i)}=\frac{\|\mathbf{r}^{(i)}\|^{2}}{\|\mathbf{r}^{(i-1)}\|^{2}}\)
 \(\mathbf{p}^{(i)}=\mathbf{r}^{(i)}+\delta^{(i)}\mathbf{p}^{(i-1)}\)
end for
return \(\mathbf{x}^{(i)}\)
```
**Algorithm 2** Conjugate gradient algorithm, \(\mathbf{CG}(\mathbf{A},\mathbf{b},\mathbf{x}^{(0)})\).
### Multilevel error estimator
Let us further assume that the hierarchy of system matrices comes from the discretization of elliptic PDEs by the piecewise affine finite element method on a sequence of uniformly refined triangulations in 2D or 3D. In this case, we may make use of the so-called multilevel error estimators on the \(\mathbf{A}\)-norm of the error, derived based on the stable splitting of Sobolev spaces into the finite element spaces; see [20, 21].
Let \(\widetilde{\mathbf{x}}\) be an approximation of \(\mathbf{x}=\mathbf{A}^{-1}\mathbf{b}\), \(\mathbf{r}_{J}\) the residual on the finest level, \(\mathbf{r}_{J}=\mathbf{b}-\mathbf{A}\widetilde{\mathbf{x}}\), and \(\mathbf{r}_{j}\) its restriction to the \(j\)th level, i.e.,
\[\mathbf{r}_{j}=\prod_{\ell=j+1}^{J}\mathbf{P}_{\ell}^{\top}\mathbf{r}_{J}, \quad j=0,\dots,J-1.\]
There exist positive constants \(c_{S}\) and \(C_{S}\), depending on the computational domain, coefficients of the elliptic PDE and shapes of the elements in the triangulations but not on the number of levels used in the estimate such that
\[c_{S}\eta_{\text{ML}}(\widetilde{\mathbf{x}})\leq\|\mathbf{x}-\widetilde{ \mathbf{x}}\|_{\mathbf{A}}\leq C_{S}\eta_{\text{ML}}(\widetilde{\mathbf{x}}), \tag{10}\]
where
\[\eta_{\text{ML}}(\widetilde{\mathbf{x}})=\left(\sum_{j=1}^{J}\mathbf{r}_{j}^{\top}\mathbf{D}_{j}^{-1}\mathbf{r}_{j}+\mathbf{r}_{0}^{\top}\mathbf{A}_{0}^{-1}\mathbf{r}_{0}\right)^{\frac{1}{2}},\qquad\mathbf{D}_{j}=\operatorname{diag}\left(\mathbf{A}_{j}\right).\]
Evaluation of the terms associated with fine levels is straightforward. Evaluation of the term associated with the coarsest level \(\mathbf{r}_{0}^{\top}\mathbf{A}_{0}^{-1}\mathbf{r}_{0}\) may be difficult, especially in large scale settings. We will, however, use the multilevel error estimator in a way that does not require the computation of \(\mathbf{r}_{0}^{\top}\mathbf{A}_{0}^{-1}\mathbf{r}_{0}\).
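A MATLAB sketch of accumulating the fine-level part of \(\eta_{\mathrm{ML}}^{2}\), assuming the cell arrays from the sketches above and the finest-level residual `rJ`:

```
% Sketch: fine-level part of eta_ML^2; after the loop, r holds the
% coarsest-level restriction r_0 (its term is not needed below).
r = rJ;  etaML2_fine = 0;
for j = J:-1:1
    etaML2_fine = etaML2_fine + r' * (r ./ full(diag(A{j+1})));
    r = P{j+1}' * r;                 % restrict from level j to level j-1
end
```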
A drawback of the multilevel error estimator is that the constants \(c_{S}\) and \(C_{S}\) are generally unknown. In practice they are typically chosen based on the experience of the users with the concrete problem and method.
### Stopping criterion for CG in the V-cycle method
Given a constant \(\gamma\), we want to stop the CG iterations when inequality (10) of Theorem 3.1 is satisfied. Based on the lower bound in the multilevel error estimate (10) of Subsection 6.3 applied to \(\mathbf{x}^{\mathrm{prev}}\), we may stop when
\[\|\mathbf{v}_{0}-\mathbf{v}_{0,\mathrm{in}}^{(m)}\|_{\mathbf{A}_{0}}^{2}\leq \gamma^{2}c_{S}\left(\sum_{j=1}^{J}\mathbf{r}_{j}^{\top}\mathbf{D}_{j}^{-1} \mathbf{r}_{j}+\mathbf{r}_{0}^{\top}\mathbf{A}_{0}^{-1}\mathbf{r}_{0}\right). \tag{13}\]
Let us further assume that we are using the inV-cycle method without pre-smoothing. The projected residuals in the multilevel error estimator are then equal to the right-hand sides on the individual levels in the inV-cycle method, i.e., \(\mathbf{r}_{j}=\mathbf{f}_{j}\), \(j=0,1,\ldots,J\). The term corresponding to the coarsest level can be rewritten as
\[\mathbf{r}_{0}^{\top}\mathbf{A}_{0}^{-1}\mathbf{r}_{0}=\mathbf{f}_{0}^{\top} \mathbf{A}_{0}^{-1}\mathbf{f}_{0}=\|\mathbf{v}_{0}\|_{\mathbf{A}_{0}}^{2},\]
where \(\mathbf{v}_{0}\) is the exact solution of the problem on the coarsest level. Using these equalities and the expression (22) for the norms of the errors in CG for the coarsest-level problem with zero initial approximation, the inequality (13) can be rewritten as
\[\|\mathbf{v}_{0}-\mathbf{v}_{0,\mathrm{in}}^{(m)}\|_{\mathbf{A}_{0}}^{2}\leq \gamma^{2}c_{S}\left(\sum_{j=1}^{J}\mathbf{f}_{j}^{\top}\mathbf{D}_{j}^{-1} \mathbf{f}_{j}+\sum_{i=0}^{m-1}\alpha^{(i)}\|\mathbf{r}_{\mathrm{CG}}^{(i)}\| ^{2}+\|\mathbf{v}_{0}-\mathbf{v}_{0,\mathrm{in}}^{(m)}\|_{\mathbf{A}_{0}}^{2} \right).\]
Assuming that \(1-\gamma^{2}c_{S}>0\), the inequality can be reformulated as
\[\|\mathbf{v}_{0}-\mathbf{v}_{0,\mathrm{in}}^{(m)}\|_{\mathbf{A}_{0}}^{2} \leq\frac{\gamma^{2}c_{S}}{1-\gamma^{2}c_{S}}\left(\sum_{j=1}^{J} \mathbf{f}_{j}^{\top}\mathbf{D}_{j}^{-1}\mathbf{f}_{j}+\sum_{i=0}^{m-1}\alpha ^{(i)}\|\mathbf{r}_{\mathrm{CG}}^{(i)}\|^{2}\right).\]
Based on this estimate we suggest stopping the CG computation using an upper bound on the \(\mathbf{A}_{0}\)-norm of the error in CG \(\eta_{\mathrm{CG}}^{(m)}\) when
\[\left(\eta_{\mathrm{CG}}^{(m)}\right)^{2}\leq\frac{\gamma^{2}c_{S}}{1-\gamma^{ 2}c_{S}}\left(\sum_{j=1}^{J}\mathbf{f}_{j}^{\top}\mathbf{D}_{j}^{-1}\mathbf{f} _{j}+\sum_{i=0}^{m-1}\alpha^{(i)}\|\mathbf{r}_{\mathrm{CG}}^{(i)}\|^{2}\right). \tag{14}\]
Besides the constant \(c_{S}\), this criterion is fully computable and it implies satisfaction of the inequality (10). We carry out numerical experiments investigating the accuracy of the estimate for the methods used in the motivating experiment in Section 8.
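The criterion is straightforward to embed into CG. A minimal MATLAB sketch, assuming: `F` holds the fine-level sum \(\sum_{j=1}^{J}\mathbf{f}_{j}^{\top}\mathbf{D}_{j}^{-1}\mathbf{f}_{j}\) (which, without pre-smoothing, is the fine-level part accumulated in the sketch of \(\eta_{\mathrm{ML}}\) above), `mu` is a lower bound on the smallest eigenvalue of \(\mathbf{A}_{0}\), and the simple bound \(\eta_{\mathrm{CG}}^{(m)}=\|\mathbf{r}_{\mathrm{CG}}^{(m)}\|/\mu^{1/2}\) stands in for a more sophisticated upper bound. The function name is ours, not from the paper's repository.

```
% Sketch: CG (Algorithm 2) with the stopping criterion (14).
function v = cg_vcycle_stop(A0, f0, F, gamma, cS, mu, maxit)
kappa = gamma^2*cS / (1 - gamma^2*cS);   % assumes 1 - gamma^2*cS > 0
v = zeros(size(f0));  r = f0;  p = r;  quad = 0;
for i = 1:maxit
    rr    = r'*r;
    alpha = rr / (p'*(A0*p));
    v     = v + alpha*p;
    r     = r - alpha*(A0*p);
    quad  = quad + alpha*rr;             % running sum alpha^(i)*||r^(i)||^2
    if (r'*r)/mu <= kappa*(F + quad)     % (eta_CG^(m))^2 vs. criterion (14)
        break
    end
    p = r + ((r'*r)/rr) * p;             % delta^(i) = ||r^(i)||^2/||r^(i-1)||^2
end
end
```

Any of the upper bounds referenced in Subsection 6.2 can replace the simple bound; only the line computing the left-hand side of the test changes.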
## 7 Heuristic strategy for choosing the coarsest-level accuracy
Thus far we have presented estimates describing how the choice of the parameters in the coarsest-level stopping criteria may affect the convergence of the V-cycle method. In this section, we use these results to propose a heuristic strategy for choosing the parameters in the stopping criteria in order to try to fulfill the goal formulated in the second question in Subsection 2.1. That is, we aim to choose an accuracy for the coarsest-level solver such that the computed inV-cycle approximation is "close" to the approximation which would be computed by the exV-cycle method, and at the same time, do the least amount of work necessary on the coarsest level.
We formally formulate this goal as computing an inV-cycle approximation after \(n\) iterations, \(\mathbf{x}_{\mathrm{in}}^{(n)}\), whose distance in the \(\mathbf{A}\)-norm from the exV-cycle approximation
after \(n\) iterations, \(\mathbf{x}_{\mathrm{ex}}^{(n)}\), is less than an estimate on the \(\mathbf{A}\)-norm of the error of \(\mathbf{x}_{\mathrm{ex}}^{(n)}\), i.e.,
\[\|\mathbf{x}_{\mathrm{ex}}^{(n)}-\mathbf{x}_{\mathrm{in}}^{(n)}\|_{\mathbf{A}} \leq\|\mathbf{E}\|_{\mathbf{A}}^{n}\|\mathbf{x}-\mathbf{x}^{(0)}\|_{\mathbf{A}}. \tag{10}\]
According to Theorem 3.2, inequality (10) is satisfied if the errors of the coarsest-level solver in the first \(n\) V-cycle iterations satisfy the assumption of Theorem 3.2 with \(\gamma\) chosen such that
\[(\|\mathbf{E}\|_{\mathbf{A}}+\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\gamma) ^{n}-\|\mathbf{E}\|_{\mathbf{A}}^{n}=\|\mathbf{E}\|_{\mathbf{A}}^{n},\]
i.e.,
\[\gamma=(2^{\frac{1}{n}}-1)\|\mathbf{E}\|_{\mathbf{A}}\|\mathbf{S}\|_{\mathbf{ A}_{0},\mathbf{A}}^{-1}. \tag{11}\]
If we knew or had estimates of \(\|\mathbf{E}\|_{\mathbf{A}}\), \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\), and the number \(n\) of V-cycles which will be carried out on the finest level, we could choose the parameters in the stopping criteria (11), (21), or (14) such that the coarsest-level errors satisfy the assumption of Theorem 3.2 with \(\gamma\) given by the formula (11) above. We present a numerical experiment illustrating the use of this strategy for choosing the parameter in the stopping criterion for CG (14) in Section 8.
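In MATLAB notation, the heuristic amounts to a one-line choice once estimates `normE` of \(\|\mathbf{E}\|_{\mathbf{A}}\), `normS` of \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\) (e.g., computed as sketched in Section 3), and an a priori guess `n` of the number of V-cycles are available:

```
% Sketch: heuristic choice of gamma, cf. (11).
gamma = (2^(1/n) - 1) * normE / normS;
```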
## 8 Numerical experiments
In this section we present numerical experiments illustrating some of the key results derived in this paper. We consider the same model problem and analogous V-cycle methods as in the motivating experiment in Subsection 2.1.
### Comparison of the stopping criteria
In Sections 4 and 6 we studied how the choice of the parameters in the coarsest-level stopping criteria may affect the convergence of the V-cycle method. In particular, we derived estimates (11) on the \(\mathbf{A}\)-norm of the relative difference of the exV-cycle and inV-cycle approximations after one V-cycle iteration, with \(\gamma\) depending on the choices of the parameters in the stopping criteria. In the following numerical experiment we examine the accuracy of these estimates.
We consider in total 14 different variants of the V-cycle method, based on the number of levels (8 or 2), the solver on the coarsest level (CG or MINRES), and the coarsest-level stopping criteria. We apply the relative residual criterion (11), the residual-based criterion (21), and the criterion for CG (14). We also consider stopping the coarsest-level solver when inequality (10) is approximately satisfied. To compare the accuracy of the estimate (11) for the different stopping criteria, we choose their parameters such that we know that (11) a priori holds with \(\gamma=2^{-10}=9.77E-04\). In the criterion for CG (14) and the criterion based on (10) we thus set \(\gamma=2^{-10}\). For the relative residual criterion (11) this means choosing \(\tau\) as
\[\tau=2^{-10}\|\mathbf{T}\|^{-1}\|\mathbf{A}\|^{-\frac{1}{2}}\|\mathbf{A}_{0} ^{-1}\|^{-\frac{1}{2}},\]
and for the residual based criterion (12) choosing \(\tilde{\tau}\) as
\[\tilde{\tau}=2^{-10}\|\mathbf{A}\|^{-\frac{1}{2}}\|\mathbf{A}_{0}^{-1}\|^{- \frac{1}{2}}.\]
To approximate the errors on the finest and coarsest level we compute the solutions using the MATLAB backslash operator. We use the MATLAB function eigs to approximate the norms \(\|\mathbf{A}\|\), \(\|\mathbf{A}_{0}^{-1}\|\) and the function normest to approximate \(\|\mathbf{T}\|\). In the criterion for CG (13), we choose \(c_{S}=0.85\) and we use the upper bound on the
\(\mathbf{A}_{0}\)-norm of the error in CG stated in [16, inequality (6)]. This upper bound requires a lower bound on the smallest eigenvalue of \(\mathbf{A}_{0}\). We estimate it by the MATLAB function eigs.
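In a Python/SciPy setting, analogous estimates could be computed as in the sketch below. This is only an illustration of the computations involved, not the authors' MATLAB code; `A`, `A0`, and `T` stand for sparse matrices representing the finest-level operator, the coarsest-level operator, and the operator whose norm \(\|\mathbf{T}\|\) enters the criterion (11).

```python
import numpy as np
import scipy.sparse.linalg as spla

def estimate_norms(A, A0, T, iters=50, seed=0):
    """Estimate ||A||, ||A0^{-1}|| = 1/lambda_min(A0) for SPD A, A0,
    and ||T|| (largest singular value), mimicking eigs/normest."""
    norm_A = spla.eigsh(A, k=1, which='LA', return_eigenvectors=False)[0]
    lam_min = spla.eigsh(A0, k=1, which='SA', return_eigenvectors=False)[0]
    # power iteration on T^T T for the largest singular value of T
    x = np.random.default_rng(seed).standard_normal(T.shape[1])
    for _ in range(iters):
        y = T.T @ (T @ x)
        x = y / np.linalg.norm(y)
    norm_T = np.sqrt(x @ (T.T @ (T @ x)))
    return norm_A, 1.0 / lam_min, norm_T

# tau for the relative residual criterion (11):
# norm_A, norm_A0_inv, norm_T = estimate_norms(A, A0, T)
# tau = 2.0**-10 / (norm_T * np.sqrt(norm_A * norm_A0_inv))
```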
We run the V-cycle method starting with a zero initial approximate solution and stop when the relative \(\mathbf{A}\)-norm of the error is (approximately) lower than \(10^{-11}\). After each inV-cycle iteration we (approximately) compute the \(\mathbf{A}\)-norm of the relative difference of the exV-cycle and inV-cycle approximations, i.e.,
\[\frac{\|\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{ \mathrm{new}}\|_{\mathbf{A}}}{\|\mathbf{x}-\mathbf{x}^{\mathrm{prev}}\|_{ \mathbf{A}}}, \tag{10}\]
for \(\mathbf{x}^{\mathrm{prev}}=\mathbf{x}_{\mathrm{in}}^{(k)}\), \(k=0,1,\dots\). The vectors \(\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}\) are approximated by one iteration of the V-cycle method with MATLAB backslash as the solver on the coarsest level.
The results are summarized in Figure 3. They also include the number of coarsest-level solver iterations performed in each V-cycle. The top four and bottom four plots correspond to the variants with CG and MINRES, respectively. The plots in the left and right columns correspond to the variants with 8 and 2 levels, respectively.
We first focus on the results of the variants where the solver on the coarsest level is stopped based on (11). We see that the ratios (10) are close to the expected estimate \(\gamma\). We remark that the values computed after the last V-cycles are larger than \(\gamma\). We observed that in these cases the approximation of the difference \(\|\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}\|_{\mathbf{A}}\) is close to the maximal level of attainable accuracy. We thus believe that these outliers are caused by the use of finite precision arithmetic. Dividing the computed ratios (excluding the mentioned outliers) by \(\gamma\) and taking the maximum, we get the following lower bounds on the norm of \(\mathbf{S}\), see (12): \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\geq 0.664\) and \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\geq 0.9997\) for the variants with 8 and 2 levels, respectively.
The results of the variants with CG stopped using the criterion (12) are close to the results of the variants where CG is stopped based on (11), and also close to the expected estimate \(\gamma\). The number of CG iterations performed in each V-cycle does not differ much between these two variants. This means that, in this experiment, the stopping criterion (12) can be effectively used to control the ratios (10).
Looking at the results of the variants with the residual based criteria (13) and (14), the computed ratios (10) are significantly lower than the expected estimate \(\gamma\). This may be a consequence of the use of the estimates (11) and/or (12) in the derivation of the stopping criteria. In most of the settings the coarsest-level solvers perform significantly more iterations than when the stopping criterion based on (11) is used. In the setting with 2 levels and MINRES, the difference in the number of iterations is less significant. The results of the variants with (14) are better than the results of the variant with (13) in all settings. The ratios (10) start increasing after the 8th and 4th V-cycle iterations for the variants with 2 and 8 levels, respectively. We observed that in these cases the approximation of the difference \(\|\mathbf{x}_{\mathrm{ex}}^{\mathrm{new}}-\mathbf{x}_{\mathrm{in}}^{\mathrm{new}}\|_{\mathbf{A}}\) starts to approach the maximal level of attainable accuracy, and we thus think that this behavior is caused by the use of finite precision arithmetic.
Figure 3: V-cycle method with iterative coarsest-level solvers. The coarsest-level solvers are stopped using the relative residual criterion (11), the residual based criterion (12), the criterion for CG (13), and the inequality (14), with parameters chosen such that (15) holds with \(\gamma=2^{-10}\).

### Heuristic strategy for choosing the coarsest-level accuracy

In the next experiment we illustrate the use of the heuristic strategy described in Section 7. We consider solving the coarsest-level problem in the V-cycle method by CG stopped using the criterion (12). The parameter \(\gamma\) for the criterion (12) is chosen automatically based on the heuristic strategy (11). We consider \(n=10\) and bound \(\|\mathbf{S}\|_{\mathbf{A}_{0},\mathbf{A}}\) from above by 1. Since we have no a priori information about \(\|\mathbf{E}\|_{\mathbf{A}}\), we approximate it after each V-cycle iteration. The approximation of \(\|\mathbf{E}\|_{\mathbf{A}}\) is based on the following simplifications

\[\|\mathbf{E}\|_{\mathbf{A}}\geq\frac{\|\mathbf{x}-\mathbf{x}_{\mathrm{ex}}^{(k-1)}\|_{\mathbf{A}}}{\|\mathbf{x}-\mathbf{x}_{\mathrm{ex}}^{(k-2)}\|_{\mathbf{A}}}\approx\frac{\|\mathbf{x}-\mathbf{x}_{\mathrm{in}}^{(k-1)}\|_{\mathbf{A}}}{\|\mathbf{x}-\mathbf{x}_{\mathrm{in}}^{(k-2)}\|_{\mathbf{A}}}\geq\frac{c_{S}\tilde{\eta}_{\mathrm{ML}}(\mathbf{x}_{\mathrm{in}}^{(k-1)})}{C_{S}\tilde{\eta}_{\mathrm{ML}}(\mathbf{x}_{\mathrm{in}}^{(k-2)})},\]
where \(\tilde{\eta}_{\mathrm{ML}}\) is the version of the multilevel error estimator presented in [25] with approximate computation of the term corresponding to the coarsest level. We choose the constants \(c_{S}\) and \(C_{S}\) as \(0.85\) and \(1\), respectively. To sum up, for the first V-cycle iteration we choose an initial parameter \(\gamma^{(1)}\); in the subsequent \(k\)th V-cycle iterations, \(\gamma^{(k)}\) is computed as
\[\gamma^{(k)}=(2^{\frac{1}{10}}-1)\frac{0.85\tilde{\eta}_{\mathrm{ML}}(\mathbf{ x}_{\mathrm{in}}^{(k-1)})}{\tilde{\eta}_{\mathrm{ML}}(\mathbf{x}_{\mathrm{in}}^{(k-2 )})}. \tag{10}\]
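The resulting adaptive loop is sketched below; the V-cycle routine and the estimator \(\tilde{\eta}_{\mathrm{ML}}\) are placeholders, so only the update formula itself is spelled out.

```python
def next_gamma(eta_prev, eta_prev2, n=10, c_S=0.85):
    """Adaptive parameter gamma^(k) from the update formula above;
    eta_prev and eta_prev2 are the multilevel error-estimator values
    from the two most recent V-cycle iterations."""
    return (2.0 ** (1.0 / n) - 1.0) * c_S * eta_prev / eta_prev2

# Sketch of the driver (v_cycle and the estimator are placeholders):
# gamma, etas = gamma_init, []
# for k in range(1, max_cycles + 1):
#     x, eta = v_cycle(x, coarse_tol=gamma)
#     etas.append(eta)
#     if k >= 2:
#         gamma = next_gamma(etas[-1], etas[-2])
```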
We again consider the V-cycle scheme with \(8\) or \(2\) levels and stop the computation when the relative \(\mathbf{A}\)-norm of the error is (approximately) lower than \(10^{-11}\). The results for different choices of the initial parameter \(\gamma^{(1)}\) are summarized in Figure 4. The computed values of \(\gamma^{(k)}\) are plotted in Figure 5.
We see that the stopping strategy is not overly sensitive to the choice of the initial parameter \(\gamma^{(1)}\). Comparing the results with those of the motivating experiment in Subsection 2.1, the strategy was mostly successful in setting the coarsest-level accuracy such that the method converged in the same number of V-cycles as the variant with MATLAB backslash. At the same time, in the setting with 2 levels, the total number of CG iterations performed on the coarsest level was, for all choices of the initial parameter \(\gamma^{(1)}\), not significantly higher than the total number of CG iterations for the highlighted tolerance in Figure 1. The performance of the strategy in the setting with 8 levels is worse; the total number of CG iterations, however, does not dramatically increase with decreasing initial parameter \(\gamma^{(1)}\).

Figure 4: V-cycle method with CG as the solver on the coarsest level. CG is stopped using the criterion (12) with parameters \(\gamma^{(k)}\) chosen based on the heuristic strategy (11) with different choices of the initial parameter \(\gamma^{(1)}\).
### Accuracy of the estimates for the BLR coarsest-level solver
In Section 5 we discussed how the choice of the low-rank threshold parameter and the finite precision arithmetic for the coarsest-level BLR solver may affect the convergence of the V-cycle method. The derivation of the results was based on the estimate (12) on the Euclidean norm of the relative residual of the approximation computed by the BLR solver. In this section we perform a numerical experiment investigating the accuracy of the estimate (12) and then discuss the accuracy of the derived results.
We consider the V-cycle methods with the BLR direct coarsest-level solver with low-rank threshold parameter \(\epsilon=2^{-9}\). The other parameters of the BLR solver are chosen the same as in the corresponding motivating experiment in Subsection 2.1. We run the V-cycle method starting with a zero initial approximate solution and stop when the relative \(\mathbf{A}\)-norm of the error is (approximately) less than \(10^{-11}\), or when the number of V-cycles is greater than or equal to \(30\).
The values of the Euclidean norm of the relative residuals of the approximation on the coarsest level in each V-cycle iteration are plotted in Figure 6 together with the first term of the estimate (12), \(\xi_{p}\epsilon\|\mathbf{A}_{0}\|_{F}\|\mathbf{A}_{0}^{-1}\|\). Since we are using the variant of the BLR solver with global scaled threshold and intermediate recompressions, \(\xi_{p}=p^{2}/\sqrt{6}\); see [10]. We see that even the first term of the estimate (12) provides a significant overestimate of the norms of the relative residuals on the coarsest level in this setting.
The results discussed in Section 5, which are derived from the estimate (12), thus may not be used to guide the choice of the low-rank threshold parameter and the finite precision arithmetic for the BLR solver in this setting.
## 9 Conclusions and open problems
We presented an approach for analyzing the effects of approximate coarsest-level solves on the convergence of the V-cycle method for SPD problems. Based on this approach we discussed several coarsest-level stopping criteria, using which the relative difference between the exV-cycle and inV-cycle approximation after one V-cycle iteration can be controlled. The numerical experiments indicate that the most effective coarsest-level stopping criteria, in terms of the lowest number of coarsest-level iterations necessary, are those based on accurate estimates of the \(\mathbf{A}\)-norms of the errors on the finest and coarsest level.
We also studied the effects of the choices of the low-rank threshold parameter and finite precision arithmetic for the BLR direct coarsest-level solver. The numerical experiments indicate that these results may be a large overestimation and their use in practice may be limited. Obtaining more accurate estimates for these solvers remains an open problem. It appears that the quantities we have used in the estimates are not sufficient for describing the resulting behavior; obtaining an improvement may involve taking into account other properties of the matrices or the multigrid methods.

Figure 5: Parameters \(\gamma^{(k)}\) computed based on the heuristic strategy (11) with different choices of the initial parameter \(\gamma^{(1)}\).
Other open problems include the generalization to non-symmetric problems, the study of inexact W-cycle schemes, and determining how to choose the coarsest-level accuracy when using multigrid methods as preconditioners.
## Acknowledgments
The authors wish to thank Petr Tichy for his useful comments on error estimation in CG and Jaroslav Hron for his suggestions when generating the system matrices in FEniCS. The authors acknowledge the support of the Erasmus+ program that enabled Petr Vacek to spend the Winter semester 2021-2022 at Trinity College Dublin. During this visit the basis of the paper was developed.
|
2309.01068
|
A Cartesian grid-based boundary integral method for moving interface
problems
|
This paper proposes a Cartesian grid-based boundary integral method for
efficiently and stably solving two representative moving interface problems,
the Hele-Shaw flow and the Stefan problem. Elliptic and parabolic partial
differential equations (PDEs) are reformulated into boundary integral equations
and are then solved with the matrix-free generalized minimal residual (GMRES)
method. The evaluation of boundary integrals is performed by solving equivalent
and simple interface problems with finite difference methods, allowing the use
of fast PDE solvers, such as fast Fourier transform (FFT) and geometric
multigrid methods. The interface curve is evolved utilizing the $\theta-L$
variables instead of the more commonly used $x-y$ variables. This choice
simplifies the preservation of mesh quality during the interface evolution. In
addition, the $\theta-L$ approach enables the design of efficient and stable
time-stepping schemes to remove the stiffness that arises from the curvature
term. Ample numerical examples, including simulations of complex viscous
fingering and dendritic solidification problems, are presented to showcase the
capability of the proposed method to handle challenging moving interface
problems.
|
Han Zhou, Wenjun Ying
|
2023-09-03T03:43:05Z
|
http://arxiv.org/abs/2309.01068v1
|
# A Cartesian grid-based boundary integral method for moving interface problems
###### Abstract
This paper proposes a Cartesian grid-based boundary integral method for efficiently and stably solving two representative moving interface problems, the Hele-Shaw flow and the Stefan problem. Elliptic and parabolic partial differential equations (PDEs) are reformulated into boundary integral equations and are then solved with the matrix-free generalized minimal residual (GMRES) method. The evaluation of boundary integrals is performed by solving equivalent and simple interface problems with finite difference methods, allowing the use of fast PDE solvers, such as fast Fourier transform (FFT) and geometric multigrid methods. The interface curve is evolved utilizing the \(\theta-L\) variables instead of the more commonly used \(x-y\) variables. This choice simplifies the preservation of mesh quality during the interface evolution. In addition, the \(\theta-L\) approach enables the design of efficient and stable time-stepping schemes to remove the stiffness that arises from the curvature term. Ample numerical examples, including simulations of complex viscous fingering and dendritic solidification problems, are presented to showcase the capability of the proposed method to handle challenging moving interface problems.
keywords: Hele-Shaw flow; The Stefan problem; Cartesian grid; Boundary integral equations; Kernel-free boundary integral method; Small scale decomposition
## 1 Introduction
Moving interface problems are encountered in various fields of natural sciences and industrial applications, ranging from mathematics [1; 2; 3] and fluid mechanics [4; 5; 6] to material sciences [7; 8; 9; 10] and imaging sciences [11; 12]. In these problems, interfaces are present, dividing the surrounding region into different sub-regions where the underlying physics is governed by PDEs.
When the motion of the interface is not known in advance and needs to be determined as part of solving the entire problem, it is referred to as a free boundary problem. Free boundary problems are inherently nonlinear and pose computational challenges due to the coupling between the dynamics of the interface and the underlying PDEs.
In this paper, we focus on two representative free boundary problems: the Hele-Shaw flow [13; 14; 15] and the Stefan problem [9; 16; 17]. The Hele-Shaw problem describes the flow of
a viscous fluid in a thin gap between two parallel plates and is an elliptic type problem. The Stefan problem, on the other hand, models the solidification or melting of a material with a moving interface and is a parabolic type problem. These problems have been extensively studied in terms of their numerical methods and analysis, due to their significance in applications and the computational difficulties they pose.
Developing accurate and efficient numerical methods for moving interface problems poses several challenges. The first challenge lies in accurately approximating a complex and evolving interface. Although various numerical methods have been developed, such as the front-tracking method [18; 19; 16], the level set method [20; 21; 22; 23; 24], the volume of fluid method [25; 26], and phase-field methods [27; 28; 29], obtaining both simplicity and accuracy in approximations remains challenging. The second challenge is related to solving PDEs on complex and time-varying domains. Due to the evolving interface, methods like the classical finite element method, which relies on body-fitted meshes, require frequent re-meshing procedures to maintain mesh quality. This not only leads to computational inefficiencies but also requires additional implementation efforts. Additionally, conventional boundary integral equation methods [17; 14; 30; 31; 32; 15] are efficient for homogeneous PDEs but have limitations such as the need for dense matrix-vector multiplications and evaluations of singular integrals. In recent decades, Cartesian grid-based methods have gained popularity for moving interface problems. These methods, including the immersed boundary method (IBM) [33; 34; 35], the immersed interface method (IIM) [36; 37; 38; 39], and the ghost fluid method (GFM) [40; 41; 42; 43], involve immersing the moving interface into a fixed background mesh, typically a Cartesian grid. This approach simplifies the algorithm and improves computational efficiency.
When considering the effect of surface tension on the interface, the problem formulation includes the Laplace-Young equation or the Gibbs-Thomson relation to account for local curvature. However, the presence of high-order derivatives in the curvature introduces stiffness into the evolution problem and imposes strict stability constraints on the time step when using explicit time-stepping schemes. Conversely, using a straightforward implicit scheme becomes complicated and computationally expensive due to the nonlinear and nonlocal nature of the interface velocity as a function of the interface position. To address these challenges, Hou et al. developed the small scale decomposition (SSD) method [14]. This method successfully removes stiffness induced by surface tension by employing a special \(\theta-L\) formulation of the interface and an implicit discretization for the stiff but linear part of the evolution equation. As a result, the SSD method allows for the application of large time steps, improving computational efficiency. The SSD method has been adapted for solving multiple moving interface problems, including microstructural evolution in inhomogeneous elastic media [44], elastic membranes in viscous flows [45; 46], solid tumor growth [47], and crystal growth [48; 49; 50], among others.
The main objective of this paper is to develop an efficient and stable Cartesian grid-based boundary integral method for solving the Hele-Shaw flow and the Stefan problem. By reformulating the PDEs as boundary integral equations, we use a finite difference analogue of the boundary integral method known as the kernel-free boundary integral (KFBI) method. The KFBI method is based on potential theory and is specifically designed to solve boundary value problems of elliptic equations on irregular domains [51]. It takes advantage of fast PDE solvers on Cartesian grids and eliminates the need to evaluate singular and nearly singular integrals. The method has been extended to higher-order versions and has been successfully applied to solve various problems in the past [52; 53; 54]. To accurately track the moving interface, the \(\theta-L\) approach is utilized to represent the Jordan curve and achieve equidistant interface discretization. In addition, the SSD method is employed to enhance the efficiency and stability of numerical
schemes for interface evolutions in both the Hele-Shaw flow and the Stefan problem.
The remainder of this paper is organized as follows: In Section 2, we outline the governing equations of the Hele-Shaw problem and the Stefan problem. Section 3 introduces time discretization methods for time-dependent PDEs and formulations of boundary integral equations. The kernel-free boundary integral method is elaborated upon in Section 4. Numerical approaches for interface evolution are described in Section 5. Numerical results are presented in Section 6. Finally, in Section 7, we briefly discuss the advantages of the proposed method and potential areas for improvement.
## 2 Moving interface problems
Let \(\Gamma:[0,2\pi)\times[0,T]\to\mathbb{R}^{2}\) be a time-dependent and closed curve separating a domain \(\mathcal{U}\subset\mathbb{R}^{2}\) into an interior domain \(\Omega^{+}\) and an exterior domain \(\Omega^{-}\). Generally, the domain \(\mathcal{U}\) can be bounded or unbounded. We shall assume that the interior domain \(\Omega^{+}\) is bounded and can be covered by a bounding box \(\mathcal{B}\); see Fig. 1. In moving interface problems, the physical quantities defined in \(\Omega^{+}\) and \(\Omega^{-}\) satisfy certain PDEs. Interface or boundary conditions are also prescribed on the moving interface \(\Gamma\) for the PDEs.
### The Hele-Shaw flow
We consider an exterior moving interface problem of the Hele-Shaw flow that describes the motion of an air bubble in a radial Hele-Shaw cell. The moving interface \(\Gamma\) separates the full space \(\mathbb{R}^{2}\) into an air domain \(\Omega^{+}\) and an oil domain \(\Omega^{-}\). In the air domain, the pressure is assumed to be constant, which we may take to be zero. In the oil domain, the fluid velocity \(\mathbf{u}\) and pressure \(p\) satisfy Darcy's law, together with the incompressibility constraint:
\[\mathbf{u}=-M\nabla p,\quad\nabla\cdot\mathbf{u}=0,\quad\text{in }\Omega^{-}, \tag{1}\]
where \(M=\frac{b^{2}}{12\mu}\) is the mobility of the fluid, \(b\) is the gap width of the Hele-Shaw cell, and \(\mu\) is the viscosity. Consider constant injection of air at the origin,
\[\nabla\cdot\mathbf{u}=2\pi J\delta(\mathbf{x}),\quad\text{in }\Omega^{-}, \tag{2}\]
where \(J\geq 0\) is a constant injection rate and \(\delta\) is the Dirac delta function modeling the air injection. Combining equations (1) and (2), and assuming \(M=1\), we obtain the Poisson equation for the pressure
\[\Delta p=-2\pi J\delta(\mathbf{x}),\quad\text{in }\Omega^{-}. \tag{3}\]
Figure 1: A schematic of a moving interface problem.
Although the Dirac delta function in the right-hand side of (3) vanishes in \(\Omega^{-}\), it prescribes the behavior of the solution at infinity \(p=-J\ln|\mathbf{x}|+C+o(1)\) as \(|\mathbf{x}|\to\infty\), where \(C\) is a constant ambient pressure. On the moving interface \(\Gamma\), the pressure is given by the Laplace-Young condition
\[p=-\sigma\kappa,\quad\text{on }\Gamma, \tag{4}\]
where \(\sigma>0\) is the surface tension coefficient and \(\kappa\) is the local curvature of \(\Gamma\). In addition, the motion of the moving interface \(\Gamma\) follows the kinematic condition
\[\frac{d\mathbf{x}}{dt}=\mathbf{u},\quad\text{for }\mathbf{x}\in\Gamma. \tag{5}\]
In fact, since we are only interested in the shape of \(\Gamma\), it suffices to prescribe the normal velocity \(V_{n}\),
\[V_{n}=\frac{d\mathbf{x}}{dt}\cdot\mathbf{n}=\mathbf{u}\cdot\mathbf{n}=- \partial_{\mathbf{n}}p,\quad\text{for }\mathbf{x}\in\Gamma. \tag{6}\]
where \(\mathbf{n}\) is the unit outward normal of \(\Gamma\).
### The Stefan problem
We also consider the Stefan problem that models diffusion-driven phase changes between solid and liquid phases. Here, the moving solid-liquid interface \(\Gamma\) separates the box domain \(\mathcal{B}\) into the solid region \(\Omega^{+}\) and the liquid region \(\Omega^{-}\). In the classical Stefan problem, the temperature field \(T\) satisfies the heat equation in both solid and liquid regions,
\[\partial_{t}T=\Delta T,\quad\text{in }\Omega^{+}\cup\Omega^{-}. \tag{7}\]
On the solid-liquid interface, the temperature is continuous and is coupled with the surface tension and the molecular kinematic effect through the Gibbs-Thomson relation
\[T+\varepsilon_{C}(\mathbf{n})\kappa+\varepsilon_{V}(\mathbf{n})V_{n}=0,\quad \text{on }\Gamma, \tag{8}\]
where \(\varepsilon_{C}(\mathbf{n})\) and \(\varepsilon_{V}(\mathbf{n})\) are surface tension and molecular kinetic coefficients, which are non-negative and may depend on the orientation of the interface. The normal velocity of \(\Gamma\) is determined by the Stefan equation
\[V_{n}=[\partial_{\mathbf{n}}T]. \tag{9}\]
Here, the notation \([\cdot]\) means the jump value of a quantity across the interface. For example, given a piece-wise continuous function \(q\), we have
\[[q](\mathbf{x})=q^{+}(\mathbf{x})-q^{-}(\mathbf{x})=\lim_{\mathbf{y}\in \Omega^{+},\mathbf{y}\to\mathbf{x}}q(\mathbf{y})-\lim_{\mathbf{y}\in\Omega^{- },\mathbf{y}\to\mathbf{x}}q(\mathbf{y}),\quad\text{for }\mathbf{x}\in\Gamma. \tag{10}\]
A suitable boundary condition should also be prescribed on the outer boundary \(\partial\mathcal{B}\), for which we choose a no-flux boundary condition \(\partial_{\mathbf{n}}T=0\). For modeling solidification problems, initially, a solid seed is placed in an undercooled surrounding liquid. The temperature in the solid seed is assumed to be equal to the melt temperature \(T_{m}\), and the temperature in the undercooled liquid is given by \(T_{\infty}\). The degree of undercooling is described by the Stefan number \(St=T_{\infty}-T_{m}\).
In a more general setting, the temperature differences in the liquid phase lead to changes in the specific volume of fluid parcels and, hence, the fluid density. The density changes further
lead to buoyancy force-driven convection of the fluid. With the Boussinesq approximation [55], we assume the fluid is incompressible, and the effect of density changes only appears in the buoyancy force. Incorporating the natural convection effect, the Stefan problem is governed by the following system
\[\partial_{t}T=\Delta T,\quad\text{in}\ \Omega^{+}, \tag{11}\] \[\partial_{t}T+\mathbf{u}\cdot\nabla T=\Delta T,\quad\text{in}\ \Omega^{-},\] (12) \[\partial_{t}\mathbf{u}+(\mathbf{u}\cdot\nabla)\mathbf{u}=\Delta \mathbf{u}-\nabla p+\mathbf{G},\quad\text{in}\ \Omega^{-},\] (13) \[\nabla\cdot\mathbf{u}=0,\quad\text{in}\ \Omega^{-}, \tag{14}\]
where \(\mathbf{u}\) is fluid velocity, \(p\) is pressure. Here, we have assumed that the density and viscosity of the fluid are 1. The buoyancy force \(\mathbf{G}\) under the Boussinesq approximation is linearly proportional to the temperature difference,
\[\mathbf{G}=-g\beta(T-T_{\infty})\mathbf{j}, \tag{15}\]
where \(g\) represents the gravity acceleration, \(\beta\) denotes the thermal expansion coefficient, \(\mathbf{j}\) denotes the unit vector in the vertical direction. On the solid-liquid interface \(\Gamma\), the Gibbs-Thomson relation (8) and the Stefan equation (9) are given for the temperature field, and the no-slip boundary condition is specified for the fluid. On the outer boundary \(\partial\mathcal{B}\), we set \(\partial_{\mathbf{n}}T=0\) and \(\mathbf{u}=\mathbf{u}_{b}\) where \(\mathbf{u}_{b}\) is the boundary data describing in/out-flow or no-slip boundary conditions. It is worth mentioning that the model is a two-way coupling of the temperature and the velocity field through the fluid convection and buoyancy force.
## 3 Time discretization and boundary integral equations
The PDEs in the last section are all solved in a boundary integral equation framework. Time-dependent parabolic PDEs are discretized in time to obtain a sequence of elliptic PDEs, for which we can formulate equivalent boundary integral equations. Let \(t_{n}=n\tau\), \(n=0,1,\cdots,N_{T}\) be the uniform temporal mesh where \(\tau=T/N_{T}\) is the time step. For a function \(f\), denote by \(f^{n}\) the numerical approximation of \(f(t_{n})\).
### The Hele-Shaw flow
In the Hele-Shaw flow, the solution to the Poisson equation (3) with the boundary condition (4) can be represented as the sum of two functions \(v\) and \(w\), where
\[v(\mathbf{x})=-J\ln|\mathbf{x}| \tag{16}\]
comes from the point-source term and \(w(\mathbf{x})\) satisfies the exterior problem of the Laplace equation
\[\Delta w=0,\quad\text{in}\ \Omega^{-}, \tag{17}\] \[w=-\sigma\kappa-v,\quad\text{on}\ \Gamma. \tag{18}\]
Let \(G_{0}(\mathbf{y},\mathbf{x})=(1/2\pi)\ln|\mathbf{y}-\mathbf{x}|\) be the free-space Green function associated with the Laplacian \(\Delta\). The solution \(w(\mathbf{x})\) can then be represented as a modified double-layer potential
\[w(\mathbf{x})=(D\varphi)(\mathbf{x})+\int_{\Gamma}\varphi(\mathbf{y})\,d \mathbf{s}_{\mathbf{y}}=\int_{\Gamma}\varphi(\mathbf{y})\left(\frac{\partial G _{0}(\mathbf{y},\mathbf{x})}{\partial\mathbf{n}_{\mathbf{y}}}+1\right)\,d \mathbf{s}_{\mathbf{y}}, \tag{19}\]
where \(\varphi\) is an unknown dipole density function defined on \(\Gamma\). The boundary integral formulation of \(w\) naturally matches the boundary condition at infinity. Restricting (19) on \(\Gamma\) and using the boundary condition (18), we obtain a boundary integral equation for the density function \(\varphi\),
\[-\frac{1}{2}\varphi(\mathbf{x})+\int_{\Gamma}\varphi(\mathbf{y})\left(\frac{ \partial G_{0}(\mathbf{y},\mathbf{x})}{\partial\mathbf{n}_{\mathbf{y}}}+1 \right)d\mathbf{s}_{\mathbf{y}}=-\sigma\kappa-v(\mathbf{x}),\quad\text{for }\mathbf{x}\in\Gamma. \tag{20}\]
The boundary integral equation is a Fredholm integral equation of the second kind and is well-conditioned.
### The classical Stefan problem
We first discretize the time-dependent heat equation in time to reduce the problem into solving an elliptic equation in each time step. For better accuracy and stability, the second-order backward differentiation formula (BDF2) is employed for the time discretization
\[\frac{3T^{n+1}-4T^{n}+T^{n-1}}{2\tau}=\Delta T^{n+1},\quad\text{in }\Omega^{+}\cup\Omega^{-}. \tag{21}\]
It leads to a modified Helmholtz equation for \(T^{n+1}\)
\[(\Delta-\frac{3}{2\tau})T^{n+1}=\frac{T^{n-1}-4T^{n}}{2\tau},\quad\text{in }\Omega^{+}\cup\Omega^{-}. \tag{22}\]
The modified Helmholtz equation is also subject to jump conditions \([T^{n+1}]=0\) and \([\partial_{\mathbf{n}}T^{n+1}]=V_{n}\) on \(\Gamma\) and an outer boundary condition \(\partial_{\mathbf{n}}T^{n+1}=0\) on \(\partial\mathcal{B}\). Let \(c=\sqrt{3/(2\tau)}\). We split \(T^{n+1}\) into two parts \(T^{n+1}=T_{1}+T_{2}\), where \(T_{1}\) is the solution to the modified Helmholtz equation with an inhomogeneous right-hand side,
\[\Delta T_{1}-c^{2}T_{1}=\frac{T^{n-1}-4T^{n}}{2\tau}, \quad\text{in }\mathcal{B}, \tag{23}\] \[\partial_{\mathbf{n}}T_{1}=0, \quad\text{on }\partial\mathcal{B}, \tag{24}\]
and \(T_{2}\) is the solution to the interface problem with a homogeneous right-hand side,
\[\Delta T_{2}-c^{2}T_{2}=0, \quad\text{in }\Omega^{+}\cup\Omega^{-}, \tag{25}\] \[[T_{2}]=0, \quad\text{on }\Gamma,\] (26) \[[\partial_{\mathbf{n}}T_{2}]=V_{n}, \quad\text{on }\Gamma,\] (27) \[\partial_{\mathbf{n}}T_{2}=0, \quad\text{on }\partial\mathcal{B}. \tag{28}\]
Since the temperature field \(T\) is continuous across the interface \(\Gamma\), the right-hand side of (23) is also continuous. Therefore, the part \(T_{1}\) is much more regular than \(T_{2}\), and the problem (23) and (24) can be solved with a standard finite difference method. For each fixed \(\mathbf{x}\in\mathcal{B}\), let \(G_{c}(\mathbf{y},\mathbf{x})\) be the Green function that satisfies
\[(\Delta-c^{2})G_{c}(\mathbf{y},\mathbf{x})=\delta(\mathbf{y}- \mathbf{x}), \quad\text{for }\mathbf{y}\in\mathcal{B}, \tag{29}\] \[\partial_{\mathbf{n}}G_{c}(\mathbf{y},\mathbf{x})=0, \quad\text{for }\mathbf{y}\in\partial\mathcal{B}. \tag{30}\]
The part \(T_{2}\) can be expressed as a single layer potential
\[T_{2}(\mathbf{x})=-(S\psi)(\mathbf{x})=-\int_{\Gamma}G_{c}(\mathbf{y},\mathbf{x}) \psi(\mathbf{y})\,d\mathbf{s}_{y},\quad\text{for }\mathbf{x}\in\mathcal{B}. \tag{31}\]
where \(\psi\) is an unknown density function defined on \(\Gamma\). Note that we also have \(\psi=[\partial_{\mathbf{n}}T_{2}]=V_{n}\). Further, with the Gibbs-Thomson relation (8), a boundary integral equation can be obtained for the density function \(\psi\),
\[\varepsilon_{V}\psi(\mathbf{x})-\int_{\Gamma}G_{c}(\mathbf{y},\mathbf{x}) \psi(\mathbf{y})\,d\mathbf{s}_{y}=-\varepsilon_{C}\kappa(\mathbf{x})-T_{1}( \mathbf{x}),\quad\text{for }\mathbf{x}\in\Gamma. \tag{32}\]
For a nonzero \(\varepsilon_{V}\), the boundary integral equation (32) is a Fredholm integral equation of the second kind. It degenerates to a first-kind Fredholm integral equation if \(\varepsilon_{V}\) vanishes.
### The Stefan problem with natural convection
Similar to the heat equation, we first discretize the time derivatives in the time-dependent advection-diffusion equation (12) and the Navier-Stokes equations (13)-(14), and reduce the problems to elliptic equations in each time step. To avoid treating the nonlinear advection term, we advance \(T,\mathbf{u},p\) in time with a semi-Lagrangian method. In the semi-Lagrangian discretization, the advection terms are incorporated into material derivatives, namely,
\[\frac{dT}{dt}=\Delta T,\quad\text{in }\Omega^{+}\cup\Omega^{-}, \tag{33}\] \[\frac{d\mathbf{u}}{dt}=\Delta\mathbf{u}-\nabla p+\mathbf{G},\quad \text{in }\Omega^{-},\] (34) \[\nabla\cdot\mathbf{u}=0,\quad\text{in }\Omega^{-}, \tag{35}\]
where \(\frac{d}{dt}=\partial_{t}+\mathbf{u}\cdot\nabla\) is the material derivative. Here, we assume \(\mathbf{u}\) equals zero in the solid domain, so the temperature in both domains satisfies the same equation (33). A second-order semi-implicit scheme is used to discretize equations (33) to (35),
\[\frac{3T^{n+1}-4\tilde{T}^{n}+\tilde{T}^{n-1}}{2\tau}=\Delta T^{n +1},\quad\text{in }\Omega^{+}\cup\Omega^{-}, \tag{36}\] \[\frac{3\mathbf{u}^{n+1}-4\tilde{\mathbf{u}}^{n}+\tilde{\mathbf{u }}^{n-1}}{2\tau}=\Delta\mathbf{u}^{n+1}-\nabla p^{n+1}+2\mathbf{G}^{n}- \mathbf{G}^{n-1},\quad\text{in }\Omega^{-},\] (37) \[\nabla\cdot\mathbf{u}^{n+1}=0,\quad\text{in }\Omega^{-}. \tag{38}\]
where \(\tilde{\mathbf{u}}^{n}\), \(\tilde{T}^{n}\), \(\tilde{\mathbf{u}}^{n-1}\), and \(\tilde{T}^{n-1}\) are the fluid velocities and temperatures at the departure points \(\mathbf{x}^{n}\) and \(\mathbf{x}^{n-1}\), respectively. The scheme treats the buoyancy term explicitly such that the resulting discrete system is not coupled and can be solved separately. The departure points \(\mathbf{x}^{n}\) and \(\mathbf{x}^{n-1}\) can be found by solving the initial value problem backward in time,
\[\frac{d\mathbf{x}(t)}{dt}=\mathbf{u}(\mathbf{x}(t),t),\quad\mathbf{x}(t_{n+1} )=\mathbf{x}_{0}. \tag{39}\]
A second-order mid-point method is used for computing the positions of the departure points,
\[\mathbf{x}^{*}=\mathbf{x}_{0}-\frac{\tau}{2}\mathbf{u}\left( \mathbf{x}_{0}-\frac{\tau}{2}\mathbf{u}^{n+\frac{1}{2}},t_{n+\frac{1}{2}} \right),\quad\mathbf{x}^{n}=\mathbf{x}_{0}-\tau\mathbf{u}\left(\mathbf{x}^{*},t_{n+\frac{1}{2}}\right), \tag{40}\] \[\mathbf{x}^{*}=\mathbf{x}_{0}-\tau\mathbf{u}\left(\mathbf{x}_{0}- \tau\mathbf{u}^{n},t_{n}\right),\quad\mathbf{x}^{n-1}=\mathbf{x}_{0}-2\tau \mathbf{u}\left(\mathbf{x}^{*},t_{n}\right). \tag{41}\]
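A compact sketch of this backtracking step is given below (not the authors' code); `vel(x, t)` is assumed to return interpolated velocities at off-grid points, e.g., via the cubic Lagrangian interpolation discussed next.

```python
def departure_points(x0, vel, t_n, t_half, tau):
    """Mid-point backtracking for the departure points, eqs. (40)-(41).

    x0     : arrival points, array of shape (..., 2)
    vel    : callable (points, time) -> interpolated velocities
    t_n    : time level t_n; t_half is t_{n+1/2}
    """
    # departure point x^n via the half-step rule (40)
    x_star = x0 - 0.5 * tau * vel(x0 - 0.5 * tau * vel(x0, t_half), t_half)
    x_n = x0 - tau * vel(x_star, t_half)
    # departure point x^{n-1} via the full-step rule (41)
    x_star = x0 - tau * vel(x0 - tau * vel(x0, t_n), t_n)
    x_nm1 = x0 - 2.0 * tau * vel(x_star, t_n)
    return x_n, x_nm1
```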
Off-grid values of \(\mathbf{u}\) are computed with the cubic Lagrangian interpolation and the velocity at \(t_{n+\frac{1}{2}}\) is computed with a second-order extrapolation scheme \(\mathbf{u}^{n+\frac{1}{2}}=\frac{3}{2}\mathbf{u}^{n}-\frac{1}{2}\mathbf{u}^{n-1}\). After making some rearrangements, we arrive at the modified Helmholtz equation
\[(\Delta-\frac{3}{2\tau})T^{n+1}=\frac{\tilde{T}^{n-1}-4\tilde{T}^{n}}{2\tau},\quad\text{in}\;\Omega^{+}\cup\Omega^{-}, \tag{42}\]
subject to interface and boundary conditions on \(\Gamma\) and \(\partial\mathcal{B}\) as mentioned before, and the modified Stokes equation
\[(\Delta-\frac{3}{2\tau})\mathbf{u}^{n+1}-\nabla p^{n+1}=\frac{ \tilde{\mathbf{u}}^{n-1}-4\tilde{\mathbf{u}}^{n}}{2\tau}+2\mathbf{G}^{n}- \mathbf{G}^{n-1},\quad\text{in}\;\Omega^{-}, \tag{43a}\] \[\nabla\cdot\mathbf{u}^{n+1}=0,\quad\text{in}\;\Omega^{-}, \tag{43b}\]
subject to boundary conditions
\[\mathbf{u}^{n+1}=\mathbf{0},\quad\text{on}\;\Gamma,\quad\mathbf{u}^{n+1}=\mathbf{u}_{b},\quad\text{on}\;\partial\mathcal{B}. \tag{44}\]
The boundary integral formulation for the modified Helmholtz (42) is similar to that for (22), and we omit it. We shall give the boundary integral equation for the modified Stokes (43). We split the solution pair into two parts \((\mathbf{u}^{n+1},p^{n+1})=(\mathbf{u}_{1},p_{1})+(\mathbf{u}_{2},p_{2})\). The particular solution \((\mathbf{u}_{1},p_{1})\) satisfies the modified Stokes equation with an inhomogeneous right-hand side
\[(\Delta-c^{2})\mathbf{u}_{1}-\nabla p_{1} =\mathbf{f},\quad\text{in}\;\mathcal{B}, \tag{45a}\] \[\nabla\cdot\mathbf{u}_{1} =0,\quad\text{in}\;\mathcal{B},\] (45b) \[\mathbf{u}_{1} =\mathbf{u}_{b},\quad\text{on}\;\partial\mathcal{B}, \tag{45c}\]
where \(c=\sqrt{3/(2\tau)}\) and \(\mathbf{f}\) is given by
\[\mathbf{f}=\begin{cases}\dfrac{\tilde{\mathbf{u}}^{n-1}-4\tilde{\mathbf{u}}^{ n}}{2\tau}+2\mathbf{G}^{n}-\mathbf{G}^{n-1},\quad\text{in}\;\Omega^{-},\\ 2\mathbf{G}^{n}-\mathbf{G}^{n-1},\quad\text{in}\;\Omega^{+}.\end{cases} \tag{46}\]
Note that \(\mathbf{f}\) is a continuous extension of the right-hand side of (43a) due to the no-slip boundary condition on \(\Gamma\). Then the part \((\mathbf{u}_{1},p_{1})\) has high regularity, and standard finite difference methods for the Stokes equation can be applied. The second part \((\mathbf{u}_{2},p_{2})\) satisfies an exterior Dirichlet boundary value problem
\[(\Delta-c^{2})\mathbf{u}_{2}-\nabla p_{2} =\mathbf{0},\quad\text{in}\;\Omega^{-}, \tag{47}\] \[\nabla\cdot\mathbf{u}_{2} =0,\quad\text{in}\;\Omega^{-},\] (48) \[\mathbf{u}_{2} =-\mathbf{u}_{1},\quad\text{on}\;\Gamma,\] (49) \[\mathbf{u}_{2} =\mathbf{0},\quad\text{on}\;\partial\mathcal{B}. \tag{50}\]
For each fixed \(\mathbf{x}\in\mathcal{B}\), let \((\mathbf{G}_{\mathbf{u}}(\mathbf{y},\mathbf{x}),\mathbf{G}_{p}(\mathbf{y}, \mathbf{x}))\) be the Green function pair that satisfies
\[(\Delta-c^{2})\mathbf{G}_{\mathbf{u}}(\mathbf{y},\mathbf{x})- \nabla\mathbf{G}_{p}(\mathbf{y},\mathbf{x})=\delta(\mathbf{y}-\mathbf{x}) \mathbf{I},\quad\text{for}\;\mathbf{y}\in\mathcal{B}, \tag{51}\] \[\nabla\cdot\mathbf{G}_{p}(\mathbf{y},\mathbf{x})=0,\quad\text{for} \;\mathbf{y}\in\mathcal{B},\] (52) \[\mathbf{G}_{\mathbf{u}}(\mathbf{y},\mathbf{x})=\mathbf{0},\quad \text{for}\;\mathbf{y}\in\partial\mathcal{B}. \tag{53}\]
Then the solution pair \((\mathbf{u}_{2},\,p_{2})\) can be represented as a double-layer potential \(D\boldsymbol{\varphi}=(D_{\mathbf{u}}\boldsymbol{\varphi},D_{p}\boldsymbol{ \varphi})^{T}\)[56],
\[\mathbf{u}_{2}(\mathbf{x}) =(D_{\mathbf{u}}\boldsymbol{\varphi})(\mathbf{x})=\int_{\Gamma}T( \mathbf{G_{u}},\mathbf{G}_{p})(\mathbf{y},\mathbf{x})\boldsymbol{\varphi}( \mathbf{y})\,d\mathbf{s}_{\mathbf{y}}, \tag{54}\] \[p_{2}(\mathbf{x}) =(D_{p}\boldsymbol{\varphi})(\mathbf{x})=2\int_{\Gamma}\frac{ \partial\mathbf{G}_{p}(\mathbf{y},\mathbf{x})}{\partial\mathbf{n}_{\mathbf{y} }}\boldsymbol{\varphi}(\mathbf{y})\,d\mathbf{s}_{\mathbf{y}}. \tag{55}\]
where \(\boldsymbol{\varphi}=(\varphi_{1},\varphi_{2})^{T}\) is a vector-valued unknown density function defined on \(\Gamma\) and \(T(\mathbf{u},p)=-p\mathbf{n}+(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})\mathbf{n}\) is the traction operator. Restricting (54) to \(\Gamma\) leads to the boundary integral equation
\[-\frac{1}{2}\boldsymbol{\varphi}(\mathbf{x})+\int_{\Gamma}T(\mathbf{G_{u}}, \mathbf{G}_{p})(\mathbf{y},\mathbf{x})\boldsymbol{\varphi}(\mathbf{y})\,d \mathbf{s}_{\mathbf{y}}=-\mathbf{u}_{1}(\mathbf{x}),\quad\text{for}\; \mathbf{x}\in\Gamma. \tag{56}\]
The boundary integral equation is also a Fredholm integral equation of the second kind and is well-conditioned.
## 4 Kernel-free boundary integral method
The boundary integral equations (20), (32) and (56) are solved with the kernel-free boundary integral method. The discrete systems of boundary integral equations are solved with a Krylov subspace iterative method, the GMRES method [57]. In each iteration, one only needs to compute the matrix-vector multiplication operation, which mainly consists of evaluations of boundary integrals. The procedure can be implemented in a matrix-free manner to avoid forming the full matrix. The main idea of the KFBI method is to make use of a Cartesian grid-based PDE solver instead of numerical quadratures for evaluating the boundary integrals.
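The matrix-free pattern can be sketched with SciPy's GMRES as follows; here `apply_K` is a placeholder for one KFBI evaluation of the boundary integral operator (one Cartesian-grid solve plus boundary extraction, as described in the following subsections), and tolerance handling is left to the solver's defaults.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def solve_second_kind_bie(apply_K, rhs):
    """Matrix-free GMRES solve of a second-kind equation such as (20)
    or (56): (-1/2) * phi + K phi = rhs."""
    m = rhs.size
    Aop = LinearOperator((m, m),
                         matvec=lambda phi: -0.5 * phi + apply_K(phi))
    phi, info = gmres(Aop, rhs)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return phi
```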
### Equivalent interface problems
Let \(\mathcal{A}\) be an elliptic differential operator that can be the Laplacian, modified Helmholtz, or modified Stokes operator. The evaluation of boundary integral operators associated with \(\mathcal{A}\) can be described in the same framework. According to the classical potential theory, the single-layer potential \(S\boldsymbol{\psi}(\mathbf{x})\) and the double-layer potential \(D\boldsymbol{\varphi}(\mathbf{x})\) satisfy equivalent interface problems [56; 51], which can be unified as
\[\begin{cases}\mathcal{A}\mathbf{v}=\mathbf{0},&\text{in}\;\Omega^{+}\cup \Omega^{-},\\ \left[\boldsymbol{\pi}_{D}(\mathbf{v})\right]=\mathbf{\Phi},&\text{on}\; \Gamma,\\ \left[\boldsymbol{\pi}_{N}(\mathbf{v})\right]=\mathbf{\Psi},&\text{on}\; \Gamma,\\ \text{some suitable BCs},&\partial\mathcal{B}.\end{cases} \tag{57}\]
where \((\boldsymbol{\pi}_{D},\boldsymbol{\pi}_{N})\) is the Cauchy data pair, specified in Table 1.
| | \(\mathbf{v}\) | \(\boldsymbol{\pi}_{D}(\mathbf{v})\) | \(\boldsymbol{\pi}_{N}(\mathbf{v})\) |
| --- | --- | --- | --- |
| \(\mathcal{A}=\Delta-c^{2}\) | \(u\) | \(u\) | \(\partial_{\mathbf{n}}u\) |
| \(\mathcal{A}=((\Delta-c^{2})\mathbf{I}-\nabla,\nabla\cdot)^{T}\) | \((\mathbf{u},\,p)^{T}\) | \(\mathbf{u}\) | \(T(\mathbf{u},\,p)\) |

Table 1: Cauchy data pair for different elliptic differential operators.
The functions \(\mathbf{\Phi}\) and \(\mathbf{\Psi}\) are given in Table 2. The boundary condition on \(\partial\mathcal{B}\) depends on which Green's function is used. For the Hele-Shaw flow, the problem is defined on an unbounded domain and the box boundary \(\partial\mathcal{B}\) is only an artificial boundary; a natural choice is to directly use the potential value on \(\partial\mathcal{B}\) as a Dirichlet-type boundary condition. The integral form of the boundary condition is discretized with the composite trapezoidal rule. Due to the periodicity of the integrand on \(\Gamma\), the composite trapezoidal rule is highly accurate. No singular or nearly singular integrals need to be handled here, since the quadrature points lie on \(\Gamma\) and the target points lie on \(\partial\mathcal{B}\). In the Stefan problem, boundary conditions for the temperature or the fluid field are of Dirichlet or Neumann type, depending on the problem at hand.
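The accuracy claim for the trapezoidal rule is easy to verify numerically: for a smooth \(2\pi\)-periodic integrand the error decays faster than any power of the mesh size. A small self-contained check, using \(\int_{0}^{2\pi}e^{\sin\alpha}\,d\alpha=2\pi I_{0}(1)\):

```python
import numpy as np
from scipy.special import i0

exact = 2.0 * np.pi * i0(1.0)   # closed form of the periodic test integral
for M in (4, 8, 16, 32):
    a = 2.0 * np.pi * np.arange(M) / M
    approx = 2.0 * np.pi / M * np.exp(np.sin(a)).sum()
    print(M, abs(approx - exact))   # error reaches machine precision by M = 16
```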
The equivalent interface problem (57) is much simpler than the original problems since the interface conditions can be easily decoupled into derivative jumps in each direction, as shown in Appendix A. After solving the interface problem (57) on a Cartesian grid with an efficient PDE solver, boundary integral values, as well as their normal derivatives, can be extracted from grid values with an interpolation procedure.
We stress that the analytical expression of Green's function is only used in the Hele-Shaw flow, to convert the boundary condition at infinity into one on a bounded boundary. In all other cases, the method is kernel-free in the sense that no Green's function is used explicitly. In all cases, evaluations of singular or nearly singular integrals are avoided. Accelerated by fast PDE solvers, the computation of boundary integrals with the KFBI method is efficient and is comparable with the fast multipole method [52].
### Corrected finite difference scheme
The interface problem (57) is solved with a finite difference method with additional corrections near the interface. For simplicity, suppose the computational domain \(\mathcal{B}\) is a unit square \([0,1]^{2}\) and is uniformly partitioned into \(N\) intervals in each spatial direction. The grid nodes are denoted as \((x_{i},y_{j})\), \(x_{i}=ih\), \(y_{j}=jh\), \(i,j=0,\cdots,N\) where \(h=1/N\) is the mesh size.
#### 4.2.1 The modified Helmholtz equation
Denote by \(u_{i,j}\) the numerical approximation of \(u(x_{i},y_{j})\). A standard second-order five-point central difference scheme for the modified Helmholtz equation is given by
\[\Delta_{h}u_{i,j}-c^{2}u_{i,j}=\frac{u_{i+1,j}+u_{i-1,j}+u_{i,j+1}+u_{i,j-1}-4 u_{i,j}}{h^{2}}-c^{2}u_{i,j}=0. \tag{58}\]
| | \(\mathbf{\Phi}\) | \(\mathbf{\Psi}\) |
| --- | --- | --- |
| \(\mathbf{v}=S\boldsymbol{\psi}\) | \(\mathbf{0}\) | \(-\boldsymbol{\psi}\) |
| \(\mathbf{v}=D\boldsymbol{\varphi}\) | \(\boldsymbol{\varphi}\) | \(\mathbf{0}\) |

Table 2: Jump relations of the single- and double-layer potentials.

A grid node \((x_{i},y_{j})\) is called an irregular node if at least one of its stencil nodes is on the other side of the interface. Clearly, irregular nodes are always in the vicinity of \(\Gamma\). Since the solution has certain jumps on the interface \(\Gamma\), the local truncation error at an irregular node is of order \(\mathcal{O}(h^{-2})\), which is too large to achieve an accurate approximation. To show this, assume that at an irregular node \((x_{i},y_{j})\in\Omega^{+}\), the stencil node \((x_{i+1},y_{j})\in\Omega^{-}\) lies on the other side of \(\Gamma\) while all other stencil nodes are in \(\Omega^{+}\). Suppose the interface \(\Gamma\) intersects the grid line segment between \((x_{i},y_{j})\) and \((x_{i+1},y_{j})\) at \((\xi,y_{j})\). By Taylor expansion, the local truncation error at \((x_{i},y_{j})\) is given by
\[E_{h}(x_{i},y_{j})=(\Delta_{h}-c^{2})u(x_{i},y_{j})=-\frac{1}{h^{2}}\left\{[u]+(x_{i+1}-\xi)[u_{x}]+\frac{1}{2}(x_{i+1}-\xi)^{2}[u_{xx}]\right\}+\mathcal{O}(h). \tag{59}\]
To avoid the large local truncation error, one can use the leading term as a correction to the right-hand side of (58). This gives the corrected scheme
\[\Delta_{h}u_{i,j}-c^{2}u_{i,j}=C_{i,j}=-\frac{1}{h^{2}}\{[u]+(x_{i+1}-\xi)[u_{ x}]+\frac{1}{2}(x_{i+1}-\xi)^{2}[u_{xx}]\}. \tag{60}\]
The resulting local truncation error becomes \(\mathcal{O}(h)\). Here, the correction term \(C_{i,j}\) comes from the contribution of the intersection point \((\xi,y_{j})\) and is a linear combination of the jump values \([u]\), \([u_{x}]\), and \([u_{xx}]\), which are known before solving the interface problem; see Appendix A. Similarly, one can derive correction terms for different intersection patterns by Taylor expansions. Evidently, \(C_{i,j}\) is non-zero only at irregular nodes and is always a linear combination of the jump values \([u],[u_{x}],[u_{y}],\cdots\).
Since irregular nodes lie near the interface \(\Gamma\), which has co-dimension one, the overall accuracy can still be second-order [58]. Because the correction terms appear only on the right-hand side and the coefficient matrix is not altered, FFT-based fast Poisson solvers can be applied to solve the resulting linear system efficiently.
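A minimal sketch of the corrected solve is shown below. It is an illustration under two assumptions not taken from the paper: a cell-centered grid with homogeneous Neumann conditions (as in the Stefan setting), for which a type-II DCT diagonalizes the five-point operator exactly, and a strictly positive \(c\) so the transformed system is nonsingular.

```python
import numpy as np
from scipy.fft import dctn, idctn

def correction(jump_u, jump_ux, jump_uxx, x_ip1, xi, h):
    """Correction term C_{i,j} of (60) for an interface crossing at
    (xi, y_j) between (x_i, y_j) and (x_{i+1}, y_j)."""
    d = x_ip1 - xi
    return -(jump_u + d * jump_ux + 0.5 * d**2 * jump_uxx) / h**2

def solve_corrected(rhs, C, h, c):
    """Solve (Delta_h - c^2) u = rhs + C on a cell-centered grid with
    homogeneous Neumann BCs; C is zero away from irregular nodes.
    The five-point operator is diagonal in the type-II DCT basis,
    with 1D eigenvalues (2 cos(pi k / n) - 2) / h^2."""
    ny, nx = rhs.shape
    lam = lambda n: (2.0 * np.cos(np.pi * np.arange(n) / n) - 2.0) / h**2
    denom = lam(nx)[None, :] + lam(ny)[:, None] - c**2   # c > 0: nonzero
    f_hat = dctn(rhs + C, type=2, norm='ortho')
    return idctn(f_hat / denom, type=2, norm='ortho')
```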
#### 4.2.2 The modified Stokes equation
The discretization of the modified Stokes equations is based on the marker and cell (MAC) scheme on a staggered grid. The pressure is at the cell center, the x-component of the velocity is at the center of the east and west edges of a cell, and the y-component of the velocity is at the center of the north and south edges of a cell. The discrete mesh functions \(u_{i,j},v_{i,j},p_{i,j}\) are defined as
\[u_{i,j} =u(x_{i},y_{j}-h/2),\quad i=0,1,\cdots,N_{x},\quad j=1,2,\cdots,N_{y}, \tag{61}\] \[v_{i,j} =v(x_{i}-h/2,y_{j}),\quad i=1,2,\cdots,N_{x},\quad j=0,1,\cdots,N_{y}, \tag{62}\] \[p_{i,j} =p(x_{i}-h/2,y_{j}-h/2),\quad i=1,2,\cdots,N_{x},\quad j=1,2,\cdots,N_{y}. \tag{63}\]
The MAC scheme for the modified Stokes equation is given by
\[(\Delta_{h}-c^{2})\mathbf{u}_{i,j}-\nabla_{h}p_{i,j} =\mathbf{0}, \tag{64a}\] \[\nabla_{h}\cdot\mathbf{u} =0, \tag{64b}\]
where \(\Delta_{h},\nabla_{h}\) are the discrete approximations for \(\Delta,\nabla\), respectively,
\[\Delta_{h}\mathbf{u}_{ij}=\frac{\mathbf{u}_{i+1,j}+\mathbf{u}_{i -1,j}+\mathbf{u}_{i,j+1}+\mathbf{u}_{i,j-1}-4\mathbf{u}_{i,j}}{h^{2}}, \tag{65}\] \[\nabla_{h}p_{i,j}=(\frac{p_{i+1,j}-p_{i,j}}{h},\frac{p_{i,j+1}-p_ {i,j}}{h})^{T},\] (66) \[\nabla_{h}\cdot\mathbf{u}_{i,j}=\frac{u_{i,j}-u_{i-1,j}}{h}+\frac {v_{i,j}-v_{i,j-1}}{h}. \tag{67}\]
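In array form these discrete operators are a few lines each; the sketch below assumes the staggered shapes `u: (Nx+1, Ny)`, `v: (Nx, Ny+1)`, `p: (Nx, Ny)` implied by (61)-(63).

```python
import numpy as np

def mac_div(u, v, h):
    """Discrete divergence (67) at cell centers; result has shape (Nx, Ny)."""
    return (u[1:, :] - u[:-1, :]) / h + (v[:, 1:] - v[:, :-1]) / h

def mac_grad_p(p, h):
    """Discrete pressure gradient (66) at interior velocity points:
    dp/dx at interior u-points, dp/dy at interior v-points."""
    return (p[1:, :] - p[:-1, :]) / h, (p[:, 1:] - p[:, :-1]) / h
```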
The Dirichlet boundary condition is discretized with a symmetric approach for the \(x\)-component velocity on the top and bottom boundary and the \(y\)-component velocity on the right and left
boundary. In the presence of interfaces, one can also derive local truncation errors at irregular nodes for each finite difference equation in (64). Similarly, by using the leading term in the local truncation error as a correction term for the right-hand side in (64), we arrive at the corrected MAC scheme
\[(\Delta_{h}-c^{2})\mathbf{u}_{i,j}-\nabla_{h}p_{i,j} =\mathbf{C}_{i,j}, \tag{68a}\] \[\nabla_{h}\cdot\mathbf{u} =D_{i,j}, \tag{68b}\]
where \(\mathbf{C}_{i,j}\) and \(D_{i,j}\) are correction terms, which are linear combinations of jump values \([u]\), \([v]\), \([p]\), \([u_{x}]\), \([u_{y}]\), \(\cdots\). The resulting linear system (68) is solved with an efficient V-cycle geometric multigrid method with the Distributive Gauss-Seidel (DGS) smoother [59; 60].
_Remark 4.1_.: Since particular solution parts have higher regularity, classical numerical approaches, such as finite difference methods, finite element methods, and spectral methods, can be applied. In this work, the particular parts are also solved with a five-point central difference scheme and the MAC scheme without any correction terms near the interface.
### Extracting boundary values from the grid data
The single- and double-layer potential functions \(S\boldsymbol{\psi}\) and \(D\boldsymbol{\varphi}\) defined by boundary integrals are not smooth on \(\Gamma\). Define the single- and double-layer boundary integral operators \(\mathcal{L}\boldsymbol{\psi}\) and \(\mathcal{K}\boldsymbol{\varphi}\) as values of the potential functions \(S\boldsymbol{\psi}\) and \(D\boldsymbol{\varphi}\) on \(\Gamma\), respectively. By potential theory, for smooth \(\Gamma\), the integral operators satisfy
\[\mathcal{L}\boldsymbol{\psi}(\mathbf{x}) =\frac{1}{2}((S\boldsymbol{\psi})^{+}(\mathbf{x})+(S\boldsymbol{ \psi})^{-}(\mathbf{x})),\quad\mathbf{x}\in\Gamma, \tag{69}\] \[\mathcal{K}\boldsymbol{\varphi}(\mathbf{x}) =\frac{1}{2}((D\boldsymbol{\varphi})^{+}(\mathbf{x})+(D \boldsymbol{\varphi})^{-}(\mathbf{x})),\quad\mathbf{x}\in\Gamma. \tag{70}\]
The normal derivatives of the potential functions \(\partial_{\mathbf{n}}(S\boldsymbol{\psi})\) and \(\partial_{\mathbf{n}}(D\boldsymbol{\varphi})\) have similar expressions
\[\partial_{\mathbf{n}}(S\boldsymbol{\psi})(\mathbf{x}) =\mathbf{n}(\mathbf{x})\cdot\frac{1}{2}((\nabla S\boldsymbol{ \psi})^{+}(\mathbf{x})+(\nabla S\boldsymbol{\psi})^{-}(\mathbf{x})),\quad \mathbf{x}\in\Gamma, \tag{71}\] \[\partial_{\mathbf{n}}(D\boldsymbol{\varphi})(\mathbf{x}) =\mathbf{n}(\mathbf{x})\cdot\frac{1}{2}((\nabla D\boldsymbol{ \varphi})^{+}(\mathbf{x})+(\nabla D\boldsymbol{\varphi})^{-}(\mathbf{x})), \quad\mathbf{x}\in\Gamma. \tag{72}\]
The normal derivatives of the potential functions are closely related to the adjoint double-layer and hyper-singular integral operators [56]. Once the grid values of the single- and double-layer potential functions \((S\boldsymbol{\psi})_{h}\) and \((D\boldsymbol{\varphi})_{h}\) are obtained, the boundary integral operators can be computed by extracting boundary values and normal derivatives of the potential functions from the grid data via polynomial interpolation.
In doing the interpolation, one should take into account the jump values of the potential functions. Suppose one needs to evaluate the one-sided limit of a discontinuous function \(v(\mathbf{x})\) at a point \(\mathbf{p}=(p_{1},p_{2})^{T}\in\Gamma\) from the side of \(\Omega^{+}\). Denote by \(v^{+}(\mathbf{x})\) and \(v^{-}(\mathbf{x})\) two functions that coincide with \(v(\mathbf{x})\) in \(\Omega^{+}\) and \(\Omega^{-}\), respectively. Given a set of interpolation points \(\{\mathbf{q}_{m}\}_{m=1}^{M}\), by Taylor expansion at \(\mathbf{p}\), we have
\[v(\mathbf{q}_{m}) =v^{+}(\mathbf{p})+(\Delta\mathbf{x}_{m})^{T}\nabla v^{+}( \mathbf{p})+\frac{1}{2}(\Delta\mathbf{x}_{m})^{T}\nabla^{2}v^{+}(\mathbf{p}) \Delta\mathbf{x}_{m}+\mathcal{O}(|\Delta\mathbf{x}_{m}|^{3}),\quad\text{ if }\mathbf{q}_{m}\in\Omega^{+}, \tag{73}\] \[v(\mathbf{q}_{m})+C(\mathbf{q}_{m}) =v^{+}(\mathbf{p})+(\Delta\mathbf{x}_{m})^{T}\nabla v^{+}( \mathbf{p})+\frac{1}{2}(\Delta\mathbf{x}_{m})^{T}\nabla^{2}v^{+}(\mathbf{p}) \Delta\mathbf{x}_{m}+\mathcal{O}(|\Delta\mathbf{x}_{m}|^{3}),\quad\text{ if }\mathbf{q}_{m}\in\Omega^{-}, \tag{74}\]
where \(\Delta\mathbf{x}_{m}=\mathbf{q}_{m}-\mathbf{p},m=1,\cdots,M\) and \(C(\mathbf{q}_{m})\) is given by
\[C(\mathbf{q}_{m})=[v](\mathbf{p})+(\Delta\mathbf{x}_{m})^{T}[\nabla v](\mathbf{ p})+\frac{1}{2}(\Delta\mathbf{x}_{m})^{T}[\nabla^{2}v](\mathbf{p})\Delta\mathbf{x}_{m}. \tag{75}\]
Dropping the \(\mathcal{O}(|\Delta\mathbf{x}_{m}|^{3})\) term, one obtains an \(M\times M\) linear system to be solved for the boundary values \(v^{+},\nabla v^{+},\nabla^{2}v^{+}\). Then, the values \(v^{-},\nabla v^{-},\nabla^{2}v^{-}\) can also be obtained after simple algebraic manipulations. To select interpolation points, for example, we first find the closest grid node \((x_{i_{0}},y_{j_{0}})\) to \(\mathbf{p}\). Let \(d_{1},d_{2}\) be two integers,
\[d_{1}=\begin{cases}1,\text{ if }p_{1}>x_{i_{0}},\\ -1,\text{ if }p_{1}\leq x_{i_{0}},\end{cases}\quad d_{2}=\begin{cases}1,\text{ if }p_{2}>y_{j_{0}},\\ -1,\text{ if }p_{2}\leq y_{j_{0}},\end{cases} \tag{76}\]
Then six interpolation points are chosen
\[\{(x_{i_{0}+r},y_{j_{0}+s})\}_{(r,s)\in\mathcal{I}},\quad\mathcal{I}=\{(0,0),( 1,0),(0,1),(0,-1),(-1,0),(d_{1},d_{2})\}. \tag{77}\]
This choice of interpolation points results in a \(6\times 6\) system, which can be solved with a direct method, such as the LU decomposition.
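A sketch of this local solve is given below; the correction \(C(\mathbf{q}_m)\) of (75), assembled from the known jump values, is assumed to be supplied as a callable.

```python
import numpy as np

def one_sided_data(points, values, inside, C, p):
    """Solve the 6x6 system from (73)-(75) for the one-sided values
    [v+, v+_x, v+_y, v+_xx, v+_xy, v+_yy] at a point p on Gamma.

    points : (6, 2) array of interpolation nodes q_m
    values : grid values v(q_m)
    inside : boolean mask, True where q_m lies in Omega^+
    C      : callable q -> correction C(q) of (75)
    """
    A = np.empty((6, 6))
    b = np.empty(6)
    for m, q in enumerate(points):
        dx, dy = q - p
        A[m] = [1.0, dx, dy, 0.5 * dx**2, dx * dy, 0.5 * dy**2]
        b[m] = values[m] if inside[m] else values[m] + C(q)
    return np.linalg.solve(A, b)   # direct solve (LU decomposition)
```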
## 5 Interface evolution method
### The \(\theta-L\) approach
Suppose the interface \(\Gamma\) is represented as a closed curve given by \(\mathbf{X}(\alpha,t)=(x(\alpha,t),y(\alpha,t))\), where \(\alpha\in[0,2\pi)\) parameterizes the curve. The evolution of the curve is given by
\[\mathbf{X}_{t}=U\mathbf{n}+V\mathbf{s}, \tag{78}\]
where \(\mathbf{n}\) is the unit outward normal, \(\mathbf{s}\) is the unit tangent, \(U\) and \(V\) are the normal and tangential components of the curve velocity, respectively. In most cases, only the shape of the moving interface is of practical interest, which can be solely determined by the normal motion since the tangential motion only changes the parameterization. However, pure normal motion for tracking a moving interface is not a good choice for numerical computation due to the clustering or spread of marker points on the interface. Another issue is that the curvature term introduces numerical stiffness into the interface evolution and causes strict stability constraints on the time step. To mitigate these issues, we use the \(\theta-L\) approach instead of (78) for the curve evolution,
\[L_{t} =\int_{0}^{2\pi}\theta_{\alpha^{\prime}}U\,d\alpha^{\prime}, \tag{79}\] \[\theta_{t} =(\frac{2\pi}{L})(\theta_{\alpha}V-U_{\alpha}). \tag{80}\]
where \(\theta\) is the tangent angle to the curve \(\Gamma\) and \(L\) is the curve length. By setting \(s_{\alpha}=\sqrt{x_{\alpha}^{2}+y_{\alpha}^{2}}\equiv L/2\pi\) for equal-arclength parameterization, one obtains the artificial tangential velocity
\[V(\alpha,t)=\frac{\alpha}{2\pi}\int_{0}^{2\pi}\theta_{\alpha^{\prime}}U\,d\alpha^{\prime}-\int_{0}^{\alpha}\theta_{\alpha^{\prime}}U\,d\alpha^{\prime}. \tag{81}\]
The mapping from \((\theta,L)\) to \(\mathbf{X}=(x,y)\) still needs two more integration constants to determine the position of the curve, for which we track \(\mathbf{\tilde{X}}=\frac{1}{2\pi}\int_{0}^{2\pi}\mathbf{X}\,d\alpha\) using the evolution equation
\[\mathbf{\tilde{X}}_{t}=\frac{1}{2\pi}\int_{0}^{2\pi}U\mathbf{n}+V\mathbf{s}\,d\alpha. \tag{82}\]
While it is possible to track a single point, as demonstrated in [14], we find that using the averaged value provides better rotational symmetry.
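On a uniform grid in \(\alpha\), (79) and (81) reduce to quadratures; a sketch with trapezoidal quadrature, where \(\theta_\alpha\) and \(U\) are assumed to be sampled at \(\alpha_j=2\pi j/M\):

```python
import numpy as np

def tangential_velocity(theta_alpha, U):
    """Artificial tangential velocity (81) enforcing the equal-arclength
    parameterization; also returns L_t of (79)."""
    M = U.size
    da = 2.0 * np.pi / M
    f = theta_alpha * U
    L_t = da * f.sum()   # trapezoidal rule for a periodic integrand
    alpha = da * np.arange(M)
    # running integral from 0 to alpha_j by the trapezoidal rule
    run = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * da)))
    return alpha / (2.0 * np.pi) * L_t - run, L_t
```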
### Small scale decomposition
In the Hele-Shaw flow, the mapping from curvature \(\kappa\) to velocity \(U\) is a one-phase Dirichlet-to-Neumann (DtN) mapping [61], denoted as \(\mathcal{T}_{1}:\sigma\kappa\to U\). Upon small scale decomposition, it is found that the highest-order term in \(U\), which contributes to the stiffness, is linear. Specifically, we have
\[U\sim-\frac{\sigma}{2}\left(\frac{2\pi}{L}\right)^{2}\mathcal{H}[\theta_{ \alpha\alpha}], \tag{83}\]
where \(\mathcal{H}\) represents the Hilbert transform [14]. This approach offers a straightforward means to develop efficient semi-implicit schemes for alleviating the stiffness associated with the curvature term.
For the Stefan problem, the mapping \(\mathcal{T}_{2}:\varepsilon_{C}\kappa\to U\) is a two-phase DtN mapping. We shall also derive a small-scale decomposition for the normal velocity \(U\). For simplicity, we assume \(\varepsilon_{V}\) and \(\varepsilon_{C}\) are constants. Due to the relation \(U=\psi\), the DtN mapping is defined through the boundary integral equation (32). Since the integral equation (32) becomes a Fredholm integral equation of the first kind if \(\varepsilon_{V}=0\), we need to consider two cases: (1) \(\varepsilon_{V}=\mathcal{O}(1)\), and (2) \(\varepsilon_{V}\ll 1\).
#### 5.2.1 The first case: \(\varepsilon_{V}=\mathcal{O}(1)\)
Denote by \(K\psi=(1/\varepsilon_{V})\int_{\Gamma}G_{c}(\mathbf{y},\mathbf{x})\psi( \mathbf{y})\,d\mathbf{s_{y}}\) and \(g=-(\varepsilon_{C}/\varepsilon_{V})\kappa-T_{1}/\varepsilon_{V}\). We write the boundary integral equation (32) as an operator equation
\[(I-K)\psi=g, \tag{84}\]
where the operator \(K\) is compact and \(I-K\) has a bounded inverse. Then, we have
\[\psi=g+K\psi=g+K[(I-K)^{-1}g]. \tag{85}\]
The integral operator \(K\) is a pseudo-differential operator of order \(-1\) and defines the mapping \(K:C^{\beta}(\Gamma)\to C^{1+\beta}(\Gamma)\) for a fixed constant \(\beta\). Therefore, the term \(K[(I-K)^{-1}g]\) is more regular than \(g\). Further, since \(T_{1}\) is equivalent to a volume integral, which is more regular than \(\kappa=\theta_{\alpha}/s_{\alpha}=2\pi\theta_{\alpha}/L\), we obtain
\[U\sim-\frac{\varepsilon_{C}}{\varepsilon_{V}}\left(\frac{2\pi}{L}\right)\theta _{\alpha}. \tag{86}\]
#### 5.2.2 The second case: \(\varepsilon_{V}\ll 1\)
We rewrite \(\psi\) as a formal asymptotic expansion \(\psi=\psi_{0}+\varepsilon_{V}\psi_{1}+\varepsilon_{V}^{2}\psi_{2}+\cdots\) and substitute it into the boundary integral equation (32). The leading order equation implies that \(\psi_{0}\) should satisfy
\[\int_{\Gamma}G_{c}(\mathbf{y},\mathbf{x})\psi_{0}(\mathbf{y})\,d\mathbf{s}_{y} =\varepsilon_{C}\kappa(\mathbf{x})+T_{1}(\mathbf{x}),\quad\text{for }\mathbf{x}\in\Gamma. \tag{87}\]
Since we only need the highest-order term in the small scale decomposition, we can simply proceed by assuming \(\varepsilon_{V}=0\) in (32) and use a similar technique as that in [30] to obtain the small scale decomposition. Note that the Green function \(G_{c}(\mathbf{y},\mathbf{x})\) associated with the modified Helmholtz operator \(\Delta-c^{2}\) has the same singular behavior as the Green function \(G_{0}(\mathbf{y},\mathbf{x})\) associated with the Laplacian \(\Delta\). A simple singularity subtraction leads to
\[\frac{1}{2\pi}\int_{\Gamma}\ln|\mathbf{y}-\mathbf{x}|\psi(\mathbf{y})\,d \mathbf{s}_{y}-\int_{\Gamma}(G_{0}(\mathbf{y},\mathbf{x})-G_{c}(\mathbf{y}, \mathbf{x}))\psi(\mathbf{y})\,d\mathbf{s}_{y}=\varepsilon_{C}\kappa(\mathbf{ x})+T_{1}(\mathbf{x}),\quad\text{for }\mathbf{x}\in\Gamma, \tag{88}\]
where the kernel \(G_{0}(\mathbf{y},\mathbf{x})-G_{c}(\mathbf{y},\mathbf{x})\) no longer has a \(\log\)-type singularity. Hence, on the left-hand side of (88), the second term is more regular than the first one. For every \(2\pi\)-periodic function \(v\), define the operators \(\mathcal{M}\) and \(\mathcal{R}\) by
\[\mathcal{M}[v](\alpha) =\frac{1}{2\pi}\int_{0}^{2\pi}\ln\left|2\sin\frac{\alpha-\alpha^{ \prime}}{2}\right|v(\alpha^{\prime})\,d\alpha^{\prime}, \tag{89}\] \[\mathcal{R}[v](\alpha) =\frac{1}{2\pi}\int_{0}^{2\pi}\ln\left|\frac{\mathbf{x}(\alpha)- \mathbf{x}(\alpha^{\prime})}{2\sin\frac{\alpha-\alpha^{\prime}}{2}}\right|v( \alpha^{\prime})\,d\alpha^{\prime}. \tag{90}\]
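For completeness, we record a short verification of the inversion relation used below. Using the classical Fourier series \(\ln\left|2\sin\frac{x}{2}\right|=-\sum_{k\geq 1}\frac{\cos kx}{k}\) and the sign convention \(\mathcal{H}[e^{ik\alpha}]=-i\,\mathrm{sgn}(k)\,e^{ik\alpha}\), one finds
\[\mathcal{M}[e^{ik\alpha}]=-\frac{1}{2|k|}e^{ik\alpha}\quad(k\neq 0),\qquad\mathcal{M}[1]=0,\]
so, on mean-zero functions, \(\mathcal{M}^{-1}\) multiplies the \(k\)-th Fourier mode by \(-2|k|\). Since \(\mathcal{H}[v_{\alpha}]\) has Fourier symbol \((-i\,\mathrm{sgn}(k))(ik)=|k|\), this is exactly \(v\mapsto-2\mathcal{H}[v_{\alpha}]\), up to an arbitrary additive constant.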
Using the relation \(s_{\alpha}\equiv L/2\pi\), the equation (88) can be written as
\[\frac{L}{2\pi}(\mathcal{M}[\psi]+\mathcal{R}[\psi])-\int_{\Gamma}(G_{0}( \mathbf{y},\mathbf{x})-G_{c}(\mathbf{y},\mathbf{x}))\psi(\mathbf{y})\,d \mathbf{s}_{y}=\varepsilon_{C}\frac{2\pi}{L}\theta_{\alpha}+T_{1}. \tag{91}\]
Note that for every \(2\pi\)-periodic function \(v\), the relation \(\mathcal{M}^{-1}[v]=c-2\mathcal{H}[v_{\alpha}]\) holds, where \(c\) is an arbitrary constant. Applying \(\mathcal{M}^{-1}\) to both sides of (91) yields
\[\psi-\bar{\psi}=-2\varepsilon_{C}\left(\frac{2\pi}{L}\right)^{2}\mathcal{H}[\theta_{\alpha\alpha}]-\frac{4\pi}{L}\mathcal{H}\left[\left(\int_{\Gamma}(G_{0}(\mathbf{y},\mathbf{x})-G_{c}(\mathbf{y},\mathbf{x}))\psi(\mathbf{y})\,d\mathbf{s}_{y}+T_{1}-\frac{L}{2\pi}\mathcal{R}[\psi]\right)_{\alpha}\right], \tag{92}\]
where \(\bar{\psi}\) is the mean of \(\psi\). Finally, since the second term on the right-hand side of (92) is more regular than the first term, we have
\[U\sim-2\varepsilon_{C}\left(\frac{2\pi}{L}\right)^{2}\mathcal{H}[\theta_{ \alpha\alpha}], \tag{93}\]
By inserting \(U\) into the \(\theta\) evolution equation (80) and using the small scale expressions equations (86) and (93), we can extract the linear and highest-order terms
\[\theta_{t}=\begin{cases}\frac{\varepsilon_{C}}{\varepsilon_{V}} \left(\frac{2\pi}{L}\right)^{2}\theta_{\alpha\alpha}+\mathcal{N}(\alpha),&\text {if }\varepsilon_{V}=\mathcal{O}(1),\\ 2\varepsilon_{C}\left(\frac{2\pi}{L}\right)^{3}\mathcal{H}[\theta_{\alpha \alpha\alpha}]+\mathcal{N}(\alpha),&\text{if }\varepsilon_{V}\ll 1,\end{cases} \tag{94}\]
where \(\mathcal{N}\) consists of the remaining lower-order terms. The case \(\varepsilon_{V}=\mathcal{O}(1)\) is second-order diffusive and is similar to a heat equation or a mean curvature flow. The stiffness of the second-order derivative term can be removed by employing an implicit time-stepping scheme for the stiff term. The second case, \(\varepsilon_{V}\ll 1\), is third-order diffusive [14]. Implicit schemes for this case are more difficult to construct due to the nonlocal Hilbert transform \(\mathcal{H}\). Using the fact that \(\mathcal{H}\) is diagonalizable under the Fourier transform, an accurate and efficient semi-implicit scheme can be devised.
### Semi-implicit scheme
The evolution equations (79) and (82) are not stiff, so they can be discretized with an explicit scheme. We use the second-order Adams-Bashforth scheme,
\[L^{n+1}=L^{n}+\frac{\tau}{2}(3M^{n}-M^{n-1}),\quad M=\int_{0}^{2\pi}\theta_{\alpha^{\prime}}U\,d\alpha^{\prime}, \tag{95}\] \[\mathbf{\tilde{X}}^{n+1}=\mathbf{\tilde{X}}^{n}+\frac{\tau}{2}(3Q^{n}-Q^{n-1}),\quad Q=\frac{1}{2\pi}\int_{0}^{2\pi}U\mathbf{n}+V\mathbf{s}\,d\alpha. \tag{96}\]
The evolution equation (80) involves a stiff term, but it is hidden in the normal velocity \(U\) through a DtN mapping. In order to both remove the stiffness and avoid solving nonlinear algebraic systems, only the highest-order term in (80) is discretized implicitly. We first rewrite equation (80) as
\[\theta_{t}=\left(\frac{2\pi}{L}\right)(\theta_{\alpha}V-U_{\alpha})+(\mathcal{L}(\alpha)-\mathcal{L}(\alpha))=\mathcal{L}(\alpha)+\mathcal{N}(\alpha), \tag{97}\]
where \(\mathcal{L}\) is the highest-order and linear term and \(\mathcal{N}\) consists of the remaining lower-order terms. We have \(\mathcal{L}=\mathcal{L}_{1}=\lambda_{1}(2\pi/L)^{2}\theta_{\alpha\alpha}\) for (86) and \(\mathcal{L}=\mathcal{L}_{2}=\lambda_{2}(2\pi/L)^{3}\mathcal{H}[\theta_{\alpha\alpha\alpha}]\) for (83) and (93), where \(\lambda_{1}\) and \(\lambda_{2}\) are two constant parameters. For the Hele-Shaw flow and the Stefan problem with constant \(\varepsilon_{C}\) and \(\varepsilon_{V}\), the parameters \(\lambda_{1}\) and \(\lambda_{2}\) can be chosen to match the corresponding constant factors in the highest-order terms. For the Stefan problem with anisotropic \(\varepsilon_{C}\) and \(\varepsilon_{V}\), \(\lambda_{1}\) and \(\lambda_{2}\) are regarded as stabilization parameters, which should be chosen sufficiently large to ensure stability. With a frozen coefficient analysis, we choose the parameters as
\[\lambda_{1}=\max_{\alpha\in[0,2\pi)}\left|\frac{\varepsilon_{C}(\mathbf{n}( \alpha))}{\varepsilon_{V}(\mathbf{n}(\alpha))}\right|,\quad\lambda_{2}=\max_{ \alpha\in[0,2\pi)}|2\varepsilon_{C}(\mathbf{n}(\alpha))|. \tag{98}\]
In the Fourier space, the Hilbert transform \(\mathcal{H}\) becomes diagonal, and (97) simplifies to
\[\hat{\theta}_{t}(k)=-\lambda_{1}\left(\frac{2\pi}{L}\right)^{2}k^{ 2}\hat{\theta}(k)+\hat{\mathcal{N}}(k)\quad\text{for }\mathcal{L}=\mathcal{L}_{1}, \tag{99}\] \[\hat{\theta}_{t}(k)=-\lambda_{2}\left(\frac{2\pi}{L}\right)^{3}|k |^{3}\hat{\theta}(k)+\hat{\mathcal{N}}(k)\quad\text{for }\mathcal{L}=\mathcal{L}_{2}. \tag{100}\]
A linear propagator and a second-order Adams-Bashforth method are used to discretize the stiff part and the non-stiff part in (99) and (100), respectively,
\[\hat{\theta}^{n+1}(k)=e_{k}(t_{n},t_{n+1})\hat{\theta}^{n}(k)+\frac{\tau}{2}(3e_{k}(t_{n},t_{n+1})\hat{\mathcal{N}}^{n}(k)-e_{k}(t_{n-1},t_{n+1})\hat{\mathcal{N}}^{n-1}(k)), \tag{101}\]
where the factors \(e_{k}(t_{n},t_{n+1})\) and \(e_{k}(t_{n-1},t_{n+1})\) are specified as
\[e_{k}(t_{n},t_{n+1})=\exp\left(-\frac{\lambda_{1}\tau}{2}(2\pi k)^ {2}\left[\frac{1}{\left(L^{n}\right)^{2}}+\frac{1}{\left(L^{n+1}\right)^{2}} \right]\right), \tag{102}\] \[e_{k}(t_{n-1},t_{n+1})=\exp\left(-\lambda_{1}\tau(2\pi k)^{2} \left[\frac{1}{2\left(L^{n-1}\right)^{2}}+\frac{1}{\left(L^{n}\right)^{2}}+ \frac{1}{2\left(L^{n+1}\right)^{2}}\right]\right), \tag{103}\]
for the case \(\mathcal{L}=\mathcal{L}_{1}\) and
\[e_{k}(t_{n},t_{n+1})=\exp\left(-\frac{\lambda_{2}\tau}{2}(2\pi| k|)^{3}\left[\frac{1}{\left(L^{n}\right)^{3}}+\frac{1}{\left(L^{n+1}\right)^{3}} \right]\right), \tag{104}\] \[e_{k}(t_{n-1},t_{n+1})=\exp\left(-\lambda_{2}\tau(2\pi|k|)^{3} \left[\frac{1}{2\left(L^{n-1}\right)^{3}}+\frac{1}{\left(L^{n}\right)^{3}}+ \frac{1}{2\left(L^{n+1}\right)^{3}}\right]\right). \tag{105}\]
for the case \(\mathcal{L}=\mathcal{L}_{2}\).
_Remark 5.1_.: The numerical operations for the evolution of \(\theta\) and \(L\), such as differentiation, integration, and solving ODEs, are all performed in the Fourier space using FFTs. This allows the interface evolution method to have spectral accuracy, resulting in smaller errors compared to second-order time integration methods. It is important to note that \(\theta\) itself is not periodic: it increases by \(2\pi\) as \(\alpha\) goes from \(0\) to \(2\pi\). To obtain accurate results, one can perform the Fourier transforms on the periodic auxiliary variable \(\eta=\theta-\alpha\).
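As an illustration, here is a minimal sketch of one semi-implicit step (101) for the case \(\mathcal{L}=\mathcal{L}_{2}\), working with the periodic variable \(\eta=\theta-\alpha\) (note \(\mathcal{L}_{2}[\eta]=\mathcal{L}_{2}[\theta]\), since the third derivative of \(\alpha\) vanishes). The Fourier coefficients \(\hat{\mathcal{N}}^{n},\hat{\mathcal{N}}^{n-1}\) of the lower-order terms are assumed to be supplied by the caller, and all names are illustrative only.

```python
import numpy as np

def theta_step_semi_implicit(theta, N_hat_n, N_hat_nm1,
                             L_nm1, L_n, L_np1, tau, lam2):
    """One update of theta via eqs. (101), (104)-(105), case L = L_2."""
    N = theta.shape[0]
    alpha = 2.0 * np.pi * np.arange(N) / N
    k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers
    eta_hat = np.fft.fft(theta - alpha)     # eta = theta - alpha is periodic

    a = (2.0 * np.pi * np.abs(k)) ** 3
    e_n = np.exp(-0.5 * lam2 * tau * a *
                 (1.0 / L_n**3 + 1.0 / L_np1**3))                     # eq. (104)
    e_nm1 = np.exp(-lam2 * tau * a *
                   (0.5 / L_nm1**3 + 1.0 / L_n**3 + 0.5 / L_np1**3))  # eq. (105)

    eta_hat_new = (e_n * eta_hat
                   + 0.5 * tau * (3.0 * e_n * N_hat_n
                                  - e_nm1 * N_hat_nm1))               # eq. (101)
    return np.real(np.fft.ifft(eta_hat_new)) + alpha
```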
## 6 Numerical results
In this section, we demonstrate the application of the proposed method to moving interface problems through numerical examples. In the first part, we focus on the Hele-Shaw flow: we examine the convergence of the method and use it to simulate various dynamics of a Hele-Shaw bubble, accurately capturing complex behavior such as growth and deformation. We also showcase the method's ability to handle long-time computations by modeling a bubble that develops intricate finger-like structures. In the second part, we tackle the Stefan problem: we evaluate the convergence and stability of the method on multiple examples and show that it robustly tracks the solidification interface and captures its evolution over time. Additionally, we simulate dendritic solidification, both with and without flow effects in the liquid region, demonstrating the versatility of the method in handling different scenarios and capturing the complex dynamics of moving boundaries in solidification processes.
### The Hele-Shaw flow
#### 6.1.1 Convergence test
In this example, we investigate the convergence of the method for solving the Hele-Shaw flow. The initial shape we consider is a four-fold flower, defined as
\[(x(\alpha,0),y(\alpha,0))=r(\alpha)(\cos\alpha,\sin\alpha),\quad r(\alpha)=0.8 +0.2\cos 4\alpha,\quad\alpha\in[0,2\pi). \tag{106}\]
The surface tension coefficient is chosen to be \(0.01\), and the numerical computation is performed within the bounding box of \([-1.5,1.5]^{2}\). The air injection rate is set to zero, leading to the area-preserving property of the Hele-Shaw flow (since the fluid is incompressible). The numerical error is measured by comparing the enclosed area of the curve at \(T=1\) with the initial area. To study the spatial accuracy, we use a fixed time step size of \(\tau=1\times 10^{-3}\) and vary the mesh size as \(\Delta x=3\times 2^{-l}\), with \(l=5,6,\cdots,9\). Similarly, to analyze the temporal accuracy, we use a fixed mesh size of \(\Delta x=3\times 2^{-10}\) and vary the time step size as \(\tau=2^{-l}\times 10^{-3}\), with \(l=1,2,\cdots,5\). We compute the enclosed area of the curve by numerically evaluating the integral \(A[\Gamma]=\frac{1}{2}\int_{\Gamma}\mathbf{x}\cdot\mathbf{n}\,d\mathbf{s}\). The numerical results are shown in Figure 2, and it can be observed that the method exhibits second-order accuracy both spatially and temporally.
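A minimal sketch of this area computation, assuming marker points at uniform \(\alpha_{j}=2\pi j/N\) and spectral differentiation (for a counterclockwise curve, \(\mathbf{x}\cdot\mathbf{n}\,s_{\alpha}=xy_{\alpha}-yx_{\alpha}\) with the outward normal of Appendix A; the paper's exact quadrature may differ):

```python
import numpy as np

def enclosed_area(x, y):
    """A[Gamma] = (1/2) int_0^{2pi} (x*y_alpha - y*x_alpha) d_alpha."""
    N = x.shape[0]
    k = 1j * np.fft.fftfreq(N, d=1.0 / N)   # d/d_alpha in Fourier space
    if N % 2 == 0:
        k[N // 2] = 0.0                     # drop the Nyquist mode
    x_a = np.real(np.fft.ifft(k * np.fft.fft(x)))
    y_a = np.real(np.fft.ifft(k * np.fft.fft(y)))
    # Trapezoidal rule on a periodic integrand is spectrally accurate.
    return np.pi * np.mean(x * y_a - y * x_a)
```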
#### 6.1.2 Bubble relaxation
The Hele-Shaw flow without air injection presents interesting characteristics due to the combined effects of surface tension and incompressibility. These effects result in a curve-shortening and area-preserving behavior [62]. In order to investigate this phenomenon, we consider an initial curve in the form of a six-fold flower, described by the parametric equation:
\[(x(\alpha,0),y(\alpha,0))=r(\alpha)(\cos\alpha,\sin\alpha),\quad r(\alpha)=0.8 +0.2\cos 6\alpha,\quad\alpha\in[0,2\pi). \tag{107}\]
A surface tension coefficient of \(0.01\) is used, and the evolution is computed within a computational domain of \([-1.5,1.5]^{2}\). The numerical simulation is conducted on a \(512\times 512\) grid, with a time step size of \(\tau=0.001\). The evolution of the curve is illustrated in Figure 3. Additionally, the area and length profiles of the curve, as well as the iteration numbers of the GMRES method, are presented in Figure 4. As a result of the stabilizing influence of surface tension, the initially irregular interface gradually relaxes and approaches a circular shape over time. Notably, the enclosed area remains constant throughout the evolution, while the length of the curve decreases over time and eventually converges. These observations align with the theoretical understanding of Hele-Shaw flow without air injection. The GMRES iteration number remains relatively stable and decreases as the curve approaches a circular shape.
Figure 2: (a) shows the spatial accuracy of the method with different mesh sizes \(\Delta x=3\times 2^{-l},l=5,6,\cdots,9\). (b) shows the temporal accuracy of the method with different time steps \(\tau=2^{-l}\times 10^{-3},l=1,2,\cdots,5\).
Figure 4: (a) shows the time evolutions of the enclosed area and the length of the curve. (b) shows the iteration number of the GMRES method.
Figure 3: Morphologies of the interface from \(t=0\) to \(t=1.2\) with a time increment of \(0.2\).
#### 6.1.3 Unstable viscous fingering
In this example, the initial curve is given by a three-fold flower,
\[(x(\alpha,0),y(\alpha,0))=r(\alpha)(\cos\alpha,\sin\alpha),\quad r(\alpha)=0.8+0. 2\cos 3\alpha,\quad\alpha\in[0,2\pi). \tag{108}\]
The air injection rate is chosen as \(J=1\) such that the air bubble demonstrates an unstable growth. The surface tension coefficient varies from \(1\times 10^{-2}\) to \(5\times 10^{-4}\). The computational domain is chosen as \([-4,4]^{2}\). We solve the example on a \(512\times 512\) grid with a time step \(\tau=0.005\). The morphologies of the evolutionary curve are shown in Figure 5 in time increments of \(0.2\) to the final time \(T=3\). The competition between the stabilizing effect of the surface tension and the destabilizing effect of the driving force due to the injection leads to the viscous fingering feature of the Hele-Shaw flow. With small surface tension, which has a weaker stabilizing effect, the growing bubble develops more branches. It can also be observed that for small surface tensions, the symmetry of the interface is difficult to capture, since grid-induced anisotropy dominates the interface evolution.
#### 6.1.4 Long-time computation
In this example, we perform long-time computation with the present method and spatial-temporal rescaling scheme [15] for simulating a large Hele-Shaw bubble. The computation is performed in the scaled frame and then mapped back onto the nonscaled frame. The initial shape is chosen as a nucleus
\[(\bar{x}(\alpha,0),\bar{y}(\alpha,0))=\bar{r}(\alpha)(\cos\alpha,\sin\alpha), \quad\bar{r}(\alpha)=1.0+0.1(\sin 2\alpha+\cos 3\alpha),\quad\alpha\in[0,2\pi). \tag{109}\]
The computational domain is chosen as \([-1.7,1.7]^{2}\). We use a \(1024\times 1024\) grid and a time step of \(\Delta\bar{t}=2\times 10^{-4}\). The interface points are adaptively refined with the criterion \(\Delta\bar{s}>1.5\Delta\bar{x}\). The injection rate and the surface tension coefficient are set as \(J=1\) and \(\sigma=0.001\), respectively. The morphologies of the curve in the nonscaled frame are shown in Figure 6, in (scaled) time increments of \(0.2\). The computation takes \(5\) hours to reach the final scaled time \(\bar{T}=3\) (nonscaled time \(T=203\)). At the final time, the number of marker points on the interface has been refined to \(16384\), and the enclosed area and the length of the interface are \(A=\bar{R}^{2}\bar{A}=1280\) and \(L=\bar{R}\bar{L}/2\pi=169\).
### The Stefan problem
#### 6.2.1 Grid refinement analysis
In this example, we consider a benchmark grid refinement test for the Stefan problem. In the beginning, a solid seed is put into an undercooling liquid domain \(\mathcal{B}=[-2,2]^{2}\). The initial shape is a four-fold flower, which is given by
\[(x(\alpha,0),y(\alpha,0))=r(\alpha)(\cos\alpha,\sin\alpha),\quad r(\alpha)=0.1 +0.02\cos 4\alpha,\quad\alpha\in[0,2\pi). \tag{110}\]
The initial undercooling is chosen as \(St=-0.5\). Isotropic surface tension and kinematic coefficients are chosen as \(\varepsilon_{C}=\varepsilon_{V}=2\times 10^{-3}\). We take the time step as \(\tau=0.001\) and successively refine the grid from \(64\) to \(512\). The morphologies of the liquid-solid interface in time increments of \(0.05\) to the final time \(T=0.8\) are shown in Figure 7. Comparing the results with previous works using the level-set method and the front-tracking method [16; 20], it can be observed that our method has less grid-induced surface tension, which leads to better convergence. The method is also better able to preserve the symmetries of the interface, even on a coarse grid.
Figure 5: Detailed morphologies of the curve with different surface tension coefficients: (a) \(1\times 10^{-2}\);(b) \(5\times 10^{-3}\);(c) \(1\times 10^{-3}\);(d) \(5\times 10^{-4}\).
Figure 6: Numerical results of long-time computation of the Hele-Shaw flow: (a) morphology histories of the curve; (b) and (c) time evolution of the curve length and enclosed area; (d) iteration numbers of the GMRES method.
Figure 7: Grid refinement analysis of the Stefan problem with different grids: (a) \(64\times 64\); (b) \(128\times 128\); (c) \(256\times 256\); (d) \(512\times 512\).
#### 6.2.2 Stability test
In order to demonstrate the stability of the method, we compare the semi-implicit scheme with an explicit Adams-Bashforth scheme. We consider the case that \(\varepsilon_{V}=0\) and \(\varepsilon_{C}=0.05\), which results in a third-order stiffness. The initial shape is chosen as a slightly perturbed circle
\[(x(\alpha,0),y(\alpha,0))=r(\alpha)(\cos\alpha,\sin\alpha),\quad r(\alpha)=1+0.0 2\cos 4\alpha,\quad\alpha\in[0,2\pi). \tag{111}\]
We use a \(256\times 256\) grid for the computational domain \(\mathcal{B}=[-2,2]^{2}\). The interface is discretized with \(128\) points. First, the problem is solved with an explicit scheme with time steps \(\tau=5\times 10^{-5}\) and \(\tau=2.5\times 10^{-5}\). The curves at \(t=0.1\) are plotted in Figure 8(a). The smaller time step \(\tau=2.5\times 10^{-5}\) is stable for the computation, while the larger one \(\tau=5\times 10^{-5}\) is unstable, and the solution quickly blows up. Then, the problem is solved with the semi-implicit scheme with a time step \(\tau=0.01\), and the solution at \(t=0.1\) is plotted in Figure 8(b). Compared with the explicit scheme, the semi-implicit scheme is stable and obtains a correct result with a much larger time step, which makes it more efficient.
#### 6.2.3 Comparison with the solvability theory
In a dendritic growth problem, the growth rate of a dendrite can be predicted by the solvability theory [9; 63]. We compare the numerical result with the solvability result to assess the accuracy of the method. In this example, the initial seed is chosen as a circle with a radius of \(0.1\). The parameters in the Gibbs-Thomson relation are chosen as \(\varepsilon_{V}=0,\varepsilon_{C}(\alpha)=0.001[1+0.4(1-\cos(4\alpha))]\), where \(\alpha\) is the angle between the normal to the interface and the \(x\)-axis. The computational domain is chosen as \([-6,6]^{2}\). We set \(\tau=0.001\) for the computation. The morphologies of the liquid-solid interface and the time evolution of the tip velocity from \(t=0\) to \(t=2.2\) are plotted in Figure 9. The tip velocity converges and agrees with the solvability result of \(1.7\).
#### 6.2.4 Anisotropic dendritic growth
The initial seed is a circle with a radius of \(0.05\) at the center of the computational domain \([-4,4]^{2}\). The undercooling is \(St=-0.65\). The parameters in the Gibbs-Thomson relation are chosen as \(\varepsilon_{V}=0.002\) and \(\varepsilon_{C}(\alpha)=0.002(8/3\sin^{4}(2(\alpha-\alpha_{0})))\), a four-fold anisotropy.
Figure 8: Numerical results of the stability test for the Stefan problem with different time-stepping methods: (a) the Adams-Bashforth scheme; (b) the semi-implicit scheme.
The computation is performed on a \(512\times 512\) grid with a time step \(\tau=0.0002\). The resulting four-fold dendritic growth patterns and the temperature field are shown in Figure 10.
The initial seed is a circle with a radius of \(0.05\) at the center of the computational domain \(\mathcal{B}=[-2,2]^{2}\). The undercooling parameter varies from \(St=-0.55\) to \(St=-0.65\). The parameters in the Gibbs-Thomson relation are chosen as \(\varepsilon_{V}=0.002\), \(\varepsilon_{C}(\alpha)=0.002(8/3\sin^{4}(3\alpha))\), which is a six-fold anisotropy. The six-fold anisotropy leads to snowflake-shaped dendritic growth, which is shown in Figure 11.
#### 6.2.5 Dendritic growth with external flow
We consider the convection effect in the dendritic growth problem. In this example, the Stefan problem with natural convection is solved for simulations. Initially, the seed is a circle with a radius of \(0.05\) at the center of the computational domain \([-2,2]^{2}\), surrounded by undercooled fluid with temperature \(St=-0.5\). Four-fold anisotropic surface tension \(\varepsilon_{C}(\alpha)=0.002(8/3\sin^{4}(2\alpha))\) and local kinematic equilibrium \(\varepsilon_{V}=0\) are applied. The computation is performed with a \(512\times 512\) grid and a time step \(\tau=0.0002\). Inflow and outflow boundary conditions, \(\mathbf{u}=(u_{0},0)^{T}\), are applied on the left and right boundaries, respectively. The no-slip boundary condition, \(\mathbf{u}=\mathbf{0}\), is used on the top and bottom boundaries. In this example, we ignore the buoyancy force, i.e., \(\beta=0\). As a result, the flow convection is driven only by the boundary conditions. When \(u_{0}=0\), the problem is identical to the classical Stefan problem without natural convection. In order to examine the convection effect on the growth pattern, we compare numerical results obtained with different flow velocities in the fluid phase. In Figure 12, profiles of dendritic growth, temperature, and flow fields at \(t=0.1\) are presented. Evolution histories of the \(x\)-components of the left and right tips are shown in Figure 13. It can be observed that the flow convection leads to faster growth of the left branch and slower growth of the right branch. This effect is more evident as the flow velocity increases. Due to the flow convection, the released latent heat is carried by the flow from left to right, which leads to a non-symmetric temperature distribution in the \(x\)-direction.
Figure 9: Numerical results of the dendritic growth problem and comparison with the solvability theory: (a) morphology histories of the interface; (b) time evolution of the tip velocity.
Figure 11: Interface morphologies and the temperature field of the dendritic growth problem with six-fold anisotropy. Snapshots are taken at \(t=0,0.02,0.06\), and \(0.1\).
Figure 10: Interface morphologies and the temperature field of the dendritic growth problem with four-fold anisotropy. Snapshots are taken at \(t=0,0.02,0.06\), and \(0.1\).
#### 6.2.6 Dendritic growth with buoyancy-driven flow
In the final example, we consider the dendritic growth problem with buoyancy-driven flow. The anisotropic surface tension is chosen as a rotated one, \(\varepsilon_{C}(\alpha)=0.002(8/3\sin^{4}(2(\alpha-\pi/4)))\). The no-slip boundary condition is applied for the fluid equation on all four boundaries. Different thermal expansion coefficients are chosen so that the flow in the fluid phase is driven by the buoyancy force. The gravitational acceleration is chosen as \(g=10\). The reference temperature is chosen as the temperature of the surrounding fluid, \(T_{0}=-0.5\). The other parameters are the same as those in the previous example. Numerical results with increasing thermal expansion coefficients are presented in Figure 14. Near the solid-liquid interface, the released latent heat increases the fluid temperature and, as a result, causes fluid density changes and a buoyancy force. Driven by the buoyancy force, the fluid carries the heat and flows from bottom to top, which leads to a non-symmetric temperature distribution in the \(y\)-direction. The two upper branches are restrained from growing due to the accumulated heat, while the two lower branches grow much faster since the heat flows away.
## 7 Discussion
This study presents a novel numerical method for solving two representative moving interface problems using a Cartesian grid-based boundary integral method combined with an interface evolution approach. The proposed method offers several advantages that combine the strengths of Cartesian grid-based solvers and boundary integral methods.
Elliptic and parabolic PDEs with irregular boundaries or interfaces are reformulated as well-conditioned boundary integral equations and are solved using the KFBI method. The KFBI
Figure 12: Details of dendritic growth histories, temperature fields, and flow fields with different inflow velocities at \(t=0.1\) (\(u_{0}=0,2,4\) and \(8\) from left to right).
Figure 13: Evolution histories of \(x\)-components of the left and right tips with different flow velocities.
method utilizes a Cartesian grid-based solver for integral evaluations, eliminating the need for analytical expressions of Green's functions and allowing for efficient utilization of fast PDE solvers like FFTs and geometric multigrid methods. It is important to note that the method is not entirely kernel-free in this work. Green's function is still required to impose artificial boundary conditions on the rectangular domain boundary when dealing with infinite boundary conditions. However, the KFBI method avoids the evaluation of singular and nearly singular integrals, which are typically challenging in quadrature-based boundary integral methods.
The interface evolution is solved using the \(\theta-L\) formulation instead of the more commonly used \(x-y\) formulation. Due to the periodicity of the interface, an accurate Fourier pseudo-spectral method is employed for spatial approximation in the interface evolution problem. Furthermore, the use of the \(\theta-L\) formulation enables the application of a small-scale decomposition approach to address the curvature-induced stiffness in the Hele-Shaw flow and Stefan problem. This approach, combined with a semi-implicit scheme, allows for efficient and stable evolution of the interface by solving the semi-implicit scheme in the Fourier space with the FFT.
While the current work primarily focuses on moving interface problems in two dimensions, certain improvements are needed to solve models in three dimensions, such as solidification problems and two-phase incompressible flows. First, elliptic PDEs with irregular boundaries and interfaces in three dimensions require a three-dimensional version of the KFBI method [52]. Additionally, for parabolic PDEs, dimension-splitting techniques can be employed to accelerate computation [64]. Second, the \(\theta-L\) approach is only applicable to evolutionary curves in two dimensions. To accurately track evolutionary surfaces in three dimensions, different numerical approaches, such as the front-tracking method or the level-set method, are needed. Finally, addressing the stiffness induced by mean curvature in three dimensions poses greater challenges than in two dimensions. Developing an efficient semi-implicit time-stepping scheme is crucial for ensuring the stable evolution of the interface.
## Appendix A Computation of derivative jumps
Suppose \(\Gamma\) is parameterized as \(\mathbf{X}(\alpha)=(x(\alpha),y(\alpha))\) where \(\alpha\) is an arbitrary parameter. Let \(s\) be the arc-length parameter. Suppose \(\Gamma\) is sufficiently smooth, at least in the class \(C^{2}\). The unit outward normal is given by \(\mathbf{n}=(y_{\alpha}/s_{\alpha},-x_{\alpha}/s_{\alpha})\) where \(s_{\alpha}=\sqrt{x_{\alpha}^{2}+y_{\alpha}^{2}}\). Given an interface problem with a constant coefficient, we shall derive jump values of the solution and its derivatives at the point \((x(\alpha),y(\alpha))\in\Gamma\).
Figure 14: Details of dendritic growth histories, temperature fields, and flow fields with different thermal expansion coefficients at \(t=0.1\) (\(\beta=0,10^{3},2\times 10^{3}\) and \(4\times 10^{3}\) from left to right).
### The modified Helmholtz equation
Consider the interface problem of the modified Helmholtz equation
\[\Delta u-c^{2}u=f,\quad\text{in }\Omega^{+}\cup\Omega^{-}, \tag{A.1}\] \[[u]=\Phi,\quad\text{on }\Gamma, \tag{A.2}\] \[\left[\frac{\partial u}{\partial\mathbf{n}}\right]=\Psi,\quad\text{on }\Gamma. \tag{A.3}\]
The interface condition (A.2) implies the zeroth-order jump value
\[[u]=\Phi. \tag{A.4}\]
By taking the derivative of both sides of the interface condition (A.2) with respect to \(\alpha\) and combining the interface condition (A.3), we have a \(2\times 2\) linear system
\[x_{\alpha}[u_{x}]+y_{\alpha}[u_{y}]=\Phi_{\alpha}, \tag{A.5}\] \[y_{\alpha}[u_{x}]-x_{\alpha}[u_{y}]=s_{\alpha}\Psi. \tag{A.6}\]
Solving the linear system gives the values of \([u_{x}]\) and \([u_{y}]\). By taking the derivative of both sides of equations (A.5) and (A.6), and using equation (A.1), we have a \(3\times 3\) linear system for \([u_{xx}]\), \([u_{yy}]\) and \([u_{xy}]\)
\[(x_{\alpha})^{2}[u_{xx}]+(y_{\alpha})^{2}[u_{yy}]+2x_{\alpha}y_{\alpha}[u_{xy}]=\Phi_{\alpha\alpha}-x_{\alpha\alpha}[u_{x}]-y_{\alpha\alpha}[u_{y}], \tag{A.7}\] \[x_{\alpha}y_{\alpha}[u_{xx}]-x_{\alpha}y_{\alpha}[u_{yy}]+((y_{\alpha})^{2}-(x_{\alpha})^{2})[u_{xy}]=s_{\alpha\alpha}\Psi+s_{\alpha}\Psi_{\alpha}-y_{\alpha\alpha}[u_{x}]+x_{\alpha\alpha}[u_{y}], \tag{A.8}\] \[[u_{xx}]+[u_{yy}]=c^{2}\Phi+[f]. \tag{A.9}\]
After solving the linear system, the derivative jump values are obtained.
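A minimal sketch of these solves at a single interface point; the parameterization derivatives and the derivatives of \(\Phi,\Psi\) along \(\Gamma\) are assumed to be precomputed (e.g., spectrally):

```python
import numpy as np

def helmholtz_jumps(xa, ya, xaa, yaa, sa, saa,
                    Phi, Phi_a, Phi_aa, Psi, Psi_a, c, jump_f):
    """Jumps [u], [u_x], [u_y], [u_xx], [u_yy], [u_xy] from (A.4)-(A.9)."""
    ju = Phi                                            # eq. (A.4)
    # First-order jumps: 2x2 system (A.5)-(A.6).
    A1 = np.array([[xa, ya],
                   [ya, -xa]])
    jux, juy = np.linalg.solve(A1, [Phi_a, sa * Psi])
    # Second-order jumps: 3x3 system (A.7)-(A.9).
    A2 = np.array([[xa**2, ya**2, 2.0 * xa * ya],
                   [xa * ya, -xa * ya, ya**2 - xa**2],
                   [1.0, 1.0, 0.0]])
    rhs = [Phi_aa - xaa * jux - yaa * juy,
           saa * Psi + sa * Psi_a - yaa * jux + xaa * juy,
           c**2 * Phi + jump_f]
    juxx, juyy, juxy = np.linalg.solve(A2, rhs)
    return ju, jux, juy, juxx, juyy, juxy
```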
### The modified Stokes equation
Consider the interface problem of the modified Stokes equation
\[\Delta\mathbf{u}-c^{2}\mathbf{u}-\nabla p=\mathbf{f},\quad\text{in }\Omega^{+}\cup\Omega^{-}, \tag{A.10}\] \[\nabla\cdot\mathbf{u}=0,\quad\text{in }\Omega^{+}\cup\Omega^{-}, \tag{A.11}\] \[[\mathbf{u}]=\mathbf{\Phi},\quad\text{on }\Gamma, \tag{A.12}\] \[[T(\mathbf{u},p)]=\mathbf{\Psi},\quad\text{on }\Gamma. \tag{A.13}\]
where \(\mathbf{u}=(u,v)^{T}\), \(T(\mathbf{u},p)=-p\mathbf{n}+(\nabla\mathbf{u}+\nabla\mathbf{u}^{T})\mathbf{n}\), \(\mathbf{f}=(f_{1},f_{2})^{T}\), \(\mathbf{\Phi}=(\Phi_{1},\Phi_{2})^{T}\), \(\mathbf{\Psi}=(\Psi_{1},\Psi_{2})^{T}\). The zeroth-order jump values are obtained from equation (A.12),
\[[u]=\Phi_{1},\quad[v]=\Phi_{2}. \tag{A.14}\]
Taking the derivative of both sides of the interface condition (A.12) with respect to \(\alpha\) and using equations (A.11) and (A.13), we obtain a \(5\times 5\) system for the first-order jump values \([u_{x}]\), \([u_{y}]\), \([v_{x}]\), \([v_{y}]\) and \([p]\),
\[x_{\alpha}[u_{x}]+y_{\alpha}[u_{y}]=\Phi_{1,\alpha}, \tag{A.15}\] \[x_{\alpha}[v_{x}]+y_{\alpha}[v_{y}]=\Phi_{2,\alpha}, \tag{A.16}\] \[2y_{\alpha}[u_{x}]-x_{\alpha}[u_{y}]-x_{\alpha}[v_{x}]-y_{\alpha}[p]=s_{\alpha}\Psi_{1}, \tag{A.17}\] \[y_{\alpha}[u_{y}]+y_{\alpha}[v_{x}]-2x_{\alpha}[v_{y}]+x_{\alpha}[p]=s_{\alpha}\Psi_{2}, \tag{A.18}\] \[[u_{x}]+[v_{y}]=0. \tag{A.19}\]
Taking the derivative of both sides of equations (A.15) to (A.18) with respect to \(\alpha\) and the derivatives of both sides of equation (A.11) with respect to \(x\) and \(y\), and using equation (A.10), we obtain an \(8\times 8\) system for the second-order jump values \([u_{xx}]\), \([u_{yy}]\), \([u_{xy}]\), \([v_{xx}]\), \([v_{yy}]\), \([v_{xy}]\), \([p_{x}]\) and \([p_{y}]\),
\[(x_{\alpha})^{2}[u_{xx}]+(y_{\alpha})^{2}[u_{yy}]+2x_{\alpha}y_{ \alpha}[u_{xy}]=r_{1},\] (A.20) \[(x_{\alpha})^{2}[v_{xx}]+(y_{\alpha})^{2}[v_{yy}]+2x_{\alpha}y_{ \alpha}[v_{xy}]=r_{2},\] (A.21) \[2x_{\alpha}y_{\alpha}[u_{xx}]-x_{\alpha}y_{\alpha}[u_{yy}]+(2(y_ {\alpha})^{2}-(x_{\alpha})^{2})[u_{xy}]\] \[-(x_{\alpha})^{2}[v_{xx}]-x_{\alpha}y_{\alpha}[v_{xy}]-x_{\alpha} y_{\alpha}[p_{x}]-(y_{\alpha})^{2}[p_{y}]=r_{3},\] (A.22) \[(y_{\alpha})^{2}[u_{yy}]+x_{\alpha}y_{\alpha}[u_{xy}]+x_{\alpha} y_{\alpha}[v_{xx}]-2x_{\alpha}y_{\alpha}[v_{yy}]\] \[+((y_{\alpha})^{2}-2(x_{\alpha})^{2})[v_{xy}]+(x_{\alpha})^{2}[p _{x}]+x_{\alpha}y_{\alpha}[p_{y}]=r_{4},\] (A.23) \[[u_{xx}]+[u_{yy}]-[p_{x}]=c^{2}\Phi_{1}+[f_{1}],\] (A.24) \[[v_{xx}]+[v_{yy}]-[p_{y}]=c^{2}\Phi_{2}+[f_{2}],\] (A.25) \[[u_{xx}]+[v_{xy}]=0,\] (A.26) \[[u_{xy}]+[v_{yy}]=0.\] (A.27)
where \(r_{i},i=1,2,\cdots,4\) are given by
\[r_{1} =\Phi_{1,\alpha\alpha}-x_{\alpha\alpha}[u_{x}]-y_{\alpha\alpha}[u _{y}],\] (A.28) \[r_{2} =\Phi_{2,\alpha\alpha}-x_{\alpha\alpha}[v_{x}]-y_{\alpha\alpha}[v _{y}],\] (A.29) \[r_{3} =s_{\alpha\alpha}\Psi_{1}+s_{\alpha}\Psi_{1,\alpha}-2y_{\alpha \alpha}[u_{x}]+x_{\alpha\alpha}[u_{y}]+x_{\alpha\alpha}[v_{x}]+y_{\alpha\alpha }[p],\] (A.30) \[r_{4} =s_{\alpha\alpha}\Psi_{2}+s_{\alpha}\Psi_{2,\alpha}-y_{\alpha \alpha}[u_{y}]-y_{\alpha\alpha}[v_{x}]+2x_{\alpha\alpha}[v_{y}]-x_{\alpha \alpha}[p].\] (A.31)
By solving the three linear systems, the derivative jump values of \(u,v,p\) can be obtained.
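Analogously, a minimal sketch of the \(5\times 5\) first-order solve (A.15)-(A.19) at one interface point (inputs assumed precomputed; the zeroth- and second-order systems are handled in the same way):

```python
import numpy as np

def stokes_first_order_jumps(xa, ya, sa, Phi1_a, Phi2_a, Psi1, Psi2):
    """Jumps [u_x], [u_y], [v_x], [v_y], [p] from eqs. (A.15)-(A.19)."""
    A = np.array([
        [xa,       ya,    0.0,  0.0,       0.0],  # (A.15)
        [0.0,      0.0,   xa,   ya,        0.0],  # (A.16)
        [2.0 * ya, -xa,  -xa,   0.0,      -ya],   # (A.17)
        [0.0,      ya,    ya,  -2.0 * xa,  xa],   # (A.18)
        [1.0,      0.0,   0.0,  1.0,       0.0],  # (A.19)
    ])
    b = np.array([Phi1_a, Phi2_a, sa * Psi1, sa * Psi2, 0.0])
    return np.linalg.solve(A, b)
```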
|
2303.17651
|
Self-Refine: Iterative Refinement with Self-Feedback
|
Like humans, large language models (LLMs) do not always generate the best
output on their first try. Motivated by how humans refine their written text,
we introduce Self-Refine, an approach for improving initial outputs from LLMs
through iterative feedback and refinement. The main idea is to generate an
initial output using an LLMs; then, the same LLMs provides feedback for its
output and uses it to refine itself, iteratively. Self-Refine does not require
any supervised training data, additional training, or reinforcement learning,
and instead uses a single LLM as the generator, refiner, and feedback provider.
We evaluate Self-Refine across 7 diverse tasks, ranging from dialog response
generation to mathematical reasoning, using state-of-the-art (GPT-3.5, ChatGPT,
and GPT-4) LLMs. Across all evaluated tasks, outputs generated with Self-Refine
are preferred by humans and automatic metrics over those generated with the
same LLM using conventional one-step generation, improving by ~20% absolute on
average in task performance. Our work demonstrates that even state-of-the-art
LLMs like GPT-4 can be further improved at test time using our simple,
standalone approach.
|
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, Peter Clark
|
2023-03-30T18:30:01Z
|
http://arxiv.org/abs/2303.17651v2
|
# Self-Refine: Iterative Refinement with Self-Feedback
###### Abstract
Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce Self-Refine, an approach for improving initial outputs from LLMs through iterative feedback and refinement. The main idea is to generate an initial output using an LLM; then, the same LLM provides _feedback_ for its output and uses it to _refine_ itself, iteratively. Self-Refine does not require any supervised training data, additional training, or reinforcement learning, and instead uses a single LLM as the generator, refiner and the feedback provider. We evaluate Self-Refine across 7 diverse tasks, ranging from dialog response generation to mathematical reasoning, using state-of-the-art (GPT-3.5 and GPT-4) LLMs. Across all evaluated tasks, outputs generated with Self-Refine are preferred by humans and automatic metrics over those generated with the same LLM using conventional one-step generation, improving by \(\sim\)20% absolute on average in task performance. Our work demonstrates that even state-of-the-art LLMs like GPT-4 can be further improved at test-time using our simple, standalone approach.1.
Footnote 1: Code and data at [https://selfrefine.info/](https://selfrefine.info/)
## 1 Introduction
Although large language models (LLMs) can generate coherent outputs, they often fall short in addressing intricate requirements. This mostly includes tasks with multifaceted objectives, such as dialogue response generation, or tasks with hard-to-define goals, such as enhancing program readability. In these scenarios, modern LLMs may produce an intelligible initial output, yet may benefit from further iterative refinement--i.e., iteratively mapping a candidate output to an improved one--to ensure that the desired quality is achieved. Iterative refinement typically involves training a refinement model that relies on domain-specific data (e.g., Reid and Neubig (2022); Schick et al. (2022); Welleck et al. (2022)). Other approaches that rely on external supervision or reward models require large training sets or expensive human annotations (Madaan et al., 2021; Ouyang et al., 2022), which may not always be feasible to obtain. These limitations underscore the need for an effective refinement approach that can be applied to various tasks without requiring extensive supervision.
Iterative _self_-refinement is a fundamental characteristic of human problem-solving (Simon, 1962; Flower and Hayes, 1981; Amabile, 1983). Iterative self-refinement is a process that involves creating an initial draft and subsequently refining it based on self-provided feedback. For example, when
drafting an email to request a document from a colleague, an individual may initially write a direct request such as "_Send me the data ASAP_". Upon reflection, however, the writer recognizes the potential impoliteness of the phrasing and revises it to "_Hi Ashley, could you please send me the data at your earliest convenience?_". When writing code, a programmer may implement an initial "quick and dirty" implementation, and then, upon reflection, refactor their code to a solution that is more efficient and readable. In this paper, we demonstrate that LLMs can provide iterative self-refinement without additional training, leading to higher-quality outputs on a wide range of tasks.
We present Self-Refine: an iterative self-refinement algorithm that alternates between two generative steps-feedback and refine. These steps work in tandem to generate high-quality outputs. Given an initial output generated by a model \(\mathcal{M}\), we pass it back to the same model \(\mathcal{M}\) to get _feedback_. Then, the feedback is passed back to the same model to _refine_ the previously-generated draft. This process is repeated either for a specified number of iterations or until \(\mathcal{M}\) determines that no further refinement is necessary. We use few-shot prompting (Brown et al., 2020) to guide \(\mathcal{M}\) to both generate feedback and incorporate the feedback into an improved draft. Figure 1 illustrates the high-level idea, that Self-Refine _uses the same underlying language model to generate feedback and refine its outputs_.
We evaluate Self-Refine on 7 generation tasks that span diverse domains, including natural language and source-code generation. We show that Self-Refine outperforms direct generation from strong LLMs like GPT-3.5 (text-davinci-003 and gpt-3.5-turbo; OpenAI; Ouyang et al., 2022) and GPT-4 (OpenAI, 2023) by 5-40% absolute improvement. In code-generation tasks, Self-Refine improves the initial generation by up to 13% absolute when applied to strong code models such as Codex (code-davinci-002; Chen et al., 2021). We release all of our code, which is easily extensible to other LLMs. In essence, our results show that even when an LLM cannot generate an optimal output on its first try, the LLM can often provide useful feedback and improve its own output accordingly. In turn, Self-Refine provides an effective way to obtain better outputs from a single model without any additional training, via iterative (self-)feedback and refinement.
## 2 Iterative Refinement with Self-Refine
Given an input sequence, Self-Refine generates an initial output, provides feedback on the output, and refines the output according to the feedback. Self-Refine iterates between feedback and refinement until a desired condition is met. Self-Refine relies on a suitable language model and three prompts (for initial generation, feedback, and refinement), and does not require training. Self-Refine is shown in Figure 1 and Algorithm 1. Next, we describe Self-Refine in more detail.
**Initial generation** Given an input \(x\), prompt \(p_{\text{gen}}\), and model \(\mathcal{M}\), Self-Refine generates an initial output \(y_{0}\):
\[y_{0}=\mathcal{M}\left(p_{\text{gen}}\|x\right). \tag{1}\]
Figure 1: Given an input, Self-Refine uses the same underlying language model to generate an initial output, provide feedback on it, and refine the output, iterating until a stopping condition is met.
For example, in Figure 2(d), the model generates functionally correct code for the given input. Here, \(p_{\text{gen}}\) is a task-specific few-shot prompt (or instruction) for an initial generation, and \(\|\) denotes concatenation. The few-shot prompt contains input-output pairs \(\langle x^{(k)},y^{(k)}\rangle\) for the task.2
Footnote 2: Few-shot prompting (also referred to as “in-context learning”) provides a model with a prompt consisting of \(k\) in-context examples of the target task, each in the form of input-output pairs \(\langle x_{i},y_{i}\rangle\) (Brown et al., 2020).
**Feedback** Next, Self-Refine uses the same model \(\mathcal{M}\) to provide feedback \(fb_{t}\) on its own output, given a task-specific prompt \(p_{\text{fb}}\) for generating feedback:
\[fb_{t}=\mathcal{M}\left(p_{\text{fb}}\|x\|y_{t}\right). \tag{2}\]
Intuitively, the feedback may address multiple aspects of the output. For example, in code optimization, the feedback might address the efficiency, readability, and overall quality of the code.
Figure 3: The Self-Refine algorithm. See (§2) for a discussion of each component.
Figure 2: Examples of Self-Refine: an initial output is generated by the base LLM and then passed back to the _same_ LLM to receive feedback, which is then given to the _same_ LLM to refine the output. The top row illustrates this for dialogue generation, where an initial dialogue response can be transformed into a more engaging one that also understands the user by applying feedback. The bottom row illustrates this for code optimization, where the code is made more efficient by applying feedback.
Here, the prompt \(p_{\text{fb}}\) provides examples of feedback in the form of input-output-feedback triples \(\langle x^{(k)},y^{(k)},fb^{(k)}\rangle\). We prompt the model to write feedback that is actionable and specific via \(fb^{(k)}\). By 'actionable', we mean the feedback should contain a concrete action that would likely improve the output. By'specific', we mean the feedback should identify concrete phrases in the output to change. For example, the feedback in Figure 2(e) is _"This code is slow as it uses a for loop which is brute force. A better approach is to use the formula... (n(n+1))/2"_. This feedback is actionable, since it suggests the action 'use the formula...'. The feedback is specific since it mentions the 'for loop'.
**Refine** Next, Self-Refine uses \(\mathcal{M}\) to refine its most recent output, given its own feedback:
\[y_{t+1}=\mathcal{M}\left(p_{\text{refine}}\|x\|y_{t}\|fb_{t}\right). \tag{3}\]
For example, in Figure 2(f), given the initial output and the generated feedback, the model generates a re-implementation that is shorter and runs much faster than the initial implementation. The prompt \(p_{\text{refine}}\) provides examples of improving the output based on the feedback, in the form of input-output-feedback-refined quadruples \(\langle x^{(k)},y_{t}^{(k)},fb_{t}^{(k)},y_{t+1}^{(k)}\rangle\).
**Iterating Self-Refine** Self-Refine alternates between feedback and refine steps until a stopping condition is met. The stopping condition \(\operatorname{stop}(fb_{t},t)\) either stops at a specified timestep \(t\), or extracts a stopping indicator (e.g., a scalar stop score) from the feedback. In practice, the model can be prompted to generate a stopping indicator in \(p_{\text{fb}}\), and the condition is determined per task.
To inform the model about the previous iterations, we retain the history of previous feedback and outputs by appending them to the prompt. Intuitively, this allows the model to learn from past mistakes and avoid repeating them. More precisely, Equation (3) is in fact instantiated as:
\[y_{t+1}=\mathcal{M}\left(p_{\text{refine}}\|x\|y_{0}\|fb_{0}\|...\|y_{t}\|fb_{ t}\right). \tag{4}\]
Finally, we use the last refinement \(y_{t}\) as the output of Self-Refine.
Algorithm 1 summarizes Self-Refine, and Figure 2 shows an example of Self-Refine in the Dialogue Response Generation (Mehri and Eskenazi, 2020) and Code Optimization (Madaan et al., 2023) tasks. Appendix S provides examples of the \(p_{\text{gen}}\), \(p_{\text{fb}}\), \(p_{\text{refine}}\) prompts for various tasks. The key idea is that Self-Refine uses the same underlying LLM to generate, get feedback, and refine its outputs given its own feedback. It relies only on supervision present in the few-shot examples.
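To make the loop concrete, here is a minimal sketch of Algorithm 1, assuming a hypothetical `llm(prompt)` completion function and task-specific few-shot prompt strings `p_gen`, `p_fb`, `p_refine` (the actual prompts are given in Appendix S); the stop test shown is just one possible instantiation of \(\operatorname{stop}(fb_{t},t)\):

```python
def self_refine(x, llm, p_gen, p_fb, p_refine, max_iters=4):
    """Iterative refinement with self-feedback (sketch of Algorithm 1)."""
    y = llm(p_gen + x)                     # initial generation, eq. (1)
    history = ""
    for t in range(max_iters):
        fb = llm(p_fb + x + y)             # feedback, eq. (2)
        if "everything looks good" in fb.lower():   # stop(fb_t, t)
            break
        history += y + fb                  # retain outputs and feedback
        y = llm(p_refine + x + history)    # refine with history, eq. (4)
    return y
```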
## 3 Evaluation
We evaluate Self-Refine on 7 diverse tasks: Dialogue Response Generation (Appendix M; Mehri and Eskenazi, 2020), Code Optimization (Appendix N; Madaan et al., 2023), Code Readability Improvement (Appendix L; Puri et al., 2021), Math Reasoning (Appendix O; Cobbe et al., 2021), Sentiment Reversal (Appendix P; Zhang et al., 2015), and we introduce two new tasks: Acronym Generation (Appendix Q) and Constrained Generation (a harder version of Lin et al. (2020) with 20-30 keyword constraints instead of 3-5; Appendix R)
Examples for all tasks and dataset statistics are provided in Table 4 (Appendix A).
### Instantiating Self-Refine
We instantiate Self-Refine following the high-level description in Section 2. The feedback-refine iterations continue until the desired output quality or task-specific criterion is reached, up to a maximum of 4 iterations. To make our evaluation consistent across different models, we implemented both feedback and refine as few-shot prompts even with models that respond well to instructions, such as ChatGPT and GPT-4.
**Base LLMs** Our main goal is to evaluate whether we can improve the performance of any strong base LLM using Self-Refine. Therefore, we compare Self-Refine to the same base LLMs but without feedback-refine iterations. We used three main strong base LLMs across all tasks: GPT-3.5 (text-davinci-003), ChatGPT (gpt-3.5-turbo), and GPT-4 (OpenAI, 2023). For code-based tasks, we also experimented with Codex (code-davinci-002). In all tasks, either GPT-3.5 or GPT-4 is the previous state-of-the-art.3 We used the same prompts from previous work when
available (such as for Code Optimization and Math Reasoning); otherwise, we created prompts as detailed in Appendix S. We use greedy decoding with a temperature of 0.7 for all setups.
### Metrics
We report three types of metrics:
* Task-specific metrics: When available, we use automated metrics from prior work (Math Reasoning: % solve rate; Code Optimization: % programs optimized; Constrained Generation: coverage %).
* Human-pref: In Dialogue Response Generation, Code Readability Improvement, Sentiment Reversal, and Acronym Generation, since no automated metrics are available, we perform a blind human A/B evaluation on a subset of the outputs to select the preferred output. Additional details are provided in Appendix C.
* GPT-4-pref: In addition to human-pref, we use GPT-4 as a proxy for human preference following prior work (Fu et al., 2023; Chiang et al., 2023; Geng et al., 2023; Sun et al., 2023), and found high correlation (82% for Sentiment Reversal, 68% for Acronym Generation, and 71% for Dialogue Response Generation) with human-pref. For Code Readability Improvement, we prompt GPT-4 to calculate the fraction of variables that are appropriately named given the context (e.g., x = [] \(\rightarrow\) input_buffer = []). Additional details are provided in Appendix D.
### Results
Table 1 shows our main results:
**Self-Refine consistently improves over base models** across all model sizes, and additionally outperforms the previous state-of-the-art across all tasks. For example, GPT-4+Self-Refine improves over the base GPT-4 by 8.7% (absolute) in Code Optimization, increasing optimization percentage from 27.3% to 36.0%. Confidence intervals are provided in Appendix J. For code-based tasks, we found similar trends when using Codex; those results are included in Appendix F.
One of the tasks in which we observe the highest gains compared to the base models is Constrained Generation, where the model is asked to generate a sentence containing up to 30 given concepts. We believe that this task benefits significantly from Self-Refine because there are more opportunities to miss some of the concepts on the first attempt, and thus Self-Refine allows the model to fix these mistakes subsequently. Further, this task has an extremely large number of reasonable outputs, and thus Self-Refine allows the model to better explore the space of possible outputs.
In preference-based tasks such as Dialogue Response Generation, Sentiment Reversal, and Acronym Generation, Self-Refine leads to especially high gains. For example, in Dialogue Response Generation, the GPT-4 preference score improves by 49.2%, from 25.4% to 74.6%. Similarly, we see remarkable improvements in the other preference-based tasks across all models.
The modest performance gains in Math Reasoning can be traced back to the inability to accurately identify whether there is any error. In math, errors can be nuanced and sometimes limited to a single line or incorrect operation. Besides, a consistent-looking reasoning chain can deceive LLMs to
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{GPT-3.5} & \multicolumn{2}{c}{ChatGPT} & \multicolumn{2}{c}{GPT-4} \\ \cline{2-7} Task & Base & +Self-Refine & Base & +Self-Refine & Base & +Self-Refine \\ \hline Sentiment Reversal & 8.8 & **30.4** (\(\uparrow\)21.6) & 11.4 & **43.2** (\(\uparrow\)31.8) & 3.8 & **36.2** (\(\uparrow\)32.4) \\ Dialogue Response & 36.4 & **63.6** (\(\uparrow\)27.2) & 40.1 & **59.9** (\(\uparrow\)19.8) & 25.4 & **74.6** (\(\uparrow\)49.2) \\ Code Optimization & 14.8 & **23.0** (\(\uparrow\)8.2) & 23.9 & **27.5** (\(\uparrow\)3.6) & 27.3 & **36.0** (\(\uparrow\)8.7) \\ Code Readability & 37.4 & **51.3** (\(\uparrow\)13.9) & 27.7 & **63.1** (\(\uparrow\)35.4) & 27.4 & **56.2** (\(\uparrow\)28.8) \\ Math Reasoning & **64.1** & **64.1** (\(\uparrow\)0.7) & 74.8 & **75.0** (\(\uparrow\)0.2) & 92.9 & **93.1** (\(\uparrow\)0.2) \\ Acronym Generation & 41.6 & **56.4** (\(\uparrow\)14.8) & 27.2 & **37.2** (\(\uparrow\)10.0) & 30.4 & **56.0** (\(\uparrow\)25.6) \\ Constrained Generation & 28.0 & **37.0** (\(\uparrow\)9.0) & 44.0 & **67.0** (\(\uparrow\)23.0) & 15.0 & **45.0** (\(\uparrow\)30.0) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Self-Refine results on various tasks using GPT-3.5, ChatGPT, and GPT-4 as base LLM. Self-Refine consistently improves LLM. Metrics used for these tasks are defined in Section 3.2.
think that "everything looks good" (e.g., ChatGPT feedback for 94% instances is 'everything looks good'). In Appendix H.1, we show that the gains with Self-Refine on Math Reasoning are much bigger (5%+) if an external source can identify if the current math answer is incorrect.
**Improvement is consistent across base LLM sizes** Generally, GPT-4+Self-Refine performs better than GPT-3.5+Self-Refine and ChatGPT+Self-Refine across all tasks, even in tasks where the initial base results of GPT-4 were lower than GPT-3.5 or ChatGPT. We thus believe that Self-Refine allows stronger models (such as GPT-4) to unlock their full potential, even in cases where this potential is not expressed in the standard, single-pass, output generation. Comparison to additional strong baselines is provided in Appendix F.
## 4 Analysis
The three main steps of Self-Refine are feedback, refine, and repeating them iteratively. In this section, we perform additional experiments to analyze the importance of each of these steps.
**The impact of feedback quality** Feedback quality plays a crucial role in Self-Refine. To quantify its impact, we compare Self-Refine, which utilizes specific, actionable feedback, with two ablations: one using generic feedback and another without feedback (the model may still iteratively refine its generations, but is not explicitly provided feedback to do so). For example, in the Code Optimization task, actionable feedback, such as _Avoid repeated calculations in the for loop_, pinpoints an issue and suggests a clear improvement. Generic feedback, like _Improve the efficiency of the code_, lacks this precision and direction. Table 2 shows feedback's clear influence.
In Code Optimization, performance slightly dips from 27.5 (Self-Refine feedback) to 26.0 (generic feedback), and further to 24.8 (no feedback). This suggests that while generic feedback offers some guidance - specific, actionable feedback yields superior results.
This effect is more pronounced in tasks like Sentiment Reversal, where changing from our feedback to generic feedback leads to a significant performance drop (43.2 to 31.2), and the task fails without feedback. Similarly, in Acronym Generation, without actionable feedback, performance drops from 56.4 to 48.0, even with iterative refinements. These results highlight the importance of specific, actionable feedback in our approach. Even generic feedback provides some benefit, but the best results are achieved with targeted, constructive feedback.
**How important are the multiple iterations of feedback-refine?** Figure 4 demonstrates that, on average, the quality of the output improves as the number of iterations increases. For instance, in the Code Optimization task, the initial output (\(y_{0}\)) has a score of 22.0, which improves to 28.8 after three iterations (\(y_{3}\)). Similarly, in the Sentiment Reversal task, the initial output has a score of 33.9, which increases to 36.8 after three iterations. This trend of improvement is also evident in Constrained Generation, where the score increases from 29.0 to 49.7 after three iterations. Figure 4 highlights the diminishing returns in the improvement as the number of iterations increases. Overall, having multiple feedback-refine iterations significantly enhances the quality of the output, although the marginal improvement naturally decreases with more iterations.
The performance may not always increase monotonically with iterations: in multi-aspect feedback tasks like Acronym Generation, the output quality can vary during iteration, improving in one aspect while declining in another. To counter this, Self-Refine generates numerical scores for different quality aspects, leading to a balanced evaluation and appropriate output selection.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Task & Self-Refine feedback & Generic feedback & No feedback \\ \hline Code Optimization & **27.5** & 26.0 & 24.8 \\ Sentiment Reversal & **43.2** & 31.2 & 0 \\ Acronym Generation & **56.4** & 54.0 & 48.0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Prompting to generate generic feedback (or having the model generate no feedback at all) leads to reduced scores, indicating the importance of the feedback step of Self-Refine. These experiments were performed with ChatGPT (Code Optimization and Sentiment Reversal) and GPT-3.5 (Acronym Generation), and metrics used are defined in Section 3.2.
**Can we just generate multiple outputs instead of refining?** Does Self-Refine improve because of the iterative refinement, or just because it generates _more_ outputs? We compare Self-Refine with ChatGPT when ChatGPT generates \(k=4\) samples (but without feedback and refinement). Then, we compare the performance of Self-Refine against these \(k\) initial outputs in a 1 vs. \(k\) evaluation; in other words, we assess whether Self-Refine can outperform _all_ \(k\) initial outputs. The results of this experiment are illustrated in Figure 6 (Appendix H). Despite the increased difficulty of the 1 vs. \(k\) setting, the outputs of Self-Refine are still preferred by humans over _all_ \(k\) initial outputs. This shows the importance of refining according to feedback over the alternative of simply generating multiple initial outputs.
**Does Self-Refine work with weaker models?** The experiments in Section 3.3 were performed with some of the strongest available models; does Self-Refine work with smaller or weaker models as well? To investigate this, we instantiated Self-Refine with Vicuna-13B (Chiang et al., 2023), a
Figure 4: **Left**: Iteration-wise score improvements. Early iterations significantly improve output quality, and scores generally keep improving with more iterations. **Right**: Self-Refine performance improvements with iterations. Most gains (\(\Delta\)) are in the initial iterations for both Code Opt. and Sentiment Reversal. The numbers are averaged over ChatGPT, GPT-3.5, and GPT-4. Task abbreviations: C. Opt. (Code Optimization), S. Rev. (Sentiment Reversal), C. Gen. (Constrained Generation).
Figure 5: Comparison of code generated by Madaan et al. (2023) (left) and the output after applying Self-Refine (right). The initial code by the baseline, which is nearly identical to the slower input program, fails to improve the efficiency and merely alters the logic for reading input. Self-Refine first generates feedback that diagnoses that _This code is slow because it is using six nested loops to iterate through all possible combinations of coins to pay the amount_, and suggests that _a more efficient approach would be..._. Self-Refine then uses this feedback to generate the revised code (right), reducing the time complexity to \(\mathcal{O}(amount*coins)\). The full example is provided in Appendix H.
less powerful base model. While Vicuna-13B is capable of generating initial outputs, it struggles significantly with the refinement process. Specifically, Vicuna-13B was not able to consistently generate the feedback in the required format. Furthermore, even when provided with Oracle or hard-coded feedback, it often failed to adhere to the prompts for refinement. Instead of refining its output, Vicuna-13B either repeated the same output or generated a hallucinated conversation, rendering the outputs less effective. We thus hypothesize that since Vicuna-13B was trained on conversations, it does not generalize as well as instruction-based models to test-time few-shot tasks. Example output and analysis is provided in Appendix G.
**Qualitative Analysis.** We conduct a qualitative analysis of the feedback generated by Self-Refine and its subsequent refinements. We manually analyze 70 samples in total (35 success cases and 35 failure cases) for Code Optimization (Madaan et al., 2023) and Math Reasoning (Cobbe et al., 2021). For both Math Reasoning and Code Optimization, we found that the feedback was predominantly actionable, with the majority identifying problematic aspects of the original generation and suggesting ways to rectify them.
When Self-Refine failed to improve the original generation, the majority of issues were due to erroneous feedback rather than faulty refinements. Specifically, 33% of unsuccessful cases were due to feedback inaccurately pinpointing the error's location, while 61% were a result of feedback suggesting an inappropriate fix. Only 6% of failures were due to the refiner incorrectly implementing good feedback. These observations highlight the vital role that accurate feedback plays in Self-Refine.
In successful cases, the refiner was guided by accurate and useful feedback to make precise fixes to the original generation in 61% of the cases. Interestingly, the refiner was capable of rectifying issues even when the feedback was partially incorrect, which was the situation in 33% of successful cases. This suggests resilience to sub-optimal feedback. Future research could focus on examining the refiner's robustness to various types of feedback errors and exploring ways to enhance this resilience. In Figure 5, we illustrate how Self-Refine significantly improves program efficiency by transforming a brute force approach into a dynamic programming solution, as a result of insightful feedback. Additional analysis on other datasets such as Dialogue Response Generation is provided in Appendix H.
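To make the Figure 5 example concrete, the efficient solution the feedback points to is the standard dynamic program for counting coin combinations; the following is a generic reconstruction (ours, not the model's actual generated output) that runs in \(\mathcal{O}(amount \times coins)\):

```python
# Count the ways to pay `amount` using the given coin denominations.
# A single pass per denomination replaces the six nested loops of the
# brute-force version described in Figure 5.
def count_ways(amount, coins):
    ways = [0] * (amount + 1)
    ways[0] = 1  # one way to pay 0: use no coins
    for coin in coins:
        for total in range(coin, amount + 1):
            ways[total] += ways[total - coin]
    return ways[amount]

assert count_ways(5, [1, 2, 5]) == 4  # 5, 2+2+1, 2+1+1+1, 1+1+1+1+1
```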
**Going Beyond Benchmarks.** While our evaluation focuses on benchmark tasks, Self-Refine is designed with broader applicability in mind. We explore this in a real-world use case of website generation, where the user provides a high-level goal and Self-Refine assists in iteratively developing the website. Starting from a rudimentary initial design, Self-Refine refines HTML, CSS, and JS to evolve the website in terms of both usability and aesthetics. This demonstrates the potential of Self-Refine in real-world, complex, and creative tasks. See Appendix I for examples and further discussion, including the broader, societal impact of our work.
## 5 Related work
Leveraging human- and machine-generated natural language (NL) feedback for refining outputs has been effective for a variety of tasks, including summarization (Scheurer et al., 2022), script generation (Tandon et al., 2021), program synthesis (Le et al., 2022; Yasunaga and Liang, 2020), and other tasks (Bai et al., 2022; Schick et al., 2022; Saunders et al., 2022; Bai et al., 2022; Welleck et al., 2022). Refinement methods differ in the source and format of feedback, and the way that a refiner is obtained. Table 3 summarizes some related approaches; see Appendix B for an additional discussion.
**Source of feedback.** Humans have been an effective source of feedback (Tandon et al., 2021; Elgohary et al., 2021; Tandon et al., 2022; Bai et al., 2022). Since human feedback is costly, several approaches use a scalar reward function as a surrogate of (or alternative to) human feedback (e.g., Bai et al., 2022; Liu et al., 2022; Lu et al., 2022; Le et al., 2022; Welleck et al., 2022). Alternative sources such as compilers (Yasunaga and Liang, 2020) or Wikipedia edits (Schick et al., 2022) can provide domain-specific feedback. Recently, LLMs have been used to generate feedback for general domains (Fu et al., 2023; Peng et al., 2023; Yang et al., 2022). However, ours is the only method that generates feedback using an LLM on its _own_ output, for the purpose of refining with the same LLM.
**Representation of feedback.** The form of feedback can generally be divided into natural language (NL) and non-NL feedback. Non-NL feedback can come in human-provided example pairs (Dasgupta
et al., 2019) or scalar rewards (Liu et al., 2022; Le et al., 2022b). In this work, we use NL feedback, since this allows the model to easily provide _self_-feedback using the same LM that generated the output, while leveraging existing pretrained LLMs such as GPT-4.
**Types of refiners.** Pairs of feedback and refinement have been used to learn supervised refiners (Schick et al., 2022b; Du et al., 2022; Yasunaga and Liang, 2020; Madaan et al., 2021). Since gathering supervised data is costly, some methods learn refiners using model generations (Welleck et al., 2022; Peng et al., 2023); however, these refiners are trained for each new domain. Finally, Yang et al. (2022) use prompted feedback and refinement specifically tailored for story generation. In this work, we avoid training a separate refiner, and show that the same model can be used as both the refiner and the source of feedback across multiple domains.
**Non-refinement reinforcement learning (RL) approaches.** Rather than having explicit refinement, an alternative way to incorporate feedback is to optimize a scalar reward function, e.g., with reinforcement learning (Stiennon et al., 2020; Lu et al., 2022; Le et al., 2022a). These methods differ from Self-Refine in two ways: first, the model does not access feedback on an intermediate generation; second, these RL methods require updating the model's parameters, unlike Self-Refine.
## 6 Limitations and Discussion
The main limitation of our approach is that the base models need to have sufficient few-shot modeling or instruction-following abilities, in order to learn to provide feedback and to refine in an in-context fashion, without having to train supervised models and rely on supervised data.
Further, the experiments in this work were performed with language models that are not open-sourced, namely GPT-3.5, ChatGPT, GPT-4, and Codex. Existing literature (Ouyang et al., 2022) does not fully describe the details of these models, such as the pretraining corpus, model sizes, and model biases. Further, these models are not free to use, and using them for research requires some funding. Nonetheless, we release our code and model outputs to ensure the reproducibility of our work.
Another limitation of our work is that we exclusively experiment with datasets in English. In other languages, the current models may not provide the same benefits.
Finally, there is a possibility for bad actors to use prompting techniques to steer a model to generate more toxic or harmful text. Our approach does not explicitly guard against this.
## 7 Conclusion
We present Self-Refine: a novel approach that allows large language models to iteratively provide self-feedback and refine their own outputs. Self-Refine operates within a single LLM, requiring neither additional training data nor reinforcement learning. We demonstrate the simplicity and ease of use of Self-Refine across a wide variety of tasks. By showcasing the potential of Self-Refine in diverse tasks, our research contributes to the ongoing exploration and development of large language models, with the aim of reducing the cost of human creative processes in real-world settings. We
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & Supervision-free refiner & Supervision-free feedback & Multi-aspect feedback & Iterative \\ \hline
**Learned refiners**: PEER (Schick et al., 2022b), Self-critique (Saunders et al., 2022b), CodeRL (Le et al., 2022b), Self-correction (Welleck et al., 2022) & & & & \\ \hline
**Prompted refiners**: Augmenter (Peng et al., 2023), Re\({}^{3}\) (Yang et al., 2022), Reflexion (Shinn et al., 2023) & & & & \\ \hline
**Self-Refine** (this work) & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: A comparison of Self-Refine to closely related prior refinement approaches.
hope that our iterative approach will help drive further research in this area. To this end, we make all our code, data and prompts anonymously available at [https://selfrefine.info/](https://selfrefine.info/).
|
2307.14877
|
Spectral Metric and Einstein Functionals for Hodge-Dirac operator
|
We examine the metric and Einstein bilinear functionals of differential forms
introduced in Adv.Math.,Vol.427,(2023)1091286, for Hodge-Dirac operator
$d+\delta$ on an oriented even-dimensional Riemannian manifold. We show that
they reproduce these functionals for the canonical Dirac operator on a spin
manifold up to a numerical factor. Furthermore, we demonstrate that the
associated spectral triple is spectrally closed, which implies that it is
torsion-free.
|
Ludwik Dąbrowski, Paweł Zalecki, Andrzej Sitarz
|
2023-07-27T14:07:58Z
|
http://arxiv.org/abs/2307.14877v3
|
# Spectral metric and Einstein functionals for the Hodge-Dirac operator
###### Abstract.
We examine the metric and Einstein bilinear functionals of differential forms introduced in [8], for the Hodge-Dirac operator \(d+\delta\) on an oriented even-dimensional Riemannian manifold. We show that they reproduce these functionals for the canonical Dirac operator on a spin manifold up to a numerical factor. Furthermore, we demonstrate that the associated spectral triple is spectrally closed, which implies that it is torsion-free.
This work is supported by the Polish National Science Centre grant 2020/37/B/ST1/01540. Keywords: _Noncommutative geometry, Einstein tensor, spectral geometry, Wodzicki residue._
## 1. Introduction
Spectral geometry investigates relationships between geometric structures of manifolds and the spectra of certain differential operators. Its direct and inverse problems are inextricably linked to other areas of mathematics such as number theory, representation theory, and areas of mathematical physics such as quantum mechanics and general relativity. In this regard, starting with the Laplace-Beltrami operator on a closed Riemannian manifold, general Laplace-type operators have been extensively studied, and their spectra provide insights into the geometry and topology of the underlying space. The distribution of eigenvalues, for example, reveals information about the curvature or shape and global geometric properties such as diameter or volume, connectivity, or the presence of holes. In this vein, the Dirac-type operators have also been studied, beginning with the canonical Dirac operator on the spin manifold. When subsumed into Connes' concept of spectral triples [1, 2], they "can hear the shape of a drum" [13] in the sense that their equivalence (a suitably strengthened isospectrality) implies the isometricity of manifolds in virtue of the reconstruction theorem [4]. Furthermore, they allow for broad and captivating generalisations in noncommutative geometry.
Various (interrelated) spectral schemes that generate geometric objects on manifolds such as volume, scalar curvature, and other scalar combinations of curvature tensors and their derivatives are the small-time asymptotic expansion of the (localised) trace of the heat kernel [10, 11], certain values or residues of the (localised) zeta function of the Laplacian, the spectral action, and the Wodzicki residue \(\mathcal{W}\) (also known as the noncommutative residue). In this paper, we focus on the latter one, which is the unique (up to multiplication by a constant) tracial state on the algebra of pseudo-differential operators (\(\Psi\)DO) on a complex vector bundle \(E\) over a compact manifold \(M\) of dimension \(n\geq 2\) [12, 18]. For the oriented manifold \(M\) it is given by a simple formula involving the integral of the symbol of order \(-n\) over the unit cosphere bundle.
In this paper, we focus on the Hodge-Dirac operator \(d+\delta\) acting on (complex) differential forms \(\Omega(M)\) of arbitrary order on an oriented even-dimensional Riemannian manifold \(M\). It is worth mentioning that the associated Hodge-Dirac spectral triple is characterised [7] by the fact that the Hilbert space of square-integrable forms provides a Morita equivalence \(Cl(M)-Cl(M)\) bimodule, where \(Cl(M)\) is the \(C^{*}\)-algebra of continuous sections of the bundle of Clifford algebras on \(M\). As is well known, the canonical spectral triple on a spin manifold is instead characterised by the fact that its Hilbert space of square-integrable Dirac spinors provides a Morita equivalence \(Cl(M)-C(M)\) bimodule, where \(C(M)\) is the algebra of continuous complex functions on \(M\). As our first main result, we demonstrate that these two different pivotal cases yield in fact equal spectral metric and Einstein functionals (up to a numerical factor). Moreover, as our second main result, we prove that the associated spectral triple is spectrally closed, that is, for any operator \(T\) of zero order,
\[\mathcal{W}(TD|D|^{-n})\equiv 0.\]
A forthcoming result [9] demonstrates that, as a consequence, the Hodge-Dirac operator has no torsion.
## 2. Preliminaries
Let \(n=2k\) be the dimension of an oriented, closed, smooth Riemannian manifold \(M\). We will use capital letters to denote increasing sequences of numbers between \(1\) and \(n\), of fixed length \(0\leq\ell\leq n\). A differential \(\ell\)-form \(\omega=\sum_{J}\omega_{J}dx^{J}\) is determined by its coefficients \(\omega_{J}\), with respect to coordinates indicated by the multi-index \(J\), where with a slight abuse of notation \(0\)-forms (i.e. functions) will correspond to \(J=\emptyset\).
We introduce the operators \(\lambda_{+}^{j}\) and \(\lambda_{-}^{j}\) which respectively raise/lower the degree of forms, with components given by
\[(\lambda_{+}^{p})_{J}^{I}=\epsilon_{pJ}^{I},\qquad(\lambda_{-}^{p})_{J}^{I}=\epsilon_{J}^{pI},\]
where \(\epsilon_{pJ}^{I}=(-1)^{|\pi|}\) if the juxtaposed index \(pJ\) is a permutation \(\pi\) of \(I\) and \(\epsilon_{pJ}^{I}=0\) otherwise, and similarly for \(\epsilon_{J}^{pI}\). They satisfy
\[\begin{split}\lambda_{+}^{p}\lambda_{+}^{r}+\lambda_{+}^{r} \lambda_{+}^{p}&=0,\\ \lambda_{-}^{p}\lambda_{-}^{r}+\lambda_{-}^{r}\lambda_{-}^{p}& =0,\\ \lambda_{+}^{p}\lambda_{-}^{r}+\lambda_{-}^{r}\lambda_{+}^{p}& =\delta_{pr}\,\mathrm{id},\end{split} \tag{2.1}\]
which follow from the relations (cf. [17])
\[\begin{split}\sum_{K}\epsilon_{pK}^{I}\epsilon_{rJ}^{K}=\epsilon _{prJ}^{I},&\sum_{K}\epsilon_{K}^{pI}\epsilon_{J}^{rK}=\epsilon_ {rpJ}^{I},\\ \sum_{K}\epsilon_{pK}^{I}\epsilon_{J}^{rK}=\delta_{pr}\epsilon_{J }^{I}-\epsilon_{pJ}^{rI},&\sum_{K}\epsilon_{K}^{rI}\epsilon_{pJ}^{ K}=\epsilon_{pJ}^{rI},\end{split} \tag{2.2}\]
where the juxtaposed indices can be ordered using a signed permutation. We also introduce
\[\gamma^{p}=-i(\lambda_{+}^{p}-\lambda_{-}^{p}),\]
which satisfy the following Clifford algebra relation
\[\{\gamma^{p},\gamma^{r}\}=2\delta_{pr}.\]
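These relations are easy to check numerically. The following sketch (a verification of ours, not part of the original argument, using a standard Jordan-Wigner realisation of \(\lambda_{\pm}^{p}\) on the \(2^{n}\)-dimensional space of forms, here with \(n=3\)) confirms (2.1) and the Clifford relation:

```python
import numpy as np

n = 3
I2, Z = np.eye(2), np.diag([1.0, -1.0])
c = np.array([[0.0, 0.0], [1.0, 0.0]])  # raises the degree in one slot

def kron_list(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

# lambda_+^p = Z x ... x Z x c x I x ... x I (the Z-string gives the sign)
lam_p = [kron_list([Z] * p + [c] + [I2] * (n - p - 1)) for p in range(n)]
lam_m = [m.T for m in lam_p]                 # adjoints (real matrices)
gamma = [-1j * (a - b) for a, b in zip(lam_p, lam_m)]

def anti(a, b):
    return a @ b + b @ a

for p in range(n):
    for r in range(n):
        d = (p == r) * np.eye(2 ** n)
        assert np.allclose(anti(lam_p[p], lam_p[r]), 0)
        assert np.allclose(anti(lam_m[p], lam_m[r]), 0)
        assert np.allclose(anti(lam_p[p], lam_m[r]), d)
        assert np.allclose(anti(gamma[p], gamma[r]), 2 * d)
print("relations (2.1) and the Clifford relation hold for n =", n)
```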
In the rest of the paper, we employ normal coordinates \(x\) centred around some fixed point on the manifold. Recall that then the components of the metric tensor \(g\), its covariant (raised) components, and the square root of the determinant of the matrix of the components of \(g\) and the components of the Levi-Civita connection have the following Taylor expansion:
\[\begin{split}& g_{ab}=\delta_{ab}-\frac{1}{3}R_{acbd}x^{c}x^{d}+o( \mathbf{x^{2}}),\\ & g^{ab}=\delta_{ab}+\frac{1}{3}R_{acbd}x^{c}x^{d}+o(\mathbf{x^{2 }}),\\ &\sqrt{\text{det}(g)}=1-\frac{1}{6}\text{Ric}_{ab}x^{a}x^{b}+o( \mathbf{x^{2}}),\\ &\Gamma^{a}_{bc}(x)=-\frac{1}{3}(R_{abcd}+R_{acbd})x^{d}+o( \mathbf{x^{2}}).\end{split} \tag{2.3}\]
Here \(R_{acbd}\) and \(\text{Ric}_{ab}\) are the components of the Riemann and Ricci tensors, respectively, at the point with \(\mathbf{x}=0\) and we use the notation \(o(\mathbf{x^{k}})\) to denote that we expand a function up to the polynomial of order \(k\) in the normal coordinates.
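As a quick consistency check of the first two expansions (again our verification, not part of the original argument), one can confirm symbolically that the displayed \(g^{ab}\) inverts \(g_{ab}\) up to \(o(\mathbf{x^{2}})\), the quartic terms being of higher order:

```python
import sympy as sp

n = 2
x = sp.symbols(f"x0:{n}")
# generic symbols for the curvature components at x = 0
R = {(a, c, b, d): sp.Symbol(f"R_{a}{c}{b}{d}")
     for a in range(n) for c in range(n) for b in range(n) for d in range(n)}

def corr(a, b, sign):
    # the +-(1/3) R_{acbd} x^c x^d correction term
    return sign * sp.Rational(1, 3) * sum(
        R[(a, c, b, d)] * x[c] * x[d] for c in range(n) for d in range(n))

g_dn = sp.Matrix(n, n, lambda a, b: sp.KroneckerDelta(a, b) + corr(a, b, -1))
g_up = sp.Matrix(n, n, lambda a, b: sp.KroneckerDelta(a, b) + corr(a, b, +1))

def drop_higher_order(expr):
    # discard quartic terms in x, which are o(x^2)
    return sum(t for t in sp.Add.make_args(sp.expand(expr))
               if t != 0 and sp.total_degree(t, *x) < 4)

assert (g_dn * g_up).applyfunc(drop_higher_order) == sp.eye(n)
print("g^{ab} inverts g_{ab} up to o(x^2)")
```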
### Hodge-Dirac operator
We focus on the Hodge-Dirac operator \(D=d+d^{*}\), where \(d\) is the exterior derivative and \(d^{*}\) is its (formal) adjoint. Using our notation, we compute the symbol of \(D\),
\[\sigma(D)=(i\lambda_{+}^{p}-ig^{pr}\lambda_{-}^{r})\xi_{p}+\lambda_{-}^{p} \lambda_{+}^{r}\lambda_{-}^{s}\Gamma_{rt}^{s}g^{pt}, \tag{2.4}\]
which in normal coordinates takes form
\[\sigma(D)=-\gamma^{p}\xi_{p}-\frac{1}{3}i\lambda_{-}^{p}R_{sapb}x^{a}x^{b}\xi_ {s}-\frac{1}{3}\lambda_{-}^{p}\lambda_{+}^{r}\lambda_{-}^{s}(R_{srpa}+R_{spra} )x^{a}+o(\mathbf{x^{2}}). \tag{2.5}\]
We then compute the symbols of the Hodge-Dirac Laplacian \(D^{2}\) in normal coordinates, up to the orders relevant for our purposes.
**Lemma 2.1**.: _The three homogeneous symbols of \(D^{2}\) read_
\[\begin{split}\mathfrak{a}_{2}=&\big{(}\delta_{ab}+\frac{1}{3}R_{acbd}x^{c}x^{d}\big{)}\xi_{a}\xi_{b}+o(\mathbf{x^{2}}),\\ \mathfrak{a}_{1}=&\frac{2}{3}iRic_{ab}\xi_{a}x^{b}-\frac{2}{3}i\lambda_{+}^{p}\lambda_{-}^{r}(R_{rpab}+R_{rapb})x^{b}\xi_{a}+o(\mathbf{x}),\\ \mathfrak{a}_{0}=&\frac{2}{3}\lambda_{+}^{p}\lambda_{-}^{r}Ric_{pr}+\frac{1}{3}\lambda_{+}^{p}\lambda_{+}^{r}\lambda_{-}^{s}\lambda_{-}^{t}(R_{tsrp}+R_{trsp})+o(\mathbf{1}).\end{split}\]
Proof. The computation of the principal symbol \(\mathfrak{a}_{2}\) is straightforward. For the symbol of order 1:
\[\begin{split}\mathfrak{a}_{1}=&-\frac{1}{3}i\{\lambda_{+}^{t},\lambda_{-}^{p}\lambda_{+}^{r}\lambda_{-}^{s}\}(R_{srpa}+R_{spra})x^{a}\xi_{t}+\frac{1}{3}i\{\lambda_{-}^{t},\lambda_{-}^{p}\lambda_{+}^{r}\lambda_{-}^{s}\}(R_{srpa}+R_{spra})x^{a}\xi_{t}\\ &\quad+\frac{1}{3}\gamma^{p}\lambda_{-}^{r}(R_{aprb}+R_{abrp})x^{b}\xi_{a}\\ =&-\frac{1}{3}i\lambda_{+}^{r}\lambda_{-}^{s}(R_{srta}+R_{stra})x^{a}\xi_{t}-\frac{1}{3}i\lambda_{-}^{p}\lambda_{+}^{r}(R_{trpa}+R_{tpra})x^{a}\xi_{t}\\ &\quad-\frac{1}{3}i\lambda_{-}^{p}\lambda_{-}^{s}(R_{stpa}+R_{spta})x^{a}\xi_{t}-\frac{1}{3}i\lambda_{+}^{p}\lambda_{-}^{s}(R_{tpsa}+R_{tasp})x^{a}\xi_{t}\\ &\quad+\frac{1}{3}i\lambda_{-}^{p}\lambda_{-}^{s}(R_{tpsa}+R_{tasp})x^{a}\xi_{t}\\ =&\frac{2}{3}iRic_{sa}x^{a}\xi_{s}-\frac{1}{3}i\lambda_{+}^{r}\lambda_{-}^{s}(R_{srta}+R_{stra}-R_{trsa}-R_{tsra}+R_{trsa}+R_{tasr})x^{a}\xi_{t}\\ &\quad-\frac{1}{3}i\lambda_{-}^{r}\lambda_{-}^{s}(R_{stra}+R_{srta}-R_{trsa}-R_{tasr})x^{a}\xi_{t}\\ =&\frac{2}{3}iRic_{sa}x^{a}\xi_{s}-\frac{2}{3}i\lambda_{+}^{r}\lambda_{-}^{s}(R_{srta}+R_{stra})x^{a}\xi_{t}.\end{split}\]
and the order \(0\) symbol:
\[\mathfrak{a}_{0}= -\frac{1}{3}i\gamma^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-} ^{s}(R_{sqrp}+R_{srqp})\] \[= -\frac{1}{3}\lambda_{+}^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_ {-}^{s}(R_{sqrp}+R_{srqp})\] \[=\frac{2}{3}\lambda_{+}^{p}\lambda_{-}^{s}Ric_{ps}+\frac{1}{3} \lambda_{+}^{p}\lambda_{+}^{r}\lambda_{-}^{q}\lambda_{-}^{s}(R_{sqrp}+R_{srqp}).\]
### The inverse of \(D^{2}\) and its powers
In this section, we present the results that can be applied to a more general situation than the Hodge-Dirac operator. Let us start with the following lemma:
**Lemma 2.2**.: _Let \(L\) be a Laplace-type operator with symbol_
\[\sigma(L)=\mathfrak{a}_{2}+\mathfrak{a}_{1}+\mathfrak{a}_{0}\]
_expressed in normal coordinates as_
\[\mathfrak{a}_{2} =\left(\delta_{ab}+\frac{1}{3}R_{acbd}x^{c}x^{d}\right)\xi_{a} \xi_{b}+o(\mathbf{x^{2}}),\] \[\mathfrak{a}_{1} =iP_{ab}\xi_{a}x^{b}+o(\mathbf{x}),\] \[\mathfrak{a}_{0} =Q+o(\mathbf{1}).\]
_Then the three leading symbols of \(\sigma(L^{-k})=\mathfrak{c}_{2k}+\mathfrak{c}_{2k+1}+\mathfrak{c}_{2k+2}\) are:_
\[\mathfrak{c}_{2k}=||\xi||^{-2k-2}\left(\delta_{ab}-\frac{k}{3}R_ {acbd}x^{c}x^{d}\right)\xi_{a}\xi_{b}+o(\mathbf{x^{2}}),\] \[\mathfrak{c}_{2k+1}=-ik||\xi||^{-2k-2}P_{ab}\xi_{a}x^{b}+o( \mathbf{x}),\] \[\mathfrak{c}_{2k+2}=-k||\xi||^{-2k-2}Q+k(k+1)||\xi||^{-2k-4}\left( P_{ab}-\frac{1}{3}Ric_{ab}\right)\xi_{a}\xi_{b}+o(\mathbf{1}).\]
Proof. We start with computing the leading symbols of the inverse of \(L\), i.e. \(\sigma(L^{-1})=\mathfrak{b}_{2}+\mathfrak{b}_{3}+\mathfrak{b}_{4}\), using the fact that \(\sigma(LL^{-1})=\sigma(1)=1\). We have:
\[\mathfrak{b}_{2} =(\mathfrak{a}_{2})^{-1}=||\xi||^{-4}\left(\delta_{ab}-\frac{1}{ 3}R_{acbd}x^{c}x^{d}\right)\xi_{a}\xi_{b}+o(\mathbf{x}^{2}),\] \[\mathfrak{b}_{3} =\mathfrak{b}_{2}(-\mathfrak{a}_{1}\mathfrak{b}_{2}+i\partial_{ \xi}^{a}\mathfrak{a}_{2}\partial_{a}\mathfrak{b}_{2})=-i||\xi||^{-4}P_{ab}\xi _{a}x^{b}+o(\mathbf{x}),\] \[\mathfrak{b}_{4} =\mathfrak{b}_{2}\left(-\mathfrak{a}_{0}\mathfrak{b}_{2}- \mathfrak{a}_{1}\mathfrak{b}_{3}+i\partial_{\xi}^{a}\mathfrak{a}_{2}\partial_ {a}\mathfrak{b}_{3}+i\partial_{\xi}^{a}\mathfrak{a}_{1}\partial_{a}\mathfrak{ b}_{2}+\frac{1}{2}\partial_{\xi}^{ab}\mathfrak{a}_{2}\partial_{ab} \mathfrak{b}_{2}\right)\] \[=-\,||\xi||^{-4}Q+2||\xi||^{-6}\left(P_{ab}-\frac{1}{3}Ric_{ab} \right)\xi_{a}\xi_{b}+o(\mathbf{1}).\]
To finish, we apply Lemma A1 in [8] and compute the three leading symbols of the powers of the pseudodifferential operator \(L^{-k}\).
Using the above lemma for \(L=D^{2}\), with the Hodge-Dirac operator \(D\) (2.4), we get the following result.
**Proposition 2.3**.: _The leading symbols of \(D^{-2k}\) are, up to the appropriate order in \(\mathbf{x}\),_
\[\mathfrak{c}_{2k}=||\xi||^{-2k-2}\left(\delta_{ab}-\frac{k}{3}R_{acbd}x^{c}x^{d}\right)\xi_{a}\xi_{b}+o(\mathbf{x}^{2}),\] \[\mathfrak{c}_{2k+1}=-\frac{2}{3}ki||\xi||^{-2k-2}\mathrm{Ric}_{ab}x^{b}\xi_{a}+\frac{2}{3}ki||\xi||^{-2k-2}\lambda_{+}^{r}\lambda_{-}^{s}\big{(}R_{srba}+R_{sbra}\big{)}x^{a}\xi_{b}+o(\mathbf{x}) \tag{2.6}\] \[\mathfrak{c}_{2k+2}=\frac{k(k+1)}{3}||\xi||^{-2k-4}\mathrm{Ric}_{ab}\xi_{a}\xi_{b}\] \[\qquad-\frac{2}{3}k(k+1)||\xi||^{-2k-4}\,\lambda_{+}^{r}\lambda_{-}^{s}(R_{srab}+R_{sarb})\xi_{a}\xi_{b}\] \[\qquad+\frac{1}{3}k||\xi||^{-2k-2}\lambda_{+}^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-}^{s}(R_{sqrp}+R_{srqp})+o(\mathbf{1}).\]
Proof. For \(L=D^{2}\) we substitute in Lemma 2.2
\[P_{ab}=\frac{2}{3}Ric_{ab}-\frac{2}{3}\lambda_{+}^{r}\lambda_{-}^{s}(R_{srab} +R_{sarb}),\]
\[Q=-\frac{1}{3}\lambda_{+}^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-}^{s}(R_{ sqrp}+R_{srqp}).\]
## 3. Spectral functionals
In [8], we defined two spectral functionals for finitely summable spectral triples, which for the canonical spectral triple over the spin manifold \(M\) allow one to recover the metric and the Einstein tensors, viewed as bilinear functionals over a pair of one-forms. We recall the definition:
**Definition 3.1** (cf. [8], Definition 5.4).: If \((\mathcal{A},D,\mathscr{H})\) is an \(n\)-summable spectral triple, let \(\Omega^{1}_{D}\) be the \(\mathcal{A}\)-bimodule of one-forms generated by \(\mathcal{A}\) and \([D,\mathcal{A}]\). Moreover, assume there exists a generalised algebra of pseudodifferential operators which contains \(\mathcal{A}\), \(D\), and \(|D|^{\ell}\) for \(\ell\in\mathbb{Z}\), with a tracial state \(\mathscr{W}\) over this algebra (called a noncommutative residue), which identically vanishes on \(T|D|^{-k}\) for any \(k>n\) and any zero-order operator \(T\) (an operator in the algebra generated by \(\mathcal{A}\) and \(\Omega^{1}_{D}(\mathcal{A})\)). Then, denoting by \(\hat{u},\hat{w}\) the images of \(u,w\in\Omega^{1}_{D}(\mathcal{A})\) in the Clifford algebra, we call

\[\mathpzc{g}_{D}(u,w)=\mathscr{W}(\hat{u}\hat{w}|D|^{-n})\]

the metric functional, and

\[\mathscr{S}_{D}(u,w)=\mathscr{W}(\hat{u}\{D,\hat{w}\}D|D|^{-n})\]

the Einstein functional.
First, let us compute the metric functional \(\mathpzc{g}\). For one-forms \(u,w\), the suitable expansion (up to \(o(\mathbf{1})\) in normal coordinates) of Clifford multiplication by \(u=u_{a}dx^{a}\) and \(w=w_{b}dx^{b}\) is
\[\hat{u}=\gamma^{p}u_{p}+o(\mathbf{1}),\qquad\hat{w}=\gamma^{r}w_{r}+o(\mathbf{ 1}).\]
**Proposition 3.2**.: _The metric spectral functional reads_
\[\mathpzc{g}(u,w)=2^{n}v_{n-1}\int\limits_{M}\sqrt{g}\,u_{p}w_{p}, \tag{3.1}\]
_where \(v_{n-1}\) is the volume of \((n-1)\)-dimensional sphere._
Proof.: We compute explicitly:
\[\mathscr{W}\left((\gamma^{p}\gamma^{r}u_{p}w_{r})|D|^{-n}\right)= \int\limits_{M}\sqrt{g}\int_{||\xi||=1}\text{Tr}\;(\gamma^{p}\gamma^{r}u_{p}w_{ r})\mathfrak{c}_{2m}(D)\] \[=v_{n-1}\int\limits_{M}\sqrt{g}\text{Tr}\;(\gamma^{p}\gamma^{r}u_ {p}w_{r})=2^{n}v_{n-1}\int\limits_{M}\sqrt{g}\,u_{p}w_{p}.\]
The last factor comes from the trace of \(1\) over the space of differential forms.
Next, we have,
**Proposition 3.3**.: _The Einstein functional is, similarly to the case of the canonical spectral triple,_
\[\mathscr{S}(u,w)=\frac{2^{n}}{6}v_{n-1}\int\limits_{M}\sqrt{g}\,G_{pq}u_{p}w_{ q}.\]
_where \(G_{pq}\) is the Einstein tensor for \(M\)._
Before we begin with the proof, let us establish some useful lemmas. The first computes \(\mathscr{W}(ED^{-2m+2})\) for two specific cases of the endomorphism \(E\).
**Lemma 3.4**.: _If,_
\[E=e^{(0)}+e^{(2)}_{pq}\gamma^{p}\gamma^{q}\]
_then the functional_
\[\mathscr{W}(ED^{-2m+2})=\frac{n-2}{24}2^{n}v_{n-1}\int\limits_{M}\sqrt{g}R(-e^ {(0)}-e^{(2)}_{pp}).\]
_On the other hand, if,_
\[\tilde{E}=\tilde{e}^{(0)}+\tilde{e}^{(2)}_{pq}\lambda^{p}_{+}\lambda^{q}_{-},\]
_then the functional_
\[\mathscr{W}(\tilde{E}D^{-2m+2})=-\frac{n-2}{24}2^{n}v_{n-1}\int\limits_{M} \sqrt{g}R\tilde{e}^{(0)}.\]
The proof is based on direct calculations using Proposition 2.3.
\[\begin{split}\mathscr{W}(ED^{-2m+2})&=\int\text{Tr} \,\left(E\int_{||\xi||=1}\mathfrak{c}_{2(m-1)+2}\right)\\ &\qquad=\frac{n-2}{12}v_{n-1}\int\text{Tr}\,\left(E\left[2(R_{ srqp}+R_{sqrp})\lambda_{+}^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-}^{s}+R-2 Ric_{qp}\lambda_{+}^{p}\lambda_{-}^{q}\right]\right)\end{split} \tag{3.2}\]
Next, we consider functionals of the type \(\mathscr{W}(PD^{-2m})\), where \(\sigma(P)=F^{ab}\xi_{a}\xi_{b}+G^{a}\xi_{a}+H\).
**Lemma 3.5**.: _For \(P\), such that \(\sigma(P)=F^{ab}\xi_{a}\xi_{b}+G^{a}\xi_{a}+H\), we have that, if_
\[F^{ab}=f^{(0)ab}+f^{(2)ab}_{pq}\gamma^{p}\gamma^{q},\]
_then_
\[\mathscr{W}(PD^{-2m})=v_{n-1}\int\limits_{M}\text{Tr}\,H+\frac{2^{n}}{24}v_{ n-1}\int R(-f^{(0)aa}-f^{(2)aa}_{pp}),\]
_and, if_
\[\tilde{F}^{ab}=\tilde{f}^{(0)ab}+\tilde{f}^{(2)ab}_{pq}\lambda_{+}^{p}\lambda _{-}^{q},\]
_then_
\[\mathscr{W}(\tilde{P}D^{-2m})=v_{n-1}\int\limits_{M}\text{Tr}\,H+\frac{2^{n}}{ 48}v_{n-1}\int\limits_{M}\left[-2R\tilde{f}^{(0)aa}-\tilde{f}^{(2)aa}_{pp}R+2( \tilde{f}^{(2)ab}_{pq}+\tilde{f}^{(2)ba}_{pq})R_{paqb}\right].\]
The proof follows directly by computation using Proposition 2.3 and Lemma A.3 applied to the explicit expression:
\[\begin{split}\int_{||\xi||=1}\sigma_{-2m}(PD^{-2m})&=\frac{1}{6}F^{aa}\left[(R_{srqp}+R_{sqrp})\lambda_{+}^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-}^{s}+\frac{1}{2}R-\mathrm{Ric}_{pq}\lambda_{+}^{p}\lambda_{-}^{q}\right]\\ &\quad+\frac{1}{6}F^{ab}[-\mathrm{Ric}_{ab}+(R_{qapb}+R_{paqb})\lambda_{+}^{p}\lambda_{-}^{q}]+H.\end{split} \tag{3.3}\]
Proof of Proposition 3.3. We begin with computing the symbol of \(\hat{u}D\hat{w}D\) at \(x=0\), where it suffices to expand the Clifford multiplication by the one-forms \(u\) and \(w\) as:
\[\hat{u}=\gamma^{p}u_{p}+o(\mathbf{1}),\qquad\hat{w}=\gamma^{s}w_{s}+\gamma^{s }w_{sa}x^{a}+o(\mathbf{x}),\]
and thus
\[\hat{u}D\hat{w}D= \gamma^{p}\gamma^{q}\gamma^{r}\gamma^{s}u_{p}w_{r}\xi_{q}\xi_{s}- i\gamma^{p}\gamma^{q}\gamma^{r}\gamma^{s}u_{p}w_{rq}\xi_{s}\] \[-\frac{1}{3}i\gamma^{p}\gamma^{q}\gamma^{r}\lambda_{-}^{s}\lambda _{+}^{t}\lambda_{-}^{z}(R_{ztsq}+R_{zstq})u_{p}w_{r}+o(\mathbf{1}).\]
Then, we use Lemma 3.4 for \(E=\hat{u}\hat{w}\) and Lemma 3.5 for \(P=\hat{u}D\hat{w}D\). In this case we have
\[\begin{split}& E=u_{p}w_{q}\gamma^{p}\gamma^{q},\\ & H=-\frac{1}{3}i\gamma^{p}\gamma^{q}\gamma^{r}\lambda_{-}^{s}\lambda_{+}^{t}\lambda_{-}^{z}(R_{ztsq}+R_{zstq})u_{p}w_{r}\\ &\quad=-\frac{2}{3}i\gamma^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-}^{s}(R_{srqt}+R_{sqrt})u_{p}w_{t}+\frac{1}{3}i\gamma^{p}\gamma^{q}\gamma^{r}\lambda_{-}^{s}\lambda_{+}^{t}\lambda_{-}^{z}(R_{ztsr}+R_{zstr})u_{p}w_{q},\\ & F^{pq}\xi_{p}\xi_{q}=\gamma^{r}\gamma^{p}\gamma^{s}\gamma^{q}u_{r}w_{s}\xi_{p}\xi_{q}=(2u_{r}w_{p}\delta_{qs}\gamma^{r}\gamma^{s}-u_{r}w_{s}\gamma^{r}\gamma^{s}\delta_{pq})\xi_{p}\xi_{q},\end{split}\]
where we used that \(\gamma^{p}\gamma^{q}\xi_{p}\xi_{q}=\delta_{pq}\xi_{p}\xi_{q}\). Next, we see,
\[\begin{split} e^{(0)}&=0,\qquad\quad e^{(2)}_{ab}=u_ {a}w_{b},\\ f^{(0)}&=0,\qquad\quad f^{(2)ab}_{cd}=2u_{c}w_{a} \delta_{bd}-u_{c}w_{d}\delta_{ab}.\end{split}\]
Finally, the contribution arising from \(E\) gives:
\[-\frac{n-2}{24}2^{n}v_{n-1}R\,u_{a}w_{a}.\]
whereas the part from \(F\) is,
\[-\frac{2^{n}}{24}v_{n-1}Rf_{ii}^{(2)aa}=\frac{n-2}{24}2^{n}v_{n-1}R\,u_{a}w_{a}.\]
These two terms cancel each other, and we are left with the terms that arise from \(H\). The only possible terms in \(\text{Tr}\;H\) are linear combinations of \(u_{a}w_{a}R\) and \(u_{a}w_{b}Ric_{ab}\); thus we know that the result is symmetric in \(u_{a},w_{b}\). This allows us to simplify the second term in \(H\):
\[H=-\frac{2}{3}i\gamma^{p}\lambda_{-}^{q}\lambda_{+}^{s}\lambda_{-}^{t}(R_{tsqb }+R_{tqsb})u_{p}w_{b}+\frac{1}{3}i\gamma^{r}\lambda_{-}^{q}\lambda_{+}^{s} \lambda_{-}^{t}(R_{tsqr}+R_{tqsr})u_{a}w_{a}+\ldots,\]
where "\(\ldots\)" are terms antisymmetric in \(u_{a},w_{b}\), which we can neglect. We can also insert \(-i\lambda_{+}\) instead of the remaining \(\gamma\)'s because the part with \(\lambda_{-}\) will be traceless. Now, using the lemma A.1 we get:
\[\text{Tr}\;H=\frac{1}{6}\text{Ric}_{ab}\,u_{a}w_{b}-\frac{1}{12}R\,u_{a}w_{a}= \frac{1}{6}G_{ab}\,u_{a}w_{b}.\]
This proves the result.
### Spectral closedness and torsion
In this section, we will prove that the Hodge-Dirac spectral triple has the property of being spectrally closed.
**Theorem 3.6**.: _Let \(T\) be an operator of order \(0\) from the algebra generated by \(a[D,b]\), \(a,b\in C^{\infty}(M)\). Then,_
\[\mathcal{W}(TD|D|^{-n})=0.\]
Proof. If we compute the symbol of \(TD\) at a chosen point on the manifold \(M\) in normal coordinates at \(x=0\) we obtain,
\[\sigma(TD)=T(-\gamma^{p}\xi_{p}).\]
Next, if we combine it with Proposition 2.3, we see that the symbol of order \(-n\) of \(TD|D|^{-n}\) is:
\[\sigma_{-n}(TD|D|^{-n})=0+o(\mathbf{1}).\]
This ends the proof.
As a consequence, we demonstrate in [9] that the Hodge-Dirac spectral triple is torsion-free. It would be interesting to study generalised Hodge-Dirac operators, defined through an arbitrary linear connection that is not metric compatible and has torsion.
## Appendix A Details of computations
**Lemma A.1**.: _A direct computation of traces of products of \(\lambda\) matrices is based on the following recursive formula:_
\[\begin{split}\operatorname{Tr}&\lambda_{+}^{p_{1}} \cdots\lambda_{+}^{p_{k}}\lambda_{-}^{q_{1}}\cdots\lambda_{-}^{q_{k}}\\ &=\frac{1}{2}\sum_{j=1}^{k}(-1)^{k-j}\delta^{p_{1}q_{j}}\, \operatorname{Tr}\;\big{(}\lambda_{+}^{p_{2}}\cdots\lambda_{+}^{p_{k}}\lambda _{-}^{q_{1}}\cdots\lambda_{-}^{q_{j-1}}\lambda_{-}^{q_{j+1}}\cdots\lambda_{-}^ {q_{k}}\big{)}.\end{split}\]
_In particular, we have_
(A.1) \[\begin{split}&\operatorname{Tr}\,(\lambda_{+}^{p}\lambda_{-}^{q})=2^{n-1 }\delta^{pq},\\ &\operatorname{Tr}\,(\lambda_{+}^{p_{1}}\lambda_{-}^{q_{1}} \lambda_{+}^{p_{2}}\lambda_{-}^{q_{2}})=2^{n-2}(\delta^{p_{1}q_{1}}\delta^{p_{2 }q_{2}}+\delta^{p_{1}q_{2}}\delta^{p_{2}q_{1}}),\end{split}\]
_and_
(A.2) \[\begin{split}&\operatorname{Tr}\,(\lambda_{+}^{p_{1}}\lambda_{-}^{q_{1}}\lambda_{+}^{p_{2}}\lambda_{-}^{q_{2}}\lambda_{+}^{p_{3}}\lambda_{-}^{q_{3}})=\\ &\qquad\qquad=2^{n}\big{(}\frac{1}{8}\sum_{\sigma\in S_{3}}\delta^{p_{1}q_{\sigma(1)}}\delta^{p_{2}q_{\sigma(2)}}\delta^{p_{3}q_{\sigma(3)}}-\frac{1}{4}\delta^{p_{1}q_{2}}\delta^{p_{2}q_{3}}\delta^{p_{3}q_{1}}\big{)}.\end{split}\]
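These identities can be spot-checked numerically with the same kind of Jordan-Wigner model of the \(\lambda\) operators used in the earlier check of relations (2.1) (again a verification of ours, here for \(n=3\)):

```python
import numpy as np

n = 3
I2, Z = np.eye(2), np.diag([1.0, -1.0])
c = np.array([[0.0, 0.0], [1.0, 0.0]])

def kron_list(mats):
    out = np.array([[1.0]])
    for m in mats:
        out = np.kron(out, m)
    return out

lam_p = [kron_list([Z] * p + [c] + [I2] * (n - p - 1)) for p in range(n)]
lam_m = [m.T for m in lam_p]

# check Tr(l+^p l-^q) = 2^{n-1} delta^{pq} and the four-factor case of (A.1)
for p in range(n):
    for q in range(n):
        assert np.isclose(np.trace(lam_p[p] @ lam_m[q]), 2 ** (n - 1) * (p == q))
        for r in range(n):
            for s in range(n):
                lhs = np.trace(lam_p[p] @ lam_m[q] @ lam_p[r] @ lam_m[s])
                rhs = 2 ** (n - 2) * ((p == q) * (r == s) + (p == s) * (r == q))
                assert np.isclose(lhs, rhs)
print("identities (A.1) verified for n =", n)
```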
Next, we present the results on traces of products of \(\gamma\) and \(\lambda\) matrices.
**Lemma A.2**.: (A.3) \[\begin{split}&\operatorname{Tr}\,(\gamma^{p}\gamma^{q}\lambda_{+}^{r} \lambda_{-}^{s})=2^{n-2}\big{(}2\delta^{pq}\delta^{rs}+\delta^{ps}\delta^{qr}- \delta^{pr}\delta^{qs}\big{)}=2^{n-1}\big{(}\delta^{pq}\delta^{rs}-\frac{1}{2} \varepsilon_{rs}^{pq}\big{)},\end{split}\]
(A.4) \[\begin{split}&\operatorname{Tr}\,(\gamma^{p}\gamma^{q}\lambda_{+}^{r} \lambda_{-}^{s}\lambda_{+}^{t}\lambda_{-}^{z})=2^{n-2}\delta^{pq}(\delta^{rs} \delta^{tz}+\delta^{rz}\delta^{st})-2^{n-3}(\delta^{rs}\varepsilon_{tz}^{pq}+ \delta^{st}\varepsilon_{rz}^{pq}+\delta^{tz}\varepsilon_{rs}^{pq}+\delta^{rz }\varepsilon_{st}^{pq})\end{split}\]
We skip the computational proof, which is based on expressing \(\gamma\)-matrices in terms of \(\lambda\)-matrices,
\[\gamma^{p}\gamma^{q}=-\lambda_{+}^{q}\lambda_{-}^{p}+\lambda_{+}^{p}\lambda_{- }^{q}+\delta^{pq}+\ldots,\]
and using the results of Lemma A.1.
As a consequence, we obtain the following identities for the geometric quantities:
**Lemma A.3**.: _In normal coordinates around \(\mathbf{x}=0\) we have the following identities_
(A.5) \[\begin{split}&\operatorname{Tr}\,(\lambda_{+}^{p}\lambda_{-}^{q})Ric_{pq}=2^{n-1}R,\\ &\operatorname{Tr}\,(\lambda_{+}^{p}\lambda_{-}^{q})(R_{paqb}+R_{qapb})=2^{n}\mathrm{Ric}_{ab},\\ &\operatorname{Tr}\,(\lambda_{+}^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-}^{s})(R_{srqp}+R_{sqrp})=-2^{n-2}R,\\ &\operatorname{Tr}\,(\lambda_{+}^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-}^{s})\mathrm{Ric}_{rs}=2^{n-2}(\delta_{pq}R+\mathrm{Ric}_{pq}),\end{split}\]
(A.6) \[\begin{split}&\operatorname{Tr}\,(\gamma^{p}\gamma^{q}\lambda_{+}^{r}\lambda_{-}^{s})\mathrm{Ric}_{rs}=2^{n-1}\delta_{pq}R,\\ &\operatorname{Tr}\,(\lambda_{+}^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-}^{s})(R_{rasb}+R_{sarb})=2^{n-2}(2\delta_{pq}Ric_{ab}+R_{qapb}+R_{paqb}),\\ &\operatorname{Tr}\,(\gamma^{p}\gamma^{q}\lambda_{+}^{r}\lambda_{-}^{s})(R_{rasb}+R_{sarb})=2^{n}\delta_{pq}\mathrm{Ric}_{ab},\\ &\operatorname{Tr}\,(\lambda_{+}^{p}\lambda_{-}^{q}\lambda_{+}^{r}\lambda_{-}^{s}\lambda_{+}^{t}\lambda_{-}^{z})(R_{ztsr}+R_{zstr})=2^{n-3}(-R\delta_{pq}+2\mathrm{Ric}_{pq}),\\ &\operatorname{Tr}\,(\gamma^{p}\gamma^{q}\lambda_{+}^{r}\lambda_{-}^{s}\lambda_{+}^{t}\lambda_{-}^{z})(R_{ztsr}+R_{zstr})=-2^{n-2}\delta_{pq}R.\end{split}\]
Proof. Direct computation using (A.1)-(A.3) and the properties of the Riemann and Ricci tensors.
**Acknowledgements:** LD acknowledges that this work is part of the project Graph Algebras partially supported by EU grant HORIZON-MSCA-SE-2021 Project 101086394.
|
2310.17769
|
Social Contract AI: Aligning AI Assistants with Implicit Group Norms
|
We explore the idea of aligning an AI assistant by inverting a model of
users' (unknown) preferences from observed interactions. To validate our
proposal, we run proof-of-concept simulations in the economic ultimatum game,
formalizing user preferences as policies that guide the actions of simulated
players. We find that the AI assistant accurately aligns its behavior to match
standard policies from the economic literature (e.g., selfish, altruistic).
However, the assistant's learned policies lack robustness and exhibit limited
generalization in an out-of-distribution setting when confronted with a
currency (e.g., grams of medicine) that was not included in the assistant's
training distribution. Additionally, we find that when there is inconsistency
in the relationship between language use and an unknown policy (e.g., an
altruistic policy combined with rude language), the assistant's learning of the
policy is slowed. Overall, our preliminary results suggest that developing
simulation frameworks in which AI assistants need to infer preferences from
diverse users can provide a valuable approach for studying practical alignment
questions.
|
Jan-Philipp Fränken, Sam Kwok, Peixuan Ye, Kanishk Gandhi, Dilip Arumugam, Jared Moore, Alex Tamkin, Tobias Gerstenberg, Noah D. Goodman
|
2023-10-26T20:27:03Z
|
http://arxiv.org/abs/2310.17769v2
|
# Social Contract AI: Aligning AI Assistants with Implicit Group Norms
###### Abstract
We explore the idea of aligning an AI assistant by inverting a model of users' (unknown) preferences from observed interactions. To validate our proposal, we run proof-of-concept simulations in the economic _ultimatum game_, formalizing user preferences as policies that guide the actions of simulated players. We find that the AI assistant accurately _aligns_ its behavior to match standard policies from the economic literature (e.g., selfish, altruistic). However, the assistant's learned policies lack robustness and exhibit limited _generalization_ in an out-of-distribution setting when confronted with a currency (e.g., grams of medicine) that was not included in the assistant's training distribution. Additionally, we find that when there is _inconsistency_ in the relationship between language use and an unknown policy (e.g., an altruistic policy combined with rude language), the assistant's learning of the policy is slowed. Overall, our preliminary results suggest that developing simulation frameworks in which AI assistants need to infer preferences from diverse users can provide a valuable approach for studying practical alignment questions.1
Footnote 1: Code and Prompts
Socially Responsible Language Modelling Research Workshop (NeurIPS 2023). \({}^{\dagger}\)Equal Contribution.
## 1 Introduction
Developing scalable methods for effectively steering AI systems is a key challenge for alignment research [1]. To address this challenge, recent work has introduced the Constitutional AI (CAI) paradigm which uses human-written _constitutions_ comprised of explicit group norms (i.e., "do not be hateful") as guiding principles for AI assistants [see Fig. 1a; Bai et al., 2022]. While these methods provide effective means to align AI assistants, they also face challenges. For example,
Figure 1: Illustration of Constitutional AI (CAI) and Social Contract AI (SCAI) in the ultimatum game [1]. In the ultimatum game, one player (the proposer) proposes a division of a pot of money (e.g., \$10) with another player (the responder). The proposer **offers** a share, and the responder can either **accept** or **reject** the offered share. If the responder accepts, the money is distributed as proposed; if they reject it, neither player receives anything. [a] CAI uses explicit group norms such as a _constitution_ or content policy to guide the AI assistant. [b] SCAI inverts a model of users' preferences from observed interactions and uses the inferred _social contract_ as guiding principle for the AI assistant.
assessing the robustness of a constitutional principle can be challenging in real-world applications of language models, especially when a user's request is consistent with more than one task (Tamkin et al., 2022), or when the user requests the assistant to perform a task that is outside of the assistant's training distribution (Amodei et al., 2016). Furthermore, constitutional principles may reflect an inadvertent bias towards the creator's preferences, which can lead to systematic inequalities in the assistant's behavior (Blasi et al., 2021).
Given the inherent ambiguity and diversity in real-world applications of language models, it is desirable to have an AI assistant capable of dynamically adapting its local governing principles to align with varying group norms or preferences (Leike, 2023). Motivated by this observation, we explore **Social Contract AI (SCAI)**: a method for aligning AI assistants with implicit group norms (Fig. 0(b)). Unlike CAI, which operates on a set of fixed, formal rules or constitutional principles, SCAI aims to infer group norms from observed interactions among users. As such, the only fixed principle in SCAI is the _meta-principle_ of finding out what the group norms or preferences are in order to align the AI assistant's behavior with users.
To evaluate the potential of SCAI, we conduct proof-of-concept simulations using the _ultimatum game_2 (see Fig. 1), formalizing group norms (i.e., user preferences) as policies that guide the actions of simulated players. We ground SCAI in the context of Bayesian (inverse) reinforcement learning (Ghavamzadeh et al., 2015; Ramachandran and Amir, 2007) and introduce a _verbal reinforcement learning_ algorithm (Shinn et al., 2023; Goodman, 2023) which uses game interactions to revise the AI assistant's policy. Overall, our **contributions** are as follows: (1) We introduce Social Contract AI (SCAI), a method for aligning AI assistants with implicit group norms; (2) we present a simulator for implementing SCAI using verbal reinforcement; and (3) we validate SCAI by comparing the alignment between the shares offered by the AI assistant and those proposed by simulated users in the ultimatum game.
Footnote 2: Due both to its simplicity and its ability to capture much of the psychology of negotiation, the ultimatum game has been a mainstay of cooperative game theory since at least the mid-twentieth century (e.g., Harsanyi, 1961; Aher et al., 2022)
## 2 Related Work
**Social Simulation.** Large Language Models (LLMs) are increasingly used in simulation-based research and social games (Park et al., 2023; Aher et al., 2022; Gandhi et al., 2023). For example, Park et al. (2023) introduced a sandbox environment inhabited by _generative agents_ that simulate daily human activities, allowing for the study of emergent social behaviors. Such simulation-based approaches provide a useful framework for side-stepping issues associated with reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022) such as reward misspecification (Pan et al., 2022) or reward hacking (Amodei et al., 2016) by shifting the responsibility of supervising AI to simulated human agents whose capabilities and incentives are defined within the simulation. Moreover, simulation-based approaches can generate synthetic datasets which can be leveraged for downstream fine-tuning of models. For example, Liu et al. (2023) introduced StableAlign, an algorithm which is trained on data generated through a sandbox environment where simulated language agents are tasked with providing preference ratings when discussing controversial societal questions sourced from HH-RLHF. This approach has resulted in competitive performance on alignment benchmarks such as helpful, honest, and harmless (HHH) (Bai et al., 2022). Our work builds on these findings and uses simulated social interactions to study the alignment of an AI assistant.
**Social Contracts and Virtual Bargaining.** Much of human interaction is guided by implicit norms or informal agreements (i.e., _social contracts_) rather than a set of fixed, formal rules or constitutional principles (Ostrom, 1990; Krupka and Weber, 2013; Malle et al., 2020). Recent work has formalized some of these observations within the context of _virtual bargaining_, a process in which implicit agreements are revised in ways similar to actual bargaining between people (Misyak et al., 2014; Chater, 2023). Specifically, rather than having a predefined set of preferences or agreement, people construct their agreements and preferences dynamically based on the context and actions of others. This involves mental simulations that consider not only individual preferences but also those of other parties, facilitating a form of "virtual" negotiation even before any actual interaction occurs. Building on this idea, Levine et al. (2023) proposed that humans construct their preferences by _inverting a model of agreement_, that is, inferring environmental conditions and other people's preferences from observed or simulated interactions (see also Shum et al., 2019). Motivating SCAI as a form
of **inversion of agreement**, we explore the possibility of aligning an AI assistant with a group by inverting a model of users' preferences from observed game interactions.
## 3 Aligning AI Assistants with Implicit Group Norms
**Preliminaries.** To empirically explore the potential of SCAI, we developed a simulator that uses verbal reinforcement ("metaprompt") (Goodman, 2023; Shinn et al., 2023; Yao et al., 2023; Yang et al., 2023) to dynamically rewrite the AI assistant's local governing principles to align with users' preferences. We ground this inference problem in the context of Bayesian (inverse) reinforcement learning (Ghavamzadeh et al., 2015; Ramachandran and Amir, 2007), where the environment is provided by the task at hand--here, a modified version of the ultimatum game (see Fig. 2). We represent users' preferences (i.e., the shared group norm(s)) as a shared **policy**, such as _"be selfish when making offers"_ or _"be altruistic when making offers"_. Each user is instantiated as a separate language model whose actions are determined by the shared policy. The AI assistant's goal is to learn this shared policy from observed game interactions. Unlike users, whose policy is set at the beginning of the game and remains fixed across training epochs, the AI assistant is seeded with a random policy and refines its policy after each training epoch to meet the meta-principle's objective. See App. A, for technical details.
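In outline, one run of the simulator can be sketched as follows (our condensed pseudocode, assuming a hypothetical generic `llm(prompt)` completion call; the actual meta-prompt, policy prompts, and game messages are given in App. A and are substantially longer):

```python
# Condensed sketch of the SCAI verbal-reinforcement loop (assumptions:
# a generic `llm` callable and heavily abbreviated prompts).
def run_scai(llm, user_policy, epochs=5, pot=10, n_user_pairs=8):
    meta = "Find out the group's norms and align your offers with them."
    policy = llm("Write a random initial policy for the ultimatum game:")
    for _ in range(epochs):
        transcript = []
        for _ in range(n_user_pairs):  # users share one fixed policy
            offer = llm(f"Policy: {user_policy}. Propose a share of {pot}:")
            transcript.append(("user", offer))
        offer = llm(f"Policy: {policy}. Propose a share of {pot}:")
        transcript.append(("assistant", offer))
        # Verbal reinforcement: a separate model call revises the policy
        # from the epoch's transcript, guided only by the meta-principle.
        policy = llm(
            f"Meta-principle: {meta}\nTranscript: {transcript}\n"
            f"Current policy: {policy}\nWrite a revised policy:"
        )
    return policy
```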
**Evaluation Metrics.** We run simulations with three standard policies from economics and evolutionary game theory (Smith, 1982): _selfish_, _altruistic_, and _fair_. Our primary evaluation metric is the **offered share**3, measured as a percentage of the total amount that an agent (user, AI assistant), acting as player 1 (the proposer), offers to share with player 2 (the decider). Using this metric, we can first assess whether a policy such as _"be selfish when making offers"_ results in selfish offers that benefit the proposer more than the responder (e.g., a 9:1 split of \$10) by observing the offers made by users. This **sanity check** is important for determining whether users' observed offers align with the (latent) policy the assistant aims to learn. Further, we can use the assistant's offered shares to explore the following **research questions**: (1) _alignment_: Can the AI assistant learn a policy from observed game interactions that results in offers matching the offers made by users? (2) _generalization_: Does the AI assistant's learned policy generalize to an out-of-distribution (OOD) setting in which the assistant is exposed to a potentially controversial currency not present during training (e.g., grams of medicine instead of dollars)? (3) _inconsistency_: Does inconsistent use of language (e.g., an altruistic policy combined with rude language) affect the assistant's learning of users' shared policy?
Footnote 3: We also collected data on accept/reject behaviors and computed the overall utility for both users and the AI assistant. We will present these evaluation metrics in further extensions of the present work.
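Concretely, the offered share and a simple alignment gap between assistant and user offers can be computed as follows (our simplification of the analysis, which is not spelled out in the text):

```python
# Offered share as a percentage of the pot, plus a mean alignment gap.
def offered_share(offer, total):
    return 100.0 * offer / total

def alignment_gap(assistant_offers, user_offers, total):
    mean = lambda xs: sum(xs) / len(xs)
    a = mean([offered_share(o, total) for o in assistant_offers])
    u = mean([offered_share(o, total) for o in user_offers])
    return abs(a - u)  # 0 means the mean offered shares coincide

assert offered_share(1, 10) == 10.0      # a selfish 9:1 split of $10
assert alignment_gap([0], [0, 0], 10) == 0.0
```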
**Simulation Setup.** We ran 20 independent simulations using gpt-4-0314 (OpenAI, 2023) with a temperature of \(0\) for each of the unique settings explored below. Each simulation ran for five training epochs. We varied the number of user and assistant interactions within each run
Figure 2: Illustration of SCAI in the ultimatum game. Given a meta-principle, the AI assistant dynamically writes a new policy at the start of each training epoch to steer its actions throughout the game. Upon completion by all users and the assistant, game interactions are analyzed and fed back into the assistant to write a new policy that aligns with the meta-principle’s objective. Importantly, the AI assistant does not have access to the meta-principle or past game interactions while engaging in the game. This is achieved by using one language model to revise the policy based on the meta-principle’s objective, and instantiating an additional language model for each interaction the assistant has within the game. See App. A, for technical details.
and present results from simulations with 8 user-user interactions and 2 assistant-user interactions (i.e., one interaction in which the assistant is the proposer, and one interaction in which the assistant is the responder) in Fig. 3 (Fig. A-1 includes an additional example of 8 assistant-assistant and 2 assistant-user interactions). Unless otherwise specified, we vary currencies and amounts randomly between simulations.
### Simulation Results
**Sanity Checks.** We find that the shares offered by users correspond to the expected behavior under a given policy. For instance, users following a selfish policy consistently make offers in which they propose to share nothing (i.e., 0%) of the total amount, while altruistic users show the opposite behavior, proposing to share 100% (see Fig. 3a, left panel). We note that the lack of variation in users' offers can be attributed to a temperature of \(0\), which led to deterministic actions across users. This choice was intentional to control for potential effects of simulation noise on the assistant's ability to learn the latent policy. We will explore the impact of noise in users' actions in future extensions of our work.
**Alignment.** To examine whether the assistant's offered shares _align_ with the offers of users, we explored settings with both one (i.e., every user has the same policy) and mixed group norms (i.e., proportions of selfish versus altruistic norms varied between users). For the one-group norm setting (Fig. 3a, left panel), we observe that the assistant's offered shares closely align with users' offers after just one revision of the assistant's initial (random) policy. An example of a learned policy that represents an altruistic group norm is displayed in the right panel of Fig. 3a. Overall, findings
Figure 3: Simulation results (refer to main text for details). Error bars represent 95% confidence intervals of the mean across 20 independent simulations. [a] The AI assistant learns a policy resulting in offered shares aligning with the offers of users, both in a one-group norm (left panel) and a mixed-group (middle panel) norm setting. [b] Testing a learned selfish policy in an out-of-distribution setting (middle panel) reveals different generalization behaviors compared to an in-distribution setting (left panel). [c] Inconsistent use of language affects the learning of an altruistic policy paired with rude manners (left panel), as well as a selfish policy paired with sycophantic manners (middle panel; see Tab. A-2 for examples of manners).
from our first simulation suggest that, in the present setting, the AI assistant accurately learns the latent policy guiding users' interactions. The results from our mixed-group norm setting showed that the assistant's offered shares converged to the _distribution_ of offers expected from the distribution of policies present in the group. Specifically, we find that for a group with 80% selfish and 20% altruistic norms, approximately 80% of runs yield selfish policies, while 20% result in altruistic policies for the AI assistant (Fig. 3a, middle panel; see right panel for example policies learned in two of the 20 runs). We observe a similar convergence pattern for groups with 20% selfish and 80% altruistic norms, as well as 50% selfish and 50% altruistic norms. These findings suggest that the assistant can learn a distribution over policies (across simulation runs) that aligns with the distribution of policies observed in the user group. An important extension could be to prompt the assistant to learn multiple policies within a given run (instead of learning a single policy) to see if the assistant can recover the distribution of user policies within a run rather than only matching the distribution across runs.
**Generalization.** Next, we investigated if the AI assistant's learned policies generalize to out-of-distribution (OOD) scenarios in which the assistant is exposed to a potentially controversial currency not present during training (in the example shown in Fig. 3b, we train on dollars and test on grams of medicine).5 The left panel in Fig. 3b shows that testing a selfish policy results in selfish offers in-distribution (i.e., testing on dollars), whereas OOD offers were strongly influenced by the assistant's prior, which we here arbitrarily set to altruistic. This finding is interesting because the only difference in the assistant's prompts between in-distribution and OOD runs was the use of a different currency not present during training (i.e., grams of medicine instead of dollars).
Footnote 5: We further explored whether varying out-of-distribution amounts (e.g., training with amounts \(<\) 1,000 and testing with amounts such as 2 billion) affected generalization behavior and found similar effects on offered shares. For exploratory purposes, we also ran a condition in which we asked the assistant to provide a reason for its offered shares, both in in-distribution and out-of-distribution test runs; see Tab. A-1 for an example.
**Inconsistency.** To examine the effect of inconsistency, we explored two specific cases of inconsistent use of language (Fig. 3c). Here, we observed that when the manner in which users communicate their proposals (e.g., rude) conflicts with the expectations set by a given policy (e.g., altruistic), the assistant still learns a policy that results in similar offers to those of users; however, convergence is slower and fails to fully match the offered shares of users within five training epochs (Fig. 3c, left panel). Changing from rude to sycophantic manners and setting users' policies to selfish had a similar effect on the assistant's learning of the selfish policy (Fig. 3c, right panel).
## 4 Discussion
In this paper, we proposed Social Contract AI (SCAI), a method that combines simulation (Park et al., 2023; Liu et al., 2023; Liu et al., 2023) with verbal reinforcement techniques (Goodman, 2023; Shinn et al., 2023; Yao et al., 2023; Yang et al., 2023) to align an AI assistant with user preferences. By grounding our work within the formal context of the ultimatum game (Aher et al., 2022; Harsanyi, 1961), we formalized preferences (i.e., the shared group norm(s)) as policies that guide the actions of simulated players and measured alignment through the shares offered by the proposing player. Through our proof-of-concept simulations, we showed that the AI assistant can accurately learn policies to align its behavior with users. Additionally, we showed that the assistant's learned policies lack robustness and exhibit limited generalization in an out-of-distribution setting when confronted with a currency that was not included in the assistant's training distribution; moreover, learning from users using inconsistent (or contradictory) language slowed learning of the group's policy.
**Social Impacts Statement.** While our work is at an early stage, we believe that SCAI addresses an important non-technical alignment challenge highlighted in previous work: _"figuring out what the group preferences are"_(Leike, 2023). Specifically, rather than having a team of researchers write a model's content policy or _constitution_, we propose to have an AI assistant learn group norms and preferences through observation and active participation in interactions with simulated users. This approach allows for (1) the study of the kinds of group norms that _emerge_ under varying conditions; (2) assessing the _flexibility_ of learning such group norms across potentially inconsistent (or ambiguous) tasks; and (3) studying the _robustness_ of group norms as guiding principles for the AI assistant in out-of-distribution settings. More generally, scaling up simulation frameworks--where an AI assistant must infer the (unknown) preferences of diverse users--may provide insights into designing more democratic and representative guiding norms for AI assistants (Zaremba et al., 2023).
|
2308.15334
|
The Responsible Development of Automated Student Feedback with
Generative AI
|
Contribution: This paper identifies four critical ethical considerations for
implementing generative AI tools to provide automated feedback to students.
Background: Providing rich feedback to students is essential for supporting
student learning. Recent advances in generative AI, particularly with large
language models (LLMs), provide the opportunity to deliver repeatable, scalable
and instant automatically generated feedback to students, making abundant a
previously scarce and expensive learning resource. Such an approach is feasible
from a technical perspective due to these recent advances in Artificial
Intelligence (AI) and Natural Language Processing (NLP); while the potential
upside is a strong motivator, doing so introduces a range of potential ethical
issues that must be considered as we apply these technologies.
Intended Outcomes: The goal of this work is to enable the use of AI systems
to automate mundane assessment and feedback tasks, without introducing a
"tyranny of the majority", where the needs of minorities in the long tail are
overlooked because they are difficult to automate.
Application Design: This paper applies an extant ethical framework used for
AI and machine learning to the specific challenge of providing automated
feedback to student engineers. The task is considered from both a development
and maintenance perspective, considering how automated feedback tools will
evolve and be used over time.
Findings: This paper identifies four key ethical considerations for the
implementation of automated feedback for students: Participation, Development,
Impact on Learning and Evolution over Time.
|
Euan D Lindsay, Mike Zhang, Aditya Johri, Johannes Bjerva
|
2023-08-29T14:29:57Z
|
http://arxiv.org/abs/2308.15334v2
|
# A Framework for Responsible Development of Automated Student Feedback with Generative AI
###### Abstract
Providing rich feedback to students is essential for supporting student learning. Recent advances in generative AI, particularly within large language modelling (LLM), provide the opportunity to deliver repeatable, scalable and instant automatically generated feedback to students, making abundant a previously scarce and expensive learning resource. Such an approach is feasible from a technical perspective due to these recent advances in Artificial Intelligence (AI) and Natural Language Processing (NLP); while the potential upside is a strong motivator, doing so introduces a range of potential ethical issues that must be considered as we apply these technologies. The attractiveness of AI systems is that they can effectively automate the most mundane tasks; but this risks introducing a "tyranny of the majority", where the needs of minorities in the long tail are overlooked because they are difficult to automate.
Developing machine learning models that can generate valuable and authentic feedback requires the input of human domain experts. The choices we make in capturing this expertise - whose, which, when, and how - will have significant consequences for the nature of the resulting feedback. How we maintain our models will affect how that feedback remains relevant given temporal changes in context, theory, and prior learning profiles of student cohorts. These questions are important from an ethical perspective; but they are also important from an operational perspective. Unless they can be answered, our AI generated systems will lack the trust necessary for them to be useful features in the contemporary learning environment. Answering them will require careful and deliberate planning to ensure that our solutions benefit all of our students.
This article will outline the frontiers of automated feedback, identify the ethical issues involved in the provision of automated feedback and present a framework to assist academics to develop such systems responsibly.
Educational technology, artificial intelligence, ethics, natural language processing, human computer interaction, generative AI
## I Introduction
The release of powerful tools based on generative language modelling, such as ChatGPT, marked a significant shift in how we approach higher education. Mere days after its release, students, educators, and the public alike discovered the potential of the application for assisting with a range of teaching and learning tasks, but also encountered significant challenges in terms of academic integrity. The response of many leading technical universities was to revert to pen-and-paper formats in exams. In the literature, Nikolic et al. find that "with little modification to the input prompts, ChatGPT could generate passable responses to many of the assessments" [1]. The intense focus on academic integrity triggered an important and necessary conversation about the role of assessments, and in particular about the increased potential of automatic assessment. Where prior approaches to automated assessment have been limited to, e.g., multiple-choice style questions, generative language modelling completely removes this barrier, potentially allowing for assessment of any type of student output. The need for development in this space is clear, as access to tools such as ChatGPT potentially obliterates assessment at the lower levels of Bloom's hierarchy [2] (Fig. 1), and it is a completely open question how writing with generative language models affects writing processes and the assessment thereof. After all, if professionals are going to use AI tools in their working lives, we should aim to train them in their use.
While assessment is a clear space of development for this type of educational technology, we argue that the real potential of generative language modelling can be found in the area of _student feedback_. We propose that this type of technology can, relatively straightforwardly, be developed such that a student in any educational program can essentially receive unbounded amounts of feedback. An advantage of this approach is that it avoids the limits of existing feedback mechanisms using, e.g., multiple choice questions or parametrized questions in engineering courses, which are constrained by several factors.
Figure 1: Bloom’s taxonomy
For one, such methods do not easily scale, and tend to tie educational programs into a strict set of practices, where it is difficult to argue for innovation due to the prohibitive costs of developing new feedback scripts. Although many advantages of automation are gained from these practices, in that we can, e.g., recognize common patterns and standardize responses to them, rather than having to make bespoke responses to them all, there is a further weakness in that this feedback does not extend to a large portion of students. That is to say, current approaches to automated feedback best serve the median student, as developing tailored feedback for the long tail of students - be it highly excelling students who can be helped to excel further, or struggling students who need specific help to succeed in their education - is more expensive, both in terms of human and fiscal resources, and helps fewer students. The flexibility and scalability of generative language modelling provide a unique solution to this tyranny of the long tail(s). This in turn allows us to consider real universal coverage using automated feedback, allowing us to meet our obligations to all students, not just those who present the most common submissions.
To move towards this goal, we must consider how we generate this feedback, who engages with this feedback (and when and how), and the impact of providing feedback in a new paradigm. This requires the development of a framework for responsible development, where we follow recommendations from previous work in that we look at the dangers of automation in engineering education, aiming to keep engineering education human-centered. While doing so we must avoid the "Turing Trap" (i.e., eschewing human intelligence augmentation via an excessive focus on automation based on AI technologies) [3]. The development of this framework denotes the core contribution of this article, in which we outline the frontiers of automated feedback, identify the ethical issues involved in the provision of automated feedback, and present a framework to assist academics in developing such systems responsibly, using technology, computing, and engineering-related teaching domains as our context.
## II The Frontiers of Automated Feedback
Engineering education has a long history of automating assessment, primarily focused on reducing the workload of assessing the lower-order thinking at the lower levels of Bloom's taxonomy (Fig. 1). Multiple choice questions can test memory and understanding but require simplified questions. Parameterized numerical questions can test application and analysis of engineering theory, while offering the possibility of providing individualized versions of questions to each student. Automated test suites can compile, validate and test student code; but these approaches require significant initial investment in anticipating student responses, and can only provide feedback if the students make anticipated errors. Adaptive learning systems allowed for identification of common student mistakes, and for providing tailored feedback given those mistakes. However, the workload needed to develop comprehensive feedback for any new learning material is prohibitive, and only "pays off" given a large number of students, and/or static educational material across time.
This also entails the converse, namely that given an early investment in this type of learning material, new developments are difficult to invest in. Hence, the current state has effectively locked some engineering courses into a focus where a particular set of questions is iterated over. If a faculty member has put significant effort into identifying the five most common errors, and the \(2^{5}\) = 32 possible combinations of students making all, some or none of them, and then set up the parameterized equations to identify each of those different combinations and provide the right set of feedback for each combination, there will be an inevitable reluctance to throw them away and build a new question from scratch next year. The issue is precisely that the current practice has not been to build a system to give feedback to students - but rather to build a system to give feedback on a _single question_ to students. This clearly does not scale without significant resource consequences.
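To make the combinatorial authoring burden concrete, the following hedged Python sketch enumerates the \(2^{5}\) = 32 feedback cases for a single five-error question (the error names are hypothetical, not taken from this article):

```python
from itertools import product

# Hypothetical error flags for one parameterized statics question.
ERRORS = ["sign_error", "unit_mismatch", "static_vs_dynamic_friction",
          "dropped_term", "premature_rounding"]

# Every subset of the five anticipated errors is one of the 2**5 = 32
# cases for which feedback must be authored in advance.
feedback_table = {}
for present in product([False, True], repeat=len(ERRORS)):
    key = tuple(e for e, hit in zip(ERRORS, present) if hit)
    feedback_table[key] = ("Feedback addressing: " + ", ".join(key)
                           if key else "Correct -- well done!")

print(len(feedback_table))  # 32 responses, all tied to this one question
```

All of this effort is sunk into a single question, which is precisely the reluctance-to-change dynamic described above.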
Looking at the top of Bloom's pyramid (Fig. 1), there is a near complete gap in the engineering education literature regarding automation of assessment and feedback. Prior to the release of tools such as ChatGPT, automation of the higher levels (evaluating, creating) was seen as unfeasible, as this type of assessment was out of reach of existing AI tools. Existing models are able to provide adaptive feedback, but that feedback is developed in advance, and deployed to the students based on guidelines, rules, and metrics that are developed in advance. This has proven to be a successful way of dealing with the center mass of the distribution of students. For instance, if 40% of the wrong answers reflect a specific misconception (e.g., using static friction when you should use dynamic friction) then there is huge leverage to be had in pre-preparing an answer and feeding it back to the students automatically.
One core challenge in previous work is that we want to be able to deal with the long tail of required feedback, where some feedback is clearly necessary, but falls outside of the scope of what can feasibly be prepared in advance. Current automated feedback systems do not tailor feedback to these cases, but tend to fall back on a default response. A ChatGPT model, without further development, would at least return something, but would include the risk of providing a "hallucination" unrelated to the learning goals or problem at hand. Can frontier feedback models allow us to update our questions each year, or provide new scenarios? Can this type of technology keep up with a slow drift in the kinds of answers, reflections, or insights that students need? As curricula move from what students should learn, to what graduates should be able to do, to who professionals should be, we get more nuanced and subtle changes and evolutions in what we're actually looking for from our students, and so we possibly need a more continuous evolution in the feedback we give them. In essence, this entails a transition from compile-time flexibility to run-time flexibility.
_What is ChatGPT, and what does it offer?_
Since the release of ChatGPT, experts in education technology have started exploring the potential of this type of tool in automating processes in education. Assessment has been at the forefront of the field's attention [1] - while assessment clearly is a use case, the possibilities offered by generative language modelling are not limited to this relatively basic application. Instead, we here focus on what we argue is a more viable and impactful long-term implementation of language technology in education, namely _automation of feedback_.
In short, we argue that tools such as ChatGPT offer opportunities to provide automated feedback at the top of the pyramid. The technical developments needed to bring generative language modelling to this step are fairly foreseeable. What is more difficult, however, is anticipating how this might impact learning, and how this can be implemented in a responsible manner from an ethical perspective. It goes without saying that this type of tool will inevitably change the paradigm of feedback, as it will make feedback an abundant resource for students.
### The Potential of Generative Language Modelling
Before looking into ethical perspectives, we first outline the technical details of ChatGPT in its current state, leading into what technological developments are needed to develop a generative language model-based system for abundant feedback in learning situations.
ChatGPT is typically referred to as a chatbot, and likewise the hype around generative AI seems to be centered around this type of chatbot-esque interface. However, the real underlying technology is known as language modelling, and it is in clever applications of this type of model that the real potential lies. This terminology and technology stem from the field of Natural Language Processing (NLP) and deal with an interpretation of natural languages as strings of symbols, traditionally decomposed into "words". A language model, then, simply has the goal of computing the probability of a sequence of words, e.g.:
\[P(W)=P(w_{1},w_{2},...,w_{n})\]
Where \(w_{n}\) denotes the \(n^{\text{th}}\) word in a sequence. This tends to be intractable with simple probabilistic approaches, and is typically reformulated into a conditional probability:
\[P(w_{n}\mid w_{1},w_{2},\ldots,w_{n-1})\]
These formulations are both equivalently known as a _language model (LM)_. It is worthwhile to note that the choice of terminology here is debatable, as this is not exactly a model of language, given that language is not a strictly _linear_ artefact. Nonetheless, this formulation turns out to be an effective tool, and has yielded virtually all recent advances known to the general public, as all modern approaches to NLP rely on LMs. Rather than simply calculating conditional probabilities by, e.g., counting frequencies of combinations of words known as \(n\)-grams, the past 5-10 years have yielded relatively sophisticated neural machine learning models, culminating in the transformer-based LMs of today, such as the GPT-models, Llama, Bloom, Pixel, and others [4, 5]. The utility of a language model is not simply found in its ability to assign a probability to a sequence, although it does follow from this fact. For one, it is relatively straightforward to take the step from the conditional probability of a sentence in Eq. 2 to a generative model which could, e.g., sample the most probable continuation of a sentence. Always sampling the most probable continuation would, however, not lead to particularly natural output, hence effective and optimal sampling in LMs is a leading research area in NLP [6].
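For concreteness, the following minimal Python sketch implements this formulation as an unsmoothed, count-based bigram model: scoring a sentence via the chain rule over conditional probabilities, and sampling a continuation. It is a toy instance of the idea only, far removed from the neural LMs discussed below.

```python
import math
import random
from collections import Counter

# Toy count-based bigram LM: P(W) decomposes into a product of
# conditional next-word probabilities (the chain rule).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
context = Counter(corpus[:-1])

def cond_prob(word, prev):
    # Unsmoothed estimate of P(word | prev); zero for unseen bigrams.
    return bigrams[(prev, word)] / context[prev]

def log_prob(words):
    # log P(w_2, ..., w_n | w_1) = sum over k of log P(w_k | w_{k-1})
    return sum(math.log(cond_prob(w, p)) for p, w in zip(words, words[1:]))

def sample_next(prev, rng=random):
    # Generation: sample the next word from the conditional distribution.
    options = [(w, c) for (p, w), c in bigrams.items() if p == prev]
    words, weights = zip(*options)
    return rng.choices(words, weights=weights)[0]

print(log_prob("the cat sat on the rug".split()))
print(sample_next("the"))
```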
### How do we Build Appropriate Language Models?
Modern LMs are often referred to with the prefix 'Large' (i.e. 'LLMs'), but as there is no consensus on what this entails (e.g., how big does a model need to be to be considered 'large'?), we here discuss the core technology of language modelling. A good LM is an LM which produces a high probability for sentences which seem natural, and vice-versa. The key to constructing a good LM, beyond incremental improvements to LM architectures, is as in most parts of machine learning and AI; we need a massive amount of training data. For an LM, the training data is typically a corpus, containing a large collection of unlabeled text. As an example, GPT-4 was trained on approximately 500 billion words [7]. Historically, language models have been built without much consideration for the quality or type of this data, other than considering the language at hand (e.g., focusing on English, French, Mandarin). Due to their substantial data requirements, modern large language models, including the foundations of ChatGPT, e.g., GPT-3 and GPT-4, are typically trained with even less consideration for the input data. When the need for training data is in the ballpark of hundreds of billions of words, one typically does not have the luxury of being too picky, hence in essence using all data within reach. This has the side-effect of offering multilingual capabilities, as unavoidably a mix of the languages on the internet are used in training. In sum, modern LMs can be seen as a distillation of a large sample of the internet, with both positive and negative ramifications. This training paradigm comes with ethical considerations of its own, e.g., in the area of perpetuating cultural biases [8], and amplifying extremist content online. In the long run, when more and more online content is generated by LMs rather than humans, there will also be an issue with the AI echo-chamber, where LMs trained on their own output will spiral into regurgitating more and more of this biased content.
In addition to being based on an LM, a chatbot such as ChatGPT is further trained using Reinforcement Learning with Human Feedback (RLHF). The technical details of this training are omitted here, but in essence this training allows users to have relatively natural typed conversations with the chatbot. The result is a general-purpose tool, which clearly shows the potential of generative AI, but lacks the suitability to provide real value for learners. Furthermore, the underlying model has issues in terms of hallucinations, and a general lack of factuality. All in all, we have the tools needed to develop general-purpose chatbots, bringing us to the question - what do we need to add in order to develop specific-purpose technical solutions to delivering, e.g., unbounded feedback in learning situations?
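Although the details are omitted above, a brief hedged sketch may orient the reader: reward models for RLHF are commonly trained on human preference pairs with a Bradley-Terry-style objective, so that responses preferred by annotators receive higher scores. The function and variable names below are ours, not a specific vendor's API.

```python
import torch
import torch.nn.functional as F

# Pairwise preference loss commonly used to train RLHF reward models:
# the loss shrinks as preferred responses outscore rejected ones.
def preference_loss(score_preferred: torch.Tensor,
                    score_rejected: torch.Tensor) -> torch.Tensor:
    return -F.logsigmoid(score_preferred - score_rejected).mean()

# Toy reward-model scores for two (prompt, chosen, rejected) pairs.
chosen = torch.tensor([1.2, 0.3])
rejected = torch.tensor([-0.5, 0.1])
print(preference_loss(chosen, rejected))
```

The resulting reward model is then used to steer the LM's generations, which is what gives tools like ChatGPT their conversational polish without guaranteeing factuality.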
_What Technical Innovations are Needed for ChatGPT to Provide Feedback?_
Given the current state of ChatGPT, and given the relative ease with which RLHF can be adapted to different applicational scenarios, there is a clear _technical_ path forward towards providing abundant feedback. Let's consider the case of a specific course in, e.g., Algorithms and Data Structures in a computer engineering education. Given the versatility of LMs, a relatively straightforward implementation could consist of continued fine-tuning of the LM on course-specific literature, as well as continued RLHF tuning with domain experts and periodic system evaluation.
This relatively straightforward procedure, albeit costly in terms of computational and human resources, would likely yield a relatively successful model. One drawback of this paradigm is that we would likely need to repeat the steps for any new course or degree, as different outputs would be expected in different learning situations. In other words, given the same input, a feedback model should provide different outputs depending on factors such as the learning goals of the student.
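A minimal sketch of what such conditioning could look like at inference time is shown below; the template and names are illustrative assumptions on our part, not an interface proposed in this article. The point is that an identical submission should elicit different feedback under different learning goals.

```python
# Illustrative prompt-conditioning sketch; template and names are ours.
FEEDBACK_TEMPLATE = """You are a feedback assistant for the course: {course}.
Learning goals: {goals}.
Student submission:
{submission}

Give constructive, formative feedback aligned with the learning goals.
Do not assign a grade."""

def build_prompt(course: str, goals: list[str], submission: str) -> str:
    return FEEDBACK_TEMPLATE.format(course=course, goals="; ".join(goals),
                                    submission=submission)

prompt = build_prompt(
    "Algorithms and Data Structures",
    ["analyse asymptotic complexity", "justify data-structure choices"],
    "def find(xs, t):\n    return t in xs",
)
# `prompt` would then be sent to the course-tuned LM; swapping in other
# learning goals changes the feedback even for an identical submission.
```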
This solution still has drawbacks, however. For instance, output is not verified, and it is difficult to expect all output to be sound feedback. This type of model would still suffer significantly from drawbacks in terms of factuality and hallucinations, and there would be no guarantees that any feedback given would actually be useful. Much like with the general-purpose ChatGPT tool, output is designed to _look natural_, but its true utility is difficult to assess.
More importantly, a training paradigm as outlined above would most likely be effective at providing feedback for our median students, but less likely to be able to support students in the long tail.
### Future Potentials in Learning Outcomes
How does access to unlimited super-human feedback affect learning? We have a hypothesis or dream scenario that using generative language modeling will allow us to push students to the higher levels of Bloom's taxonomy, resulting in "better learning" and "better students". There is precedent for this assumption when looking at other domains. Within the domain of board games, access to superhuman AI has drastically changed the field. All professional chess players use supercomputers in their preparation and in analysis of games, and even amateurs frequently make use of such tools. A superhuman level in Go was reached in 2016, when AlphaGo surpassed the level of the reigning world champion. Interestingly, Shin et al. [9] find that access to superhuman AIs drastically changed the playing field among professional Go players, based on data from the 20th and 21st centuries. Concretely, they find that unlimited feedback from superhuman AI significantly improved upon human decision-making processes, and allowed humans to learn to play more novel moves than before training with this type of feedback. Can we expect the same type of outcome in higher education, if we can properly apply generative LMs for unlimited feedback?
Due to these recent developments in generative language modeling, and the clear potential of this type of methodology in educational technology, we foresee a significant ramp up in how educational institutes across the world make use of the technology. As we now have the tools at hand to provide potentially unlimited feedback to students, it may be tempting to jump straight into providing this feedback without second thought. However, rather than being preoccupied with whether we can provide this feedback and implementing it everywhere, we here stop to think whether we should, and how this can be done in a responsible manner.
## III The Ethical Issues Involved in the Provision of Automated Feedback
The provision of automated feedback is within the realm of technical feasibility, and tools such as ChatGPT are pushing the frontiers of what feedback can be provided adaptively at runtime rather than anticipated at compile time. Providing such feedback inherently raises a range of ethical issues and responsibilities towards students. Some of these are obvious while others are subtle; some are inherent in the nature of automated feedback, while others represent parameters over which we have significant control.
The ethical issues surrounding the provision of automated feedback fall into four key categories:
* Participation: Who participates in the use and development of generative AI models, and to what extent is participation actually a choice?
* Development: How are generative AI models developed, and who are they developed for?
* Impact on learning: automating feedback will make feedback abundant - will it change how students value and engage with feedback?
* Evolution over time: how do we design for more than just the constraints of right now?
Several of these issues are ethical issues in AI more broadly; but they have specific manifestations in the automated feedback context.
### Participation
A key part of the value proposition of automated feedback is its scalability - the ability to support large numbers of students without having to incur the marginal costs of the people to do so. Implicit in this model is that all of the students will participate - often without a meaningful choice to do so.
It will be impractical to seek consent from individual students as large-scale implementations are rolled out; operating a parallel non-automated feedback stream for students who do not consent will be far too costly, and will undo many of the organizational benefits of the process. Even if a consent process is operated, there are power differentials between individual students and the academics and the institutions that will cloud the ability of individuals to freely choose to participate. Institutions will seek compulsory participation, partly because that simplifies the implementation, but further because they believe that automated feedback is helpful for learning. However, the students that can potentially benefit the most from such systems - those in the long tail - are also the ones who carry the greatest risks from participating, and who are most disempowered by removing the choice to opt out.
The issue of forced participation is even more challenging when the development of a corpus of training data is considered. Universities inherently capture significant amounts of data about their students as they conduct their teaching operations, and share this with their learning management system providers; this is an accepted necessity of the operation of a digital learning environment. Training sets for automated feedback are usually convenience samples of previous students' work - data that were collected for the purpose of learning and assessment, not the development of new tools for the future benefit of others. Larger, multi-year datasets add the additional complexity that it may no longer be possible to contact the students in the dataset as students graduate, leave, and move on to other (non-university) contact details.
The GDPR imposes responsibilities regarding data re-use, and the minimization of the quantities and nature of data being collected. A larger dataset will provide more information for an AI system to learn from; but this data is being taken from its original purpose and context, usually without the consent or knowledge of the people those data represent. Issues with data privacy are exacerbated by the fact that tools such as ChatGPT are owned by private companies. This raises significant data ownership issues - the terms of use of many of these tools transfer ownership of any input into their systems to these companies. Providing feedback to students using tools such as ChatGPT inherently requires giving these companies a copy of that assignment and the rights to use that data as they see fit - most likely in a scenario in which students are unable to opt out of the provision of this feedback. It is this data transfer that has already led to organizations forbidding the use of generative AI tools in their work - for instance, the Australian Research Council does not allow the use of these tools in their grant review processes [10].
While generative AI models may be intended for universal use, they are not always universal in their accessibility. Just because modern university students are digital natives does not mean that they are digitally fluent; interacting with automated feedback systems will be another skill that needs to be developed.
Access to tools like ChatGPT requires reliable access to the internet, as well as whatever subscription arrangements are required for the tools themselves. While these are near-universal in some contexts, these can serve as significant barriers in poorer areas, or for students in less developed economies. Equitable access to online tools will also require all of the considerations of accessible technology. Accessibility considerations are particularly important in the context of automated feedback - the students most likely to benefit from the advantages of automated feedback such as asynchronicity and infinite patience are those most likely to be in the long tail of ordinary feedback requirements.
### Development
Automated feedback is only as good as the models used to provide it, and the models are only as good as the data that are used to train them. ChatGPT provides a specific challenge in this regard, in that it is trained on a wide range of available data, roughly made up of 500 billion words of training data from Common Crawl, constituting a good chunk of the internet [7], rather than on a curated sample of similar assignments, with corresponding expert feedback. It is a generalized model, not a specifically tailored model, and as such it runs the risk of providing feedback that may be appropriate in a generic context but not in the specific discipline context.
A solution will be the development of automated feedback models that are specifically trained to provide such feedback, as outlined in the previous section. These will have the advantage that they are purpose built and can therefore be developed to provide appropriate feedback; but the appropriateness will be dependent upon how they are trained. Furthermore, even with such a tailored development procedure, generative LMs have a core technological issue in terms of hallucination, implying the absence of a guarantee of factuality. There are furthermore well-known examples in which bias in training datasets has led to issues with classification, such as facial recognition systems that have been trained on predominantly white datasets. It will be critically important to develop appropriate training datasets if automated feedback is to provide fair and equitable value to all students.
Following the development pipeline outlined in the previous section, the training procedure requires two datasets - one containing samples of student work and the learning curriculum, and one containing samples of the appropriate feedback to provide to students, used for RLHF. Training sets for student assessment are usually convenience samples of historical data - the assignments of current and previous students are used to train the models to provide feedback on future submissions. While these make excellent test data, they are unfortunately not ideal as training data. Student assignments are not evenly distributed with regard to their needs for feedback. The most common errors are by definition common, and therefore easier to detect and to automate. The least common errors, which are perhaps those most in need of expert feedback, are the hardest to capture and automate in the model. Care must be taken to ensure that the models that are developed are able to provide useful feedback even when uncommon, or even unfamiliar, submissions are provided - rather than just indulging in ChatGPT-style hallucinations. An algorithm that is most accurate on average may systematically discriminate against a specific minority [11].
The second dimension of a comprehensive training set is the provision of sample feedback from experts. Accessing expertise raises questions such as who is an expert, and who gets to decide who an expert is. There are also operational questions such as how this expertise should be captured - whether as a convenience sample from historical submissions or through a deliberate process to solicit expert feedback for the purposes of building the model.
The key challenge is to capture sufficient expertise so that the whole range of possible assignment submissions can be addressed - to ensure coverage over the whole spectrum of student submissions, without risking the loss of the long tail. Expertise is valuable and thus expensive, leading to a natural tendency to focus it on the most highly leveraged part of the spectrum; but to do so is to risk developing automated feedback systems that are just memorizing rather than synthesizing. Due to this expense, it will also be key to employ transfer learning methodologies so as to take advantage of pre-existing developments. Since similar courses will potentially overlap in terms of the feedback needed, there will be possibilities for significant operational savings by basing a model for a new computer engineering degree on one developed for a similar engineering degree program.
### Impact on learning
It is critical to also consider the impact that automated feedback will have upon student learning. Generative AI provides the opportunity to provide repeatable, scalable and instant automatically generated feedback to students, making abundant a previously scarce and expensive learning resource. When coming from a place of scarcity, the prospect of abundance is inherently attractive; but the longer term consequences must also be considered, along with who actually benefits from the abundance.
Feedback on assignments is a critically valuable part of the learning process and the timelier the feedback, the more valuable. For the majority of students, and the majority of feedback they require, generative AI tools represent an opportunity to make that feedback instant. In the early implementations, this represents an opportunity to improve student learning. However, as they become accustomed to on-demand instant feedback, it is likely that students' attitudes towards feedback will change. A core challenge here is whether this abundance of feedback will devalue it, and whether automatic feedback will be seen as equivalent to human feedback. If there appears to be an inexhaustible supply of feedback, how much will students consume? Part of the value of feedback is that it triggers the reflective cycle in students to improve their work; if they can instead simply make some changes, resubmit, and get new feedback, will they still reflect deeply on their work? There is therefore a risk that abundant feedback could lead to a deskilling of our students. Rather than develop their own reflective competences, and the ability to review their own work, they may become reliant upon having external tools to guide the development of their work - in effect gamifying their work into pleasing the generative AI tool rather than developing their actual skills.
These quicker feedback loops also introduce a risk of homogenizing towards good practice. Generative AI systems are valued because they are able to quickly amplify the performance of workers, and they have been shown to be of greater benefit to novices than to experts. These quicker gains are because the systems push users towards a standard of good practice that has emerged from the training set. While this is helpful in speeding the development of novices, it risks pushing all users towards that particular version of good practice, regardless of what feedback they actually require.
It is not yet clear whether feedback is fungible. Do students regard feedback from a generative AI the same way they regard feedback from a human? Is AI-generated feedback less "real"? Will students have the same expectations of AI-generated feedback? Society holds self-driving cars to much higher standards than it does human drivers, but we do not yet know if students will similarly hold different standards for AI-generated feedback.
Ultimately it is essential that students trust the feedback they are receiving. Social media (an incredibly abundant resource) have shown that people are willing to dismiss information that does not fit with what they already believe. Will students be less likely to accept feedback that doesn't reinforce what they already believe if it comes from an AI-generated source? This is the kind of feedback that they most need to trust; but will they trust it if it does not come from a human?
Establishing trust will be an essential part of rolling out a generative AI feedback system. Universities have systems that implicitly establish trust in traditional feedback models - feedback comes from a human who has been hired into the system for their expertise in the area in which they are providing feedback. Trust will depend upon whether the generative AI system has been customized for the specific application. For generic tools such as ChatGPT, it will depend on whether the feedback is sufficiently specific to be valuable, despite having been trained on generic datasets. For specialized tools, it will depend upon whether the training process is sufficiently transparent for users to understand how the feedback has been generated. Trust will also depend on whether the generative AI provides feedback that is helpful. A system that provides only hallucinations will quickly be dismissed; but a system that has been shown to be able to help students learn will be trusted to continue to do so. This further enforces the need to develop language models which can guarantee the factuality and relevance of their outputs.
Trusting the feedback will be easier if you are the median student, in need of the median feedback; but for those students in the long tail there is a risk that generative AI systems exacerbate the inequalities already in our systems. Human generated feedback is able to respond to the full range of student assignment submissions. While the long tail may be less common, it is not inherently problematic for humans to provide meaningful feedback.
For generative AI systems, however, uncommon submissions pose a problem. What feedback should a generative AI system give to an assignment that is unlike any other it has been trained upon? This is a significant risk given that many specialized systems are trained on historical data, which are a biased sample of potential student submissions.
Students in the long tail are the ones needing the most help; theirs is the practice that is furthest (whether behind or ahead of) best practice, and they are the ones who need customized feedback the most. There is a risk that generative AI systems are unable to provide meaningful feedback to such students, or worse, provide misleading feedback instead. These risks can be mitigated through more comprehensive training datasets; but this represents an additional upfront cost that will benefit smaller and smaller numbers of students.
### Evolution over time
It is important to consider how automated feedback systems can and will be used over time. While it is perhaps impossible to correctly predict how circumstances will change over time, it is nonetheless irresponsibly naive to assume that they will not change at all. Changing circumstances are often used to justify a kind of techno-optimism - these tools are here, they will be used anyway, and so we should embrace them. There are grounds to believe this [12], but it does not allow us to ignore questions about how widely they should be used, and for which students. Optimism cannot simply be a naive alternative to the "slippery slope" argument.
Drift over time is a challenge for generative AI models. These models will require maintenance to ensure that their feedback remains relevant given temporal changes in context, theory, and the prior learning profiles of student cohorts. This maintenance will potentially be expensive, and there is a risk that institutions will not invest in it.
This raises the possibility that automated feedback engines will in fact calcify the learning experience - having invested heavily in building a feedback engine, institutions will be reluctant to update their teaching materials and move away from the set point where they can provide detailed feedback.
A key theme for all AI decision making systems is the extent to which they operate as decision support systems rather than decision making systems. Will there be humans in the loop, and to what extent will these humans retain control of the process? This decision should be dependent upon the jeopardy involved - what are the consequences of a wrong decision? Are we more afraid of a false positive than a false negative? The provision of automated feedback to students carries potentially less jeopardy than the automation of assessment outcomes. A wrong grade on an assignment has greater consequence than inaccurate feedback, particularly when the alternative to automated feedback may be no feedback at all.
Initially, the distinction between marking and feedback is clear; however, there is a risk that the lines between the two blur over time. Some types of feedback will correlate strongly with high performance; other types of feedback will correlate with failure. Over time, if this correlation stays sufficiently strong, the automated feedback engine may drift into becoming an automated assessment engine - without the human in the loop.
This potential drift is a real risk over time. The more accurate a decision support system becomes, the more likely it is to morph into a decision making system - for how long will an institution continue to carry the expense of checking the recommended decisions when so few of them are actually incorrect? How much will they be willing to pay to avoid each false decision? But which false decisions will they be avoiding, and will they be in the long tail?
Resource reallocation is often presented as a driver for the automation of feedback. After an upfront investment we can move from a model of scarce, expensive feedback into a paradigm of abundant feedback, and instead reallocate our resources to supporting our students in different ways. But do we believe the reallocation narrative? We all say that this is an opportunity to redirect resources to more valuable learning situations, but is that likely to actually happen? Or will it just fund more postdocs elsewhere instead? While there is immense potential for more efficient resource allocation, we must ask ourselves if we can still uphold our graduate standards if we solely focus on automatable outcomes [11].
Many of these ethical concerns actually represent tensions between two good outcomes that we want to achieve, such as the balance of more complex models potentially giving better accuracy at the cost of not being able to understand the models as easily. How we balance competing outcomes is a choice, and it can no longer just be a default choice - it needs to be a deliberate choice, with explicit frameworks to guide that thinking.
## IV Frameworks for Development
Generative AI is a rapidly developing technology, and it is already having significant impact upon higher education. Adapting existing frameworks for engaging ethically with AI technology in higher education will allow faculty to be guided in embracing this technology without having to become experts in the nuances of the ethics surrounding this approach. In short, such frameworks allow this type of technology to be included in learning curricula both efficiently and responsibly. There are many frameworks available for such approaches (e.g., [13]). In this manuscript we will use the RESPACT framework [14] as a basis for considering the implementation of generative AI for automated feedback.
The RESPACT framework was specifically developed for applications involving AI and machine learning in higher education contexts. This framework identifies seven key dimensions to consider when implementing educational technologies:
* Responsive and Responsible
* Ethical guidelines
* Security
* Privacy
* Accountability
* Consent
* Transparency
These dimensions provide a valuable framework to consider the application of generative AI to the provision of automated feedback to students. These can be used to guide decisions around if and how to use generative AI tools. In particular, they can be used to ensure that all students benefit from the opportunities that generative AI feedback can provide.
### Responsive and Responsible
The key question for responsible use is to ensure that the generative AI tool is able to provide appropriate feedback to students, and that such feedback is constructively aligned with the learning goals of the course. For models that have been trained on generic data, their responses must be sufficiently transferable to the context in which the students are working. For specifically trained models, the training process must be sufficiently rigorous and bias-free to ensure that it can support all students. Appropriateness of feedback can be further decomposed into _relevance_ and _factuality_. Recall that an LM is optimized to provide a probability for any given sentence, and that a good LM is one that provides high probabilities for 'natural' sentences; this implies that generation from an LM will attempt to deliver outputs with a high probability. While design patterns such as RLHF go a long way towards providing _relevant_ outputs, there is no design parameter in current state-of-the-art LMs that pushes them towards generating content that is _factual_. How to approach this is an unanswered research question in the area of NLP.
For responsible use it is important to consider the long tail of potential uses to ensure equity in supporting student learning. The key advantage of automation is that it can respond to the most common scenarios; but it is not enough to only address the most likely inputs. Human markers can give meaningful feedback to unlikely or unanticipated student submissions. Responsible use of generative AI requires faculty to know how their tool will respond to the long tail of potential student inputs - for it is in the long tail that the most support and feedback is required.
It is important to consider how the tool will be perceived from the student perspective, and how the students will end up actually using the tool. It is insufficient to only anticipate a single ideal mode of use; students will misinterpret instructions, and their patterns of use will evolve over time as they become more familiar with the tool. Students' initial use will likely be to shift the amount of feedback they get, from a historically scarce manual feedback environment, to an abundant automated feedback environment; but responsible use of these tools requires faculty to consider how they will be used once the full opportunities of abundance emerge. Responsive use of the tool will require monitoring of how these access patterns emerge over time, and ensuring that the students are utilizing the tool in a way that supports their achievement of their learning goals.
Responsible use also requires ensuring that all students can actually access the feedback tool. This will require reliable digital connectivity for students, as well as potentially involving licensing arrangements for third party tools.
### Ethical guidelines
The RESPACT framework emphasizes the importance of explicit ethical guidelines. It is not enough to simply work ethically; it is essential that the institutional environment have its own rules and processes to ensure that ethical principles are followed, and seen to be followed. Explicit ethical guidelines are particularly valuable in the implementation of emerging technologies. Well established technologies have established examples (and counter-examples) of how they should be used. Emerging technologies lack this body of wisdom, and instead require a stronger dependency upon principles and guidelines.
The tensions between ethical principles that arise from using generative AI in automated feedback further emphasize the importance of having explicit guidelines. Tension between principles requires judgement to resolve; explicit guidelines will assist in that resolution, and further assist in the communication and justification of that resolution in the future. By articulating priorities and values in advance, institutions show what matters to them, and help guide faculty as they implement these new technologies.
The ability to articulate the decisions that have been made, and the guidelines that have informed them, in turn assists in making the feedback systems more trustworthy. In this way stakeholders are asked not just to trust the product, but rather to trust the process that produced it.
### Security
Data security is an essential consideration for any educational technology application, both at the time of operation and ongoing into the future. An important consideration for automated feedback is the nature of the data involved. Student work is by its nature confidential information that needs protecting. The increasing prevalence of authentic assessment methods means that there is an increasing likelihood of the inclusion of data from other sources, such as project clients, users, stakeholders, etc., in the work that students submit for feedback. These may have further sensitivities, or commercial-in-confidence constraints, that will need to be considered when transmitting and storing student work. A key issue for many automated feedback implementations is that they will be reliant upon external providers (such as ChatGPT) for their models, and as such will be dependent upon those providers to both offer and deliver the necessary data security. Early providers in this space have terms of use regarding the transfer of ownership of data entered into their systems, allowing them the right to use anything that is provided as input into their models. This represents a significant data security question that goes above and beyond simply whether the servers of the provider are secure enough from hacking.
### Privacy
Student assignments are inherently confidential information, and they are usually implemented in environments that are designed to protect that confidentiality, through both legal and ethical responsibilities of the institutions. Automating the provision of feedback complicates the privacy issues by changing the nature of the information involved. Student assignments become more than just assignments - they become the inputs to an AI system, as well as potential training data for the next generation of that system. More people are involved in the loop, and more people can potentially have access to this information, which raises potential privacy issues.
The nature of the feedback itself as a confidential item must also be considered. AI-generated feedback is still confidential information; it is an inherent consequence of the assignment submission from the student, and needs to be treated with the same privacy considerations. This is particularly important for students in the long tail, whose feedback is much more likely to be unique to them as students, and thus more likely to be identifiable to a specific student. The more tailored feedback can be to the specific student, the more valuable it will be; but the more likely it is to give away their personal details and circumstances.
The introduction of generative AI solutions poses a specific risk with regards to reporting on implementations. Evaluating the effectiveness of these systems will involve investigating the data; this introduces additional privacy risks on those who are most identifiable in the data set - those who are in the long tail of the student submissions. This risk is further enhanced if such evaluations are published widely rather than just kept as in-house reviews.
### Accountability
Accountability for AI systems is where frameworks can be particularly powerful. Many of the issues in the use of AI in education require compromises to be made; a clear accountability framework will allow for these compromises to be identified and recorded, ensuring that decisions are explicitly made rather than emerging by default.
In establishing accountability, it is important to identify what success actually looks like. Articulating the goals of implementing automated feedback with generative AI will help to clarify what such a system is required to do, and for whom, and over what timeframe. Automated feedback will affect different users differently, and by clearly articulating how the impact upon each will be measured it becomes easier to anticipate and design for that impact. It also makes explicit how costs and benefits are balanced amongst the different stakeholder groups.
Different user groups may have different objectives, but by explicitly identifying what success looks like it will be possible to explicitly evaluate the work against these clear targets, rather than falling into a default "it went well" assertion of success.
Continuous review of these systems will be an important part of the accountability. Monitoring their impact as they are used, and making changes if required, will be an essential part of building and maintaining trust in the generative AI systems. It is one thing to anticipate the impacts on students, particularly students in the long tail. It is another to actually measure the impact and have systems and processes in place to adjust if and when unanticipated outcomes arise.
### Consent
As noted earlier, the operational context for generative AI often introduces challenges regarding the issue of consent. Many of these issues arise from the all-or-nothing approach to data, where a single statement of agreement (either explicit or implied) is taken as permission to use the data now and in the future for any purpose related (or possibly unrelated) to the generative AI.
One potential solution to these issues is to take a differentiated consent approach, unbundling the many different overlapping uses of data and allowing students control over which uses they agree to. Students may be happy to use the automated feedback tool, but not wish to have their data used in the subsequent training of the next generation of the AI; allowing them to differentiate their consent will address some of the all-or-nothing issues in this space. In this way a distinction can be made between the consent necessary for the purposes of offering the course as a learning experience, and the consent to the broader uses of student data.
Depending upon the nature of the assignment, more than just the individual student may be involved. Reflections on group work with other students, or reports of project work conducted in industry will both include information from and about other people beyond just the student; consideration must be given to how they are informed and consulted in this regard.
It is important to remember that students always have an opt-out option: they can avoid the automated feedback tool. In the early stages of the implementation of this technology, the price of such an opt-out is likely just forgoing the academic opportunity of feedback. As the technology becomes more widespread, this opt-out may require reluctant students to avoid higher education altogether.
It is also important to be clear on how consent was acquired for the dataset used to train your generative AI model. For custom-trained models, this should be straightforward; but for outsourced models more investigation is required. While you may not be able to control the issue of consent in your dataset, it is possible to become aware of how this issue has been resolved, and then decide appropriately about the use of that dataset.
### Transparency
Transparency is the final item in the RESPACT framework, but in some ways it is the most important. Transparency is key in establishing trust in the generative AI feedback systems, and the processes and decisions that were used to build them.
The explainable AI movement is focused on building trust in AI systems by ensuring that they are sufficiently transparent to understand how they operate. AI systems present the challenge of complexity vs transparency, and the compromise of potentially improving performance by sacrificing the ability to explain how the system operates. Being clear on how models are trained will demonstrate the transparency of process to build trust in the feedback they give.
Transparency can be a powerful mechanism in establishing trust from multiple stakeholders beyond just the students. Faculty, accreditation bodies, employers and government ministries are all invested in ensuring that student learning is indeed enhanced through these technologies, and a more open process will facilitate their trust. It is harder to make problematic choices when there is a clear commitment to making the choices visible to all.
Ultimately the trust in the generative AI systems will come from the transparency. Systems that can be understood that arise from open and transparent processes will build the trust necessary to ensure buy-in for these systems with the potential to improve student learning.
## V Conclusion
Recent developments in generative AI, including tools such as ChatGPT, offer significant possibilities in higher education. They present the opportunity to change feedback from being a scarce, expensive resource into being an abundant resource. There is enormous potential upside for learning in this new paradigm; but it comes with the risk that these benefits may not be evenly distributed.
Developing generative AI tools requires a significant investment of resources, and the process brings with it potential biases and constraints dependent upon the datasets that are used to train. At the high level, these biases can affect the suitability of the generative AI for use in teaching; at the small scale they can impact on the ability of these tools to respond to the specific needs of individual students.
The advantage of automation is that it allows for expertise to be shared at scale, but there is a risk that the expertise that is scaled is the expertise that is easiest to automate. Unless care is taken to consider the issues involved, there is a risk that students in the long tail will be left behind by the roll-out of these tools, further entrenching disadvantages in our higher education systems.
This paper has outlined key ethical considerations in the implementation of generative AI for automated feedback. Most of the ethical issues involved are issues of compromise; for instance, more sophisticated models can potentially provide better feedback, but they do so at the cost of the transparency and explainability that build trust in the systems, and the impact on learning during the initial adoption of these systems will be different from the impact in the steady-state future.
This paper presents a framework for navigating these challenges as faculty implement generative AI tools. This framework provides a lens for considering the identified ethical issues (and others that may emerge) along the dimensions of the RESPACT framework. A deliberate framework can provide the structure needed to make good decisions, and in doing so build faith in the tools that support better learning for your students. We hope that future work will make use of the framework outlined in this work, ensuring responsible development of automated feedback methodologies based on generative language modelling, and enabling all students to benefit from the opportunities these models provide.
|
2308.02648
|
Privacy Preserving In-memory Computing Engine
|
Privacy has rapidly become a major concern/design consideration. Homomorphic
Encryption (HE) and Garbled Circuits (GC) are privacy-preserving techniques
that support computations on encrypted data. HE and GC can complement each
other, as HE is more efficient for linear operations, while GC is more
effective for non-linear operations. Together, they enable complex computing
tasks, such as machine learning, to be performed exactly on ciphertexts.
However, HE and GC introduce two major bottlenecks: an elevated computational
overhead and high data transfer costs. This paper presents PPIMCE, an in-memory
computing (IMC) fabric designed to mitigate both computational overhead and
data transfer issues. Through the use of multiple IMC cores for high
parallelism, and by leveraging in-SRAM IMC for data management, PPIMCE offers a
compact, energy-efficient solution for accelerating HE and GC. PPIMCE achieves
a 107X speedup against a CPU implementation of GC. Additionally, PPIMCE
achieves a 1,500X and 800X speedup compared to CPU and GPU implementations of
CKKS-based HE multiplications. For privacy-preserving machine learning
inference, PPIMCE attains a 1,000X speedup compared to CPU and a 12X speedup
against CraterLake, the state-of-art privacy preserving computation
accelerator.
|
Haoran Geng, Jianqiao Mo, Dayane Reis, Jonathan Takeshita, Taeho Jung, Brandon Reagen, Michael Niemier, Xiaobo Sharon Hu
|
2023-08-04T18:10:17Z
|
http://arxiv.org/abs/2308.02648v2
|
# Privacy Preserving In-memory Computing Engine
###### Abstract
Privacy has rapidly become a major concern/design consideration. Homomorphic Encryption (HE) and Garbled Circuits (GC) are privacy-preserving techniques that support computations on encrypted data. HE and GC can complement each other, as HE is more efficient for linear operations, while GC is more effective for non-linear operations. Together, they enable complex computing tasks, such as machine learning, to be performed exactly on ciphertexts. However, HE and GC introduce two major bottlenecks: an elevated computational overhead and high data transfer costs. This paper presents PPIMCE, an in-memory computing (IMC) fabric designed to mitigate both computational overhead and data transfer issues. Through the use of multiple IMC cores for high parallelism, and by leveraging in-SRAM IMC for data management, PPIMCE offers a compact, energy-efficient solution for accelerating HE and GC. PPIMCE achieves a 107\(\times\) speedup against a CPU implementation of GC. Additionally, PPIMCE achieves a 1,500\(\times\) and 800\(\times\) speedup compared to CPU and GPU implementations of CKKS-based HE multiplications. For privacy-preserving machine learning inference, PPIMCE attains a 1,000\(\times\) speedup compared to CPU and a 12\(\times\) speedup against CraterLake, the state-of-art privacy preserving computation accelerator.
## I Introduction
Privacy-preserving computation (PPC), where computations are performed directly on encrypted data, is a solution for providing security and privacy in modern systems. However, PPC techniques typically incur extremely high computation costs. For example, machine learning (ML) inference with encrypted data can be 10,000\(\times\) to 100,000\(\times\) slower than plaintext [68, 71, 72, 77]. Thus, there is a great need for solutions that mitigate the performance overhead of PPC.
One of the most promising PPC techniques is homomorphic encryption (HE) [8, 21, 30]. HE allows computations to be performed directly on ciphertexts. In cloud computing, HE protects clients' privacy, as data remains encrypted during server-side computation. While HE strengthens security and privacy, it introduces substantial computation overhead due to (1) the high volume of data generated by large ciphertexts (i.e., _ciphertext expansion_) [70, 80], and (2) expensive bootstrapping operations, especially for deep neural networks (DNN) that require many nested multiplications (e.g., [7]). Additionally, many HE schemes lack support for non-linear operations, including Brakerski/Fan-Vercauteren (B/FV) [21], Brakerski-Gentry-Vaikuntanathan (BGV) [8] and Cheon-Kim-Kim-Song (CKKS) [12], which can impact DNN accuracy [28].
Garbled Circuits (GC) are an alternative PPC technique that can efficiently support non-linear functions. GC can logically operate on encrypted binary data, allowing arbitrary computations. Numerous advancements have contributed to optimizing the performance of GC-based applications [49, 75, 88]. State-of-the-art (SOTA) private machine learning protocols [26, 42, 44, 59] use HE for linear operations and GC for non-linear to achieve high accuracy. However, previous work also shows that GC can suffer from high computational costs [32] and large client-server communication overheads [71, 72].
Hardware accelerators exist for both HE [48, 68, 71, 72] and GC [22, 36, 37, 60]. Although these accelerators yield high performance, they also suffer from large data transfer overheads [26, 71]. Recent research suggests that in-memory computing (IMC) is a viable solution [54, 62, 70, 79]. IMC has been proposed as an architectural solution to overcome the latency and energy overheads associated with data transfer [24, 58, 74]. With an IMC architecture, a subset of logic, arithmetic, and memory operations associated with given tasks are performed in memory (without transfers to/from a processor). IMC exploits the large internal memory bandwidth to achieve parallelism, which reduces latency and saves energy due to fewer external memory references.
Existing PPC accelerators have only targeted HE or GC, making stand-alone solutions suboptimal for certain PPC tasks. For example, in privacy-preserving machine learning (PPML) inference, HE cannot easily support non-linear operations like ReLUs while maintaining high accuracy [26, 27]. Therefore, existing HE accelerators must replace the non-linear activation functions in ML algorithms with HE-friendly operations using methods such as polynomial approximation. These HE-friendly activation functions cause a significant drop in accuracy [28]. Alternatively, while a GC accelerator can accelerate ReLU functions, it can be extremely inefficient for matrix-vector multiplication in ML algorithms.
Using a combination of HE and GC (e.g., [44]) allows the execution of PPML tasks _without any loss of accuracy_. Our experiments also show that the combined HE+GC protocol offers less latency for PPML tasks than a pure HE approach due to HE bootstrapping overheads. Therefore we introduce the Privacy Preserving In-memory Computing Engine (PPIMCE), an IMC architecture designed to accelerate HE and GC in a single, unified hardware platform. In PPIMCE, we leverage the high parallelism, high throughput, low data transfer time, and low energy usage offered by IMC to overcome the performance and data transfer bottlenecks in HE and GC.
The key insight behind our approach is the use of an in-SRAM IMC accelerator for executing HE and GC, substantially mitigating data transfer costs between on-chip memory and processing units. The PPIMCE system utilizes specialized IMC cores designed to perform a range of operations tailored to the combined use of HE and GC. These cores, strategically placed near memory arrays, optimize data transfer and surpass traditional ASICs in efficiency. One significant challenge we confront is integrating HE and GC, two fundamentally distinct algorithms, into a single system. To tackle this, we leverage our IMC cores' proficiency in handling basic logical and arithmetic operations. Additionally, we employ a scheduler that efficiently coordinates these operations, facilitating the concurrent execution of HE and GC tasks within the system. Our key contributions can be summarized as follows:
* PPIMCE is the first hardware accelerator based on IMC that can execute all essential operations to support HE and GC with high performance.
* The mapping and scheduling scheme in PPIMCE enables the high-performance realization of HE and GC.
* A thorough evaluation shows PPIMCE outperforms CPU, GPU, and SOTA PPC accelerators in latency, area, and power across diverse benchmarks.
Our experimental results, detailed in Section VIII, show PPIMCE's superior performance. We observe a 100\(\times\) latency improvement over CPU-based GC and a remarkable 1,500\(\times\) and 800\(\times\) speedup over CPU and GPU in CKKS-based HE multiplications. Furthermore, PPIMCE surpasses existing PPML solutions, offering a 1,000\(\times\) speedup over Gazelle and up to 130\(\times\) over the latest PPC accelerators, all within a compact 138.8_mm\({}^{2}\)_ area and just 9.4\(W\) average power consumption.
## II Motivation and Challenges
This section highlights our primary motivations: using GC for non-linear layers in PPML inference to avoid expensive bootstrapping and exploiting IMC's performance against ASIC for data-intensive applications like HE and GC. The main challenge in designing PPIMCE is the integration of two fundamentally distinct algorithms, HE and GC, into a singular hardware architecture.
### _HE+GC vs. HE-only protocol for PPML inference_
There are two primary methods for PPML inference: (1) exclusively using HE [52, 9, 78], or (2) using mixed protocols that use HE for linear and GC for non-linear functions [44, 59]. The HE-only approach outsources all computations to the server, incurs low communication costs, but significantly increases latency due to HE bootstrapping. In contrast, while the mixed HE+GC protocols require increased communication, they can avoid bootstrapping and reduce computation latency. We aimed to compare the communication latency associated with GC with the bootstrapping demands of an HE-only strategy.
The HE-only runtime includes latencies from HE linear computations and bootstrapping. For non-linear functions, we assume degree-6 polynomial approximation for ReLU activations [13]. The HE-only computation is implemented and profiled with the SOTA CKKS library HEAAN [43].
In the HE+GC method, we follow the Gazelle framework [44]. Runtime consists of HE computations for linear functions and GC Garbler computations for non-linear functions. To support HE tasks, we use the SEAL library [20] (SEAL and HEAAN show comparable performance [78]). The emp-tool library [84] is employed for GC computations. To account for communication latencies, we considered the communication latency of GC under various communication protocols, assuming bandwidths of 2G [38] to 5G [41] network. This allows us to comprehensively assess the GC delay on the server side.
Figure 1 illustrates the time difference between the HE-only and HE+GC approaches. In the HE-only approach, bootstrapping consumed over 95% of the computation time, resulting in inference times of several days. Conversely, the HE+GC approach avoids bootstrapping, thus reducing the total inference time to mere hours, as GC computation is considerably faster. Despite the communication cost associated with GC, reduced bootstrapping overhead from the HE+GC approach can improve efficiency. These savings underscore PPIMCE's advantage, allowing it to outperform accelerators using only HE on PPML inference due to its effective execution of both HE and GC operations (See Section VIII-B).
### _Advantages of IMC_
As emphasized in [1], one major bottleneck in data-intensive applications like HE and GC is the substantial data movement overhead. This issue arises from the need to move large data between memory and computing units. For instance, to ensure a security level of 256 in HE, a single ciphertext size is 16MB, as the ciphertext is represented as a high-degree polynomial. A single convolutional layer in PPML inference might require as much as 256MB of ciphertext [71]. Similar issues are encountered with a GC approach, as each bit of plaintext is encrypted into 128-bit secret labels. Thus, compared to non-GC solutions, GC involves the transfer of more than 128 times the volume of data from memory to a computing unit.
Fig. 1: Analysis of CPU computation time in HE-only and HE+GC approaches for PPML inference.
IMC can alleviate this data movement overhead. IMC architectures can efficiently perform bitwise and arithmetic operations inside the memory, significantly improving efficiency for data-intensive applications like HE and GC. The potential of IMC to revolutionize hardware accelerators for such applications has garnered interest from academic circles [73, 82], government agencies [17, 18], and the semiconductor industry [25, 61].
To further illustrate the efficacy of IMC, we contrast it with a hypothetical ASIC accelerator that operates at an identical speed and capacity. The aim is to match the integer multiplications of a Compute-Enabled Memory (CEM) as utilized in PPIMCE (See Section IV-B). In one operation (assuming a polynomial size N=8192), we would require 2\(\times\)8192 multipliers, which amounts to a total area of 343.6 \(mm^{2}\) using data from [81]. Conversely, a CEM only necessitates 4096 arrays, thereby only consuming an area of 138.5 \(mm^{2}\). This indicates that IMC designs are around 2.5 times more area-efficient when achieving the same performance. Such efficiency underscores the effectiveness of IMC in managing the data movement overheads inherent in both HE and GC computations.
### _Challenges of Combining HE and GC_
PPIMCE aims to incorporate both HE and GC in a single IMC accelerator with the goal of supporting both HE and GC for the efficient execution of PPML tasks. However, unifying these two approaches in PPIMCE presents significant challenges due to the divergent computational and scheduling requirements.
**HE and GC computations fundamentally differ:** HE computation is inherently multi-layered, e.g., neural network operations that devolve into HE arithmetic, polynomial arithmetic, and finally, coefficient-wise integer arithmetic [44]. In contrast, GC computation is based on two primary gates--AND and XOR--operating on GC labels [49, 88]. Especially, AND in GC entails AES encryption and other miscellaneous logical operations. Thus, HE and GC have distinct computational kernels, with HE leaning more towards integer arithmetic, and GC leaning on logical operations and AES encryption. PPIMCE reconciles these differences by employing IMC-cores for basic logical and arithmetic operations in/near memory for HE and GC (See Section V-A,V-B).
**Scheduling Difficulties:** The scheduling requirements for HE and GC also diverge due to their distinct fundamental operations. In HE, coefficient-wise integer operations can typically be parallelized using Single Instruction, Multiple Data (SIMD) scheduling, as these operations are mostly independent [71]. However, in GC, the Boolean circuit (graph) demonstrates more significant data dependencies, which can vary across different tasks [60]. This variability makes it challenging to implement a universal scheduling mechanism as in HE. PPIMCE addresses this challenge by implementing a versatile scheduler that can effectively parallelize computation in HE while accurately tracking and managing data dependencies in GC (See Section V-C,V-D).
## III Background
This section provides a brief introduction to HE and GC. For a complete description, see [11, 86, 88, 49].
### _Homomorphic Encryption_
#### Iii-A1 HE basics
Homomorphic encryption (HE) enables computation on encrypted data, and Fully Homomorphic Encryption (FHE) can theoretically evaluate any function. FHE schemes typically rely on the Ring Learning With Errors (RLWE) problem, using tuples of polynomials in the ring \(R_{q}=\mathbb{Z}_{q}[X]/(X^{N}+1)\) for a power of two \(N\). The B/FV and BGV FHE schemes work on finite-field plaintexts and are adaptable to machine learning applications [8, 21]. The CKKS scheme [12], which carries out approximate fixed-point arithmetic, is preferred for machine learning due to its native support for approximate arithmetic. Our work employs the CKKS scheme for its suitability to machine learning applications' approximate arithmetic [15, 46, 48, 51, 53]. However, our architecture can support other FHE schemes due to its emphasis on improving the fundamental integer/polynomial operations used in FHE algorithms.
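To make the ring structure concrete, here is a minimal Python sketch of addition and schoolbook negacyclic multiplication in \(R_{q}\), with toy parameters \(N=8\) and \(q=97\); production RLWE implementations use NTT-based multiplication and far larger parameters.

```python
# Toy arithmetic in R_q = Z_q[X]/(X^N + 1); coefficients stored as length-N lists.
# Parameters are illustrative only -- real schemes use N >= 2^12 and much larger q.
N, q = 8, 97

def poly_add(a, b):
    return [(x + y) % q for x, y in zip(a, b)]

def poly_mul(a, b):
    # Schoolbook negacyclic convolution: X^N wraps around as -1.
    c = [0] * N
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                c[k] = (c[k] + a[i] * b[j]) % q
            else:
                c[k - N] = (c[k - N] - a[i] * b[j]) % q
    return c

one = [1] + [0] * (N - 1)
x = [0, 1] + [0] * (N - 2)
assert poly_mul(one, x) == x                                          # identity
assert poly_mul(x, [0] * (N - 1) + [1]) == [q - 1] + [0] * (N - 1)    # X * X^(N-1) = -1
```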
#### Iii-A2 Operations, Noise and Bootstrapping
FHE schemes add noise to fresh ciphertexts in encryption, which accumulates as computations are performed [8, 12, 14, 21]. Eventually, the noise becomes large enough that correct decryption is no longer possible. _Bootstrapping_ is a highly complex and expensive operation that reduces noise to tolerable levels without secret keys. In this work, we interpose GC between linear layers of neural networks; this has the additional effect of removing noise from ciphertexts [44, 68], obviating any need for us to perform bootstrapping.
HE additions/multiplications are performed with sequences of polynomial additions, subtractions, multiplications, and scaling operations. HE rotations are performed by applying a polynomial automorphism to polynomials of the ciphertext and computing a dot product. All outcomes are closed in a polynomial ring, i.e., integer/polynomial modular reductions are performed after all the operations to keep the coefficients/degrees within a finite bound. For more details, please refer to the original schemes [11, 12].
#### Iii-A3 HE Optimizations
Number-Theoretic Transform (NTT) and Residue Number System (RNS) serve as prominent algorithmic optimizations in Fully Homomorphic Encryption (FHE). The NTT maps polynomials to the evaluation domain, where polynomial multiplication corresponds to coefficient-wise multiplication; this lowers computational complexity and lets multiplication be executed in \(O(N\log N)\) time due to the log-linear time complexity of the NTT and its inverse [57]. By keeping all polynomials in the evaluation domain in PPIMCE, expensive NTTs are reduced [11, 62].
Conversely, RNS optimization facilitates handling smaller coefficients in FHE polynomial calculations, enabling, for example, a polynomial with 512-bit coefficients to be represented as 16 polynomials with 32-bit coefficients. This method simplifies large computations and enables parallelization of all polynomial operations in hardware designs with ample computing resources [2, 29]. The multi-core design of PPIMCE is well-positioned to take advantage of RNS for efficient computations.
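As a concrete illustration (with small placeholder moduli standing in for 32-bit primes, and requiring Python 3.8+ for the modular inverse), the sketch below splits values into residues, multiplies limb-wise, and reconstructs the product with the Chinese Remainder Theorem:

```python
from math import prod

MODULI = [97, 101, 103]  # placeholder pairwise-coprime moduli

def to_rns(x):
    return [x % m for m in MODULI]

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction modulo Q = prod(MODULI).
    Q = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Qi = Q // m
        x += r * Qi * pow(Qi, -1, m)  # pow(., -1, m) = modular inverse (Python 3.8+)
    return x % Q

a, b = to_rns(1234), to_rns(5678)
c = [(x * y) % m for x, y, m in zip(a, b, MODULI)]  # limbs multiply independently
assert from_rns(c) == (1234 * 5678) % prod(MODULI)
```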
### _Garbled Circuits_
#### Iv-B1 GC basics
Garbled Circuits, introduced in 1986 [86], is a secure two-party computation scheme involving two key roles: the **Garbler** and the **Evaluator**. During the garbling phase, the Garbler encrypts Boolean circuits and prepares encrypted truth tables for all gates, which are then sent to the Evaluator [5]. The Evaluator uses these encrypted tables and inputs to process the GC during the evaluation phase.
To improve GC's performance, several optimizations, including Point-and-Permute [4], Row Reduction [65], FreeXOR [49], and Half-Gate [88], have been proposed. These reduce computation complexity and the size of garbled tables. The PPIMCE system leverages FreeXOR and Half-Gate as the basic operations for GC.
#### Iv-B2 FreeXOR and Half-Gate
FreeXOR allows secure execution of XOR gates without garbled tables [49]. Half-Gate optimizes the ciphertext size of the AND gate and, combined with FreeXOR, enables the construction of circuits with efficient garbled XOR and AND gates [33, 88].
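The sketch below illustrates why XOR gates are "free": with a global offset \(R\) and wire labels chosen so that the 1-label is the 0-label XORed with \(R\), an XOR gate's output label is just the XOR of its input labels, with no garbled table. The label width and randomness source here are placeholders, not a hardened implementation.

```python
import secrets

LABEL_BITS = 128
R = secrets.randbits(LABEL_BITS) | 1  # global offset; LSB=1 for point-and-permute

def new_wire():
    zero = secrets.randbits(LABEL_BITS)
    return zero, zero ^ R  # labels encoding semantic 0 and 1

def free_xor(a_zero, b_zero):
    # Output zero-label; the one-label is implicitly (a_zero ^ b_zero) ^ R.
    return a_zero ^ b_zero

a0, a1 = new_wire()
b0, b1 = new_wire()
c0 = free_xor(a0, b0)
# Evaluating with labels for bits x, y yields c0 ^ ((x ^ y) * R):
assert (a1 ^ b0) == c0 ^ R   # 1 XOR 0 = 1
assert (a1 ^ b1) == c0       # 1 XOR 1 = 0
```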
The garbled tables generation in GC is handled through AES, which involves four primary operations: AddRoundKey, SubBytes, MixColumns, and ShiftRows [16]. These techniques collectively contribute to the effective operation of the GC in the PPIMCE system. A more general introduction to GC can be found [85].
## IV PPIMCE Architecture
In this section, we introduce the architecture of PPIMCE. Figure 2 shows the overall architecture of PPIMCE. PPIMCE consists of an IMC-Instruction Scheduler (IMC-IS) and multiple IMC cores. PPIMCE serves as a co-processor for a RISC-V processor extended with customized instructions (C-Inst) for HE and GC operations. To execute an HE/GC operation, a C-Inst is issued to the IMC-IS. The IMC-IS dispatches the instruction to each core controller in each IMC core. The core controller decodes the C-Inst into control signals for computing units in the in-memory processing element (IMC-PE). The detailed architecture of each block is described below. Section V will discuss how PPIMCE performs HE and GC tasks.
### _IMC-Instruction Scheduler_
The IMC-IS dispatches RISC-V instructions to IMC cores. The C-Inst Bank temporarily stores the C-Insts sent from the RISC-V, and the Output Address Content Addressable Memory (OA-CAM) checks the data dependency. CAM supports efficient parallel search [63]. The OA-CAM and C-Inst Bank are set to 16KB, which is sufficient to handle PPIMCE data dependencies.
IMC-IS employs the OA-CAM to ascertain data dependencies amongst instructions. An output address is deemed 'unavailable' if its instruction is being executed or waiting in the C-Inst Bank. Incoming instructions relying on these addresses must pause until prior instructions conclude. These unavailable addresses are held in the OA-CAM and are removed once the instruction is completed. Data dependencies are determined through an \(O(1)\) time search in the OA-CAM using an instruction's input address [63]. Both RISC-V instructions and those within the C-Inst Bank undergo this dependency check each cycle.
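Functionally, this dependency check is a constant-time membership test over pending output addresses. The sketch below mimics it in software with a set; the interface names are ours, not the hardware's.

```python
class DependencyTracker:
    """Software analogue of the OA-CAM dependency check (hypothetical interface)."""

    def __init__(self):
        self.pending = set()  # output addresses of in-flight or queued instructions

    def can_issue(self, inst):
        # Stall if any input reads an address whose producer has not completed.
        return not any(addr in self.pending for addr in inst["inputs"])

    def issue(self, inst):
        self.pending.add(inst["output"])

    def retire(self, inst):
        self.pending.discard(inst["output"])

t = DependencyTracker()
i1 = {"inputs": [0, 1], "output": 2}
i2 = {"inputs": [2, 3], "output": 4}  # consumes i1's output
t.issue(i1)
assert not t.can_issue(i2)            # i2 waits in the C-Inst Bank
t.retire(i1)
assert t.can_issue(i2)
```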
PPIMCE's operational modes vary for HE and GC tasks. GC tasks require IMC cores to optimize parallelism. IMC cores can be grouped into a GC computing unit to parallelize GC gates and maximize hardware utilization. Instructions are dispatched to these units by the IMC-IS, and potential stalling scenarios are mitigated by storing RISC-V instructions in the C-Inst Bank until their dependencies are resolved.
Conversely, an HE task represented by a single C-Inst can perform \(N\) integer operations, one per polynomial coefficient, simultaneously across all IMC cores. As polynomial arithmetic in HE lacks data dependencies, instructions are dispatched to IMC cores without OA-CAM and Bank checks. Further details about IMC-IS functionality for GC and HE tasks are provided in Sections V-C and V-D, respectively.
### _IMC Core_
There are multiple IMC cores in PPIMCE, and each IMC core contains several computing units in its IMC-PE as well as a core controller. We describe each component in the IMC core in detail and then illustrate how HE and GC's basic operations are mapped into the IMC core.
#### Iv-B1 Imc-Pe
The core component of the IMC-PE is the CEM, an innovative design adapted from IMCRYPTO [69] that allows arithmetic and logic operations to be performed inside the SRAM array. However, the CEM is less efficient when handling permutation tasks, due to constant memory read/write operations, as well as LUT-based operations, as pre-storing LUT tables can compromise memory capacity. To overcome these inefficiencies, the IMC-PE is supplemented with a Shifter and a LUT fabric. These enhancements are tailored to better support the fundamental functions in both HE and GC (see Section V-A and V-B), optimizing the performance of the PPIMCE's IMC core.
Fig. 2: A high-level view of the PPIMCE accelerator and details of the IMC core.
The **LUT fabric** employs small memory elements (i.e., 6T-SRAM arrays and RA/CAM arrays of size 256\(\times\)8 [69]) with customized peripherals, such as XOR networks (i.e., XOR trees). The memory elements of an LUT fabric and the XOR trees implement table-based multiplication over \(GF(2^{8})\), which is used in AES. The size of each memory element (i.e., 256\(\times\)8) is chosen so it is possible to store 256 pre-computed bytes (the size of an Sbox). In PPIMCE, besides AES, the regular 4-bit integer multiplication in HE is also implemented with pre-computed values stored in the LUT fabric. Each LUT fabric in an IMC-PE contains 4 RA/CAM arrays and 8 SRAM arrays, which enables a good trade-off between the multiplication speed for AES and HE implementations and the area overhead of PPIMCE.
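For intuition, the sketch below builds the classic exp/log tables for table-based multiplication over \(GF(2^{8})\) with the AES reduction polynomial 0x11B; the LUT fabric stores precomputed tables of exactly this flavor, though its internal organization differs.

```python
# Table-based GF(2^8) multiplication (AES field, reduction polynomial 0x11B).
EXP = [0] * 255
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    # Multiply x by the generator 0x03: x ^= xtime(x).
    xt = x << 1
    if xt & 0x100:
        xt ^= 0x11B
    x ^= xt

def gf_mul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[(LOG[a] + LOG[b]) % 255]

assert gf_mul(0x57, 0x83) == 0xC1  # worked example from the AES specification
```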
The **shifter** of an IMC-PE performs byte permutations, rotations, and bit extensions. For instance, the ShiftRows (InvShiftRows) encryption steps (decryption) need byte permutations in AES. Note that only byte permutations were supported by the shifter in IMCRYPTO [69]. Rotations and bit extensions were introduced in PPIMCE to support shift-add and integer reductions in HE and Half-Gates in GC (See Section V for more details).
Finally, the **CEM** comprises multiple arrays (called CEM arrays). With the aid of customized sense amplifiers, each CEM array can execute AND, OR, XOR, NOT, and ADD operations between two aligned memory words via the simultaneous activation of two wordlines. The size of a CEM array varies from tens of KB up to a few MB. A large CEM array can be useful when a CPU frequently reads cached data and sends it to external parties using communication protocols. On the other hand, large memories have longer access times and consume more power. To allow for a compromise between memory size, access times, and energy consumption, the CEM of a single IMC-PE is a 4 KB memory that consists of a tiled SRAM structure (with 4 tiles) that allows for the implementation of a high-throughput pipeline structure inside the IMC-PE. The CEM in PPIMCE is equipped with carry-lookahead adders, which can considerably improve addition time for long words (beneficial for HE).
#### Iv-A2 Core controller
To execute basic HE and GC operations more efficiently, we encode each HE and GC instruction with a sequence of micro-instructions and add a core controller to guide the execution of these micro-instructions in each IMC-PE. The micro-instruction execution is fully pipelined using core controllers. The micro-instructions are stored inside the micro-instruction memory (\(\mu\)IM). The size of \(\mu\)IM is set to 16 KB, which is sufficient to store all the micro-instructions needed for HE and GC.
Each micro-instruction is a 128-bit value that contains the control signals for each computing unit in IMC-PE. A 1-bit enable/disable and 1-bit memory mode switch signal are allocated for the LUT fabric. A 6-bit control signal (containing 1-bit enable/disable and 5 bits of function code) is included to specify the different shift operations (e.g., shift left, shift right, bit extension, etc.). Each CEM array has 1-bit enable/disable and 3-bit function codes for different in-memory computing operations and 26 bits for the corresponding memory addresses.
Each core controller also contains a decoder, which decodes a C-Inst into a \(\mu\)IM's address sequence. The \(\mu\)IM reads one micro-instruction out in each cycle until it reaches the end of the address sequence. The control signals in a micro-instruction are sent to the respective components in parallel.
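These field widths add up to exactly 128 bits (2 LUT bits, 6 shifter bits, and four 30-bit CEM fields of 1-bit enable, 3-bit function, and 26-bit address). The packing below is an illustrative encoding of that layout; the actual bit positions are our assumption, not the hardware's specification.

```python
def pack_micro_inst(lut_en, lut_mode, shift_ctl, cem):
    """Pack one 128-bit micro-instruction (illustrative field order).

    cem: list of 4 tuples (enable: 1 bit, func: 3 bits, addr: 26 bits).
    Total width: 2 + 6 + 4 * 30 = 128 bits.
    """
    word = (lut_en & 1) | ((lut_mode & 1) << 1) | ((shift_ctl & 0x3F) << 2)
    offset = 8
    for en, func, addr in cem:
        field = (en & 1) | ((func & 0x7) << 1) | ((addr & 0x3FFFFFF) << 4)
        word |= field << offset
        offset += 30
    return word

nop_cem = [(0, 0, 0)] * 4
word = pack_micro_inst(lut_en=1, lut_mode=0, shift_ctl=0b00010, cem=nop_cem)
assert word < (1 << 128)  # fits in a 128-bit micro-instruction word
```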
### _Customized RISC-V Processor_
In the PPIMCE architecture, a customized 32-bit RISC-V microprocessor is employed to efficiently manage HE and GC tasks. This involves enhancing the RISC-V ISA with ten new RV32I R-type instructions for HE and GC operations, while simultaneously updating micro-instructions and the LUT fabric accordingly. Eight of these are HE/GC function instructions that execute fundamental HE and GC operations, including Half-Gate and FreeXOR for GC, and polynomial manipulations (addition, subtraction, permutation, multiplication, reduction), NTT, and INTT for HE. Additionally, PPIMCE gains the capability to update micro-instructions in each core controller: a memory-write instruction efficiently writes a micro-instruction to all core controllers simultaneously. By employing immediate values to define the micro-instruction and its address, it enables easy support for various HE and GC operations by adding new sequences. Furthermore, PPIMCE utilizes another RISC-V instruction to concurrently update content in all LUTs of each IMC core.
## V GC and HE mapping
This section details the execution of HE and GC's fundamental operations within the IMC core, and how PPIMCE manages HE and GC tasks.
### _GC basic operations in IMC core_
IMC cores perform Half-Gate and FreeXOR computations for GC tasks. The core controller performs static scheduling for dispatching the control signals of Half-Gate and FreeXOR into each component of the corresponding IMC core. Below, we describe how Half-Gate and FreeXOR are computed in the IMC core.
**Half-Gate:** The Half-Gate contains the AES-128 basic functions (AddRoundKey, SubBytes, MixColumns, and ShiftRows) [16] and other operations such as logical XOR, AND, and LSB extension of a label. The LUT fabric performs the SubBytes and MixColumns of AES in Half-Gate. The Shifter performs the ShiftRows and LSB extension. AddRoundKey and logic AND use in-memory XOR and AND in CEM arrays.
**FreeXOR:** FreeXOR can be performed via in-memory XOR in CEM arrays.
### _HE basic operations in IMC core_
The IMC core performs integer operations (integer reduction, integer addition, and integer multiplication) as the HE basic operations in polynomial computation for HE tasks. The core controller decodes integer operations into control signals and performs static scheduling to dispatch the control signals to each component. Below we describe how each integer operation in HE is computed in PPIMCE's IMC core.
**Integer reduction modulo \(q\):** In modern HE schemes, elements operate in the domain \(R_{q}=\mathbb{Z}_{q}[X]/(X^{N}+1)\), where integer reduction modulo \(q\) is applied to all coefficients as part of basic arithmetic operations. PPIMCE utilizes Barrett reduction [3] for general modular reduction. However, prior work has shown that choosing moduli of special form can bring performance improvements [79, 83]. Barrett reduction works with any modulus of any size, but it introduces considerable computational overhead due to the two integer multiplications it performs. PPIMCE can utilize a set of three specific moduli \(q_{i}\) (with \(q_{0}=2^{k}-1\), \(q_{1}=2^{k}\), and \(q_{2}=2^{k}+1\)) as the ciphertext modulus \(q\) for low-depth tasks like PPML inference, allowing optimizations for better performance. Conversely, Barrett reduction is employed for cases in our benchmarks with larger ciphertext moduli.
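For reference, a minimal Python sketch of textbook Barrett reduction with toy parameters; the two integer multiplications that motivate the special-modulus alternative are marked in the comments.

```python
def barrett_setup(q, k):
    # Precompute m = floor(2^(2k) / q) once per modulus.
    return (1 << (2 * k)) // q

def barrett_reduce(x, q, k, m):
    # Valid for 0 <= x < q**2 with q < 2**k.
    t = (x * m) >> (2 * k)  # first multiplication: quotient estimate
    r = x - t * q           # second multiplication
    while r >= q:           # at most two corrective subtractions
        r -= q
    return r

q, k = 97, 7  # toy modulus
m = barrett_setup(q, k)
assert all(barrett_reduce(x, q, k, m) == x % q for x in range(q * q))
```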
For optimized modular reduction after multiplication, we take as input an integer \(X\) and produce \(X_{q_{i}}=X\pmod{q_{i}}\) for \(X\in[0,q_{i}^{2})\). We employ an algorithm similar to the one described in [79] to avoid multiplication during the reduction process. Initially, we calculate \(X^{\prime}_{q_{1}}=X\wedge(2^{k}-1)\), \(Y=X>>k\), and \(X^{\prime}=X^{\prime}_{q_{1}}+Y\). If \(q_{i}=2^{k}\), we directly use \(X^{\prime}_{q_{1}}\) as the output. If \(q_{i}=2^{k}+1\), we first check if \(X^{\prime}_{q_{1}}\geq Y\) by performing \(A=X^{\prime}_{q_{1}}-Y\) and extending the most significant bit of the temporary value \(A\) to \(A^{\prime}\) in the Shifter of the IMC core. This step checks the sign bit of \(A\). If \(X^{\prime}_{q_{1}}<Y\), \(A^{\prime}\) will have all 32 bits set to 1. Otherwise, \(A^{\prime}\) will be 0. Next, we apply conditional logic using \(A^{\prime}\) to choose the output between \(X^{\prime}_{q_{1}}-Y\) and \(X^{\prime}_{q_{1}}+(q_{i}-Y)\), where \(X_{q_{i}}=(((X^{\prime}_{q_{1}}+(q_{i}-Y))\oplus(X^{\prime}_{q_{1}}-Y))\wedge A^{\prime})\oplus(X^{\prime}_{q_{1}}-Y)\). If \(q_{i}=2^{k}-1\), we perform a similar process to select the output between \(X^{\prime}-q_{i}\) and \(X^{\prime}\) based on the condition \(X^{\prime}\geq q_{i}\). When compared to executing Barrett reduction in the IMC core, this process can provide a 15% performance improvement.
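The procedure just described can be sketched as follows; we use plain conditionals where the hardware applies the branchless mask select with \(A^{\prime}\), and the moduli are toy values rather than the exact hardware dataflow.

```python
def reduce_special(x, k, form):
    """Multiplication-free reduction modulo q of special form.

    form selects q: -1 -> 2^k - 1, 0 -> 2^k, +1 -> 2^k + 1.
    Valid for products of two reduced residues; only masks, shifts, add/sub.
    """
    low = x & ((1 << k) - 1)   # X'_{q1}: low k bits
    high = x >> k              # Y: high bits
    if form == 0:              # q = 2^k
        return low
    if form == 1:              # q = 2^k + 1: 2^k = -1 mod q, so x = low - high
        q = (1 << k) + 1
        r = low - high
        return r + q if r < 0 else r
    q = (1 << k) - 1           # q = 2^k - 1: 2^k = 1 mod q, so x = low + high
    r = low + high
    return r - q if r >= q else r

k = 5
for form, q in [(-1, 31), (0, 32), (1, 33)]:
    assert all(reduce_special(a * b, k, form) == (a * b) % q
               for a in range(q) for b in range(q))
```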
**Integer addition and subtraction:** Integer addition can be done using in-memory addition in CEM arrays. The CEM arrays do not support subtraction directly, but we can use NOT and addition operations to perform it. We first perform a NOT operation on the subtrahend and store the result. Then we perform an ADD between the inverted subtrahend and the minuend, with the carry-in of the addition set to 1.
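A one-line software analogue of this NOT-then-ADD subtraction, with a placeholder word width:

```python
def imc_sub(a, b, width=32):
    # a - b as a + (~b) + 1 (two's complement): NOT the subtrahend,
    # then ADD with the carry-in set to 1, as in the CEM description above.
    mask = (1 << width) - 1
    return (a + ((~b) & mask) + 1) & mask

assert imc_sub(7, 3) == 4
assert imc_sub(3, 7) == (3 - 7) & 0xFFFFFFFF  # wraps modulo 2^32
```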
**Integer multiplication:** Integer multiplication is the fundamental computing element of polynomial multiplication. Implementing integer multiplication in PPIMCE using the naive shift-add method requires \(O(n^{2})\) shift-and-add operations in CEM arrays (where \(n\) represents the number of bits), which would lead to impractically high computation costs. To avoid this, we apply two optimizations in integer multiplications: **(i)** We use LUT fabrics to perform fast 4-bit integer multiplication; **(ii)** We employ the Karatsuba multiplication algorithm [45].
We utilize the LUT fabric for 4-bit integer multiplication in a single clock cycle and employ the Karatsuba multiplication algorithm [45] to recursively break down multiplications of two integers into multiplications of integers with half the number of bits. With a complexity of only \(O(n^{1.6})\), the Karatsuba algorithm outperforms the naive approach. In PPIMCE, the base case for Karatsuba multiplication is 4-bit integer multiplication. The algorithm's addition operations are carried out using in-memory addition in CEM arrays of IMC cores, while left shift operations are executed in the IMC cores' Shifter.
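As an illustration of the two optimizations combined, the sketch below performs Karatsuba multiplication with a 4-bit lookup table as the base case, mirroring the role of the LUT fabric; the table contents and recursion bookkeeping are illustrative, with Python shifts and additions standing in for the Shifter and in-memory ADDs.

```python
# 4-bit x 4-bit products come from a precomputed table (the LUT fabric's role).
MUL4 = [[a * b for b in range(16)] for a in range(16)]

def karatsuba(x, y, n):
    """Multiply n-bit x and y; the base case is a 4-bit table lookup."""
    if n <= 4:
        return MUL4[x][y]
    h = n // 2
    x_hi, x_lo = x >> h, x & ((1 << h) - 1)
    y_hi, y_lo = y >> h, y & ((1 << h) - 1)
    z2 = karatsuba(x_hi, y_hi, n - h)
    z0 = karatsuba(x_lo, y_lo, h)
    # Three recursive multiplications instead of four: O(n^1.6) overall.
    z1 = karatsuba(x_hi + x_lo, y_hi + y_lo, max(h, n - h) + 1) - z2 - z0
    return (z2 << (2 * h)) + (z1 << h) + z0

assert karatsuba(0xDEAD, 0xBEEF, 16) == 0xDEAD * 0xBEEF
```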
### _GC tasks in PPIMCE_
PPIMCE takes on the task of accelerating GC computation by first compiling GC tasks into customized Half-Gate and FreeXOR. We pre-generate and store necessary labels for table generation in the system's main memory. During the pre-processing phase, these labels are moved from the main memory to the CEM arrays to generate the garbled tables. PPIMCE also organizes multiple IMC cores into a GC Computing Unit, with each core operating in parallel, executing the same GC gates on different data. For instance, when executing a Half-Gate instruction on data stored at addresses 0 and 1, every IMC core performs the Half-Gate operation using data in their local address 0 and address 1. This coordination allows for efficient and parallel computation across all the cores in the GC computing unit.
In Figure 3, we demonstrate how GC operations are dispatched to two GC computing units using OA-CAM and C-Inst Bank. In this example, we assume there are only two GC computing units. Each FreeXOR takes 3 cycles, and Half-Gate takes 45 cycles. The black and white circles represent the FreeXOR and Half-Gate gates, respectively.
Fig. 3: An example of GC instructions being executed on PPIMCE with only two GC computing units. The circles labeled \(\mathrm{Ia}-\mathrm{Ie}\) represent the C-Insts of Half-Gate (white circles) and FreeXOR (black circles), and the black arrows represent data dependency. \(w0\to w4\) are the gates' outputs. The instruction outlined in red represents the instruction fetched during the current cycle. The initial state is shown in (a). The states for cycles 1-5 are shown in (b)–(f). In cycle 4, \(\mathrm{Ia}\) is completed. The states for cycles 46 and 48 are shown in (g) and (h), respectively, where \(\mathrm{Ib}\) is completed in cycle 46, and \(\mathrm{Ic}\) is completed in cycle 48.
In cycle 1, Ia is executed in unit 0, and \(w\)0 is written into OA-CAM. In cycle 2, Ib is executed in unit 1. In cycle 3, Ic is placed into the C-Inst Bank, and \(w\)2 is written into OA-CAM. In cycle 4, Ia completes, freeing \(w\)0 and allowing Id to be written into the C-Inst Bank, with \(w\)3 written into OA-CAM. Ic is issued into unit 0. In cycle 5, Ie is written into the C-Inst Bank. In cycle 46, Ib completes, freeing \(w\)1 and enabling Id to be issued into unit 1. In cycle 48, Ic completes, releasing unit 0 and \(w\)2, allowing Ie to be issued into unit 0. At this point, all instructions have been issued to a unit.
### _HE tasks in PPIMCE_
HE tasks consist of HE arithmetic, including HE addition, HE multiplication, and HE rotation. These operations are further broken down into polynomial arithmetic, which essentially comprises coefficient-wise integer computations. PPIMCE leverages its IMC cores, as detailed in Section V-B, to efficiently perform these integer computations in parallel.
In PPIMCE, we leverage its multiple IMC cores to store each coefficient of a polynomial at the same address within different cores. For example, the first coefficient is in address 1 of IMC core 1, the second coefficient is in address 1 of IMC core 2, and so forth. This setup enables a single address pointer to represent all coefficients in a polynomial. As a result, we can perform coefficient-wise arithmetic for polynomial multiplication, addition, and subtraction, all in parallel with a single command. When it comes to polynomial automorphism, PPIMCE handles read and write operations across the IMC cores to rearrange the coefficients in a polynomial, effectively accommodating the requirements of HE rotation operations.
We propose a unique scheduling scheme for NTT and INTT operations in PPIMCE that requires only \(N/2\) IMC cores to perform the butterfly computation [57] on a polynomial of degree \(N\). For instance, consider a PPIMCE with four IMC cores executing an NTT on a degree-4 polynomial, with coefficients and the corresponding twiddle factors stored evenly across the cores. Initially, coefficients \(c\) and \(d\) are moved to cores 0 and 1 for computing \(c*TW\) and \(d*TW\). Then, addition and subtraction operations occur in cores 0 and 1, producing temporary results \(a^{\prime}\) and \(b^{\prime}\) in core 0 and \(c^{\prime}\) and \(d^{\prime}\) in core 1. Next, cores 0 and 1 compute \(b^{\prime}*TW\) and \(d^{\prime}*TW\), respectively. Finally, the last butterfly computation is performed, placing the resulting polynomial coefficients in all four cores. Each transformation thus occupies only two of the four cores, so two NTTs can execute in parallel on four cores; in general, \(N\) IMC cores allow two NTT or INTT transformations on degree-\(N\) polynomials to run in parallel.
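For concreteness, the sketch below implements a toy radix-2 NTT (with illustrative parameters \(q=17\), \(N=8\), and \(g=9\) as a primitive 8th root of unity); each inner-loop iteration is one butterfly of the kind scheduled above. This is the plain cyclic NTT; negacyclic convolution modulo \(X^{N}+1\) additionally pre-twists inputs by \(2N\)-th roots of unity.

```python
# Toy cyclic NTT, radix-2 decimation-in-time; parameters are illustrative.
q, N = 17, 8
g = 9  # primitive N-th root of unity mod q (9^4 = -1 mod 17)

def ntt(a, w):
    n = len(a)
    if n == 1:
        return list(a)
    even = ntt(a[0::2], w * w % q)
    odd = ntt(a[1::2], w * w % q)
    out, t = [0] * n, 1
    for i in range(n // 2):
        u, v = even[i], t * odd[i] % q    # one butterfly per (i, i + n/2) pair
        out[i], out[i + n // 2] = (u + v) % q, (u - v) % q
        t = t * w % q
    return out

a = [1, 2, 3, 4, 5, 6, 7, 8]
assert ntt(a, g) == [sum(a[i] * pow(g, i * j, q) for i in range(N)) % q
                     for j in range(N)]  # matches the naive transform
```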
## VI PPML inference
This section introduces a client-server architecture for PPML inference similar to [44]. The client holds the input data needed for inference, and the server holds the pre-trained network. This architecture aims to make the inference on the client's data without letting the server know the client's input data and without exposing critical data (e.g., client input data and server's network weights and bias) during communication. The PPML network contains linear layers (e.g., convolution and fully connected layers) and non-linear layers (e.g., ReLU and Maxpooling). The linear layers are computed using HE and the non-linear layers are computed using GC. PPIMCE can support both HE and GC, so there is no data transfer between the linear and non-linear layers of inference.
Figure 4 shows the client-server architecture of PPML. PPIMCE can operate either as a client or as a server. The steps of the PPML protocol are detailed below.

1. The client and the server first possess additive secret shares \(c_{y}\) (on the client side) and \(s_{y}\) (on the server side) of the linear layer input \(y\), where \(y=c_{y}+s_{y}\) (at linear layer 0, we set \(c_{y}=y\) and \(s_{y}=0\)). The client and the server encrypt \(c_{y}\) and \(s_{y}\) into polynomials \([c_{y}]\) and \([s_{y}]\). The client sends \([c_{y}]\) to the server.
2. The server executes \([x]=[F([c_{y}+s_{y}])]\) homomorphically (where \(F()\) is the function in this linear layer). The server also subtracts a random value \(r\) from \([x]\) to get \([x-r]\) homomorphically; this transforms his ciphertext into additive secret shares. The server also prepares a random number \(s^{\prime}_{y}\) for the next phase.
3. The server sends \([x-r]\) to the client. The client uses HE decryption to get the value \(x-r\).
4. The client and server turn to the GC phase, where the client is the Garbler and the server is the Evaluator. The values \(x-r\), \(r\), and \(s^{\prime}_{y}\) are the inputs of the GC phase. The client first picks the label \(L(x-r)\) corresponding to her own input \(x-r\), and substitutes \(L(r)\) and \(L(s^{\prime}_{y})\) for all possible \(r\) and \(s^{\prime}_{y}\).
5. The client sends her label \(L(x-r)\) directly, and lets the server pick his labels \(L(r)\) and \(L(s^{\prime}_{y})\) via OT according to his own inputs.
6. The server evaluates the GC truth tables for \(ReLU((x-r)+r)-s^{\prime}_{y}\). The truth tables are independent of the inputs, so they can be stored on the server in the pre-processing phase [44]. The server uses the labels \(L(x-r)\), \(L(r)\), and \(L(s^{\prime}_{y})\) to evaluate the truth tables.
7. The evaluation result is shared with the client to decode \(c^{\prime}_{y}\), where \(c^{\prime}_{y}=ReLU(x)-s^{\prime}_{y}\). The values \(c^{\prime}_{y}\) and \(s^{\prime}_{y}\) from the GC phase become the inputs \(c_{y}\) and \(s_{y}\) for the next HE phase.

Steps 1-7 are repeated for all the linear and non-linear layers until the end of the network is reached and the prediction result is produced. PPIMCE can switch between the HE and GC phases on either the client or the server with little overhead, as it supports both protocols in one implementation.
Fig. 4: The client-server architecture of PPIMCE for PPML inference. [-] represents the ciphertext polynomial after homomorphic encryption. \(L()\) represents the labels after GC label substitution. \(F()\) represents the functions in the linear layer.
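The share algebra behind steps 1-7 can be checked with a plaintext mock that ignores encryption, oblivious transfer, and signed encodings; the modulus, the layer function, and all names below are placeholders for illustration only.

```python
import random

M = 2**16  # shares live in Z_M (placeholder modulus)

def relu(v):
    return max(v, 0)

def layer_round(c_y, s_y, F):
    """Plaintext mock of one HE linear + GC non-linear round (no crypto)."""
    # HE phase (server): compute F on the recombined input, then blind with r.
    x = F((c_y + s_y) % M)
    r = random.randrange(M)
    x_minus_r = (x - r) % M          # sent to the client as a ciphertext in the real protocol
    # GC phase: jointly compute ReLU((x - r) + r) - s'_y without revealing x.
    s_y_next = random.randrange(M)   # server's fresh share for the next layer
    c_y_next = (relu((x_minus_r + r) % M) - s_y_next) % M
    return c_y_next, s_y_next

F = lambda y: (3 * y + 5) % M        # placeholder linear layer
c, s = layer_round(42, 0, F)         # layer 0: c_y = y, s_y = 0
assert (c + s) % M == relu(F(42))    # shares recombine to ReLU(F(y))
```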
## VII PPIMCE Evaluation Setup
To validate the correct functionality of PPIMCE and evaluate its performance, including latency, power, and area, a comprehensive evaluation framework is indispensable. Toward this end, we develop a PPIMCE compiler, a PPIMCE cycle-accurate simulator, as well as a set of hardware simulators. Below, we describe how PPC tasks are executed in PPIMCE.
### _PPIMCE Evaluation Infrastructures_
We leverage several existing tools at different abstraction levels to estimate the latency, energy, and area of PPIMCE for each basic GC and HE operation. Specifically, we have implemented C-Inst in the RISC-V processor in Verilog at the RTL level and evaluated it through detailed RTL simulations to ensure the correctness of the C-Inst fetching. The decoder in each core controller and the Shifter in each IMC-PE are also validated through RTL simulations. The LUT fabric and CEM arrays of each IMC-PE are validated at the circuit level with SPICE simulations. Finally, the DESTINY simulator [66], an open-source memory simulator, is used to estimate latency, area, and power for the C-Inst Bank in the IMC-IS and the \(\mu\)IM in each core controller. The latency, area, and power of the OA-CAM in the IMC-IS are measured using EVA-CAM [56], an evaluation tool for CAM. The area and power dissipation of PPIMCE include the area and power of all the IMC-PEs, all core controllers, and the IMC-instruction scheduler. An IMC-PE's area and power dissipation consist of the area and power of the CEM arrays, the shifter, and the LUT fabric.
We developed a PPIMCE compiler for compiling a PPC task described in C++ into a C-Inst list. The PPIMCE compiler includes the PPIMCE encoder and PPIMCE code generator. The PPIMCE encoder converts C++ code into HE and GC operations. For example, each linear layer of a DNN is converted into HE multiplications, additions, and rotations. Furthermore, the compiler converts each activation layer into GC ReLU operations. The PPIMCE code generator then generates the C-Inst list of polynomial arithmetic instructions for HE computation and Half-Gate and FreeXOR instructions for GC computation.
To estimate the delay and energy consumption of PPC tasks like PPML inference executed by PPIMCE, we developed a cycle-accurate simulator in Python 3.7 to evaluate PPIMCE's performance and explore the design space (e.g., the number of IMC cores). The simulator simulates the operations running in each IMC core cycle by cycle. Furthermore, the simulator meticulously tracks all data movement in the system, allowing us to account for all the data dependencies between the gates for GC functions and among the integer operations for HE functions (see Section V).
### _PPIMCE Parameter Setting_
In the PPIMCE architecture, several parameters can significantly impact performance. We describe the trade-offs among the selection of values for these parameters.
Operations in HE, like integer multiplication, can be highly parallelized. For example, PPIMCE can parallelize all integer operations in HE functions using \(N\) IMC cores if \(N\) equals the polynomial degree. We choose PPIMCE with 8192 IMC cores in our evaluation. 8192 IMC cores allow PPIMCE to fully parallelize all operations in HE when \(N=8192\), providing sufficient security levels and multiplicative depth for PPML inference.
The parallelism of GC functions is affected by the number of GC computing units in PPIMCE. We have studied all possible numbers of GC computing units to examine their impact on latency. The optimal number of units depends on the GC functions. Based on our study, we choose PPIMCE with 16 GC Computing Units, the Pareto optimum in terms of the number of units and latency for GC functions. Given the choice of 8192 IMC cores, each GC Computing Unit contains \(8192/16=512\) cores, allowing PPIMCE to run 512 GC tasks in parallel.
To study the impact of technology scaling on power and area, and to make a fair comparison with Cheetah [68], which uses 5nm nodes, we consider a 5nm technology node with foundry-reported scaling factors. Specifically, we use 0.079\(\times\) power and 0.059\(\times\) area to scale from 45nm to 7nm, based on [76]. The power and area scaling factors are 0.70\(\times\) and 0.54\(\times\) from 7nm to 5nm, based on [87]. Power and area scaling factors (45nm to 5nm) are 0.0553\(\times\) and 0.0318\(\times\), respectively.
Finally, we envision that PPIMCE will be placed on the same chip as the CPU, mimicking a last-level cache (LLC) to facilitate data exchange with the CPU. PPIMCE with 8192 IMC cores contains 32MB of on-chip memory, which may not be enough to hold all the data needed for a large-scale PPC task like PPML inference. PPIMCE needs to move the data from the main memory for computation. We assume 512 GB/s bandwidth between PPIMCE and the main memory (similar to HBM2 PHY bandwidth). PPIMCE executes HE functions in a computation-bound manner, allowing us to pipeline memory transfer and HE computation. On the other hand, the GC phase is memory-bound, but we can hide the memory transfer time in GC computation by pre-loading the data to PPIMCE's CEM arrays, such as the labels for the next computation during the current GC computation.
## VIII Evaluation Results
We first evaluate PPIMCE on GC and HE benchmarks and compare the results with CPU and GPU implementations. Then, we consider PPML inference and compare our design to existing PPC accelerators. We use a computer with an Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz for CPU evaluation and an NVIDIA RTX6000 for the GPU implementation. As existing GPU implementations for GC do not use exactly the same optimizations as PPIMCE (Half-Gate, FreeXOR), we only compare PPIMCE with a CPU implementation for GC evaluation. All components in PPIMCE are implemented in the 45nm CMOS predictive technology model (PTM) [10]. We choose the operating frequency of 1 GHz for PPIMCE based on the longest basic operation in the IMC-PE.
### _HE and GC benchmarks_
#### Iv-A1 GC performance
We first evaluate the GC benchmarks in PPIMCE using the benchmarks from VIP-Bench [6] and prior works [23, 36, 37]: **(i) ReLU-32:** Perform an activation function to calculate \(max(0,input)\) with 32-bit input size. **(ii) Mul-32:** Perform a 32-bit integer multiplication with 32-bit output. **(iii) Hamm-50:** Calculate the hamming distance between two 50-bit binary values with 8-bit output. **(iv) AES-128:** Perform a 128-bit AES encryption where each party separately provides the key and plaintext. **(v) 5\(\times\)5MatMul-8:** Perform a 5\(\times\)5 matrix multiplication where each element in the matrix is 8 bits. **(vi) 3\(\times\)3MatMul-16:** Perform a 3\(\times\)3 matrix multiplication where each element in the matrix is 16 bits.
A key challenge when comparing with GC benchmarks is the substantial on-chip memory space required to store temporary values. We employ four 1KB CEM arrays in each IMC core for area and power evaluation, which suffices for PPML GC non-linear functions and HE computation. However, this may not be sufficient for some GC functions, such as AES-128 and 5\(\times\)5MatMul-8, which require large memory space for intermediate data. This evaluation compares the PPIMCE speedup of GC microbenchmarks with a CPU realization. We increase each CEM array size in the IMC core to 128KB solely for GC microbenchmark evaluation, large enough to account for all GC benchmarks. This adjustment ensures a fair throughput comparison with CPU performance in this specific evaluation.
Figure 5(a) reports the speedup of PPIMCE versus a CPU. PPIMCE performance is evaluated by scaling IMC cores from 1 to 16 with a fixed 128KB memory size. We compare the speedup of the Garbler for generating the truth tables for the GC functions (i)-(vi). CPU-based GC is implemented with the EMP framework [84] for comparison. On the Evaluator side, PPIMCE achieves a similar speedup. PPIMCE can achieve an average speedup of 9.6\(\times\) with a single IMC core compared with the CPU. The 16 IMC cores in PPIMCE represent the Pareto-optimal solution that strikes a balance between area and latency. Because of data dependency in the GC, 16 cores cannot scale the speedup ideally to 16\(\times\) compared with using 1 core. PPIMCE with 16 IMC cores achieves an average speedup of 107\(\times\) (10\(\times\) faster than single-core PPIMCE).
#### Iv-A2 HE performance
Next, we conduct an evaluation of PPIMCE with 8192 IMC cores against CPUs and GPUs for three fundamental ciphertext-ciphertext operations in HE: HE addition, HE multiplication, and HE rotation. We use three different sets of HE parameters and the full RNS CKKS scheme. Due to the testing of high logQ values, Barrett reduction is employed in these operations. The performance comparison involves CPU and GPU projections utilizing the HEAAN library [11], a C++ library that implements the CKKS scheme exclusively, and a GPU-accelerated version using CUDA.
Figure 5 (b) illustrates the resultant speedup, indicating that PPIMCE can achieve a substantial speedup in operations. Specifically, 1500\(\times\) to 5000\(\times\) for HE addition, 100\(\times\) to 1500\(\times\) for multiplication, and 110\(\times\) to 2000\(\times\) for rotation. Although a GPU can manage a 2\(\times\) improvement for large parameter values compared to a CPU, PPIMCE still offers a speedup of up to 4000\(\times\), 800\(\times\), and 960\(\times\) for the same operations, respectively.
### _PPML inference_
#### Iv-B1 PPIMCE vs. Combined HE & GC Designs
We compare scenarios with IMC (PPIMCE) and without IMC (CraterLake+HAAC) using LoLA-CIFAR [9], a 6-layer secure ML model for CIFAR-10 [50] to highlight the efficiency of a uniform IMC accelerator. LoLA-CIFAR exclusively employs HE, replacing non-linear layers with square activation to accommodate HE. In contrast, we use GC for the non-linear layers and HE for the linear ones. We assume an ideal case where the control system of CraterLake+HAAC incurs no overhead, and the data transfer cost between the two accelerators is zero, allowing us to concentrate on the intrinsic computational performance and establish the performance upper bound for such a combined system.
CraterLake's LoLA inference performance includes the linear HE computations and HE-friendly polynomial approximations of the non-linear operations. To compare fairly against CraterLake+HAAC, we first isolate the time CraterLake spends on the linear layers. Since the reported data in [72] does not break out the linear layers' latency, we derive it from the per-layer time breakdown of LoLA-CIFAR: approximately 80% of the time is spent on linear layers, so we estimate CraterLake's linear-layer latency as 80% of its total LoLA-CIFAR time.
Table I presents the performance comparison of PPIMCE and CraterLake+HAAC for LoLA-CIFAR inference. To ensure a fair comparison, we aim to maintain iso total latency for PPIMCE and CraterLake+HAAC and then compare their power dissipation and total area. It is challenging to manipulate CraterLake's design to match the HE latency of PPIMCE due to its more complex structure; however, we can achieve a similar
Fig. 5: (a) Speedup of PPIMCE with different numbers of IMC cores on GC benchmarks compared with a CPU implementation. (b) Comparison between PPIMCE with 8192 cores on full RNS CKKS benchmarks (homomorphic addition (Add), homomorphic multiplication (Mul), and homomorphic rotation (Rot)) with CPU and GPU implementations for different HE parameters.
GC performance with PPIMCE and with HAAC. Since HAAC is a smaller and more flexible design, we can employ multiple HAAC units for parallel computing, effectively matching the total latency of PPIMCE. We use 20 parallel HAAC units to parallelize the computation of non-linear functions in the combined system.
Fabricated in 5nm, 16nm, and 45nm CMOS nodes, respectively, CraterLake, HAAC, and PPIMCE are all scaled to a 5nm node for a balanced comparison using methods from [76, 87]. The rescaled PPIMCE occupies significantly less area (138.3 \(mm^{2}\)) and consumes less power (9.4 W) than CraterLake+HAAC (190.7 \(mm^{2}\) and 128 W). Thus, despite matching the total latency, PPIMCE exhibits a marked advantage in area and power efficiency. These benefits stem from PPIMCE's IMC computing approach, which reduces data movement and allows HE and GC to be accelerated simultaneously.
#### PPIMCE vs. Alternatives in PPML Inference
Next, we compare PPIMCE with the SOTA software implementation, Gazelle [44], as well as the SOTA PPC hardware accelerators including Cheetah [68], F1 [71], BTS [48], CraterLake [72] and ARK [47] for end-to-end PPML inference. We compare these implementations' latency, accuracy, area, and power. We scale all designs' area and power to 5nm technology nodes for a fair comparison. Notice that all the HE discussed in this comparison utilizes ciphertext-plaintext arithmetic.
We specifically focus on the server-side execution time, which comprises HE operations for linear layers and GC operations for non-linear functions in PPML inference (see Section VI) [27]. The protocols used by PPIMCE, Gazelle, and Cheetah are similar, leading to their total execution times being composed of both HE and GC times. For this analysis, PPIMCE adopts the same HE and GC parameters as Gazelle and Cheetah. Cheetah only accelerates the HE operations for PPML and does not accelerate GC. For a fair comparison, we assume that Cheetah uses the same GC process as Gazelle on the system's CPU for computations within activation functions. F1, BTS, and CraterLake only implement support for HE computation for PPML (with non-linear functions replaced by polynomial approximation [28]), hence their execution time is solely comprised of HE computations.
This study assumes GC tables are transmitted during pre-processing (see Section VI). Additionally, we assume that Cheetah, Gazelle, and PPIMCE have the same transmission requirements for PPML inference, as depicted in Fig. 4. The transmission requirements for F1, BTS, CraterLake, and ARK are outlined in [52]. Our goal is to create a fair comparison between different PPML accelerators in terms of communication cost by making these assumptions.
We evaluated two PPML inference tasks: CIFAR-10 [50] on ResNet20 and ImageNet [19] on ResNet50 [34]. Gazelle's execution time is measured by running its source code [31] on these two tasks. The performance data for the other accelerators are obtained from their respective publications. Cheetah only reports the execution time for ImageNet on ResNet50; F1, BTS, CraterLake, and ARK only report execution times for CIFAR-10 on ResNet20.
The accuracy of F1, BTS, CraterLake, and ARK performing CIFAR-10 inference on ResNet20 was reported in [52]. As these accelerators perform PPML exclusively with HE operations, inference accuracy drops owing to the error that accumulates during polynomial approximation [28]. As such, while reasonable accuracy may be obtainable for datasets such as CIFAR-10, accuracy is expected to plummet as more sophisticated networks and datasets are employed. PPIMCE, Cheetah, and Gazelle use GC for the non-linear functions, so they incur no accuracy drop. The accuracy of PPIMCE, Cheetah, and Gazelle for CIFAR-10 on ResNet20 is from [52], and for ImageNet on ResNet50 from [55].
Table II summarizes the performance results for PPIMCE and other accelerators. Compared with Gazelle, PPIMCE is not constrained by data transfer costs and can achieve high
TABLE II: Performance comparison for PPML inference. The first four result columns are CIFAR-10 on ResNet20; the next four are ImageNet on ResNet50. All area and power are scaled to 5nm.

| Design | HE time (ms) | GC time (ms) | Total time (ms) | Accuracy | HE time (ms) | GC time (ms) | Total time (ms) | Accuracy | Area (\(mm^{2}\)) | Power (W) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Gazelle (CPU) | 1.4e+4 | 3008 | 1.7e+4 | 91.9% | 7.3e+6 | 1.3e+5 | 7.4e+6 | 76.1% | - | - |
| Cheetah | - | - | - | 91.9% | 198 | 1.3e+5 | 1.3e+5 | 76.1% | 587 | 30 |
| F1 | 2693 | 0 | 2693 | 90.7% | - | - | - | LOW | 116 | 74 |
| BTS | 1910 | 0 | 1910 | 90.7% | - | - | - | LOW | 201 | 88 |
| CraterLake | 249.4 | 0 | 249.4 | 90.7% | - | - | - | LOW | 157 | 114.2 |
| ARK | 125 | 0 | 125 | 90.7% | - | - | - | LOW | 225.9 | 105 |
| **PPIMCE** | **19.1** | **1.5** | **20.6** | **91.9%** | **7347** | **66.5** | **7413** | **76.1%** | **138.3** | **9.4** |
parallelism, and can be up to 1000\(\times\) faster than Gazelle. PPIMCE offers no HE performance improvement over Cheetah; however, despite Cheetah's fast HE computation, its total execution time is still impacted by the GC computation overhead, and PPIMCE obtains a 17\(\times\) speedup compared to Cheetah.
Compared with accelerators that use HE-only protocols, PPIMCE also gains significant speedups. The main reason is that over 95% of the execution time of F1, BTS, CraterLake, and ARK on PPML inference is spent on bootstrapping (see Section II-A). With GC supporting the non-linear functions, PPIMCE does not incur these high bootstrapping overheads in PPML inference, and it achieves 130\(\times\), 90\(\times\), 12\(\times\), and 6.5\(\times\) speedups versus F1, BTS, CraterLake, and ARK, respectively.
PPIMCE surpasses the other PP accelerators in area consumption and power dissipation, chiefly due to its protocol choice and efficient in-SRAM IMC design. First, unlike existing HE-only accelerators such as F1, BTS, CraterLake, and ARK, which rely on computationally heavy bootstrapping [78], PPIMCE adopts a Gazelle-like protocol that resets noise at every layer, avoiding bootstrapping. Second, based on previous research, an in-SRAM computing design can offer approximately 2.5\(\times\) energy and area savings compared to non-in-SRAM computing (see Section II-B). The area saving arises because computations occur at the bitline level, using customized sense amplifiers that need only a few extra transistors compared to conventional sense amplifiers; the energy saving comes from in-SRAM computing requiring fewer external data accesses than regular non-in-SRAM computing. Notably, the non-SRAM IMC solution Cheetah employs a protocol (GC-HE) similar to PPIMCE's, yet reports a power consumption of 30 \(W\), over 3\(\times\) higher than PPIMCE and in line with the experimental results in Section II-B.
Finally, we analyze how the primary bottleneck of GC, the client-server communication, impacts the runtime of PPML inference in PPIMCE. Merging the computation data from Table II with communication latency, we illustrate the total PPML inference time for all designs in Figure 6. We calculate communication latency using the bandwidths of wireless protocols from 2G to 5G from the ITU recommendations [38, 39, 40, 41], plus a potential 6G bandwidth [64, 67]. The communication demand of PPIMCE is notably high: it requires 2GB for a single inference on ResNet20, whereas the HE-only protocol needs only 8MB. With increasing bandwidth, communication time shrinks until computation time becomes the dominant factor, marking a saturation point. In low-bandwidth scenarios, HE-only accelerators achieve better total latency, yet PPIMCE and the other HE+GC accelerators achieve higher accuracy. In high-bandwidth settings, PPIMCE surpasses all competitors in either latency or accuracy.
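To make the saturation point concrete, the following sketch evaluates the simple latency model we assume underlies Figure 6 (total time = computation time + transferred data / bandwidth), using PPIMCE's ResNet20 computation time from Table II and the 2GB communication figure above; the model form and helper names are our assumptions:

```python
# Assumed latency model behind Figure 6: total = compute + transfer/bandwidth.
def total_latency_s(compute_s: float, transfer_bytes: float, bw_bits_per_s: float) -> float:
    return compute_s + (8.0 * transfer_bytes) / bw_bits_per_s

COMPUTE_S = 20.6e-3          # PPIMCE ResNet20 total time from Table II
TRANSFER_B = 2e9             # ~2 GB of GC material per inference (see text)

for bw in [1e6, 1e8, 1e10, 1e12]:   # 1 Mb/s up to 1 Tb/s
    print(f"{bw:.0e} b/s -> {total_latency_s(COMPUTE_S, TRANSFER_B, bw):.4f} s")
# At low bandwidth the 2 GB transfer dominates; at high bandwidth the total
# saturates near the 20.6 ms computation time.
```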
## IX Related Work
**HE acceleration:** HE accelerators F1 [71], CraterLake [72], BTS [48] and ARK [47] reduce HE computational overhead. F1 [71] performs well on various HE tasks but only supports low multiplicative depth. CraterLake [72], BTS [48], and ARK [47] offer unbounded multiplicative depth but are limited to HE tasks, and unable to handle complex PPML tasks with high accuracy.
Existing IMC designs for HE reduce data-transfer overhead. CIM-HE [70, 79], CryptoPIM [62], and X-poly [54] perform HE arithmetic and logic operations in SRAM, resistive RAM, and crossbar arrays, respectively, focusing solely on HE.
**GC acceleration:** Hardware accelerators for GC aim for high throughput with minimal area and power overhead. Recent FPGA implementations [22, 35] speed up Yao's GC but lack advanced optimizations. Maxelerator [37] is an FPGA GC accelerator for matrix multiplication. FASE [36] is the current SOTA FPGA GC accelerator, with a deeply pipelined architecture and optimized scheduling.
**PPML acceleration:** Several efforts have been made to use HE and GC to design specialized protocols for various applications. Software accelerators like Gazelle [44] and Delphi [59] use HE and GC to speed up PPML tasks. Gazelle uses HE for linear functions and GC for non-linear functions, while Delphi has a similar architecture but uses pre-processing to reduce communication costs. Cheetah [68] is an ASIC-based hardware accelerator that adapts Gazelle's framework but only accelerates the HE part and requires additional support for the GC part, resulting in additional overheads.
## X Conclusion
We propose PPIMCE, the first IMC accelerator for HE and GC, which enables high throughput while reducing data-transfer overheads. PPIMCE achieves significant speedups: up to 100\(\times\) compared to GC CPU implementations, and up to 1500\(\times\) and 800\(\times\) compared to CPU and GPU implementations when executing CKKS-based homomorphic multiplications. PPIMCE accelerates PPML inference using HE and GC without sacrificing accuracy and achieves up to 1000\(\times\) speedup on single-image inference compared to the SOTA CPU implementation Gazelle. Moreover, compared to the best
Fig. 6: Total latency of a single PPML inference vs. client-server communication bandwidth for PPIMCE and other implementations executing ResNet20 on CIFAR-10 (lower) and ResNet50 on ImageNet (upper), with inference accuracy noted.
performing PPC accelerators, PPIMCE achieves speedups of up to 130\(\times\) and exhibits superior area and power efficiency.
|
2301.09216
|
Determinantal point processes on spheres: multivariate linear statistics
|
In this paper, we will derive the first and 2nd order Wiener chaos
decomposition for the multivariate linear statistics of the determinantal point
processes associated with the spectral projection kernels on the unit spheres
$S^d$. We will first get a graphical representation for the cumulants of
multivariate linear statistics for any determinantal point process. The main
results then follow from the very precise estimates and identities regarding
the spectral projection kernels and the symmetry of the spheres.
|
Renjie Feng, Friedrich Götze, Dong Yao
|
2023-01-22T22:04:15Z
|
http://arxiv.org/abs/2301.09216v1
|
# Determinantal point processes on spheres: Multivariate linear statistics
###### Abstract.
In this paper, we will derive the first and 2nd order Wiener chaos decomposition for the multivariate linear statistics of the determinantal point processes associated with the spectral projection kernels on the unit spheres \(S^{d}\). We will first get a graphical representation for the cumulants of multivariate linear statistics for any determinantal point process. The main results then follow from the very precise estimates and identities regarding the spectral projection kernels and the symmetry of the spheres.
_In memory of Steve Zelditch (1953-2022)_
## 1. Introduction
The determinantal point process is an important class of point processes with applications in random matrix theory, statistical mechanics, quantum mechanics, etc. In quantum mechanics it is also known through the Slater determinant, which describes the wave function of a multi-fermionic system. In this paper, we consider determinantal point processes on the unit spheres associated with the spectral projection kernels of the Laplace operator with respect to the standard round metric. Such spectral projection kernels can be represented in terms of the spherical harmonics, which are among the most fundamental wave functions in quantum mechanics, describing particles confined to spheres.
Let \(\Phi\) be a point process sampled on the space \(\mathcal{X}\). The \(k\)-th joint intensity function \(\rho_{k}\) of the point process \(\Phi\) is defined by
\[\mathbb{E}\Big{[}\sum_{(x_{1},\ldots,x_{k})\in\Phi_{*}^{k}}f(x_{1},\ldots,x_{k })\Big{]}=\int_{\mathcal{X}^{k}}f(x_{1},\ldots,x_{k})\rho_{k}(x_{1},\ldots,x_{ k})dx_{1}\ldots dx_{k}, \tag{1}\]
where \(f\) is any bounded measurable function and the set
\[\Phi_{*}^{k}:=\{(x_{1},\ldots,x_{k}):x_{i}\in\Phi,\,\forall 1\leq i\neq j\leq k,x_ {i}\neq x_{j}\}. \tag{2}\]
If \(\Phi\) is a determinantal point process associated with some kernel function \(K\), then its \(k\)-th joint intensity function reads
\[\rho_{k}(x_{1},\ldots,x_{k})=\det\Big{(}K(x_{i},x_{j})_{1\leq i,j\leq k}\Big{)}, \tag{3}\]
where \(K(x_{i},x_{j})_{1\leq i,j\leq k}\) is a \(k\times k\) matrix whose \((i,j)\) entry is \(K(x_{i},x_{j})\).
In this paper we will focus on the case when \(K\) is the spectral projection kernel on the unit sphere \(S^{d}\) with \(d\geq 2\), defined as follows. The Laplace operator on \(S^{d}\) with respect to the standard round metric has discrete spectrum \(\Big{\{}\lambda_{n}=-n(n+d-1),\,n=0,1,2,\ldots\Big{\}}\). Here, the round metric is the pullback of the Euclidean metric under
the inclusion map \(i:S^{d}\to\mathbb{R}^{d+1}\). For a given eigenvalue \(\lambda_{n}\), the corresponding eigenfunctions are called the spherical harmonics of level \(n\). Let \(\mathcal{H}_{n}(S^{d})\) be the space of the spherical harmonics of level \(n\). Then one has [2]
\[k_{n}:=\dim\mathcal{H}_{n}(S^{d})=\frac{2n+d-1}{n+d-1}\binom{n+d-1}{d-1}, \tag{4}\]
which admits the asymptotic estimate (by \(d\geq 2\))
\[k_{n}\sim 2n^{d-1}/\Gamma(d). \tag{5}\]
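Both (4) and (5) are straightforward to verify numerically; a short sketch of our own:

```python
from math import comb, gamma

def dim_harmonics(n: int, d: int) -> int:
    # k_n = (2n + d - 1) / (n + d - 1) * C(n + d - 1, d - 1); see (4).
    return (2 * n + d - 1) * comb(n + d - 1, d - 1) // (n + d - 1)

d = 3
for n in [5, 50, 500]:
    exact = dim_harmonics(n, d)            # equals (n + 1)^2 when d = 3
    asymp = 2 * n ** (d - 1) / gamma(d)    # the asymptotic (5)
    print(n, exact, asymp, exact / asymp)  # the ratio tends to 1
```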
Let \(K_{n}\) be the spectral projection
\[K_{n}:L^{2}(S^{d})\to\mathcal{H}_{n}(S^{d}), \tag{6}\]
and we denote by \(K_{n}(x,y)\) its kernel.
Now we define a determinantal point process \(\Phi_{n}\) on \(S^{d}\) associated with the kernel \(K_{n}(x,y)\). Here the total number of points in \(\Phi_{n}\) is almost surely \(k_{n}\). Note that \(\Phi_{n}\) can be alternatively defined by sampling \(k_{n}\) points on \(S^{d}\) according to the probability density
\[\frac{1}{k_{n}!}\det\Big{(}K_{n}(x_{i},x_{j})_{1\leq i,j\leq k_{n}}\Big{)}. \tag{7}\]
Given a function \(f(x_{1},..,x_{k})\) of \(k\geq 1\) variables, we define the multivariate linear statistics
\[L_{n}f:=\sum_{(x_{1},\ldots,x_{k})\in\Phi_{n,*}^{k}}f(x_{1},\ldots,x_{k}), \tag{8}\]
where
\[\Phi_{n,*}^{k}:=\{(x_{1},\ldots,x_{k}):x_{i}\in\Phi_{n},\,x_{i}\neq x_{j},\, \forall 1\leq i\neq j\leq k\}. \tag{9}\]
Multivariate linear statistics of this form are usually called U-statistics.
For \(1\leq i\leq k\), we define the _\(i\)-margin_ function \(f_{i}\) by integrating \(f\) with respect to all variables over \(S^{d}\) except \(x_{i}\), i.e.,
\[f_{i}(x)=\int_{(S^{d})^{k-1}}f(x_{1},\ldots,x_{i-1},x,x_{i+1},\ldots,x_{k})dx_ {1}\cdots dx_{i-1}dx_{i+1}\cdots dx_{k}. \tag{10}\]
Here, we denote by \(dx\) the volume element with respect to the standard round metric on \(S^{d}\). If \(k=1\), the \(1\)-margin function is defined to be \(f\) itself.
For \(1\leq i<j\leq k\), we define the _\((i,j)\)-margin_ function \(f_{i,j}\) to be
\[f_{i,j}=\int_{(S^{d})^{k-2}}f(x_{1},\ldots,x_{k})dx_{1}\cdots dx_{i-1}dx_{i+1 }\cdots dx_{j-1}dx_{j+1}\cdots dx_{k}. \tag{11}\]
In this article, we will study the limiting distribution of the multivariate linear statistics \(L_{n}f\). We first give an asymptotic expansion for the expectation of \(L_{n}f\).
**Theorem 1**.: _Let \(f(x_{1},..,x_{k})\) be a bounded function of \(k\) variables. We have_
\[\begin{split}\mathbb{E}(L_{n}f)=&\bigg{(}\frac{k_{ n}}{s_{d}}\bigg{)}^{k}\int_{(S^{d})^{k}}f(x_{1},\ldots,x_{k})dx_{1}\cdots dx _{k}\\ &-\frac{k_{n}^{k-1}}{s_{d}^{k}}\sum_{1\leq i<j\leq k}\frac{2^{d-1 }}{\Gamma(d)\pi}\Gamma\Big{(}\frac{d}{2}\Big{)}^{2}\int_{S^{d}}\int_{S^{d}} \frac{f_{i,j}(x,y)}{\sin^{d-1}(\arccos(x\cdot y))}dxdy\\ &+o(n^{(d-1)(k-1)}),\end{split} \tag{12}\]
_where \(s_{d}=2\pi^{\frac{d+1}{2}}/\Gamma(\frac{d+1}{2})\) is the surface area of \(S^{d}\)._
By estimating the growth order of the cumulants of \(L_{n}f\), we can prove the following central limit theorem for \(L_{n}f\).
**Theorem 2**.: _Let \(f\) be a bounded function of \(k\) variables on \(S^{d}\). Assume that_
\[F(x):=\sum_{i=1}^{k}f_{i}(x) \tag{13}\]
_is not constant almost everywhere in \(x\in S^{d}\), then it holds that_
\[\lim_{n\to\infty}\frac{1}{k_{n}^{2k-1}}\mathrm{Var}(L_{n}f)=\frac{2^{d-2}}{s_ {d}^{2k}\Gamma(d)\pi}\Gamma\Big{(}\frac{d}{2}\Big{)}^{2}\int_{S^{d}}\int_{S^{d }}\frac{(F(x)-F(y))^{2}}{\sin^{d-1}(\arccos(x\cdot y))}dxdy>0. \tag{14}\]
_In addition, \(L_{n}f\) is asymptotically normal, i.e.,_
\[\frac{L_{n}f-\mathbb{E}(L_{n}f)}{(\mathrm{Var}(L_{n}f))^{\frac{1}{2}}} \xrightarrow{d}N(0,1), \tag{15}\]
_where \(N(0,1)\) is the standard Gaussian distribution and the notation \(\xrightarrow{d}\) means the convergence in distribution._
Combining Theorem 1 and Theorem 2, we have the following corollary.
**Corollary 1**.: _Under the assumption of Theorem 2,_
\[\left(L_{n}f-\left(\frac{k_{n}}{s_{d}}\right)^{k}\int_{(S^{d})^{k}}f(x_{1}, \ldots,x_{k})dx_{1}\cdots dx_{k}\right)\mathrm{Var}(L_{n}f)^{-1/2}\xrightarrow {d}N(0,1).\]
When the assumption of Theorem 2 fails, i.e., \(F(x)\) is constant almost everywhere, the right-hand side of (14) degenerates, i.e., \(\mathrm{Var}(L_{n}f)\) has strictly smaller growth order than \(k_{n}^{2k-1}=\Theta(n^{(d-1)(2k-1)})\). For such a degenerate case, our next theorem shows that for a class of test functions, the limiting distribution is given by a mixture of centered chi-square distributions, i.e., the 2nd order Wiener chaos.
We now consider the following two invariance conditions on the bounded test function \(f(x_{1},\ldots,x_{k})\), \(k\geq 2\).
* \(f\) is invariant under permutations, i.e., \[f(x_{1},\ldots,x_{k})=f(x_{\sigma(1)},\ldots,x_{\sigma(k)}),\forall\,\sigma \in\mathrm{Sym}(k).\] (16)
* We assume that the \((1,2)\)-margin function \(f_{1,2}(x_{1},x_{2})\) only depends on their spherical distance \(\mathrm{dist}(x_{1},x_{2})\) (abbreviated as \(\mathrm{d}(x_{1},x_{2})\)), i.e., \[f_{1,2}(x_{1},x_{2})=f_{1,2}(x_{1}^{\prime},x_{2}^{\prime}),\ \ \forall\,\mathrm{d}(x_{1},x_{2})=\mathrm{d}(x_{1}^{\prime},x_{2}^{\prime}).\] (17)
We will show that if the test function \(f\) satisfies these two assumptions, then \(F(x)\) must be constant on the sphere, and thus the variance will be degenerate.
As a remark, the condition (16) is not an essential one. We can always symmetrize a function \(f\) by considering the average
\[\bar{f}(x_{1},\ldots,x_{k})=\frac{1}{k!}\sum_{\sigma\in\mathrm{Sym}(k)}f(x_{ \sigma(1)},\ldots,x_{\sigma(k)}),\]
and this will yield \(L_{n}f=L_{n}\bar{f}\) by (8).
There is an important class of test functions that satisfy these two assumptions. For example, given \(\delta>0\), if we choose
\[f(x_{1},x_{2})=\mathbf{1}[\mathrm{d}(x_{1},x_{2})<\delta], \tag{18}\]
where the indicator function is equal to \(1\) if the distance \(\mathrm{d}(x_{1},x_{2})<\delta\) and \(0\) otherwise, then the random variable \(L_{n}f\) will be the number of pairs of random points whose distances are less than \(\delta\). Similarly, if we take
\[f(x_{1},x_{2},x_{3})=\mathbf{1}[\mathrm{d}(x_{1},x_{2})<\delta,\,\mathrm{d}(x _{1},x_{3})<\delta,\,\mathrm{d}(x_{2},x_{3})<\delta], \tag{19}\]
then \(L_{n}f\) will count the number of triangles whose three vertices are pairwise within distance \(\delta\). These types of counting statistics are useful tools for studying the topology of random complexes built over random point processes, due to their connections with Betti numbers, e.g., [4, 6, 11]. Our main result Theorem 3 below implies that such counting statistics of the determinantal point process on \(S^{d}\) converge to the 2nd order Wiener chaos.
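As an illustration, the following sketch evaluates the pair-counting statistic (18) for a given point configuration; sampling the determinantal process \(\Phi_{n}\) itself requires the kernel \(K_{n}\) and is not attempted here, so the usage example feeds in placeholder i.i.d. uniform points:

```python
import numpy as np

def count_close_pairs(points: np.ndarray, delta: float) -> int:
    # L_n f for f(x1, x2) = 1[d(x1, x2) < delta], summed over ordered pairs
    # of distinct points as in (8); rows of `points` are unit vectors.
    g = np.clip(points @ points.T, -1.0, 1.0)
    close = np.arccos(g) < delta
    np.fill_diagonal(close, False)        # exclude coincident indices
    return int(close.sum())

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 4))           # ambient R^{d+1}, here d = 3
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(count_close_pairs(pts, 0.3))
```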
Under conditions (16) and (17) we can determine the growth order of \(\mathrm{Var}(L_{n}f)\) and find the limiting distribution of \(L_{n}f\). We define the function
\[\widehat{h}(x,y):=\int_{S^{d}}(f_{1,2}(x,y)-f_{1,2}(x,z))\sin^{-(d-1)}(\arccos (z\cdot y))dz. \tag{20}\]
We will see that \(\widehat{h}\) is a bounded symmetric function, and thus we can consider it as a Hilbert-Schmidt integral operator acting on \(L^{2}(S^{d})\). Then this operator is compact and self-adjoint. Therefore we have the spectral decomposition
\[\widehat{h}(x,y)=\sum_{j=1}^{\infty}z_{j}w_{j}(x)w_{j}(y), \tag{21}\]
where \(\{z_{j},j\geq 1\}\) are eigenvalues of the operator, and \(\{w_{j},j\geq 1\}\) are the corresponding eigenfunctions which form an orthonormal basis of \(L^{2}(S^{d})\).
The following theorem states that the multivariate linear statistics will tend to a mixture of centered chi-squared distributions in the degenerate case.
**Theorem 3**.: _For any bounded function \(f(x_{1},\ldots,x_{k})\) with \(k\geq 2\) satisfying conditions (16) and (17), we have_
\[\lim_{n\to\infty}\frac{\mathrm{Var}(L_{n}f)}{k_{n}^{2k-2}}=\frac{2C_{d}^{2}k^{ 2}(k-1)^{2}}{\Gamma(d)^{2}s_{d}^{2k}}\int_{(S^{d})^{2}}\widehat{h}(x,y)^{2}dxdy, \tag{22}\]
_where the constant \(C_{d}:=\frac{2^{d-2}\Gamma(d/2)^{2}}{\pi}\). Furthermore, we have_
\[\left(L_{n}f-\mathbb{E}(L_{n}f)\right)\left(\frac{k_{n}}{s_{d}}\right)^{-k} \left(\frac{C_{d}k(k-1)}{n^{d-1}}\right)^{-1}\overset{d}{\to}\sum_{i=1}^{ \infty}z_{i}(\chi_{i}-1)/2, \tag{23}\]
_where \(\chi_{i},i\geq 1\) are independent chi-squared random variables with one degree of freedom and \(\sum_{i=1}^{\infty}z_{i}(\chi_{i}-1)/2\) is understood as the \(L^{2}\)-limit of \(\sum_{i=1}^{N}z_{i}(\chi_{i}-1)/2\) as \(N\to\infty\)._
Similar to Corollary 1, using the fact that \(k_{n}\sim 2n^{d-1}/\Gamma(d)\), and Theorems 1 and 3, we deduce the following result.
**Corollary 2**.: _Under the assumptions of Theorem 3, we have_
\[\begin{split}&\left(\frac{k_{n}}{s_{d}}\right)^{-k}\left(\frac{C_{d}k (k-1)}{n^{d-1}}\right)^{-1}\left(L_{n}f-\left(\frac{k_{n}}{s_{d}}\right)^{k} \int_{(S^{d})^{k}}f(x_{1},\ldots,x_{k})dx_{1}\cdots dx_{k}\right.\\ &\left.+\frac{k_{n}^{k-1}}{s_{d}^{k}}\sum_{1\leq i<j\leq k}\frac{ 2^{d-1}}{\Gamma(d)\pi}\Gamma\Big{(}\frac{d}{2}\Big{)}^{2}\int_{S^{d}}\int_{S^{ d}}\frac{f_{i,j}(x,y)}{\sin^{d-1}(\arccos(x\cdot y))}dxdy\right)\\ &\xrightarrow{d}\sum_{i=1}^{\infty}z_{i}(\chi_{i}-1)/2.\end{split} \tag{24}\]
Note that the limiting distribution can be rewritten in the form of the 2nd order Wiener chaos
\[\sum_{i=1}^{\infty}z_{i}H_{2}(X_{i})/2, \tag{25}\]
where \(H_{2}(x)=x^{2}-1\) is the Hermite polynomial of degree 2, and \(X_{i}\) are independent and identically distributed (i.i.d.) standard Gaussian random variables \(N(0,1)\).
There is a vast literature on the univariate linear statistics of determinantal point processes, e.g., [7, 8, 9], but only a few works give conditions for a Gaussian limit of multivariate linear statistics, e.g., [3]. To the best of our knowledge, Theorem 3 is the first result on multivariate linear statistics for determinantal point processes beyond the Gaussian-limit case.
Theorem 2 and Theorem 3 are proved by the method of cumulants. We will first derive a graphical representation for the cumulants of the multivariate linear statistics for any determinantal point process in Lemma 1, which generalizes the well-known formula for the univariate case (see (42) below). This graphical representation allows us to study the asymptotic properties of the cumulants by the off-diagonal decay of the spectral projection kernel, where we have to prove Lemma 3 and Lemma 4 bounding multiple integrals over the product of kernels. Exact identities and asymptotic expansions of the spectral projection kernels combined with the symmetry of the underlying space of the sphere are two crucial ingredients for our proofs. For example, we repeatedly use the facts that the spectral projection kernel is constant on the diagonal, and it satisfies very precise off-diagonal estimates for all length scales, e.g., (53); the important fact that the integral operator \(\widehat{h}(x,y)\) defined in (20) is symmetric is partially due to the symmetry of the sphere, etc.
Contrary to the i.i.d. point process, the determinantal point process has the negative association property. But our main results Theorem 2 and Theorem 3 are still analogs of the classical Wiener chaos decomposition in the theory of U-statistics for i.i.d random variables.
Given i.i.d. random variables \(X_{1},\ldots,X_{n}\), Hoeffding's form for U-statistics is the following (normalized) multivariate linear statistic,
\[U_{n}^{k}(g)=\binom{n}{k}^{-1}\sum_{1\leq i_{1}<\cdots<i_{k}\leq n}g(X_{i_{1}},\ldots,X_{i_{k}}),\]
where \(g\) is a symmetric real-valued function of \(k\) variables.
Without loss of generality, we assume \(\mathbb{E}(g(X_{1},..,X_{k}))=0\). Then Hoeffding in 1948 proved that, if the variance \(\text{Var}(g(X_{1},..,X_{k}))<\infty\), then the following central
limit theorem holds (Corollary 11.5 in [5]),
\[n^{1/2}U_{n}^{k}(g)\xrightarrow{\mathrm{d}}N(0,k^{2}\delta_{1}). \tag{26}\]
Here, the constant \(\delta_{1}\) is the variance
\[\delta_{1}=\mathrm{Var}(g_{1}(X_{1})),\]
where
\[g_{1}(x):=\mathbb{E}(g(x,X_{2},..,X_{k})).\]
If the variance \(\delta_{1}\) vanishes, that is, the limit of the U-statistics for i.i.d. random variables is degenerate, then a \(\chi^{2}\)-limit theorem holds for the rescaled statistics. More precisely, suppose that \(g_{1}(x)=\mathbb{E}g(x,X_{2},\ldots,X_{k})=0\) and \(\mathbb{E}g^{2}(X_{1},\ldots,X_{k})<\infty\); then we have (Corollary 11.5 in [5]),
\[nU_{n}^{k}(g)\xrightarrow{\mathrm{d}}\binom{k}{2}\sum_{i=1}^{\infty}\lambda_ {i}H_{2}(Y_{i}), \tag{27}\]
where \(H_{2}(x)=x^{2}-1\) is the Hermite polynomials of degree \(2\), \(Y_{i}\) are i.i.d. standard Gaussian random variables, and \(\lambda_{i}\) are eigenvalues of the integral operator \(A\) defined as follows. Let \(d\mu\) be the probability density of the random variable \(X_{1}\) and set
\[g_{2}(x,y):=\mathbb{E}g(x,y,X_{3},..,X_{k}).\]
For any bounded measurable function \(f\), the operator \(A\) is defined by
\[(Af)(y)=\int g_{2}(x,y)f(x)d\mu(x). \tag{28}\]
The formats of results (26) and (27) are almost identical to Theorem 2 and Theorem 3, respectively. The roles of \(g_{1}(x_{1})\) and \(g_{2}(x_{1},x_{2})\) are replaced by the \(i\)-margin function \(f_{i}(x)\) and the \((i,j)\)-margin function \(f_{i,j}(x,y)\) respectively; when the variance vanishes, both the limiting distributions are the linear eigenvalue combination of \(H_{2}(Y_{i})\), where the role of the symmetric integral operator \(A\) is replaced by \(\widehat{h}(x,y)\).
In general, \(U_{n}^{k}\) may converge in distribution to Wiener chaos of arbitrary order (Theorem 11.3 in [5]). For example, for the primitive completely degenerate case where
\[g\left(x_{1},\ldots,x_{k}\right)=\prod_{i=1}^{k}\mathfrak{g}\left(x_{i}\right)\]
with \(\mathbb{E}\mathfrak{g}\left(X_{1}\right)=0\) and \(\mathbb{E}\mathfrak{g}^{2}\left(X_{1}\right)=\sigma^{2}<\infty\), one has the convergence
\[\frac{n^{k/2}U_{n}^{k}\left(g\right)}{\sigma^{k}}\xrightarrow{\mathrm{d}}H_ {k}(Y), \tag{29}\]
where \(H_{k}(x)\) is the Hermite polynomial of degree \(k\) and \(Y\) is the standard Gaussian random variable.
Therefore, we may expect that the multivariate linear statistics of the determinantal point process associated with the spectral projection kernel on \(S^{d}\) also admit some kind of Wiener chaos decomposition. In fact, our method, especially the representation formula in Lemma 1, can be applied to any other determinantal point process, such as the CUE, the GUE, the complex Ginibre ensemble in random matrix theory, and Gaussian analytic functions in random polynomial theory. Similar results may hold there as well, but note that one has to adapt the conditions on the test functions, especially (17), according to the symmetry and the invariance of the underlying space and the kernel.
_Notation._ In this paper, we use \(C\) (or \(c\)) to denote some constants independent of \(n\), whose specific values may change from line to line. For a sequence of numbers \(a_{n}\) and \(b_{n}\), we write \(a_{n}=o(b_{n})\) if \(b_{n}\neq 0\) and \(\lim_{n\to\infty}a_{n}/b_{n}=0\); \(a_{n}=O(b_{n})\) if there exists some constant \(C\) such that \(\left|a_{n}\right|\leq C\left|b_{n}\right|\); \(a_{n}=\Theta(b_{n})\) if \(a_{n}=O(b_{n})\) and \(b_{n}=O(a_{n})\); \(a_{n}\sim b_{n}\) if \(\lim_{n\to\infty}a_{n}/b_{n}=1\).
## 2. A graphical representation of cumulants
In this section we will derive a graphical representation of the cumulants for the multivariate linear statistics of any determinantal point process.
Given a random variable \(X\), its \(m\)-th cumulant \(Q_{m}(X)\) is defined to be the coefficient in the formal expansion of \(\log\mathbb{E}\exp(\mathrm{i}tX)\),
\[\log\mathbb{E}\exp(\mathrm{i}tX)=\sum_{m=1}^{\infty}\frac{Q_{m}(X)}{m!}( \mathrm{i}t)^{m}. \tag{30}\]
A _partition_ of a set \(S\) is an unordered collection \(R=\{R_{1},\ldots,R_{\ell}\}\) of nonempty subsets of \(S\) where \(\ell\) is some positive integer not exceeding \(\left|S\right|\). In addition, \(R\) satisfies the following two conditions:
* \(R_{i}\cap R_{j}=\emptyset\) for \(i\neq j\).
* \(\cup_{i=1}^{\ell}R_{i}=S\).
Let \(m\) be any positive integer. We denote by \(\Pi(m)\) the set of partitions of \(\{1,2,\cdots,m\}\). The moments of \(X\) can be derived from its cumulants as follows,
\[\mathbb{E}(X^{m})=\sum_{R=\{R_{1},\ldots,R_{\ell}\}\in\Pi(m)}Q_{\left|R_{1} \right|}\ldots Q_{\left|R_{\ell}\right|}. \tag{31}\]
On the other hand, the cumulants can be expressed by moments as
\[Q_{m}(X)=\sum_{R=\{R_{1},\ldots,R_{\ell}\}\in\Pi(m)}(-1)^{\ell-1}(\ell-1)!\,\prod_{i=1}^{\ell}\mathbb{E}X^{\left|R_{i}\right|}. \tag{32}\]
Some simple properties of cumulants include
\[Q_{1}(X)=\mathbb{E}(X),\ Q_{2}(X)=\mathrm{Var}(X),\ Q_{m}(cX)=c^{m}Q_{m}(X).\]
If \(X\) is a Gaussian random variable, then \(Q_{m}(X)=0\) for all \(m\geq 3\).
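For small \(m\), the moment-cumulant relation (32) can be evaluated directly by enumerating set partitions; a sketch of our own for illustration:

```python
from math import factorial

def set_partitions(items):
    # Recursively enumerate all partitions of a list into nonempty blocks.
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def cumulant(m: int, moments) -> float:
    # Q_m via (32); moments[j] must hold E[X^j], with moments[0] = 1.
    total = 0.0
    for part in set_partitions(list(range(m))):
        ell = len(part)
        prod = 1.0
        for block in part:
            prod *= moments[len(block)]
        total += (-1) ** (ell - 1) * factorial(ell - 1) * prod
    return total

# Sanity check: with E X = 1 and E X^2 = 3, Q_1 = 1 and Q_2 = Var X = 2.
print(cumulant(1, [1.0, 1.0, 3.0]), cumulant(2, [1.0, 1.0, 3.0]))
```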
Similarly to the method of moments, to show that \(X_{n}\) converges in distribution to \(X\), it suffices to prove that the \(m\)-th cumulant of \(X_{n}\) converges to \(Q_{m}(X)\) for all fixed \(m\) (as long as the limit is uniquely determined by its cumulants). For the special case that \(X\) is Gaussian distributed and \(X_{n}\) has mean \(0\), it suffices to prove
\[\lim_{n\to\infty}\frac{Q_{m}(X_{n})}{\mathrm{Var}(X_{n})^{\frac{m}{2}}}=0\]
for all sufficiently large \(m\) ([9, Lemma 3]).
Let \(\Phi\) be a determinantal point process on the space \(\mathcal{X}\) associated with the kernel \(K(x,y)\). In the followings, we will derive a formula for the cumulants of the multivariate linear statistics. We will expand the \(m\)-th power of the multivariate linear statistics and express it in the form of (31), then the formula for the cumulants can be found directly from this expression.
To expand \((\sum_{(x_{1},\ldots,x_{k})\in\Phi_{*}^{k}}f(x_{1},..,x_{k}))^{m}\), we have \(km\) points \(x_{1},\ldots,x_{mk}\) (counting multiplicities) appearing in the product \(f(x_{1},\ldots,x_{k})\cdots f(x_{mk-k+1},\ldots,x_{mk})\). We write \(y_{i,j}:=x_{(i-1)k+j}\) for \(1\leq i\leq m\), \(1\leq j\leq k\), and set \(\mathbf{y}_{i}:=(y_{i,1},\ldots,y_{i,k})\). Then we have
\[\left(\sum_{(x_{1},\ldots,x_{k})\in\Phi_{*}^{k}}f(x_{1},\ldots,x_{k})\right)^{ m}=\sum_{\mathbf{y}_{1},\ldots,\mathbf{y}_{m}\in\Phi_{*}^{k}}f(\mathbf{y}_{1}) \cdots f(\mathbf{y}_{m}). \tag{33}\]
We first introduce a notation: given any positive integer \(p\), we define the set
\[[p]:=\big{\{}1,\ldots,p\big{\}}.\]
To find the relations among the points \(x_{1},\ldots,x_{mk}\), we define by
\[M(m,k):=\operatorname{Map}([m],[km]_{*}^{k})\]
the set of all maps from \([m]\) to
\[[km]_{*}^{k}:=\Big{\{}(i_{1},\ldots,i_{k})\in[km]^{k}:i_{j}\neq i_{\ell},\ \forall\,1\leq j<\ell\leq k\Big{\}}. \tag{34}\]
To be more precise, let \(\mathbf{T}\) be an element in \(M(m,k)\), then we can rewrite it as
\[\mathbf{T}:=(T_{1},\ldots,T_{m}),\]
where each \(T_{i}\) is the image of \(i\in\{1,2,..,m\}\) under the map \(\mathbf{T}\) and
\[T_{i}\in\Big{\{}(i_{1},\ldots,i_{k}):i_{j}\in[km]\text{ and }i_{j}\neq i_{\ell}, \ \forall\,1\leq j<\ell\leq k\Big{\}}.\]
We also write \(T_{i}=(T_{i,1},\ldots,T_{i,k})\) where \(T_{i,j}\) is the \(j\)-th component of the \(k\)-tuple \(T_{i}\). For example, when \(m=3\) and \(k=2\), then \(\mathbf{T},\mathbf{T}^{\prime},\mathbf{T}^{\prime\prime}\) defined as follows all belong to \(M(3,2)\),
\[T_{1}=(1,2),T_{2}=(1,4),T_{3}=(2,4). \tag{35}\]
\[T_{1}^{\prime}=(1,3),T_{2}^{\prime}=(1,6),T_{3}^{\prime}=(3,6). \tag{36}\]
\[T_{1}^{\prime\prime}=(1,2),T_{2}^{\prime\prime}=(1,4),T_{3}^{\prime\prime}=(5,6). \tag{37}\]
We say two maps \(\mathbf{T},\widehat{\mathbf{T}}\in M(m,k)\) are _equivalent_ if they differ by a permutation of \([km]\), i.e. by composing with a permutation they become the same map. We denote by \(S(m,k)\) the set of all equivalence classes of \(M(m,k)\). As an example, the \(\mathbf{T}\) and \(\mathbf{T}^{\prime}\) defined in (35) and (36) are equivalent since the permutation (23)(46) brings \(\mathbf{T}\) to \(\mathbf{T}^{\prime}\). But \(\mathbf{T}^{\prime\prime}\) defined in (37) is neither equivalent to \(\mathbf{T}\) nor \(\mathbf{T}^{\prime}\).
For any \(\mathbf{T}\in M(m,k)\), we can construct a graph for it, which we call \(\mathbf{T}\)_-graph_. The \(\mathbf{T}\)-graph is constructed in two steps. Initially there are \(mk\) vertices in total, indexed by \((i,j)\) for \(1\leq i\leq m,1\leq j\leq k\). First for each \(1\leq i\leq m\) and \(1\leq j\leq k-1\), we draw a black edge between \(T_{i,j}\) and \(T_{i,j+1}\). Then for any \((i,j)\neq(i^{\prime},j^{\prime})\) such that \(T_{i,j}=T_{i^{\prime},j^{\prime}}\), we use a solid red edge to connect \((i,j)\) and \((i^{\prime},j^{\prime})\). See Figure 1 for the graphical representations of \(\mathbf{T}\), \(\mathbf{T}^{\prime}\) and \(\mathbf{T}^{\prime\prime}\). One can see that if \(\mathbf{T}\) is equivalent to \(\widehat{\mathbf{T}}\), then \(\mathbf{T}\)-graph is the same as \(\widehat{\mathbf{T}}\)-graph, and vice versa. Consequently, each equivalence class in \(S(m,k)\) can be identified with a \(\mathbf{T}\)-graph.
For \(\mathbf{T}\in M(m,k)\), we define the size and the range of \(\mathbf{T}\) as
\[|\mathbf{T}|:=|\cup_{i=1}^{m}T_{i}|\,,\ \ \text{Range}(\mathbf{T})=\cup T_{i}.\]
For example, for the \(\mathbf{T}\) defined in (35) we have \(|\mathbf{T}|=3\) and \(\text{Range}(\mathbf{T})=\{1,2,4\}\).
For notational simplicity, for a collection of indices \(\mathbf{t}=(t_{1},\ldots,t_{k})\), we define \(f(\mathbf{t}):=f(x_{t_{1}},x_{t_{2}},\ldots,x_{t_{k}})\); by an abuse of notation, for \(\mathbf{T}=(T_{1},\ldots,T_{m})\), we set
\(f(\mathbf{T})=\Pi_{i=1}^{m}f(T_{i})\); and we write \(d\mathbf{x}\) as the volume element involved in the integration. By the definition of the determinantal point process, we have
\[\begin{split}&\mathbb{E}\left[\left(\sum_{(x_{1},\ldots,x_{k})\in \Phi_{\pi}^{k}}f(x_{1},\ldots,x_{k})\right)^{m}\right]\\ &=\sum_{\mathbf{T}\in S(m,k)}\int_{\mathcal{X}^{|\mathbf{T}|}}f( \mathbf{T})\det\Big{(}K(x_{i},x_{j})_{i,j\in\mathrm{Range}(\mathbf{T})}\Big{)} d\mathbf{x}\\ &=\sum_{\mathbf{T}\in S(m,k)}\sum_{\sigma\in\mathrm{Sym}( \mathrm{Range}(\mathbf{T}))}\int_{\mathcal{X}^{|\mathbf{T}|}}f(\mathbf{T}) \mathrm{sgn}(\sigma)\Pi_{q\in\mathrm{Range}(\mathbf{T})}K(x_{q},x_{\sigma(q)} )d\mathbf{x}.\end{split} \tag{38}\]
Here, for any set \(A\), \(\mathrm{Sym}(A)\) is the set of all permutations of the elements in \(A\), and \(\mathrm{sgn}(\sigma)\) is the sign of the permutation \(\sigma\).
For any \(\mathbf{T}\) and \(\sigma\in\mathrm{Sym}(\mathrm{Range}(\mathbf{T}))\) we can further construct a \((\mathbf{T},\sigma)\)_-graph_\(G\) by adding dotted red edges to the \(\mathbf{T}\)-graph. Specifically, for any \(T_{i,j}\neq T_{i^{\prime},j^{\prime}}\), we add a dotted red edge between two vertices \((i,j)\) and \((i^{\prime},j^{\prime})\) if \(\sigma(T_{i,j})=T_{i^{\prime},j^{\prime}}\) or \(\sigma(T_{i^{\prime},j^{\prime}})=T_{i,j}\). We say the pair \((\mathbf{T},\sigma)\) is _connected_ if the \((\mathbf{T},\sigma)\)-graph is connected.
For example, for \(\mathbf{T}\) defined in (35), the \((\mathbf{T},\sigma)\)-graph \(G\) is connected for any \(\sigma\in\mathrm{Sym}(\{1,2,4\})\) because the \(\mathbf{T}\)-graph itself is already connected. On the other hand, for \(\mathbf{T}^{\prime\prime}\) defined in (37), if \(\sigma=id\) (the identity in the permutation group), then the \((\mathbf{T}^{\prime\prime},\sigma)\)-graph has two components. However, for \(\sigma=(15)\in\mathrm{Sym}(\{1,2,4,5,6\})\) the \((\mathbf{T}^{\prime\prime},\sigma)\)-graph becomes connected.
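The connectivity bookkeeping in these examples is mechanical; a small union-find sketch (ours, with \(0\)-based indices) reproduces the \(\mathbf{T}^{\prime\prime}\) computation:

```python
def tgraph_components(T, sigma=None):
    # Vertices are (i, j); black edges join (i, j)-(i, j+1); solid red edges
    # join equal values T[i][j] == T[i'][j']; dotted red edges follow sigma.
    m, k = len(T), len(T[0])
    parent = {(i, j): (i, j) for i in range(m) for j in range(k)}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    def union(u, v):
        parent[find(u)] = find(v)

    for i in range(m):
        for j in range(k - 1):
            union((i, j), (i, j + 1))                  # black edges
    verts = [(i, j) for i in range(m) for j in range(k)]
    for a in verts:
        for b in verts:
            if a < b:
                va, vb = T[a[0]][a[1]], T[b[0]][b[1]]
                if va == vb:                           # solid red edge
                    union(a, b)
                elif sigma and (sigma.get(va) == vb or sigma.get(vb) == va):
                    union(a, b)                        # dotted red edge
    return len({find(v) for v in verts})

Tpp = [(1, 2), (1, 4), (5, 6)]                  # T'' from (37)
print(tgraph_components(Tpp))                   # 2: disconnected under id
print(tgraph_components(Tpp, sigma={1: 5, 5: 1}))  # 1: connected with (15)
```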
If a \((\mathbf{T},\sigma)\)-graph \(G\) has \(\ell\) connected components, then \(G\) naturally induces a partition \(R\) of \([m]\) into \(\ell\) disjoint sets \(\{R_{1},\ldots,R_{\ell}\}\). For \(1\leq j\leq\ell\), we set
\[H_{j}:=\cup_{i\in R_{j}}T_{i},\ \sigma_{j}=\text{the restriction of $\sigma$ to $H_{j}$.}\]
Figure 1. Graphical view of \(\mathbf{T}\) (left), \(\mathbf{T}^{\prime}\) (middle) and \(\mathbf{T}^{\prime\prime}\) (right)
Let \(f(\mathbf{T}|_{R_{j}})=\prod_{i\in R_{j}}f(T_{i})\). Then for the integral
\[\int_{\mathcal{X}^{|\mathbf{T}|}}f(T_{1})f(T_{2})\ldots f(T_{m})\mathrm{sgn}( \sigma)\Pi_{q\in\mathrm{Range}(\mathbf{T})}K(x_{q},x_{\sigma(q)})d\mathbf{x},\]
we can split it into a product of exactly \(\ell\) integrals
\[\prod_{j=1}^{\ell}\left(\int_{\mathcal{X}^{|H_{j}|}}\mathrm{sgn}(\sigma_{j})f( \mathbf{T}|_{R_{j}})\prod_{q\in H_{j}}K_{n}(x_{q},x_{\sigma_{j}(q)})dx_{q} \right).\]
For any integer-valued \(r\), we define
\[\begin{split}\mathcal{C}(r):=&\Big{\{}(\mathbf{T}, \sigma):\mathbf{T}\in S(r,k),\sigma\in\mathrm{Sym}(\mathrm{Range}(\mathbf{T}) ),\\ &(\mathbf{T},\sigma)\text{-graph is connected}\Big{\}}.\end{split} \tag{39}\]
The definition of \(\{R_{1},\ldots,R_{\ell}\}\) implies that, for each \(1\leq j\leq\ell\), the pair \((\mathbf{T}|_{R_{j}},\sigma_{j})\) is in \(\mathcal{C}(|R_{j}|)\). Therefore, we have
\[\begin{split}&\sum_{\mathbf{T}\in S(m,k)}\sum_{\sigma\in\mathrm{ Sym}(\mathrm{Range}(\mathbf{T}))}\int_{\mathcal{X}^{|\mathbf{T}|}}f(\mathbf{T}) \mathrm{sgn}(\sigma)\prod_{q\in\mathrm{Range}(\mathbf{T})}K(x_{q},x_{\sigma(q )})d\mathbf{x}\\ =&\sum_{R=\{R_{1},\ldots,R_{\ell}\}\in\Pi(m)}\prod _{j=1}^{\ell}\left(\sum_{(\mathbf{T},\sigma)\in\mathcal{C}(|R_{j}|)}\mathfrak{ Int}(f,(\mathbf{T},\sigma))\right),\end{split} \tag{40}\]
where
\[\mathfrak{Int}(f,(\mathbf{T},\sigma)):=\int_{\mathcal{X}^{|\mathbf{T}|}}\left( f(\mathbf{T})\mathrm{sgn}(\sigma)\prod_{q\in\mathrm{Range}(\mathbf{T})}K(x_{q},x_{ \sigma(q)})\right)d\mathbf{x}.\]
Combining (31) (38) and (40), we obtain the following formula for the cumulants of multivariate linear statistics of general determinantal point processes.
**Lemma 1**.: \[\begin{split}& Q_{m}\left(\sum_{(x_{1},\ldots,x_{k})\in\Phi^{k}_{ *}}f(x_{1},\ldots,x_{k})\right)\\ =&\sum_{(\mathbf{T},\sigma)\in\mathcal{C}(m)}\int_{ \mathcal{X}^{|\mathbf{T}|}}f(\mathbf{T})\mathrm{sgn}(\sigma)\prod_{q\in \mathrm{Range}(\mathbf{T})}K(x_{q},x_{\sigma(q)})d\mathbf{x}.\end{split}\] (41)
For \(k=1\), (41) gives the following well-known formula (Formula (2.7) in [8]),
\[\begin{split}& Q_{m}\left(\sum_{x\in\Phi}f(x)\right)\\ =&\sum_{\ell=1}^{m}\sum_{(n_{1},\ldots,n_{\ell}):\sum_{j =1}^{\ell}n_{j}=m,n_{j}\geq 1,\forall j}\frac{(-1)^{\ell-1}}{\ell}\frac{m!}{n_{1}! \ldots n_{\ell}!}\\ &\int_{\mathcal{X}^{\ell}}f^{n_{1}}(x_{1})\cdots f^{n_{\ell}}(x_ {\ell})K(x_{1},x_{2})\cdots K(x_{\ell-1},x_{\ell})K(x_{\ell},x_{1})d\mathbf{x}. \end{split} \tag{42}\]
Indeed, by the definition of \(S(m,1)\), each \(\mathbf{T}\in S(m,1)\) corresponds to one way of assigning \(m\) different balls into \(\ell\) indistinguishable urns for some \(\ell\). Hence the \(\mathbf{T}\)-graph itself has \(\ell\) components and also partitions the set \([m]\) into \(\ell\) components.
Thus, to ensure the \((\mathbf{T},\sigma)\)-graph is connected, different components have to be linked through \(\sigma\in\mathrm{Sym}(\mathrm{Range}(\mathbf{T}))\), which implies that \(\sigma\) has to be a cyclic permutation of length \(\ell\). As an example, suppose \(k=1\), \(m=5\) and \(\mathbf{T}=\{1,2,3,3,3\}\), then \(|\mathbf{T}|=3\) and \(\sigma\) has to be (123) or (132) to obtain a connected \((\mathbf{T},\sigma)\)-graph.
For later reference, we introduce a few more concepts.
**Definition 1**.: _For a \((\mathbf{T},\sigma)\)-graph, we say \(T_{i,j}\) is a connection point if at least one of the two conditions are satisfied:_
* \(\sigma(T_{i,j})\notin T_{i}\)_._
* _There exists an_ \(i^{\prime}\neq i\) _such that_ \(T_{i,j}\in T_{i^{\prime}}\)_._
_Equivalently, using the graphical representation of a \((\mathbf{T},\sigma)\)-graph, \(T_{i,j}\) is a connection point if \((i,j)\) is connected to some vertex in \(\{(i^{\prime},j^{\prime}):i^{\prime}\neq i,1\leq j^{\prime}\leq k\}\) by a red edge, either solid or dotted._
Note that, if the \((\mathbf{T},\sigma)\)-graph is connected, then for each \(i\), there must exist at least one connection point \(T_{i,j}\).
**Definition 2**.: _We say a \((\mathbf{T},\sigma)\) pair is reducible if its \((\mathbf{T},\sigma)\)-graph is connected and there exists an \(i\in[m]\) and a \(j\in[k]\) such that_
* \(T_{i,j}\) _is the only connection point in_ \(T_{i}\)_._
* \(\sigma(x)=x,\forall x\in T_{i}-\{T_{i,j}\}\)_._
_If the above two conditions hold, then we say the \((\mathbf{T},\sigma)\)-graph breaks at \(T_{i,j}\) and \(T_{i,j}\) is a break point. Equivalently, \((\mathbf{T},\sigma)\)-graph is reducible if it is connected and there exists some \((i,j)\) which is the only vertex in \(\{(i,j):1\leq j\leq k\}\) that can have red edge(s) connecting with other vertices. We say a \((\mathbf{T},\sigma)\) pair is irreducible if it is not reducible._
We define \(\mathfrak{I}(m)\) to be the set of all \((\mathbf{T},\sigma)\in\mathcal{C}(m)\) that are irreducible, i.e.,
\[\mathfrak{I}(m):=\{(\mathbf{T},\sigma)\in\mathcal{C}(m):(\mathbf{T},\sigma) \text{ is irreducible}\}. \tag{43}\]
An example of the reducible graph is given by the right panel of Figure 2, while the left and the middle ones in Figure 2 are irreducible.
**Definition 3**.: _We say a \((\mathbf{T},\sigma)\in\mathfrak{I}(m)\) is circle-like if for each \(1\leq i\leq m\), there are exactly two distinct numbers \(1\leq i_{1}\neq i_{2}\leq k\) such that each of \((i,i_{1})\) and \((i,i_{2})\) has exactly one red edge and the red edge is connected to a vertex in \(\{(i^{\prime},j^{\prime}):i^{\prime}\neq i,1\leq j^{\prime}\leq k\}\), and all other vertices, i.e., those not in the set \(\{(i,i_{1}),(i,i_{2}):1\leq i\leq m\}\), have no red edge._
The following proposition explains the name 'circle-like'.
**Proposition 1**.: _Let \((\mathbf{T},\sigma)\) be circle-like. Then there exists a cyclic permutation \(p\) of \(\{1,\ldots,m\}\) such that, for each \(1\leq i\leq m\), there exist two distinct indices \(i_{1}\) and \(i_{2}\) such that \((i,i_{2})\) is connected with \((p(i),p(i)_{1})\) by a red edge._
Proof.: Note that, by the definition of being circle-like, if we contract all vertices in \(\{(i,j):1\leq j\leq k\}\) into a single vertex (and give it label \(i\)), then we will obtain a connected graph with \(m\) vertices such that each vertex has degree \(2\), which is then necessarily a circle of size \(m\). Fix a direction of the circle and suppose the labels of the vertices along it are \(a_{1},\ldots,a_{m}\). Then we can define a permutation \(p\) such that \(p(a_{i})=a_{i+1}\) where \(a_{m+1}:=a_{1}\). In addition, by reordering \(i_{1}\) and \(i_{2}\) for each \(1\leq i\leq m\) if needed, we can assume that \((a_{i},(a_{i})_{2})\) is connected with \((a_{i+1},(a_{i+1})_{1})\) for all \(1\leq i\leq m\) by a red edge. This completes the proof.
As a remark, we will see in the proof of Theorem 3 for the degenerate case that the collection of circle-like \((\mathbf{T},\sigma)\)-graphs provides the leading-order term for the cumulants of the multivariate linear statistics, which eventually yields the 2nd order Wiener chaos.
## 3. Properties of the spectral projection kernel
In this section, we first review some basic facts for the spectral projection kernel. Then we will derive several integral lemmas which provide the key estimates to prove the main results.
### Preliminaries
It's well-known that the kernel for the spectral orthogonal projection \(K_{n}:L^{2}(S^{d})\to{\mathcal{H}}_{n}(S^{d})\) satisfies (Theorem 2.9 in [2])
\[K_{n}(x,y)=\frac{k_{n}}{s_{d}}P_{n}(\cos\mathrm{d}(x,y))=\frac{k_{n}}{s_{d}}P_ {n}(x\cdot y), \tag{44}\]
where \(\mathrm{d}(x,y)\in[0,\pi]\) is the geodesic distance, i.e., the angle between the vectors \(x,y\in S^{d}\), \(P_{n}\) is the Legendre polynomial of degree \(n\) in dimension \(d\), \(k_{n}\) is the dimension of \({\mathcal{H}}_{n}\) given in (4), and \(s_{d}=2\pi^{\frac{d+1}{2}}/\Gamma(\frac{d+1}{2})\) is the surface area of \(S^{d}\). Since both \(x\) and \(y\) are on the unit sphere, we can rewrite \(\cos\mathrm{d}(x,y)=x\cdot y\) as the inner product between \(x\) and \(y\).
We also write
\[P_{n}(x,y):=P_{n}(\cos\mathrm{d}(x,y))=P_{n}(x\cdot y).\]
By the fact that \(P_{n}(1)=1\)[2], one has the identity
\[K_{n}(x,x)=\frac{k_{n}}{s_{d}}. \tag{45}\]
The kernel \(K_{n}(x,y)\) satisfies the reproducing property,
\[\int_{S^{d}}K_{n}(x_{1},x_{2})K_{n}(x_{2},x_{3})dx_{2}=K_{n}(x_{1},x_{3}). \tag{46}\]
When \(x_{1}=x_{3}\), (46) reads,
\[\int_{S^{d}}K_{n}^{2}(x_{1},x_{2})dx_{2}=\frac{k_{n}}{s_{d}}, \tag{47}\]
and thus we have
\[\int_{(S^{d})^{2}}K_{n}^{2}(x_{1},x_{2})dx_{1}dx_{2}=k_{n}. \tag{48}\]
For \(P_{n}\), two basic properties are [2],
\[P_{n}(x)=(-1)^{n}P_{n}(-x) \tag{49}\]
and
\[|P_{n}(x)|\leq 1,\ \forall x\in[-1,1]. \tag{50}\]
By (45) and the reproducing property (46), we obtain that
\[\int_{S^{d}}P_{n}(x_{1},x_{2})P_{n}(x_{2},x_{3})dx_{2}=\left(\frac{k_{n}}{s_{ d}}\right)^{-1}P_{n}(x_{1},x_{3}), \tag{51}\]
and by (47), we have
\[\int_{S^{d}}P_{n}^{2}(x_{1},x_{2})dx_{2}=\left(\frac{k_{n}}{s_{d}}\right)^{-1}. \tag{52}\]
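The identities (45)-(52) can be checked numerically once \(P_{n}\) is available; in dimension \(d\), \(P_{n}\) coincides (after normalizing so that \(P_{n}(1)=1\)) with the Gegenbauer polynomial \(C_{n}^{(d-1)/2}\), which is the normalization we assume in the following sketch:

```python
import numpy as np
from math import comb
from scipy.special import eval_gegenbauer, gamma
from scipy.integrate import quad

def P(n, d, t):
    # P_n in dimension d, normalized so that P_n(1) = 1 (assumed to be the
    # Gegenbauer polynomial C_n^{(d-1)/2} up to this normalization).
    lam = (d - 1) / 2
    return eval_gegenbauer(n, lam, t) / eval_gegenbauer(n, lam, 1.0)

def s(d):
    # Surface area of S^d.
    return 2 * np.pi ** ((d + 1) / 2) / gamma((d + 1) / 2)

d, n = 3, 5
k_n = (2 * n + d - 1) * comb(n + d - 1, d - 1) // (n + d - 1)

# Check (52): int_{S^d} P_n(x.y)^2 dy = s_d / k_n; the zonal integral reduces
# to a 1-D integral in the polar angle with weight sin^{d-1}(theta).
val, _ = quad(lambda th: P(n, d, np.cos(th)) ** 2 * np.sin(th) ** (d - 1), 0, np.pi)
print(s(d - 1) * val, s(d) / k_n)   # the two numbers should agree
```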
For \(0\leq\theta\leq\pi/2\), one has Hilb's asymptotics for the Legendre polynomials (by taking \(\alpha=\beta=\frac{d-2}{2}\) in [10, Theorem 8.21.12]),
\[\begin{split} P_{n}(\cos\theta)=&\Gamma\left(\frac{d} {2}\right)\left(\frac{\theta}{\sin\theta}\right)^{\frac{1}{2}}\left(\frac{1}{2 }(n+\frac{d-1}{2})\sin\theta\right)^{-\frac{d-2}{2}}J_{\frac{d-2}{2}}\left((n+ \frac{d-1}{2})\theta\right)\\ &+R_{n}(\theta),\end{split} \tag{53}\]
where \(J_{\frac{d-2}{2}}\) is the Bessel function of order \(\frac{d-2}{2}\). And the error term satisfies the estimates:
\[R_{n}(\theta)=\begin{cases}\theta^{2}O(1)&0\leq\theta\leq cn^{-1}\,\\ \theta^{\frac{3-d}{2}}O(n^{-\frac{1+d}{2}})&cn^{-1}\leq\theta\leq\pi/2\, \end{cases}\]
where \(c\) is some constant independent of \(n\).
For the Bessel function, \(J_{\frac{d-2}{2}}\) is bounded on the positive real line and has the expansion (Formula (1.71.1) in [10]),
\[J_{\frac{d-2}{2}}(x)=\sum_{j=0}^{\infty}\frac{(-1)^{j}}{j!\Gamma(j+\frac{d}{2 })}\left(\frac{x}{2}\right)^{2j+\frac{d-2}{2}}. \tag{54}\]
Furthermore, it admits the asymptotic expansion (Formula (1.71.7) in [10]),
\[J_{\frac{d-2}{2}}(x)=\sqrt{\frac{2}{\pi x}}\cos\left(x-(d-1)\frac{\pi}{4} \right)+o(x^{-1})\quad\text{as}\,\ x\to+\infty. \tag{55}\]
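A quick numerical comparison of \(J_{\frac{d-2}{2}}\) with the leading term of (55), as a sanity check of our own:

```python
import numpy as np
from scipy.special import jv

def bessel_leading(nu: float, x: float) -> float:
    # Leading term of (55): with nu = (d-2)/2 the phase x - (d-1)pi/4
    # equals the standard x - nu*pi/2 - pi/4.
    return np.sqrt(2.0 / (np.pi * x)) * np.cos(x - nu * np.pi / 2 - np.pi / 4)

nu = 1.0                      # corresponds to d = 4
for x in [10.0, 100.0, 1000.0]:
    # Agreement improves as x grows, consistent with the o(x^{-1}) error.
    print(x, jv(nu, x), bessel_leading(nu, x))
```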
Now we define a function \(p_{n}(\theta)\) for \(\theta\in[0,\pi]\) as follows. For \(0\leq\theta\leq\pi/2\), we define
\[\begin{split} p_{n}(\theta):=&\Gamma\Big{(}\frac{d} {2}\Big{)}\left(\frac{\theta}{\sin\theta}\right)^{1/2}\left(\frac{1}{2}(n+ \frac{d-1}{2})\sin\theta\right)^{-(d-2)/2}\\ &\times\sqrt{\frac{2}{\pi(n+(d-1)/2)\theta}}\cos\left((n+(d-1)/2 )\theta-(d-1)\frac{\pi}{4}\right)\\ =&\Gamma\Big{(}\frac{d}{2}\Big{)}\Big{(}\frac{2^{d-1 }}{\pi}\Big{)}^{1/2}\Big{(}n+(d-1)/2\Big{)}^{-(d-1)/2}\big{(}\sin\theta\big{)} ^{-(d-1)/2}\\ &\times\cos\left((n+(d-1)/2)\theta-(d-1)\frac{\pi}{4}\right); \end{split} \tag{56}\]
for \(\pi/2<\theta\leq\pi\), we define
\[p_{n}(\theta):=(-1)^{n}p_{n}(\pi-\theta).\]
Combining (53) and (55), for \(0\leq\theta\leq\pi\), we have the estimates,
\[|P_{n}(\cos\theta)-p_{n}(\theta)|\leq C\left(\min\{n\theta,n(\pi-\theta)\}^{- d/2}\wedge 1\right), \tag{57}\]
\[|P_{n}(\cos\theta)|\leq C\left(\min\{n\theta,n(\pi-\theta)\}^{-(d-1)/2}\wedge 1 \right), \tag{58}\]
and
\[|p_{n}(\theta)|\leq C\left(\min\{n\theta,n(\pi-\theta)\}^{-(d-1)/2}\wedge 1 \right). \tag{59}\]
### Integral estimates
Now we will prove several lemmas involving the integrals of the kernel \(K_{n}\). They will be one of the main technical ingredients in the proofs of our main results.
We will use the spherical coordinate system \((\theta,\phi_{1},\ldots,\phi_{d-1})\) for \(S^{d}\), where \(\theta,\phi_{1},\ldots,\phi_{d-2}\) range over \([0,\pi]\) and \(\phi_{d-1}\) ranges over \([0,2\pi]\). Here, \(\theta\) is the arc length from the point \((\theta,\phi)\) to the origin of the coordinate system. For simplicity, we will use \(\phi\) as a shorthand for \((\phi_{1},\ldots,\phi_{d-1})\), and thus the range of \(\phi\) is \(\Omega:=[0,\pi]^{d-2}\times[0,2\pi]\). Then the volume element for \(S^{d}\) with respect to the standard round metric is
\[dx=\widehat{J}(\theta,\phi)d\theta d\phi,\]
where
\[\widehat{J}(\theta,\phi)=\sin^{d-1}(\theta)\sin^{d-2}(\phi_{1})\cdots\sin(\phi _{d-2}).\]
We define
\[J(\phi):=\sin^{d-2}(\phi_{1})\cdots\sin(\phi_{d-2}),\]
and thus we can rewrite
\[dx=\sin^{d-1}(\theta)J(\phi)d\theta d\phi.\]
The first lemma concerns the integration of a function against \(K_{n}^{2}\).
**Lemma 2**.: _For any bounded function \(f(x,y)\), we have_
\[\lim_{n\to\infty}\frac{1}{k_{n}}\int_{S^{d}}\int_{S^{d}}f(x,y)K_ {n}^{2}(x,y)dxdy\] \[= \frac{2^{d-1}}{\Gamma(d)\pi}\Big{(}\frac{\Gamma\big{(}\frac{d}{2 }\big{)}}{s_{d}}\Big{)}^{2}\int_{S^{d}}\int_{0}^{\pi}\int_{\Omega}f(x,x+( \theta,\phi))J(\phi)d\phi d\theta dx. \tag{60}\] \[= \frac{2^{d-1}}{\Gamma(d)\pi}\Big{(}\frac{\Gamma\big{(}\frac{d}{2 }\big{)}}{s_{d}}\Big{)}^{2}\int_{S^{d}}\int_{S^{d}}\frac{f(x,y)}{\sin^{d-1}( \arccos(x\cdot y))}dxdy.\]
The next two lemmas give upper bounds on the integration of the product of several \(K_{n}^{\prime}s\).
**Lemma 3**.: _For any \(r\in\mathbb{N},r\geq 2\),_
\[\int_{(S^{d})^{r}}\Big{|}\prod_{i=1}^{r}K_{n}(x_{i},x_{i+1})\Big{|}dx_{1} \cdots dx_{r}=O(n^{\frac{(d-1)r}{2}}), \tag{61}\]
_where \(x_{r+1}\) is set to be \(x_{1}\). Equivalently,_
\[\int_{(S^{d})^{r}}\Big{|}\prod_{i=1}^{r}P_{n}(x_{i},x_{i+1})\Big{|}dx_{1} \cdots dx_{r}=O(n^{-\frac{(d-1)r}{2}}). \tag{62}\]
**Lemma 4**.: _For any \(r\in\mathbb{N},r\geq 3\) and bounded measurable function \(h\) of \(r\) variables,_
\[\int_{(S^{d})^{r}}h(x_{1},\ldots,x_{r})\prod_{i=1}^{r}P_{n}(x_{i},x_{i+1})dx_{ 1}\cdots dx_{r}=o(n^{-\frac{(d-1)r}{2}}), \tag{63}\]
_where \(x_{r+1}\) is set to be \(x_{1}\)._
We now give the proofs of Lemmas 2-4.
Proof of Lemma 2.: By the boundedness of \(f\), without loss of generality, we may assume that \(f\) is nonnegative. For \(x,y\in S^{d}\), we build a spherical coordinate system \((\theta,\phi)\) with \(x\) being the north pole and write \(y\) as \(x+(\theta,\phi)\). By the facts that \(K_{n}(x,y)=k_{n}P_{n}(\cos\theta)/s_{d}\) and \(|P_{n}(\cos\theta)|=|P_{n}(\cos(\pi-\theta))|\), we have
\[\begin{split}&\int_{S^{d}}\int_{S^{d}}f(x,y)K_{n}^{2}(x,y)dxdy\\ =&\Big{(}\frac{k_{n}}{s_{d}}\Big{)}^{2}\int_{S^{d}} \int_{0}^{\pi}\int_{\Omega}f(x,x+(\theta,\phi))P_{n}(\cos\theta)^{2}\widehat{J} (\theta,\phi)d\phi d\theta dx\\ =&\Big{(}\frac{k_{n}}{s_{d}}\Big{)}^{2}\left(\int_{S ^{d}}\int_{0}^{\frac{\pi}{2}}\int_{\Omega}f(x,x+(\theta,\phi))P_{n}(\cos \theta)^{2}\widehat{J}(\theta,\phi)d\phi d\theta dx\\ &\qquad\qquad+\int_{S^{d}}\int_{0}^{\frac{\pi}{2}}\int_{\Omega} f(x,x+(\pi-\theta,\phi))P_{n}(\cos\theta)^{2}\widehat{J}(\theta,\phi)d\phi d \theta dx\right)\\ :=&\Big{(}\frac{k_{n}}{s_{d}}\Big{)}^{2}(I_{1}+I_{2}).\end{split} \tag{64}\]
We will analyze \(I_{1}\) and \(I_{2}\) by a series of approximations. We only give details for \(I_{1}\), and \(I_{2}\) follows from the same arguments. By Hilb's asymptotic (53), one has
\[\begin{split} P_{n}(\cos\theta)^{2}=&\Gamma\Big{(} \frac{d}{2}\Big{)}^{2}\left(\frac{1}{2}(n+\frac{d-1}{2})\sin\theta\right)^{-( d-2)}\Big{(}\frac{\theta}{\sin\theta}\Big{)}J_{\frac{d-2}{2}}\left((n+\frac{d-1}{ 2})\theta\right)^{2}\\ &+\widehat{R}_{n}(\theta),\end{split} \tag{65}\]
where
\[\widehat{R}_{n}(\theta)=\begin{cases}\theta^{2}O(1)&0\leq\theta\leq c/n \,\\ \theta^{2-d}O(n^{-d})&c/n\leq\theta\leq\pi/2\.\end{cases}\]
We now define
\[I_{3}=\int_{S^{d}}\int_{0}^{\frac{\pi}{2}}\int_{\Omega}\theta J_{\frac{d-2}{2 }}\Big{(}(n+\frac{d-1}{2})\theta\Big{)}^{2}f(x,x+(\theta,\phi))J(\phi)d\phi d \theta dx. \tag{66}\]
By (64) and (65) there exists \(C>0\) such that
\[\left|I_{1}-\Gamma\Big{(}\frac{d}{2}\Big{)}^{2}\left(\frac{1}{2}(n+\frac{d-1} {2})\right)^{-(d-2)}I_{3}\right|\leq Cn^{-d}. \tag{67}\]
By (55), for any \(\epsilon\in(0,1)\), there exists an \(M>0\) large enough such that for \(x>M\), we have
\[\begin{split}&(1-\epsilon)\frac{2}{\pi x}\cos^{2}\Big{(}x-(d-1) \frac{\pi}{4}\Big{)}-x^{-\frac{3}{2}}\\ \leq& J_{\frac{d-2}{2}}(x)^{2}\\ \leq&(1+\epsilon)\frac{2}{\pi x}\cos^{2}\Big{(}x-(d-1) \frac{\pi}{4}\Big{)}+x^{-\frac{3}{2}}.\end{split} \tag{68}\]
Now we split \(I_{3}\) into two terms,
\[\begin{split} I_{3}=&\int_{S^{d}}\int_{0}^{M/n}\int_{ \Omega}J_{\frac{d-2}{2}}\left((n+\frac{d-1}{2})\theta\right)^{2}f(x,x+(\theta, \phi))\theta J(\phi)d\phi d\theta dx\\ &+\int_{S^{d}}\int_{M/n}^{\frac{\pi}{2}}\int_{\Omega}J_{\frac{d-2 }{2}}\left((n+\frac{d-1}{2})\theta\right)^{2}f(x,x+(\theta,\phi))\theta J(\phi )d\phi d\theta dx\\ :=& I_{4}+I_{5}.\end{split} \tag{69}\]
For \(I_{4}\), by the boundedness of \(f\) and \(J_{\frac{d-2}{2}}\), there exists some \(C>0\) such that
\[|I_{4}|\leq CM^{2}/n^{2}. \tag{70}\]
For \(I_{5}\), it holds trivially that \((n+\frac{d-1}{2})\theta>M\) for \(\theta>M/n\). Hence, we can apply the estimates (68) for \(J_{\frac{d-2}{2}}((n+\frac{d-1}{2})\theta)\). We set
\[\begin{split} I_{6}=&\int_{S^{d}}\int_{M/n}^{\frac {\pi}{2}}\int_{\Omega}\frac{2}{\pi(n+(d-1)/2)\theta}f(x,x+(\theta,\phi))\\ &\times\cos^{2}\Big{(}(n+\frac{d-1}{2})\theta-(d-1)\frac{\pi}{4} \Big{)}\theta J(\phi)d\phi d\theta dx.\end{split} \tag{71}\]
Combining (68), (71) and the following estimate
\[\int_{S^{d}}\int_{M/n}^{\frac{\pi}{2}}\int_{\Omega}\left((n+\frac{d-1}{2} \theta)\right)^{-3/2}f(x,x+(\theta,\phi))\theta J(\phi)d\phi d\theta dx\leq Cn ^{-3/2},\]
we have that
\[(1-\epsilon)I_{6}-Cn^{-\frac{3}{2}}\leq I_{5}\leq(1+\epsilon)I_{6}+Cn^{-\frac{ 3}{2}}. \tag{72}\]
By Riemann-Lebesgue lemma, for any fixed \(x\) and \(\phi\), one has
\[\begin{split}&\lim_{n\to\infty}\int_{M/n}^{\frac{\pi}{2}}f(x,x+( \theta,\phi))\cos^{2}\Big{(}(n+\frac{d-1}{2})\theta-(d-1)\frac{\pi}{4}\Big{)} d\theta\\ =&\lim_{n\to\infty}\int_{M/n}^{\frac{\pi}{2}}f(x,x+( \theta,\phi))\Big{(}\frac{1}{2}+\frac{\cos((2n+d-1)\theta-(d-1)\frac{\pi}{2}) }{2}\Big{)}d\theta\\ =&\frac{1}{2}\int_{0}^{\frac{\pi}{2}}f(x,x+(\theta, \phi))d\theta.\end{split} \tag{73}\]
Therefore, the bounded convergence theorem implies that
\[\begin{split}&\lim_{n\to\infty}\int_{S^{d}}\int_{\Omega}\int_{M/n}^{\frac{\pi}{2}}f(x,x+(\theta,\phi))\cos^{2}\Big{(}(n+\frac{d-1}{2})\theta-(d-1)\frac{\pi}{4}\Big{)}J(\phi)d\theta d\phi dx\\ =&\frac{1}{2}\int_{S^{d}}\int_{\Omega}\int_{0}^{\frac{\pi}{2}}f(x,x+(\theta,\phi))J(\phi)d\theta d\phi dx.\end{split} \tag{74}\]
This implies that
\[\lim_{n\to\infty}nI_{6}=\frac{1}{\pi}\int_{S^{d}}\int_{\Omega}\int_{0}^{\frac {\pi}{2}}f(x,x+(\theta,\phi))J(\phi)d\theta d\phi dx:=I_{7}. \tag{75}\]
Now, combining (69), (70), (72) and (75), we have
\[(1-\epsilon)I_{7}\leq\liminf_{n\to\infty}nI_{3}\leq\limsup_{n\to\infty}nI_{3} \leq(1+\epsilon)I_{7}. \tag{76}\]
Since (76) holds for all \(\epsilon\in(0,1)\) while \(I_{3}\) and \(I_{7}\) do not depend on \(\epsilon\), by sending \(\epsilon\to 0\), we have
\[\lim_{n\to\infty}nI_{3}=I_{7}. \tag{77}\]
Combining (67) and (77), we have
\[\lim_{n\to\infty}n^{d-1}I_{1}=\Gamma\Big{(}\frac{d}{2}\Big{)}^{2}2^{d-2}I_{7}.\]
By the same argument with \(\theta\) replaced by \(\pi-\theta\), we get a similar limit
\[\lim_{n\to\infty}n^{d-1}I_{2}=\Gamma\Big{(}\frac{d}{2}\Big{)}^{2}2^{d-2}I_{8},\]
where \(I_{8}\) is defined similarly to \(I_{7}\) as
\[I_{8}=\frac{1}{\pi}\int_{S^{d}}\int_{\Omega}\int_{\frac{\pi}{2}}^{\pi}f(x,x+( \theta,\phi))J(\phi)d\theta d\phi dx.\]
By the fact \(k_{n}\sim 2n^{d-1}/\Gamma(d)\), we get
\[\begin{split}&\lim_{n\to\infty}\frac{1}{k_{n}}\int_{S^{d}}\int_{S^{d}}f(x,y)K_{n}^{2}(x,y)dxdy\\ =&\lim_{n\to\infty}\left(\frac{k_{n}}{s_{d}^{2}}\right)(I_{1}+I_{2})\\ =&\lim_{n\to\infty}\frac{2n^{d-1}}{\Gamma(d)}\frac{1}{s_{d}^{2}}\Gamma\Big{(}\frac{d}{2}\Big{)}^{2}2^{d-2}n^{-(d-1)}(I_{7}+I_{8})\\ =&\frac{\Gamma\big{(}\frac{d}{2}\big{)}^{2}2^{d-1}}{\pi\Gamma(d)s_{d}^{2}}\int_{S^{d}}\int_{0}^{\pi}\int_{\Omega}f(x,x+(\theta,\phi))J(\phi)d\phi d\theta dx.\end{split} \tag{78}\]
This completes the proof of Lemma 2.
As a remark, the proof of Lemma 2 actually shows that for almost all \(x\), we have
\[\begin{split}&\lim_{n\to\infty}\frac{1}{k_{n}}\int_{S^{d}}f(x,y)K_{n} ^{2}(x,y)dy\\ =&\frac{2^{d-1}}{\Gamma(d)\pi}\Big{(}\frac{\Gamma \big{(}\frac{d}{2}\big{)}}{s_{d}}\Big{)}^{2}\int_{S^{d}}\frac{f(x,y)}{\sin^{d -1}(\arccos(x\cdot y))}dy.\end{split} \tag{79}\]
Proof of Lemma 3.: We first recall the bound (58), which gives
\[|P_{n}(x,y)|\leq Cn^{-\frac{d-1}{2}}(\min\{\mathrm{d}(x,y),\pi-\mathrm{d}(x,y)\})^{-\frac{d-1}{2}}. \tag{80}\]
For \(x_{1},\dots,x_{d}\in S^{d}\), let \(\alpha_{i,j}=\mathrm{d}(x_{i},x_{j})\) be the geodesic distance which is the angle between \(x_{i}\) and \(x_{j}\) and let \(\beta_{i,j}=\min\{\alpha_{i,j},\pi-\alpha_{i,j}\}\). We now claim
\[\beta_{1,3}\leq\beta_{1,2}+\beta_{2,3}. \tag{81}\]
To prove (81), we consider four possible cases.
* If \(\alpha_{1,2}<\pi/2\) and \(\alpha_{2,3}\leq\pi/2\), then we have \[\beta_{1,2}+\beta_{2,3}=\alpha_{1,2}+\alpha_{2,3}\geq\alpha_{1,3}\geq\beta_{1, 3}.\] Here, the first inequality follows from triangle inequality.
* If \(\alpha_{1,2}<\pi/2\) and \(\alpha_{2,3}\geq\pi/2\), then by symmetry of the sphere, if we set \(x_{3}^{\prime}:=-x_{3}\) (the reflection of \(x_{3}\) through the origin of \(\mathbb{R}^{d+1}\)), we have \(\beta_{1,2}+\beta_{2,3}=\alpha_{1,2}+\pi-\alpha_{2,3}=\mathrm{d}(x_{1},x_{2})+ \mathrm{d}(x_{2},x_{3}^{\prime})\geq\mathrm{d}(x_{1},x_{3}^{\prime})\geq\beta_ {1,3}\).
* The case \(\alpha_{1,2}\geq\pi/2\) and \(\alpha_{2,3}<\pi/2\) can be analyzed similarly to the second case.
* If \(\alpha_{1,2}\geq\pi/2\) and \(\alpha_{2,3}\geq\pi/2\), then by setting \(x_{2}^{\prime}:=-x_{2}\), we have \[\beta_{1,2}+\beta_{2,3}=\mathrm{d}(x_{1},x_{2}^{\prime})+\mathrm{d}(x_{2}^{ \prime},x_{3})\geq\mathrm{d}(x_{1},x_{3})\geq\beta_{1,3}.\]
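As a quick numerical sanity check of (81), the following sketch (ours; only NumPy is assumed, and all helper names are illustrative) samples random triples on \(S^{2}\) and tests the folded triangle inequality directly.

```python
# Monte Carlo sanity check of the folded triangle inequality (81) on S^2.
import numpy as np

rng = np.random.default_rng(0)

def random_sphere(n, dim=3):
    """Uniform points on S^{dim-1} via normalized Gaussian vectors."""
    v = rng.normal(size=(n, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def beta(x, y):
    """Folded distance beta = min{alpha, pi - alpha}, alpha = arccos(x.y)."""
    alpha = np.arccos(np.clip(np.sum(x * y, axis=-1), -1.0, 1.0))
    return np.minimum(alpha, np.pi - alpha)

x1, x2, x3 = random_sphere(10**6), random_sphere(10**6), random_sphere(10**6)
bad = beta(x1, x3) > beta(x1, x2) + beta(x2, x3) + 1e-12
print("violations of (81):", int(bad.sum()))  # expected output: 0
```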
The inequality (81) implies that
\[\beta_{1,2}\beta_{2,3}=\max\{\beta_{1,2},\beta_{2,3}\}\min\{\beta_{1,2},\beta_{2,3}\}\geq\frac{\beta_{1,3}}{2}\min\{\beta_{1,2},\beta_{2,3}\},\]
since \(\max\{\beta_{1,2},\beta_{2,3}\}\geq(\beta_{1,2}+\beta_{2,3})/2\geq\beta_{1,3}/2\). This in turn gives
\[(\beta_{1,2}\beta_{2,3})^{-(d-1)/2} \leq C\beta_{1,3}^{-(d-1)/2}\min\{\beta_{1,2},\beta_{2,3}\}^{-(d- 1)/2} \tag{82}\] \[\leq C\beta_{1,3}^{-(d-1)/2}\left(\beta_{1,2}^{-(d-1)/2}+\beta_{2,3}^ {-(d-1)/2}\right).\]
By (80) and (82), for any fixed \(x_{1}\) and \(x_{3}\), we have
\[\begin{split}&\int_{S^{d}}\Big{|}P_{n}(x_{1},x_{2})P_{n}(x_{2},x_{3}) \Big{|}dx_{2}\\ \leq& Cn^{-(d-1)}\int_{S^{d}}(\beta_{1,2}\beta_{2,3} )^{-(d-1)/2}dx_{2}\\ \leq& Cn^{-(d-1)}\beta_{1,3}^{-(d-1)/2}\int_{S^{d}} \left(\beta_{1,2}^{-(d-1)/2}+\beta_{2,3}^{-(d-1)/2}\right)dx_{2}\\ \leq& Cn^{-(d-1)}\beta_{1,3}^{-(d-1)/2}\left(\int_{ 0}^{\pi}\beta_{1,2}^{-(d-1)/2}\sin^{d-1}(\alpha_{1,2})d\alpha_{1,2}\right.\\ &\left.+\int_{0}^{\pi}\beta_{2,3}^{-(d-1)/2}\sin^{d-1}(\alpha_{2,3})d\alpha_{2,3}\right)\\ \leq& Cn^{-(d-1)}\beta_{1,3}^{-(d-1)/2}.\end{split} \tag{83}\]
Using (83) \(r-2\) times to integrate out the variables \(x_{2},\ldots,x_{r-1}\), we get
\[\begin{split}&\int_{(S^{d})^{r}}\Big{|}\prod_{i=1}^{r}P_{n}(x_{i},x_ {i+1})\Big{|}dx_{1}\cdots dx_{r}\\ \leq& Cn^{-(d-1)r/2}\int_{S^{d}}\Big{(}\int_{0}^{\pi }\beta_{1,r}^{-(d-1)}\sin^{(d-1)}(\alpha_{1,r})d\alpha_{1,r}\Big{)}dx_{1}\\ \leq& Cn^{-(d-1)r/2}.\end{split} \tag{84}\]
This proves Lemma 3.
Proof of Lemma 4.: As in the proof of Lemma 3, let \(\alpha_{i,i+1}\) be the angle between \(x_{i}\) and \(x_{i+1}\) and set \(\beta_{i,i+1}=\min\{\alpha_{i,i+1},\pi-\alpha_{i,i+1}\}\). Recalling the function \(p_{n}\) defined in (56), by (57) we have
\[|P_{n}(x_{i},x_{i+1})-p_{n}(\alpha_{i,i+1})|=|P_{n}(\cos\alpha_{i,i+1})-p_{n}( \alpha_{i,i+1})|\leq C(n\beta_{i,i+1})^{-d/2}. \tag{85}\]
We can write
\[h(x_{1},\ldots,x_{r})\Pi_{i=1}^{r}P_{n}(x_{i},x_{i+1})=h(x_{1},\ldots,x_{r})\Pi _{i=1}^{r}p_{n}(\alpha_{i,i+1})+I_{r} \tag{86}\]
where the error term \(I_{r}\) is bounded from above as
\[\begin{split} I_{r}\leq& C\left|h(x_{1},\ldots,x_{r})\right|\sum_{j=1}^{r}(n\beta_{j,j+1})^{-d/2}\Big{(}\prod_{i=1,i\neq j}^{r}(|P_{n}(\cos\alpha_{i,i+1})|+|p_{n}(\alpha_{i,i+1})|)\Big{)}\\ \leq& Cn^{-\frac{(d-1)(r-1)}{2}}n^{-d/2}\sum_{j=1}^{r}\left(\beta_{j,j+1}^{-d/2}\left(\prod_{i=1,i\neq j}^{r}\beta_{i,i+1}^{-(d-1)/2}\right)\right).\end{split} \tag{87}\]
The first inequality is given by the estimate (85) together with the following elementary inequality: given \(a_{1},\ldots,a_{r},b_{1},\ldots,b_{r}\in\mathbb{R}\), one has
\[\left|\prod_{i=1}^{r}a_{i}-\prod_{i=1}^{r}b_{i}\right|\leq\sum_{j=1}^{r}|a_{j}- b_{j}|\left(\prod_{i=1,i\neq j}^{r}(|a_{i}|+|b_{i}|)\right).\]
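For completeness, this inequality follows from the telescoping identity
\[\prod_{i=1}^{r}a_{i}-\prod_{i=1}^{r}b_{i}=\sum_{j=1}^{r}\Big{(}\prod_{i<j}b_{i}\Big{)}(a_{j}-b_{j})\Big{(}\prod_{i>j}a_{i}\Big{)},\]
after taking absolute values and bounding each remaining factor \(|a_{i}|\) or \(|b_{i}|\) by \(|a_{i}|+|b_{i}|\).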
The second inequality in (87) is given by the estimates (58) and (59).
By slightly modifying the proof of Lemma 3 we can show that
\[\int_{(S^{d})^{r}}\beta_{j,j+1}^{-d/2}\left(\prod_{i=1,i\neq j}^{r}\beta_{i,i +1}^{-\frac{d-1}{2}}\right)dx_{1}\cdots dx_{r}<\infty. \tag{88}\]
Combining (87) and (88), we get
\[\int_{(S^{d})^{r}}|I_{r}|\,dx_{1}\cdots dx_{r}\leq Cn^{-\frac{(d-1)r}{2}-\frac {1}{2}}=o(n^{-\frac{(d-1)r}{2}}). \tag{89}\]
We define a function
\[g(x_{1},\ldots,x_{r}):=h(x_{1},\ldots,x_{r})\prod_{i=1}^{r}\sin^{-(d-1)/2}( \beta_{i,i+1}). \tag{90}\]
The proof of Lemma 3 implies that the function \(\Pi_{i=1}^{r}\sin^{-\frac{d-1}{2}}(\beta_{i,i+1})\) is integrable over \((S^{d})^{r}\). On the other hand, by definition of \(p_{n}\), we can write
\[\begin{split}& h(x_{1},\ldots,x_{r})\prod_{i=1}^{r}p_{n}(\alpha_{ i,i+1})\\ =&(n+(d-1)/2)^{-(d-1)r/2}\times g(x_{1},\ldots,x_{r} )\\ &\times\prod_{i=1}^{r}\left((-1)^{n\mathbf{1}[\alpha_{i,i+1}>\pi /2]}\cos\left((n+\frac{d-1}{2})\beta_{i,i+1}-(d-1)\frac{\pi}{4}\right)\right). \end{split} \tag{91}\]
When computing the integration over \(x_{1},\ldots,x_{r}\), we can build a spherical coordinate system \((\theta,\phi)\) around \(x_{2}\) and represent \(x_{1}\) by \(x_{2}+(\theta,\phi)\). Here \(\theta\in[0,\pi]\) and \(\phi\) has \(d-1\) components \(\phi_{1},\ldots,\phi_{d-1}\). We claim that, for almost every (fixed) \(\phi,x_{2},\ldots,x_{r}\), the integration of (91) over \(\theta\) has the limit
\[\begin{split}&\lim_{n\to\infty}\int_{0}^{\pi}g(x_{2}+(\theta,\phi),x_{2},\ldots,x_{r})\times\\ &\prod_{i\in\{1,r\}}\left((-1)^{n\mathbf{1}[\alpha_{i,i+1}>\pi/2]}\cos\left((n+\frac{d-1}{2})\beta_{i,i+1}-(d-1)\frac{\pi}{4}\right)\right)d\theta=0.\end{split} \tag{92}\]
Assume (92) for the moment, by (91) and the dominated convergence theorem, we have
\[\int_{(S^{d})^{r}}h(x_{1},\ldots,x_{r})\prod_{i=1}^{r}p_{n}(\alpha_{i,i+1})dx_{1} \cdots dx_{r}=o(n^{-\frac{(d-1)r}{2}}). \tag{93}\]
Lemma 4 now follows from (86), (89) and (93). Hence it remains to prove (92).
To this end we first rewrite the product of the two \(\cos(\cdots)\) factors in (92) as
\[\begin{split}&\frac{1}{2}\cos\left((n+\frac{d-1}{2})(\beta_{1,2}+ \beta_{r,r+1})-(d-1)\frac{\pi}{2}\right)\\ +&\frac{1}{2}\cos\left((n+\frac{d-1}{2})(\beta_{1,2} -\beta_{r,r+1})\right).\end{split} \tag{94}\]
Under the spherical coordinate system, \(\alpha_{1,2}=\theta\) so that \(\beta_{1,2}=\min\{\theta,\pi-\theta\}\). Denote by \((\theta^{\prime},\phi^{\prime})\) the coordinates of \(x_{r}\) in this system. By making an orthogonal transformation if necessary, we may assume that \(\phi^{\prime}_{1}=0\).
To compute \(\beta_{r,r+1}\), note that
\[\cos\alpha_{r,r+1}=\cos\alpha_{r,1}=x_{1}\cdot x_{r}=\cos\theta\cos\theta^{ \prime}+\sin\theta\cos\phi_{1}\sin\theta^{\prime}. \tag{95}\]
If neither \(\theta^{\prime}\) nor \(\phi_{1}\) is equal to \(0\) or \(\pi\), then \(\alpha_{r,r+1}\), viewed as a function of \(\theta\), is continuously differentiable at all but finitely many \(\theta\)'s, and satisfies
\[\left|\frac{d\alpha_{r,r+1}}{d\theta}\right|=\frac{|-\sin\theta\cos\theta^{\prime}+\cos\theta\cos\phi_{1}\sin\theta^{\prime}|}{\sqrt{1-(\cos\theta\cos\theta^{\prime}+\sin\theta\cos\phi_{1}\sin\theta^{\prime})^{2}}}<1.\]
The strict bound holds because, writing \(u=\cos\theta\cos\theta^{\prime}+\sin\theta\cos\phi_{1}\sin\theta^{\prime}\) and \(v=-\sin\theta\cos\theta^{\prime}+\cos\theta\cos\phi_{1}\sin\theta^{\prime}\), one has \(u^{2}+v^{2}=\cos^{2}\theta^{\prime}+\cos^{2}\phi_{1}\sin^{2}\theta^{\prime}<1\) whenever \(\theta^{\prime},\phi_{1}\notin\{0,\pi\}\).
Thus, \(\beta_{1,2}\pm\beta_{r,r+1}\) is piecewise differentiable in \(\theta\) with a nonzero derivative. The limit (92) now follows from (94) and (the proof of) the Riemann-Lebesgue lemma.
Note that (92) is not true for \(r=2\) where the second \(\cos(\cdots)\) factor in (94) is a constant, which further implies that the integration (92) may tend to some constant other than \(0\). Thus we need the assumption \(r\geq 3\).
## 4. Proof of Theorem 1
In this section we prove Theorem 1 regarding the asymptotic expansion of the mean \(\mathbb{E}(L_{n}f)\). By (1) and (3), we have
\[\mathbb{E}(L_{n}f)=\int_{(S^{d})^{k}}f(x_{1},\ldots,x_{k})\det\Big{(}K_{n}(x_ {i},x_{j})_{1\leq i,j\leq k}\Big{)}dx_{1}\cdots dx_{k} \tag{96}\]
We can expand the determinant as
\[\det\Big{(}K_{n}(x_{i},x_{j})_{1\leq i,j\leq k}\Big{)}= \prod_{i=1}^{k}K_{n}(x_{i},x_{i})-\sum_{1\leq i<j\leq k}K_{n}^{2 }(x_{i},x_{j})\prod_{\ell\neq i,j}K_{n}(x_{\ell},x_{\ell})\] \[+\text{remainder term},\]
where the remainder term (denoted by \(I_{9}\)) is the sum of \(\operatorname{sgn}(\sigma)\Pi_{i=1}^{k}K_{n}(x_{i},x_{\sigma(i)})\) over all \(\sigma\in\operatorname{Sym}(k)\) that are neither the identity nor a transposition (a permutation which exchanges two elements and keeps all others fixed). Using the cycle decomposition of permutations, (44) and (50), we have the upper bound
\[\begin{split}|I_{9}|\leq& C\left(\frac{k_{n}}{s_{d}} \right)^{k}\left(\sum_{\sigma=(i_{1}j_{1})(i_{2}j_{2})}P_{n}^{2}(x_{i_{1}},x_{j _{1}})P_{n}^{2}(x_{i_{2}},x_{j_{2}})\right.\\ &\left.+\sum_{3\leq r\leq k}\sum_{\sigma=(i_{1}\cdots i_{r})} \left|P_{n}(x_{i_{1}},x_{i_{2}})\cdots P_{n}(x_{i_{r-1}},x_{i_{r}})P_{n}(x_{i_ {r}},x_{i_{1}})\right|\right),\end{split} \tag{97}\]
where \(C\) is some constant depending on \(k\).
Combining (52), the estimate \(k_{n}=\Theta(n^{d-1})\), the boundedness of \(f\) and Lemma 3, we have the upper bound
\[\int_{(S^{d})^{k}}\left|f(x_{1},\ldots,x_{k})\right|\left|I_{9}\right|dx_{1} \cdots dx_{k}\leq Cn^{(d-1)k}(n^{-2(d-1)}+n^{-3(d-1)/2}), \tag{98}\]
which gives the error term in (12). We also have
\[\begin{split}&\int_{(S^{d})^{k}}f(x_{1},\ldots,x_{k})\\ &\times\Big{(}\prod_{i=1}^{k}K_{n}(x_{i},x_{i})-\sum_{1\leq i<j \leq k}K_{n}^{2}(x_{i},x_{j})\prod_{\ell\neq i,j}K_{n}(x_{\ell},x_{\ell}) \Big{)}dx_{1}\cdots dx_{k}\\ =&\left(\frac{k_{n}}{s_{d}}\right)^{k}\int_{(S^{d}) ^{k}}f(x_{1},\ldots,x_{k})dx_{1}\cdots dx_{k}\\ &-\left(\frac{k_{n}}{s_{d}}\right)^{k-2}\int_{(S^{d})^{2}}\sum_{ 1\leq i<j\leq k}f_{i,j}(x,y)K_{n}^{2}(x,y)dxdy,\end{split} \tag{99}\]
where \(f_{i,j}\) is the \((i,j)\)-margin function of \(f\) as defined in (11). Applying Lemma 2 to (99), we obtain the first two terms in (12), which finishes the proof of Theorem 1.
## 5. Proof of Theorem 2
### Univariate case
Univariate linear statistics of determinantal point processes are well understood. The following result proved in [9] is particularly useful. Given a family of determinantal point processes with kernel \(K_{n}\) and measurable bounded univariate functions \(f_{n}\) with compact support (to ensure integrability), let \(L_{n}f_{n}\) and \(L_{n}\left|f_{n}\right|\) be the linear statistics of \(f_{n}\) and \(\left|f_{n}\right|\), respectively. Suppose that
\[\text{Var}(L_{n}f_{n})\to\infty,\,\sup|f_{n}|=o(\text{Var}(L_{n}f_{n})^{ \epsilon}),\,\,\mathbb{E}(L_{n}\left|f_{n}\right|)=O(\text{Var}(L_{n}f_{n})^{ \delta}) \tag{100}\]
for any \(\epsilon>0\) and some \(\delta>0\); then one has the central limit theorem,
\[\frac{L_{n}f_{n}-\mathbb{E}(L_{n}f_{n})}{\sqrt{\text{Var}(L_{n}f_{n})}} \xrightarrow{\mathrm{d}}N(0,1).\]
In our case, the integrability condition holds trivially as the test function is bounded and the underlying space \(S^{d}\) is compact. Thus, it remains to check the three conditions in (100) in order to prove Theorem 2 for the univariate case.
Note that the variance of \(L_{n}f\) is given by
\[\operatorname{Var}(L_{n}f)=\frac{1}{2}\int_{S^{d}}\int_{S^{d}}(f(x)-f(y))^{2}K_{n }^{2}(x,y)dxdy. \tag{101}\]
By Lemma 2, one immediately has the limit,
\[\begin{split}&\lim_{n\to\infty}\frac{\operatorname{Var}(L_{n}f)}{k_{n}}\\ &=\frac{2^{d-2}}{\Gamma(d)\pi}\Big{(}\frac{\Gamma(\frac{d}{2})}{s_{d}}\Big{)}^{2}\int_{S^{d}}\int_{0}^{\pi}\int_{\Omega}(f(x)-f(x+(\theta,\phi)))^{2}J(\phi)d\phi d\theta dx\\ &=\frac{2^{d-2}}{\Gamma(d)\pi}\Big{(}\frac{\Gamma(\frac{d}{2})}{s_{d}}\Big{)}^{2}\int_{S^{d}}\int_{S^{d}}\frac{(f(x)-f(y))^{2}}{\sin^{d-1}(\arccos(x\cdot y))}dxdy.\end{split} \tag{102}\]
By definition (10), for \(k=1\) the \(1\)-margin function of \(f\) is \(f\) itself, i.e., \(F(x)=f(x)\), and thus (102) gives the limit of variance in (14) for \(k=1\). The assumption that \(F(x)\) is not constant almost everywhere implies the first condition \(\operatorname{Var}(L_{n}f_{n})=\Theta(k_{n})\to\infty\). The second condition is satisfied since \(f\) is bounded. The third condition is satisfied with \(\delta=1\) by the fact that
\[\mathbb{E}(L_{n}\left|f\right|)=\int_{S^{d}}\left|f(x)\right|K_{n}(x,x)dx= \frac{k_{n}}{s_{d}}\int_{S^{d}}\left|f(x)\right|dx=\Theta(k_{n}).\]
This completes the proof of Theorem 2 for the univariate case.
### Multivariate case
Now we prove Theorem 2 for the multivariate linear statistics. There are two steps in the proof. We will first derive the growth order of the variance \(\operatorname{Var}(L_{n}f)=Q_{2}(L_{n}f)\), then we will prove \(Q_{m}(L_{n}f)=o(Q_{2}(L_{n}f)^{\frac{m}{2}})\) for all fixed \(m\geq 3\). This will imply the Gaussian limit for the multivariate linear statistics by the method of cumulants.
We first introduce some notation. Given a set \(A\) of \((\mathbf{T},\sigma)\)-graphs, we define
\[Q_{m}(L_{n}f,A):=\sum_{(\mathbf{T},\sigma)\in A}\int_{(S^{d})^{|\mathbf{T}|}}f(\mathbf{T})\text{sgn}(\sigma)\Pi_{q\in\text{Range}(\mathbf{T})}K_{n}(x_{q},x_{\sigma(q)})d\mathbf{x}, \tag{103}\]
where \(d\mathbf{x}\) is the volume element involved in the integration. With such notation, we have \(Q_{m}(L_{n}f)=Q_{m}(L_{n}f,\mathcal{C}(m))\) by (41) (recall the definition of \(\mathcal{C}(m)\) in (39)).
We first estimate \(Q_{2}(L_{n}f)\), which is the variance \(\operatorname{Var}(L_{n}f)\). We can split the expression for \(Q_{2}(L_{n}f)\) into \(3\) parts:
\[Q_{2}(L_{n}f)=Q_{2}(L_{n}f,\mathcal{C}(2))=Q_{2}(L_{n}f,A_{1})+Q_{2}(L_{n}f,A_{2})+Q_{2}(L_{n}f,A_{3}),\]
where \(A_{1},A_{2},A_{3}\) are disjoint subsets of \(\mathcal{C}(2)\) defined as follows:
\[\begin{split}& A_{1}=\{(\mathbf{T},\sigma)\in\mathcal{C}(2):| \mathbf{T}|=2k,\sigma\text{ is a transposition, i.e. }\sigma=(ij)\text{ for some }i,j\},\\ & A_{2}=\{(\mathbf{T},\sigma)\in\mathcal{C}(2):|\mathbf{T}|=2k-1, \sigma=id\},\\ & A_{3}=\mathcal{C}(2)-A_{1}-A_{2}.\end{split}\]
**Lemma 5**.: _We have the following two estimates._
1. \[Q_{2}(L_{n}f,A_{1})+Q_{2}(L_{n}f,A_{2})\] \[= \left(\frac{k_{n}}{s_{d}}\right)^{2k-2}\frac{k_{n}2^{d-2}}{\Gamma( d)\pi}\Big{(}\frac{\Gamma(\frac{d}{2})}{s_{d}}\Big{)}^{2}\int_{S^{d}}\int_{S^{d}} \frac{(F(x)-F(y))^{2}}{\sin^{d-1}(\arccos(x\cdot y))}dxdy\] (104) \[+o(n^{(d-1)(2k-1)}).\]
2. \(Q_{2}(L_{n}f,A_{3})=o\left(n^{(d-1)(2k-1)}\right)\).
The limit (14) now follows from Lemma 5. In particular, since \(F\) is not constant almost everywhere, we have the following estimate of the variance
\[Q_{2}(L_{n}f)=\Theta(n^{(d-1)(2k-1)}). \tag{105}\]
Proof of Lemma 5.: We first consider \(Q_{2}(L_{n}f,A_{1})\). If \(|\mathbf{T}|=2k\), then \(\mathbf{T}\) has to be \(((1,\ldots,k),(k+1,\ldots,2k))\). Pick any \(1\leq i\leq k\) and \(k+1\leq j\leq 2k\). Then for such \(\mathbf{T}\) and \(\sigma\) we have
\[Q_{2}(L_{n}f,(\mathbf{T},\sigma)) =-\int_{(S^{d})^{2k}}f(x_{1},\ldots,x_{k})f(x_{k+1},\ldots,x_{2k} )\Big{(}\frac{k_{n}}{s_{d}}\Big{)}^{2k-2}K_{n}^{2}(x_{i},x_{j})d\mathbf{x}\] \[=-\Big{(}\frac{k_{n}}{s_{d}}\Big{)}^{2k-2}\int_{(S^{d})^{2}}f_{i} (x_{i})f_{j-k}(x_{j})K_{n}^{2}(x_{i},x_{j})dx_{i}dx_{j},\]
where the second equality is given by the definition of the \(i\)-margin function \(f_{i}\) in (10). Summing over all \(i,j\), we see that \(Q_{2}(L_{n}f,A_{1})\) is equal to
\[-\Big{(}\frac{k_{n}}{s_{d}}\Big{)}^{2k-2}\int_{(S^{d})^{2}}\left( \sum_{i=1}^{k}f_{i}(x)\right)\left(\sum_{i=1}^{k}f_{i}(y)\right)K_{n}^{2}(x,y)dxdy \tag{106}\] \[= -\left(\frac{k_{n}}{s_{d}}\right)^{2k-2}\int_{(S^{d})^{2}}F(x)F( y)K_{n}^{2}(x,y)dxdy.\]
Now we consider \(A_{2}\). Since \(\mathbf{T}\in S(2,k)\) and \(|\mathbf{T}|=2k-1\), \(\mathbf{T}\) has to satisfy \(|T_{1}\cap T_{2}|=1\). There are \(k\) ways to choose the shared location in \(T_{1}\) and \(k\) ways to choose it in \(T_{2}\). Therefore, \(Q_{2}(L_{n}f,A_{2})\) equals
\[\sum_{i=1}^{k}\sum_{j=k+1}^{2k}\Big{(}\frac{k_{n}}{s_{d}}\Big{)} ^{2k-1}\int_{(S^{d})^{2k-1}}f(x_{1},\ldots,x_{k})f(x_{k+1},\ldots,x_{j-1},x_{i },x_{j},\ldots,x_{2k-1})d\mathbf{x}\] \[= \sum_{i=1}^{k}\sum_{j=k+1}^{2k}\Big{(}\frac{k_{n}}{s_{d}}\Big{)} ^{2k-1}\int_{S^{d}}f_{i}(x)f_{j-k}(x)dx\] \[= \Big{(}\frac{k_{n}}{s_{d}}\Big{)}^{2k-1}\int_{S^{d}}F(x)^{2}dx.\]
Adding up \(Q_{2}(L_{n}f,A_{1})\) and \(Q_{2}(L_{n}f,A_{2})\) and using (47), we have
\[Q_{2}(L_{n}f,A_{1})+Q_{2}(L_{n}f,A_{2})=\Big{(}\frac{k_{n}}{s_{d}}\Big{)}^{2k-2 }\frac{1}{2}\int_{(S^{d})^{2}}(F(x)-F(y))^{2}K_{n}^{2}(x,y)dxdy.\]
Now (104) follows by applying Lemma 2 to the function \((F(x)-F(y))^{2}\).
Now we turn to the second part of Lemma 5. We can further decompose the set \(A_{3}\) into \(3\) subsets \(A_{4},A_{5},A_{6}\) corresponding to \(|\mathbf{T}|=2k\), \(|\mathbf{T}|=2k-1\), or \(|\mathbf{T}|<2k-1\). For any \((\mathbf{T},\sigma)\in A_{4}\), \(\sigma\) is neither a transposition nor the identity (because \((\mathbf{T},\sigma)\) has to induce a connected graph), thus there are at least three different indices \(q\) such that \(\sigma(q)\neq q\). By (41) and Lemma 3 with \(r=3\), we have
\[Q_{2}(L_{n}f,A_{4})=O(n^{(2k)(d-1)}n^{-\frac{3(d-1)}{2}})=o(n^{(d-1)(2k-1)}). \tag{107}\]
For any \((\mathbf{T},\sigma)\in A_{5}\), it is not in \(A_{2}\), i.e., \(\sigma\) is not identity, and thus there are at least two \(q\)'s such that \(\sigma(q)\neq q\). Applying Lemma 3 with \(r=2\), we get
\[Q_{2}(L_{n}f,A_{5})=O(n^{(2k-1)(d-1)}n^{-(d-1)})=o(n^{(d-1)(2k-1)}). \tag{108}\]
For any \((\mathbf{T},\sigma)\in A_{6}\), we have \(|\mathbf{T}|\leq 2k-2\), and thus
\[|Q_{2}(L_{n}f,(\mathbf{T},\sigma))|\leq C\Big{(}\frac{k_{n}}{s_{d}}\Big{)}^{| \mathbf{T}|}=O(n^{(d-1)|\mathbf{T}|})=o(n^{(d-1)(2k-1)}).\]
Hence, we have
\[Q_{2}(L_{n}f,A_{6})=o(n^{(d-1)(2k-1)}). \tag{109}\]
Combining (107), (108) and (109), we have
\[Q_{2}(L_{n}f,A_{3})=Q_{2}(L_{n}f,A_{4})+Q_{2}(L_{n}f,A_{5})+Q_{2}(L_{n}f,A_{6} )=o(n^{(d-1)(2k-1)}),\]
which completes the proof of Lemma 5.
Next we will prove the estimates for the higher order cumulants.
**Lemma 6**.: _For any \(m\geq 3\), it holds that_
\[Q_{m}(L_{n}f)=o(\operatorname{Var}(L_{n}f)^{\frac{m}{2}}),\text{ i.e., }Q_{m}(L_{n}f)=o(n^{(d-1)(km-\frac{m}{2})}). \tag{110}\]
This lemma will imply the convergence of the multivariate linear statistics to the Gaussian distribution (15) by the method of cumulants. To prove Lemma 6, we first need the following lemma.
**Lemma 7**.: _Given a permutation \(\sigma\), let \(a(\sigma)\) be the number of elements \(q\) such that \(\sigma(q)\neq q\). Suppose the \((\mathbf{T},\sigma)\)-graph is connected; then we have_
\[km-|\mathbf{T}|+a(\sigma)\geq m-1+\mathbf{1}[\sigma\neq id]. \tag{111}\]
Proof.: (111) is essentially due to the simple fact in graph theory that for a connected graph the number of edges is not smaller than the number of vertices minus \(1\).
Before applying this fact, we note that, due to the construction of the \((\mathbf{T},\sigma)\)-graph, the connectivity property of the graph is not affected by removing some redundant red edges. Indeed, if a vertex has \(\ell\) solid red edges, then it lies in a clique (i.e., a complete graph) of size \(\ell+1\) formed by red solid edges only. We can change this clique to a path graph by removing \(\frac{\ell(\ell+1)}{2}-\ell=\frac{\ell(\ell-1)}{2}\) red solid edges without affecting the connectivity. After the edge removals, the number of solid red edges becomes \(km-|\mathbf{T}|\).
We now consider the new \((\mathbf{T},\sigma)\)-graph after removing some redundant red edges as described above. Note that the total number of vertices and black edges are equal to \(km\) and \((k-1)m\), respectively.
* If \(\sigma=id\), then there is no dotted red edge. The number of red solid edges (after the edge removals) is equal to \(km-|\mathbf{T}|\). Hence, by the connectivity of the graph, we have \[(k-1)m+km-|\mathbf{T}|\geq km-1,\] which proves (111).
* If \(\sigma\neq id\), then we have dotted red edges. We now perform a contraction of the graph by contracting all vertices connected by black or red solid edges into a single one. After this contraction, the number of remaining vertices is at least \[m-(km-|\mathbf{T}|).\] These remaining vertices must be connected by dotted red edges to ensure that the \((\mathbf{T},\sigma)\)-graph is connected, whose number can be upper bounded by \(a(\sigma)-1\). (We may remove one dotted red edge without affecting the connectivity, if the number of the vertices is \(a(\sigma)\).) This implies \[a(\sigma)-1\geq m-(km-|\mathbf{T}|)-1,\] which proves (111) in the case \(\sigma\neq id\).
Now we decompose \(\mathcal{C}(m)\) into the following three subsets,
\[B_{1} =\{(\mathbf{T},\sigma)\in\mathcal{C}(m):|\mathbf{T}|=km,a(\sigma) =m\}, \tag{112}\] \[B_{2} =(\mathcal{C}(m)-B_{1})\cap\{(\mathbf{T},\sigma)\in\mathcal{C}(m ):\sigma=id\},\] \[B_{3} =(\mathcal{C}(m)-B_{1})\cap\{(\mathbf{T},\sigma)\in\mathcal{C}( m):\sigma\neq id\}.\]
For any \((\mathbf{T},\sigma)\in B_{1}\), by the restrictions that \(a(\sigma)=m\geq 3\) and \((\mathbf{T},\sigma)\in\mathcal{C}(m)\), the cycle decomposition of \(\sigma\) must be the product of one cyclic permutation of length \(m\) and \((mk-m)\) cyclic permutations of length \(1\), e.g., \(\sigma=(12\cdots m)(m+1)\cdots(km)\). Applying Lemma 4 with \(r=m\geq 3\), we have
\[Q_{m}(L_{n}f,(\mathbf{T},\sigma))=o(n^{(d-1)|\mathbf{T}|}n^{-\frac{(d-1)m}{2} })=o(n^{(d-1)(km-\frac{m}{2})}),\]
which further implies that
\[Q_{m}(L_{n}f,B_{1})=o(n^{(d-1)(km-\frac{m}{2})}). \tag{113}\]
For any \((\mathbf{T},\sigma)\in B_{2}\), by \(m\geq 3\), (41), (111) and the boundedness of \(f\), we have
\[Q_{m}(L_{n}f,(\mathbf{T},\sigma))=O(n^{(d-1)|\mathbf{T}|})=O(n^{(d-1)(km-m+1)} )=o(n^{(d-1)(km-\frac{m}{2})}).\]
Therefore, we get the estimate
\[Q_{m}(L_{n}f,B_{2})=o(n^{(d-1)(km-\frac{m}{2})}). \tag{114}\]
For any \((\mathbf{T},\sigma)\in B_{3}\), by the boundedness of \(f\) and Lemma 3 with \(r=a(\sigma)\geq 2\), we have
\[Q_{m}(L_{n}f,(\mathbf{T},\sigma))=O(n^{(d-1)|\mathbf{T}|}n^{-(d-1)a(\sigma)/2}).\]
If \(|\mathbf{T}|=km\), then we must have \(a(\sigma)>m\) since \((\mathbf{T},\sigma)\in\mathcal{C}(m)\) is connected but it is not in \(B_{1}\). It follows that
\[O(n^{(d-1)|\mathbf{T}|}n^{-(d-1)a(\sigma)/2})=o(n^{(d-1)(km-\frac{m}{2})}).\]
If \(|\mathbf{T}|<km\), then by (111) with \(\sigma\neq id\), we have
\[km-|\mathbf{T}|+\frac{a(\sigma)}{2}\geq km-|\mathbf{T}|+\frac{m-(km-|\mathbf{ T}|)}{2}>\frac{m}{2},\]
which implies
\[Q_{m}(L_{n}f,(\mathbf{T},\sigma))=O(n^{(d-1)|\mathbf{T}|}n^{-(d-1)a(\sigma)/2} )=o(n^{(d-1)(km-\frac{m}{2})}).\]
Hence, we have
\[Q_{m}(L_{n}f,B_{3})=o(n^{(d-1)(km-\frac{m}{2})}). \tag{115}\]
By (113), (114) and (115), for \(m\geq 3\) we get
\[Q_{m}(L_{n}f)=Q_{m}(L_{n}f,B_{1})+Q_{m}(L_{n}f,B_{2})+Q_{m}(L_{n}f,B_{3})=o(n^{(d -1)(km-\frac{m}{2})}).\]
This together with (105) completes the proof of Lemma 6, and thus the proof of Theorem 2 for \(k\geq 2\).
## 6. Proof of Theorem 3
In this section, we will prove Theorem 3. We first claim that if \(f(x_{1},...,x_{k})\) with \(k\geq 2\) satisfies (16) and (17), then the \(i\)-margin function \(f_{i}(x)\) is necessarily constant for all \(1\leq i\leq k\). In fact, condition (16) of the permutation invariance implies that
\[f_{i}(x)=f_{1}(x)\,\text{ for all }\,i, \tag{116}\]
which equals
\[\int_{(S^{d})^{k-1}}f(x,x_{2},\dots,x_{k})dx_{2}\cdots dx_{k}\] \[= \int_{S^{d}}\left(\int_{(S^{d})^{k-2}}f(x,x_{2},\dots,x_{k})dx_{3 }\cdots dx_{k}\right)dx_{2}\] \[= \int_{S^{d}}f_{1,2}(x,x_{2})dx_{2}.\]
Here \(f_{1,2}\) is the \((1,2)\)-margin function of \(f\). Condition (17) further implies that the integral \(\int_{S^{d}}f_{1,2}(x,x_{2})dx_{2}\) is independent of \(x\), i.e., \(f_{1}(x)\) is a constant independent of \(x\), and thus \(F(x)\) is a constant. Therefore, the limit of the variance on the right hand side of (14) is degenerate. Without loss of generality, we assume that the integral of \(f\) is \(0\), i.e.,
\[\int_{(S^{d})^{k}}f(x_{1},\dots,x_{k})dx_{1}\cdots dx_{k}=0.\]
This is equivalent to \(\int_{S^{d}}f_{1}(x)dx=0\), which implies that (since \(f_{1}\) is constant)
\[f_{1}(x)=0\text{ and thus }F(x)=0\text{ for all }x\in S^{d}. \tag{117}\]
### Calculations of the cumulants
Again we will prove Theorem 3 by the method of cumulants. Recalling the concepts of break points, (ir)reducible graphs and the notation \(\mathfrak{I}(m)\) (see Definitions 1 and 2, and (43)), we first have
**Lemma 8**.: _Let \(f\) be a function of \(k\geq 2\) variables whose \(i\)-margin functions satisfy \(f_{i}=0\) for all \(i\). For any \((\mathbf{T},\sigma)\notin\mathfrak{I}(m)\), we have \(Q_{m}(L_{n}f,(\mathbf{T},\sigma))=0\)._
Proof.: By the definition of the reducible graph, we can assume that \((\mathbf{T},\sigma)\) breaks at \(q_{0}\in T_{i}\), and thus \(\sigma(q)=q\) for \(q\in\operatorname{Range}(T_{i})-q_{0}\). Thus we have
\[\begin{split}& Q_{m}(L_{n}f,(\mathbf{T},\sigma))\\ =&\int_{(S^{d})^{|\mathbf{T}|}}\operatorname{sgn}(\sigma)f(T_{1})\cdots f(T_{m})\Pi_{q\in\operatorname{Range}(\mathbf{T})}K_{n}(x_{q},x_{\sigma(q)})d\mathbf{x}\\ =&\operatorname{sgn}(\sigma)\int_{(S^{d})^{|\mathbf{T}|-k+1}}\left(\int_{(S^{d})^{k-1}}f(T_{i})\Pi_{q\in\operatorname{Range}(T_{i}),q\neq q_{0}}K_{n}(x_{q},x_{q})dx_{q}\right)\\ &\times(\Pi_{j\neq i}f(T_{j}))\left(\Pi_{q^{\prime}\in\{q_{0}\}\cup(\operatorname{Range}(\mathbf{T})-\operatorname{Range}(T_{i}))}K_{n}(x_{q^{\prime}},x_{\sigma(q^{\prime})})\right)dx_{q^{\prime}}\\ =&0.\end{split}\]
We have used the assumption \(f_{i}=0\) in the last equality.
Lemma 8 implies that
\[Q_{m}(L_{n}f)=Q_{m}(L_{n}f,\mathcal{C}(m))=Q_{m}(L_{n}f,\mathfrak{I}(m)).\]
Recalling the concept of the circle-like graph in Definition 3, we express \(\mathfrak{I}(m)\) as the union of
\[E_{1}:=\{(\mathbf{T},\sigma)\in\mathfrak{I}(m):(\mathbf{T},\sigma)\text{ is circle-like}\} \tag{118}\]
and its complement
\[E_{2}:=\mathfrak{I}(m)-E_{1}. \tag{119}\]
**Lemma 9**.: _For \(m\geq 2\), recalling that \(a(\sigma)\) is the number of elements that are not fixed by \(\sigma\), we have_
* _For any_ \((\mathbf{T},\sigma)\in\mathfrak{I}(m)\)_,_ \[km-|\mathbf{T}|+\frac{a(\sigma)}{2}\geq m.\] (120)
* _If_ \((\mathbf{T},\sigma)\in E_{2}\) _and_ \(km-|\mathbf{T}|+\frac{a(\sigma)}{2}=m\)_, then_ \(\sigma\) _is not a composition of disjoint transpositions, i.e., in the cycle decomposition of_ \(\sigma\)_, there must exist at least one cyclic permutation with length strictly greater than 2._
Proof.: We now define two functions \(M(i,j)\) and \(\Delta(i,j)\) for \(1\leq i\leq m,1\leq j\leq k\). Given a \((\mathbf{T},\sigma)\)-graph, we say an index \(q\in[km]\) has _multiplicity_\(M\) if there are exactly \(M\) different \(i\)'s such that \(q\in T_{i}\). We define \(M(i,j)\) as the multiplicity of \(T_{i,j}\). We define \(\Delta(i,j)=1\) if \(\sigma(T_{i,j})\neq T_{i,j}\) and \(0\) otherwise. Then we have
\[km-|\mathbf{T}|+\frac{a(\sigma)}{2}=\sum_{i=1}^{m}\sum_{j=1}^{k}\left(\frac{M(i,j)-1}{M(i,j)}+\frac{\Delta(i,j)}{2M(i,j)}\right). \tag{121}\]
Indeed, summing \(1\) over all \((i,j)\) gives \(km\); each distinct index \(q\in\operatorname{Range}(\mathbf{T})\) with multiplicity \(M\) appears in exactly \(M\) pairs \((i,j)\), so summing \(1/M(i,j)\) gives \(|\mathbf{T}|\) and summing \(\Delta(i,j)/(2M(i,j))\) gives \(a(\sigma)/2\).
Since we assume that \((\mathbf{T},\sigma)\in\mathfrak{I}(m)\), for each \(i\), \(T_{i}\) has at least two distinct elements, denoted by \(T_{i,i_{1}}\) and \(T_{i,i_{2}}\), such that they both have red edges. Therefore, we have
\[\max\{M(i,i_{1})-1,\Delta(i,i_{1})\}\geq 1\text{ and }\max\{M(i,i_{2})-1,\Delta(i,i_{2})\}\geq 1. \tag{122}\]
If \(M(i,i_{1})>1\), then
\[\frac{M(i,i_{1})-1}{M(i,i_{1})}\geq\frac{1}{2}.\]
If \(M(i,i_{1})=1\), then by (122), \(\Delta(i,i_{1})=1\). We then have
\[\frac{\Delta(i,i_{1})}{2M(i,i_{1})}=\frac{\Delta(i,i_{1})}{2}=\frac{1}{2}.\]
In both cases we always have
\[\frac{M(i,i_{1})-1}{M(i,i_{1})}+\frac{\Delta(i,i_{1})}{2M(i,i_{1})}\geq\frac{1 }{2}. \tag{123}\]
The same inequality holds for \(T_{i,i_{2}}\). Hence we have
\[\sum_{j=1}^{k}\left(\frac{M(i,j)-1}{M(i,j)}+\frac{\Delta(i,j)}{2M(i,j)}\right) \geq 2\times\frac{1}{2}=1. \tag{124}\]
Equality in (124) holds iff there are exactly two vertices \((i,i_{\alpha}),\alpha\in\{1,2\}\), that have red edges and each satisfies
\[M(i,i_{\alpha})=1\text{ and }\Delta(i,i_{\alpha})=1;\text{ or }M(i,i_{\alpha})=2\text{ and }\Delta(i,i_{\alpha})=0. \tag{125}\]
By summing over \(1\leq i\leq m\), we have
\[\sum_{i=1}^{m}\sum_{j=1}^{k}\left(\frac{M(i,j)-1}{M(i,j)}+\frac{\Delta(i,j)}{2 M(i,j)}\right)\geq\sum_{i=1}^{m}1=m. \tag{126}\]
(120) now follows from (121) and (126).
Now we turn to prove the second part of Lemma 9 by contradiction. Suppose that \(km-|\mathbf{T}|+a(\sigma)/2=m\) and that the cycle decomposition of \(\sigma\) consists only of disjoint transpositions; we need to show \((\mathbf{T},\sigma)\in E_{1}\). By the proof of (120) above, the condition \(km-|\mathbf{T}|+a(\sigma)/2=m\) implies that
\[\sum_{j=1}^{k}\left(\frac{M(i,j)-1}{M(i,j)}+\frac{\Delta(i,j)}{2M(i,j)}\right)=1 \tag{127}\]
for each \(1\leq i\leq m\). This further implies that for each \(1\leq i\leq m\), there are exactly two vertices \((i,i_{1})\) and \((i,i_{2})\) that can have red edges, and all the other vertices have no red edges. By (125), for all \(1\leq i\leq m\) and any \(\alpha\in\{1,2\}\), either of the following two conditions holds:
* \(M(i,i_{\alpha})=2\) and \(\Delta(i,i_{\alpha})=0\). In this case \((i,i_{\alpha})\) has exactly one solid red edge but no red dotted edge.
* \(M(i,i_{\alpha})=1\) and \(\Delta(i,i_{\alpha})=1\). In this case \((i,i_{\alpha})\) has at least one dotted red edge, but no solid red edge. Since \(\sigma\) is only composed of disjoint transpositions, \((i,i_{\alpha})\) must have exactly one dotted red edge connecting with some other vertex \((j,j_{\alpha^{\prime}})\). The index \(j\) has to be distinct from \(i\); otherwise there would be no red edge between the set \(\{(i,\cdot)\}\) and \(\{(i^{\prime},j^{\prime}):i^{\prime}\neq i,1\leq j^{\prime}\leq k\}\), which makes \((\mathbf{T},\sigma)\notin\mathcal{C}(m)\).
As a conclusion, in both cases, for each \(1\leq i\leq m\), there are exactly two vertices in \(\{(i,j):1\leq j\leq k\}\) that can have red edges, and each of them is connected to vertices in \(\{(i^{\prime},j^{\prime}):i^{\prime}\neq i,1\leq j^{\prime}\leq k\}\) with a single red edge. This shows that \((\mathbf{T},\sigma)\) is circle-like, which is a contradiction, and this proves the second part of Lemma 9.
The following lemma indicates that the summation over the subset \(E_{1}\) yields the leading order term of \(Q_{m}(L_{n}f)\).
**Lemma 10**.: _Fix any \(m\geq 2\), we have the following two estimates._
1. \[Q_{m}(L_{n}f,E_{2})=o(n^{(d-1)(k-1)m}).\] (128)
2. \[\begin{split} Q_{m}(L_{n}f,E_{1})=&\frac{1}{2}(m-1 )!(k(k-1))^{m}\left(\frac{k_{n}}{s_{d}}\right)^{mk}\left(\frac{C_{d}}{n^{d-1} }\right)^{m}\\ &\times\int_{(S^{d})^{m}}\widehat{h}(x_{1},x_{2})\widehat{h}(x_{ 2},x_{3})\cdots\widehat{h}(x_{m},x_{1})dx_{1}\cdots dx_{m}\\ &+o(n^{(d-1)(k-1)m}),\end{split}\] (129)
_where the constant \(C_{d}\) is defined in Theorem 3, and the symmetric function \(\widehat{h}(x,y)\) is defined in (20)._
By the relation \(Q_{m}(L_{n}f)=Q_{m}(L_{n}f,\mathfrak{I}(m))=Q_{m}(L_{n}f,E_{1})+Q_{m}(L_{n}f,E_{ 2})\), we have the following corollary.
**Corollary 3**.: _For any \(m\geq 2\), the \(m\)-th cumulant satisfies the asymptotic expansion_
\[\begin{split} Q_{m}(L_{n}f)=&\frac{1}{2}(m-1)!(k(k -1))^{m}\left(\frac{k_{n}}{s_{d}}\right)^{mk}\left(\frac{C_{d}}{n^{d-1}}\right) ^{m}\\ &\times\int_{(S^{d})^{m}}\widehat{h}(x_{1},x_{2})\widehat{h}(x_{2 },x_{3})\cdots\widehat{h}(x_{m},x_{1})dx_{1}\cdots dx_{m}\\ &+o(n^{(d-1)(k-1)m}).\end{split} \tag{130}\]
_In the special case \(m=2\), it yields the limit (22) for the variance of \(L_{n}f\)._
Proof of Lemma 10.: We first prove part (1). Given any \((\mathbf{T},\sigma)\in E_{2}\subset\mathfrak{I}(m)\), by (120), it holds that \(km-|\mathbf{T}|+\frac{a(\sigma)}{2}\geq m\). For the case \(km-|\mathbf{T}|+\frac{a(\sigma)}{2}>m\), by Lemma 3, we have
\[\begin{split}& Q_{m}(L_{n}f,(\mathbf{T},\sigma))\\ =&\int_{(S^{d})^{|\mathbf{T}|}}f(T_{1})\cdots f(T_{m} )\text{sgn}(\sigma)\Pi_{q\in\text{Range}(\mathbf{T})}K_{n}(x_{q},x_{\sigma(q) })d\mathbf{x}\\ =& O(n^{|\mathbf{T}|(d-1)})O(n^{-(d-1)a(\sigma)/2}) \\ =& O(n^{(d-1)(|\mathbf{T}|-a(\sigma)/2)})=o(n^{(d-1) (mk-m)}).\end{split} \tag{131}\]
For the case \(km-|\mathbf{T}|+\frac{a(\sigma)}{2}=m\), by the second part of Lemma 9, there must be a cyclic permutation whose length is at least 3 in the cycle decomposition of \(\sigma\). Hence by Lemma 3 and Lemma 4, we can first integrate all variables with indices in that cyclic permutation, and then integrate the remaining variables to get
\[\begin{split} Q_{m}(L_{n}f,(\mathbf{T},\sigma))&= O(n^{|\mathbf{T}|(d-1)})o(n^{-(d-1)a(\sigma)/2})\\ &=o(n^{(d-1)(|\mathbf{T}|-a(\sigma)/2)})=o(n^{(d-1)(mk-m)}).\end{split} \tag{132}\]
By (131) and (132), if we sum over all \((\mathbf{T},\sigma)\in E_{2}\), we prove (128).
We next prove part (2). We define
\[\begin{split} h_{n}(x,y):=&\int_{S^{d}}(f_{1,2}(x,y )-f_{1,2}(x,z))P_{n}^{2}(y,z)dz\\ =&(k_{n}/s_{d})^{-1}f_{1,2}(x,y)-\int_{S^{d}}f_{1,2}( x,z)P_{n}^{2}(y,z)dz.\end{split} \tag{133}\]
Since \(f_{1,2}(x,y)\) and \(P_{n}(x,y)\) depend only on the distance \(\mathrm{d}(x,y)\), we have
\[\begin{split} h_{n}(x,y)=&(k_{n}/s_{d})^{-1}f_{1,2} (x,y)-\int_{S^{d}}f_{1,2}(x,z)P_{n}^{2}(y,z)dz\\ =&(k_{n}/s_{d})^{-1}f_{1,2}(y,x)-\int_{S^{d}}f_{1,2} (y,z)P_{n}^{2}(x,z)dz=h_{n}(y,x).\end{split} \tag{134}\]
Hence \(h_{n}(x,y)\) is symmetric in \(x\) and \(y\). We claim that
\[\begin{split} Q_{m}(L_{n}f,E_{1})=&\frac{1}{2}(m-1)!(k (k-1))^{m}\left(\frac{k_{n}}{s_{d}}\right)^{mk}\\ &\times\int_{(S^{d})^{m}}h_{n}(x_{1},x_{2})h_{n}(x_{2},x_{3}) \cdots h_{n}(x_{m},x_{1})dx_{1}\cdots dx_{m}.\end{split} \tag{135}\]
Now we prove (135). Given \((\mathbf{T},\sigma)\in E_{1}\) which is circle-like, by Proposition 1, we can find vertices \((i,i_{1})\) and \((i,i_{2})\) for \(1\leq i\leq m\) and a cyclic permutation \(p\) of \(\{1,\ldots,m\}\) such that \((i,i_{2})\) is connected to \((p(i),p(i)_{1})\) by a red edge for all \(i\). To compute \(Q_{m}(L_{n}f,(\mathbf{T},\sigma))\), for simplicity, by condition (16) of the permutation invariance of \(f\), we assume that \(i_{1}=1\) and \(i_{2}=2\) for all \(i\), and we also assume \(p\) is the cyclic permutation \((12\cdots m)\). We now define a new kernel \(\tilde{P}_{i}(x,y)\) as follows. If \((i,2)\) and \((i+1,1)\) are connected by a solid red edge (i.e., \(T_{i,2}=T_{i+1,1}\)), we let
\[\tilde{P}_{i}(x,y)=K_{n}^{-1}(x,x)\delta_{y}(x)=(k_{n}/s_{d})^{-1}\delta_{y}(x),\]
where \(\delta_{y}(x)\) is a Dirac delta function such that for any function \(g\),
\[\int_{S^{d}}\delta_{y}(x)g(x)dx=g(y).\]
If \((i,2)\) and \((i+1,1)\) are connected by a dotted red edge (i.e., \(\sigma(T_{i,2})=T_{i+1,1}\) or \(\sigma(T_{i+1,1})=T_{i,2}\)), then we let
\[\begin{split}\tilde{P}_{i}(x,y)=-P_{n}^{2}(x,y)=-(k_{n}/s_{d})^{ -2}K_{n}^{2}(x,y).\end{split}\]
Integrating over all variables except those in the set \(\{T_{i,\alpha}:1\leq i\leq m,1\leq\alpha\leq 2\}\), we obtain
\[\begin{split} Q_{m}(L_{n}f,(\mathbf{T},\sigma))=&\left( \frac{k_{n}}{s_{d}}\right)^{mk}\int_{(S^{d})^{2m}}f_{1,2}(x_{1},y_{1})\tilde{P }_{1}(y_{1},x_{2})f_{1,2}(x_{2},y_{2})\tilde{P}_{2}(y_{2},x_{3})\times\\ &\cdots\times f_{1,2}(x_{m},y_{m})\tilde{P}_{m}(y_{m},x_{1})dx_{ 1}\cdots dx_{m}dy_{1}\cdots dy_{m}.\end{split} \tag{136}\]
If we fix the cyclic permutation \(p=(1\cdots m)\) and indices \(i_{\alpha}=1,2\), then we can get \(2^{m}\) different \((\mathbf{T},\sigma)\) in the set \(E_{1}\), because each red edge between \((i,2)\) and \((i+1,1)\) can either be a solid one, or a dotted one. If we sum over all \(2^{m}\) different \((\mathbf{T},\sigma)\) in (136) and integrate over the variables \(y_{1},\ldots,y_{m}\), then we get a total contribution of
\[\left(\frac{k_{n}}{s_{d}}\right)^{mk}\int_{(S^{d})^{m}}h_{n}(x_{1},x_{2})h_{n }(x_{2},x_{3})\times\cdots\times h_{n}(x_{m},x_{1})dx_{1}\cdots dx_{m}. \tag{137}\]
Since there are \((m-1)!\) cyclic permutations of \([m]\) and there are \((k(k-1))^{m}\) distinct combinations of the indices \(i_{\alpha},1\leq i\leq m,1\leq\alpha\leq 2\), we obtain (135). Note the factor \(1/2\) in front of (135); this is because, given a circle-like \((\mathbf{T},\sigma)\)-graph, the correspondence from \(p\) and \(\{i_{\alpha},1\leq i\leq m,\alpha=1,2\}\) to \((\mathbf{T},\sigma)\) is not 1-1, but rather 2-1. Indeed, by defining \(p^{\prime}=p^{-1}\) and \(i_{\alpha}^{\prime}=i_{3-\alpha}\), we end up at the same \((\mathbf{T},\sigma)\)-graph. As an example, the \((\mathbf{T},\sigma)\)-graph given in the left panel of Figure 2 is circle-like, and by Proposition 1 we can take the cyclic permutation as \(p=(123)\) or \(p=(132)\).
We can now complete the proof of part (2) of Lemma 10. By (79), for any fixed \(x\) and \(y\),
\[\begin{split}\lim_{n\to\infty}n^{d-1}h_{n}(x,y)&=C_{d} \int_{S^{d}}(f_{1,2}(x,y)-f_{1,2}(x,z))\sin^{-(d-1)}(\arccos(z\cdot y))dz\\ &=C_{d}\widehat{h}(x,y).\end{split} \tag{138}\]
Furthermore, by the boundedness of \(f\) and (52), there exists constants \(c\) and \(C\) such that for all \(x\) and \(y\), we have
\[\big{|}n^{d-1}h_{n}(x,y)\big{|}\leq cn^{d-1}\int_{S^{d}}P_{n}^{2}(y,z)dz\leq C. \tag{139}\]
Hence, part (2) of Lemma 10 follows from (135), (138) and the dominated convergence theorem.
### Identification of the limiting distribution
Recall from (134) that \(h_{n}\) and thus \(\widehat{h}\) are both symmetric, i.e., \(\widehat{h}(x,y)=\widehat{h}(y,x)\). This implies that there exists an orthonormal basis of \(L^{2}(S^{d})\), say \(w_{j},j\geq 1\) such that
\[\widehat{h}(x,y)=\sum_{j=1}^{\infty}z_{j}w_{j}(x)w_{j}(y)\]
for almost all \((x,y)\in S^{d}\times S^{d}\).
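Although the \(z_{j}\) are defined abstractly as the eigenvalues of the integral operator with kernel \(\widehat{h}\), they can be approximated numerically. Below is a minimal Monte Carlo Nystrom sketch; the kernel `hhat` is a toy placeholder of our own, not the \(\widehat{h}\) of the paper.

```python
# Nystrom-style approximation of the eigenvalues z_j of a bounded symmetric
# kernel on S^2; hhat is a toy placeholder standing in for \hat{h}.
import numpy as np

rng = np.random.default_rng(1)

def hhat(x, y):
    """Toy symmetric, mean-zero kernel; replace with the actual kernel."""
    return (x @ y.T) ** 2 - 1.0 / 3.0

n = 2000
pts = rng.normal(size=(n, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)    # uniform on S^2

area = 4.0 * np.pi                                   # surface area of S^2
z = np.linalg.eigvalsh((area / n) * hhat(pts, pts))  # eigenvalue estimates

# sum_j z_j^m approximates the cyclic integral of hhat(x1,x2)...hhat(xm,x1).
for m in (2, 3):
    print(m, float(np.sum(z ** m)))
```

The trace identity \(\int_{(S^{d})^{m}}\widehat{h}(x_{1},x_{2})\cdots\widehat{h}(x_{m},x_{1})dx_{1}\cdots dx_{m}=\sum_{j}z_{j}^{m}\) is what enters (130).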
We consider the following random variable
\[X_{n}:=\left(L_{n}f-\mathbb{E}(L_{n}f)\right)\left(\frac{k_{n}}{s_{d}}\right) ^{-k}\left(\frac{C_{d}k(k-1)}{n^{d-1}}\right)^{-1}.\]
By Corollary 3, for any fixed \(m\geq 2\), we have
\[\lim_{n\to\infty}Q_{m}(X_{n})=\frac{(m-1)!}{2}\sum_{j=1}^{\infty}z_{j}^{m}. \tag{140}\]
In addition, \(Q_{1}(X_{n})=\mathbb{E}(X_{n})=0\) for all \(n\).
We shall now determine the specific form of the limiting distribution of \(X_{n}\) in three steps. Let \(\chi_{i},i\geq 1\) be independent chi-squared random variables with one degree of freedom, defined on some common probability space \(\Omega_{0}\). We consider a sequence of random variables \(Y_{N}\) defined by
\[Y_{N}=\sum_{i=1}^{N}z_{i}(\chi_{i}-1)/2.\]
* We show that \(Y_{N},N\geq 1\) is a Cauchy sequence in \(L^{2}(\Omega_{0})\). Thus \(Y_{N}\) converges to some limiting random variable \(Y\) in the \(L^{2}\) norm. We further show that the convergence is also in \(L^{m}\) for any \(m\geq 1\), which implies that \(Q_{m}(Y_{N})\to Q_{m}(Y)\) for any \(m\geq 1\).
* We next find the cumulants of \(Y\) by computing \(\log\mathbb{E}\exp(\mathrm{i}tY_{N})\) and taking the limit \(N\to\infty\). It turns out that \[Q_{m}(Y)=\lim_{n\to\infty}Q_{m}(X_{n}),\,\forall m\geq 1.\]
* Finally, we prove that the distribution of \(Y\) satisfies Carleman's condition. This combined with the second step shows that \(X_{n}\) converges to \(Y\) in distribution and completes the proof of Theorem 3.
By (138) and (139), \(\widehat{h}(x,y)\) is uniformly bounded, and thus we have
\[\sum_{j=1}^{\infty}z_{j}^{2}=\int_{S^{d}}\int_{S^{d}}\widehat{h}(x,y)^{2}dxdy<\infty.\]
Thus for any \(N_{1}<N_{2}\), it holds that
\[\|Y_{N_{1}}-Y_{N_{2}}\|_{L^{2}}^{2}\leq C\sum_{j=N_{1}+1}^{N_{2}}z_{j}^{2}\to 0 \ \ \text{as}\ \ N_{1},N_{2}\to\infty,\]
which implies that \(Y_{N},N\geq 1\) is a Cauchy sequence in \(L^{2}(\Omega_{0})\). Consequently, we can find a limiting random variable \(Y\) such that \(Y_{N}\to Y\) in \(L^{2}\).
For all \(m\geq 2\), one has
\[\sum_{j=1}^{\infty}|z_{j}|^{m}\leq\Big{(}\sum_{j=1}^{\infty}z_{j}^{2}\Big{)}^{ m/2}<\infty. \tag{141}\]
By (141), for any even integer \(m\), the \(m\)-th moment of \(Y_{N}\) is bounded uniformly from above:
\[\mathbb{E}(Y_{N}^{m})\leq C\sum_{m=m_{1}+\cdots+m_{\ell}:m_{1},\ldots,m_{\ell }\geq 2}\prod_{i=1}^{\ell}\Big{(}\sum_{j=1}^{\infty}|z_{j}|^{m_{i}} \,\Big{)}<\infty,\]
where the summation is over all integer partitions of \(m\). This further implies the sequence \(\{Y_{N}^{m},N\geq 1\}\) is uniformly integrable for any fixed \(m\geq 1\) and thus we have
\[Y_{N}\xrightarrow{L^{m}}Y\ \ \text{as}\ \ N\to\infty\]
for all \(m\geq 1\). We can formally write \(Y\) as the sum \(\sum_{i=1}^{\infty}z_{i}(\chi_{i}-1)/2\).
We now turn to the second step. The cumulant generating function of \(Y_{N}\) is
\[\log\mathbb{E}\exp(\mathrm{i}tY_{N})= \sum_{i=1}^{N}\log\mathbb{E}\exp(z_{i}(\chi_{i}-1)\mathrm{i}t/2)\] \[= \sum_{i=1}^{N}\log\frac{1}{\sqrt{1-z_{i}\mathrm{i}t}}-\sum_{i=1} ^{N}\frac{z_{i}\mathrm{i}t}{2}=\sum_{i=1}^{N}\sum_{m=2}^{\infty}\frac{z_{i}^ {m}}{2m}(\mathrm{i}t)^{m}.\]
For \(m\geq 2\), the \(m\)-th cumulant of \(Y_{N}\) is
\[Q_{m}(Y_{N})=m!\sum_{j=1}^{N}\frac{z_{j}^{m}}{2m}=\frac{(m-1)!}{2}\sum_{i=1}^ {N}z_{i}^{m}. \tag{142}\]
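As a quick empirical check of (142), one can simulate \(Y_{N}\) directly; the coefficients \(z_{i}\) in the following sketch are arbitrary test values.

```python
# Monte Carlo check of (142) for Y_N = sum_i z_i (chi_i - 1) / 2.
import numpy as np

rng = np.random.default_rng(2)
z = np.array([1.0, -0.5, 0.25, 0.1])

chi = rng.chisquare(df=1, size=(10**6, z.size))
Y = 0.5 * (chi - 1.0) @ z

# (142) predicts Q_2 = (1/2) sum z_i^2 and Q_3 = (2!/2) sum z_i^3 = sum z_i^3.
print("Q_2:", Y.var(), "vs", 0.5 * np.sum(z**2))
print("Q_3:", np.mean((Y - Y.mean())**3), "vs", np.sum(z**3))
```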
Since \(Y_{N}\) converges to \(Y\) in \(L^{m}\) for all \(m\geq 1\), by (32) we have
\[Q_{m}(Y)=\lim_{N\to\infty}Q_{m}(Y_{N})=\frac{(m-1)!}{2}\sum_{i=1}^{\infty}z_{i }^{m},\,\forall\,m\geq 2, \tag{143}\]
which coincides with (140). Also, \(Q_{1}(Y)=\mathbb{E}(Y)=0\) since \(\mathbb{E}(Y_{N})=0\) for all \(N\).
To finish the proof of the convergence of \(X_{n}\) to \(Y\), we need to show that the distribution of \(Y\) is uniquely determined by the cumulant condition (143). To this end it suffices to verify Carleman's condition
\[\sum_{m=1}^{\infty}\big{(}\mathbb{E}(Y^{2m})\big{)}^{-1/(2m)}=\infty. \tag{144}\]
To establish (144), by (141) and (143), for \(m\geq 2\), we have
\[|Q_{m}(Y)|\leq m!C^{m}, \tag{145}\]
for some constant \(C>0\). Note that (145) also holds for \(m=1\) since \(Q_{1}(Y)=0\). By (31) and (145), for any even integer \(m\), we have
\[\begin{split}\mathbb{E}(Y^{m})&\leq\sum_{R=\{R_{1},\ldots,R_{\ell}\}\in\Pi(m)}\big{|}Q_{|R_{1}|}\big{|}\cdots\big{|}Q_{|R_{\ell}| }\big{|}\\ &\leq\sum_{R=\{R_{1},\ldots,R_{\ell}\}\in\Pi(m)}|R_{1}|!C^{|R_{1}| }\cdots|R_{\ell}|!C^{|R_{\ell}|}\\ =& C^{m}\sum_{R=\{R_{1},\ldots,R_{\ell}\}\in\Pi(m)}|R _{1}|!\cdots|R_{\ell}|!,\end{split} \tag{146}\]
where we used the fact that \(\sum_{i=1}^{\ell}|R_{i}|=m\). To estimate the last summation, consider an integer partition \(m=m_{1}+\cdots+m_{\ell}\) for some \(\ell\geq 1\) and \(m_{1},\ldots,m_{\ell}\geq 1\). Denote the distinct values among the \(m_{i}\)'s by \(\mu_{1},\ldots,\mu_{N^{\prime}}\) and let \(\tau_{1},\ldots,\tau_{N^{\prime}}\) be their multiplicities. Then the number of partitions \(\{R_{1},\ldots,R_{\ell}\}\) of \([m]\) such that
\[\#\{R_{i}:|R_{i}|=\mu_{j}\}=\tau_{j},\quad\forall\,1\leq j\leq N^{\prime}\]
is given by
\[\frac{m!}{m_{1}!\cdots m_{\ell}!\prod_{j=1}^{N^{\prime}}\tau_{j}!}.\]
Thus, we have
\[\begin{split}\mathbb{E}(Y^{m})\leq& C^{m}\sum_{m=m_{1}+ \cdots+m_{\ell}}\sum_{R\in\Pi(m),|R_{i}|=m_{i}}m_{1}!\cdots m_{\ell}!\\ \leq& C^{m}\sum_{m=m_{1}+\cdots+m_{\ell}}\frac{m!}{m _{1}!\cdots m_{\ell}!\prod_{i}^{N^{\prime}}\tau_{i}!}m_{1}!\cdots m_{\ell}!\\ \leq& C^{m}m!\sum_{m=m_{1}+\cdots+m_{\ell}}1.\end{split} \tag{147}\]
It is known that the total number of partitions of an integer \(m\), denoted by \(\kappa(m)\), satisfies \(\log\kappa(m)\sim\pi\sqrt{(2m)/3}\) as \(m\to\infty\) (p.70 in [1]). Consequently, for some constant \(\tilde{C}\) large enough and all \(m\geq 1\), \(\kappa(m)\leq\tilde{C}^{m}\). Therefore, we have
\[\mathbb{E}(Y^{m})\leq C^{m}m!\tilde{C}^{m}\leq(C\tilde{C})^{m}m^{m}. \tag{148}\]
Now (144) follows from (148) since
\[\sum_{i=1}^{\infty}\left(\mathbb{E}(Y^{2i})\right)^{-1/(2i)}\geq\sum_{i=1}^{ \infty}\left((C\tilde{C})^{2i}(2i)^{2i}\right)^{-1/(2i)}\geq\sum_{i=1}^{ \infty}\frac{c}{i}=\infty. \tag{149}\]
This completes the proof of (144), and thus we finish the proof of Theorem 3.
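As an aside, the Hardy-Ramanujan growth of \(\kappa(m)\) invoked in (147) and (148) is easy to check numerically; below is a minimal sketch using the classic partition-counting recursion (the two printed columns agree only up to lower-order terms, since the asymptotic also carries a polynomial prefactor).

```python
# Count integer partitions kappa(m) by dynamic programming and compare
# log(kappa(m)) with the Hardy-Ramanujan exponent pi*sqrt(2m/3).
import math

def kappa(m):
    """Number of integer partitions of m, via the coin-change recursion."""
    ways = [1] + [0] * m
    for part in range(1, m + 1):
        for total in range(part, m + 1):
            ways[total] += ways[total - part]
    return ways[m]

for m in (10, 50, 100):
    print(m, kappa(m), round(math.log(kappa(m)), 2),
          round(math.pi * math.sqrt(2 * m / 3), 2))
```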
|
2301.11459
|
Neural-Symbolic Inference for Robust Autoregressive Graph Parsing via
Compositional Uncertainty Quantification
|
Pre-trained seq2seq models excel at graph semantic parsing with rich
annotated data, but generalize worse to out-of-distribution (OOD) and long-tail
examples. In comparison, symbolic parsers under-perform on population-level
metrics, but exhibit unique strength in OOD and tail generalization. In this
work, we study compositionality-aware approach to neural-symbolic inference
informed by model confidence, performing fine-grained neural-symbolic reasoning
at subgraph level (i.e., nodes and edges) and precisely targeting subgraph
components with high uncertainty in the neural parser. As a result, the method
combines the distinct strength of the neural and symbolic approaches in
capturing different aspects of the graph prediction, leading to well-rounded
generalization performance both across domains and in the tail. We empirically
investigate the approach in the English Resource Grammar (ERG) parsing problem
on a diverse suite of standard in-domain and seven OOD corpora. Our approach
leads to 35.26% and 35.60% error reduction in aggregated Smatch score over
neural and symbolic approaches respectively, and 14% absolute accuracy gain in
key tail linguistic categories over the neural model, outperforming prior
state-of-art methods that do not account for compositionality or uncertainty.
|
Zi Lin, Jeremiah Liu, Jingbo Shang
|
2023-01-26T23:11:03Z
|
http://arxiv.org/abs/2301.11459v1
|
Neural-Symbolic Inference for Robust Autoregressive Graph Parsing via Compositional Uncertainty Quantification
###### Abstract
Pre-trained seq2seq models excel at graph semantic parsing with rich annotated data, but generalize worse to out-of-distribution (OOD) and long-tail examples. In comparison, symbolic parsers under-perform on population-level metrics, but exhibit unique strength in OOD and tail generalization. In this work, we study compositionality-aware approach to neural-symbolic inference informed by model confidence, performing fine-grained neural-symbolic reasoning at subgraph level (i.e., nodes and edges) and precisely targeting subgraph components with high uncertainty in the neural parser. As a result, the method combines the distinct strength of the neural and symbolic approaches in capturing different aspects of the graph prediction, leading to well-rounded generalization performance both across domains and in the tail. We empirically investigate the approach in the English Resource Grammar (ERG) parsing problem on a diverse suite of standard in-domain and seven OOD corpora. Our approach leads to \(35.26\%\) and \(35.60\%\) error reduction in aggregated Smatch score over neural and symbolic approaches respectively, and \(14\%\) absolute accuracy gain in key tail linguistic categories over the neural model, outperforming prior state-of-art methods that do not account for compositionality or uncertainty.
## 1 Introduction
A structured account of compositional meaning has become a longstanding goal for Natural Language Processing. To this end, a number of efforts have focused on encoding semantic relationships and attributes into graph-based meaning representations (MRs, see Appendix A for details). In particular, graph semantic parsing has been an important task in almost every Semantic Evaluation (SemEval) exercise since 2014. In recent years, we have witnessed the burgeoning of applying neural networks to semantic parsing. Pre-trained language model-based approaches have led to significant improvements across different MRs (Oepen et al., 2019, 2020). However, these models often generalize poorly to out-of-distribution (OOD) and tail examples (Cheng et al., 2019; Shaw et al., 2021; Kim, 2021; Lin et al., 2022), while grammar- or rule-based parsers work relatively robustly across different linguistic phenomena and language domains (Cao et al., 2021; Lin et al., 2022). See Section 6 for a review of related work.
In this paper, we propose a novel compositional neural-symbolic inference for graph semantic parsing, which takes advantage of both uncertainty quantification from a seq2seq parser and prior knowledge from a symbolic parser at the subgraph level (i.e., nodes and edges). We take graph semantic parsing for English Resource Grammar (ERG) as our case study. ERG is a compositional semantic representation explicitly coupled with the syntactic structure. Compared to other graph-based meaning representations like Abstract Meaning Representation (AMR), ERG has high coverage of English text and strong transferability across domains, rendering it an attractive target formalism for automated semantic parsing. Furthermore, many years of ERG research have led to well-established symbolic parsers and a rich set of carefully constructed corpora across different application domains and fine-grained linguistic phenomena, making it an ideal candidate for studying cross-domain generalization of neural-symbolic methods (Oepen et al., 2002; Crysmann and Packard, 2012).
We start with a novel investigation of the uncertainty calibration behaviour of a T5-based state-of-the-art neural ERG parser (Lin et al., 2022) on the subgraph level (Section 3), where we make some key observations: (1) the performance of the neural parser degrades when it becomes uncertain at the subgraph level, while (2) the symbolic parser still works robustly when the neural parser is uncertain at the subgraph level. This motivates us to develop a _compositional_ neural-symbolic inference process where the neural and symbolic parsers collaborate at a more fine-grained level, guided by model uncertainty, which is an aspect missing in the previous neural-symbolic and ensemble parsing literature (see Section 6).
We then propose a decision-theoretic criterion that allows for neural-symbolic inference at the subgraph level (i.e., nodes and edges) and incorporates the neural parser's fine-grained uncertainty for each graph component prediction (Section 4.1). The key to this approach is a _meta graph_ \(\mathcal{G}_{M}\) that enumerates possible candidates for each node/edge prediction, and is constructed by merging multiple beam predictions from the neural seq2seq model.
The core challenge here is how to properly quantify _compositional uncertainty_ using a seq2seq model, i.e., how to assign model probability to a node or edge prediction. For example, our interest is to express the conditional probability of a graph node \(v\) with respect to its parent, \(p(v|pa(v),x)\), rather than the likelihood of \(v\) conditioning on the previous tokens in the linearized string. As a result, this cannot be achieved by relying on the naive token-level autoregressive probabilities from the beam search. To address this issue, we introduce a simple probabilistic formalism termed _Graph Autoregressive Process_ (GAP) (Section 4.2). GAP adopts a dual representation of an autoregressive process and a probabilistic graphical model, and can serve as a powerful medium for expressing compositional uncertainty for seq2seq graph parsing.
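To make these ideas concrete, the following is a simplified sketch (ours, not the authors' implementation) of both steps: merging beam hypotheses into per-parent candidate sets, and renormalizing beam scores into edge-level conditional probabilities. The beam format, the names, and the renormalization scheme are illustrative assumptions; the GAP formalism of Section 4.2 defines these quantities more carefully.

```python
# Illustrative sketch: aggregate beam hypotheses into a meta-graph-like
# structure and score p(child | parent, x) by renormalized beam mass.
# Each beam is assumed pre-parsed into (parent, role, child) edge triples
# together with its sequence log-probability.
import math
from collections import defaultdict

def edge_probabilities(beams):
    """beams: list of (edges, logprob); edges is a set of (parent, role, child).

    Returns {parent: {(role, child): prob}}, where prob approximates the
    conditional probability of the (role, child) attachment given the parent,
    with beam scores renormalized over the returned beam set."""
    w = [math.exp(lp) for _, lp in beams]
    total = sum(w)
    edge_mass = defaultdict(lambda: defaultdict(float))
    parent_mass = defaultdict(float)
    for (edges, _), wi in zip(beams, w):
        for parent in {p for (p, _, _) in edges}:
            parent_mass[parent] += wi / total
        for (p, role, child) in edges:
            edge_mass[p][(role, child)] += wi / total
    return {p: {rc: m / parent_mass[p] for rc, m in d.items()}
            for p, d in edge_mass.items()}

beams = [({("_want_v_1", "ARG1", "_boy_n_1"),
           ("_want_v_1", "ARG2", "_believe_v_1")}, -0.2),
         ({("_want_v_1", "ARG1", "_girl_n_1"),
           ("_want_v_1", "ARG2", "_believe_v_1")}, -2.3)]
print(edge_probabilities(beams)["_want_v_1"])
```

Under this toy renormalization, candidates that appear in many high-probability beams receive most of the mass, which mirrors the intuition behind the meta graph.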
We demonstrate the effectiveness of our approach in experiments across a diverse suite of eight in-domain and OOD evaluation datasets encompassing domains including Wikipedia entries, news articles, email communications, etc. (Section 5). We achieve the best results on the overall performance across the eight domains, attaining \(35.26\%\) and \(35.60\%\) error reduction in the aggregated Smatch score over the neural and symbolic parser, respectively. Our approach also exhibits significantly stronger robustness in generalization to OOD datasets and long-tail linguistic phenomena than previous work, while maintaining the state-of-the-art performance on in-domain test. Further study also shows that the compositionality aspects of neural-symbolic inference help the model to assemble novel graph solutions that the original inference process (e.g., beam search or symbolic parse) fails to provide (Section 5.4).
In summary, our contributions are four-fold:
* We present a novel investigation of neural graph parser's uncertainty calibration performance at _subgraph level_ (Section 3). Our study confirms the seq2seq uncertainty is effective for detecting model error even out-of-distribution, establishing the first empirical basis for the utility of _compositional_ uncertainty in seq2seq graph parsing.
* We propose a practical and principled framework for neural-symbolic graph parsing that utilizes model uncertainty and exploits compositionality (Section 4.1). The method is fully compatible with modern large pre-trained seq2seq network using beam decoding, and is general-purpose and applicable to any graph semantic parsing task.
* We propose a simple probabilistic formalism (GAP) to express a seq2seq model's compositional uncertainty (Section 4.2). GAP allows us to go beyond the conventional autoregressive sequence probability and express long-range parent-child conditional probability on the graph, serving as a useful medium of compositional uncertainty quantification.
* We conduct a comprehensive study to evaluate the state-of-the-art graph parsing approaches across a diverse suite of in-domain and out-of-distribution datasets (Section 5). Our study reveals surprising weakness of previous neural-symbolic methods in OOD generalization, and confirms the proposed method significantly improves the model's OOD and tail performance.

Figure 1: The EDS representation for ERG and the corresponding linearization of the example sentence “_The boy wants the girl to believe him_”.
**Reproducibility.** Our code is available on Github: [https://github.com/google/uncertainty-baselines/tree/main/baselines/t5/data/deepbank](https://github.com/google/uncertainty-baselines/tree/main/baselines/t5/data/deepbank).
## 2 Background
### English Resource Grammar (ERG)
In this work, we take the representations from English Resource Grammar (ERG; Flickinger et al., 2014) as our target meaning representations. ERG is a broad-coverage computational grammar of English that derives underspecified logical-form representations of meaning (Oepen and Flickinger, 2019). It is rooted in the general linguistic theory of Head-driven Phrase Structure Grammar (HPSG; Pollard and Sag, 1994).
ERG can be presented in different types of annotation formalisms (Copestake et al., 2005). This work focuses on the Elementary Dependency Structure (EDS; Oepen and Lønning, 2006), which is a compact representation that can be expressed as a directed acyclic graph (DAG) and is widely adopted in neural parsing approaches (Buys and Blunsom, 2017; Chen et al., 2018). An example is shown in Figure 1(a).
### Parsing Approaches
In this section, we review the state-of-the-art symbolic and neural parsers utilized in our work, i.e., the ACE parser (Crysmann and Packard, 2012) and the T5 parser (Lin et al., 2022). Appendix B reviews other ERG parsing techniques.
**The symbolic parser: ACE.** The ACE parser (Crysmann and Packard, 2012) is one of the state-of-the-art symbolic parsers. It first decomposes sentences into ERG-consistent candidate derivation trees, and then ranks the candidates based on structural features in the nodes of the derivation trees via maximum entropy models (Oepen and Lønning, 2006; Toutanova et al., 2005). This approach fails to parse sentences for which no valid derivation is found.
**The neural parser: T5.** Lin et al. (2022) proposed a T5-based ERG parser which achieves the best known results on the in-domain DeepBank benchmark. It is the first work that successfully casts the ERG parsing problem as a pure end-to-end translation problem via compositionality-aware tokenization and a variable-free top-down graph linearization based on the PENMAN notation (Kasper, 1989). Figure 1(b) shows an example of the linearized graph string from the original EDS graph.
## 3 Motivation: Subgraph-level Uncertainty in Seq2seq Graph Parsing
We hypothesize that when the neural seq2seq model is uncertain at the subgraph level, it is more likely to make mistakes. Assuming the symbolic parser performs more robustly in these situations, we can then design a procedure to ask the symbolic parser for help when the model is uncertain. To validate this hypothesis, we conduct experiments to empirically explore the following two questions: (1) how does the model perform when it is uncertain at the subgraph level? and (2) how does the symbolic parser perform when the model is uncertain?
First, we compute model probabilities for each graph element (i.e., node and edge) prediction (see Section 4.2 for how to compute these quantities), and identify the corresponding ACE parser prediction using the graph matching algorithm from Smatch (Cai and Knight, 2013). We then evaluate the accuracies of those graph element predictions with respect to the gold labels, and compare them to those of the ACE parser.

Figure 2: Bar charts for the predictive accuracies of the T5 parser (blue) and ACE parser (orange) for all node / edge predictions across different uncertainty buckets based on the T5 model’s probabilities. The performance is evaluated on the Tanaka and Brown datasets. Each bin represents a quantile bucket of the model probability (i.e., all bins contain the same number of examples). Since the model is highly certain (\(\log P>-10^{-5}\)) at most of the subgraphs, we exclude these high-confidence predictions from the figures.
In Figure 2, we plot bar charts comparing the neural and symbolic performance in different buckets of seq2seq model uncertainty on the two largest datasets (Tanaka and Brown, see Appendix G). Results on the other datasets can be found in Appendix K. As shown in the figure, low model probability generally corresponds to low T5 performance, while the corresponding ACE parser accuracies remain relatively stable (e.g., ACE attains \(>90\%\) accuracy in the lowest-confidence buckets, where T5 accuracy is \(<50\%\)). This implies that when the model is uncertain, the accuracy of the neural model tends to be low, while the ACE parser still performs well. This has motivated us to develop a _compositional_ neural-symbolic inference procedure guided by the model's subgraph-level uncertainty, such that the T5 and ACE parsers can collaborate at a more fine-grained level via _compositional uncertainty quantification_ (Section 4).
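To make the bucketing analysis concrete, the following minimal Python sketch reproduces its structure on synthetic data: per-element log-probabilities are grouped into equal-size quantile buckets, and per-bucket accuracies are reported for both parsers. The arrays and the toy accuracy models are illustrative assumptions, not our experimental data.

```python
# A minimal sketch of the subgraph-level calibration analysis in Section 3.
import numpy as np

def accuracy_by_uncertainty_bucket(log_probs, t5_correct, ace_correct, n_buckets=8):
    """Group graph-element predictions into equal-size (quantile) buckets of
    model probability and report per-bucket accuracy for both parsers."""
    order = np.argsort(log_probs)                 # least confident first
    buckets = np.array_split(order, n_buckets)    # quantile buckets
    return [(log_probs[b].mean(),                 # bucket confidence level
             t5_correct[b].mean(),                # T5 accuracy in bucket
             ace_correct[b].mean())               # ACE accuracy in bucket
            for b in buckets]

# Toy usage with synthetic data: uncertain predictions are more often wrong.
rng = np.random.default_rng(0)
lp = rng.uniform(-3.0, 0.0, size=1000)
t5 = (rng.uniform(size=1000) < np.exp(lp)).astype(float)   # calibrated toy model
ace = (rng.uniform(size=1000) < 0.9).astype(float)         # flat symbolic accuracy
for mean_lp, t5_acc, ace_acc in accuracy_by_uncertainty_bucket(lp, t5, ace):
    print(f"log p ~ {mean_lp:+.2f}:  T5 {t5_acc:.2f}  ACE {ace_acc:.2f}")
```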
## 4 Methods
**Notation & Problem Statement.** For graph semantic parsing, the input is a natural language utterance \(x\), and the output is a directed acyclic graph (DAG) \(G=\langle\mathbf{N},\mathbf{E}\rangle\), where \(\mathbf{N}\) is the set of nodes and \(\mathbf{E}\in\mathbf{N}\times\mathbf{N}\) is the set of edges (e.g., Figure 1(a)). In the case of seq2seq parsing, \(G\) is represented as a linearized graph string \(g=s_{1}s_{2}\cdots s_{L}\) which consists of symbols \(\{s_{l}\}_{l=1}^{L}\) (e.g., Figure 1(b)). As the graph prediction is probabilistic, each graph element \(v\in\mathbf{N}\cup\mathbf{E}\) is a random variable whose values are the symbols \(s_{i}\) observed from the beam outputs, leading to marginal probabilities \(p(v=s_{i}|x)\) and conditional probabilities \(p(v=s_{i}|v^{\prime}=s_{j},x)\).
Our goal is thus to produce a principled inference procedure for graph prediction that accounts for model uncertainty on predicting graph elements \(v\in G\). In the sequel, Section 4.1 presents a decision-theoretic criterion that leverages the graphical model likelihood \(p(G|x)\) to conduct compositional neural-symbolic inference for graph prediction. To properly express the graphical model likelihood \(p(G|x)=\prod_{v\in G}p(v|pa(v),x)\) using a learned seq2seq model, Section 4.2 introduces a simple probabilistic formalism termed _Graph Autoregressive Process_ (GAP) to translate the autoregressive sequence probability from the seq2seq model into graphical model probability. Appendix E discusses some additional extensions.
### Compositional Neural-Symbolic Inference
Previously, an uncertainty-aware decision criterion was proposed for neural-symbolic inference based on the Hurwicz pessimism-optimism criterion \(R(G|x)\) (Lin et al., 2022). Specifically, the criterion is written as:
\[R(G|x)=\alpha(x)*R_{p}(G|x)+(1-\alpha(x))*R_{0}(G),\]
where \(R_{p}(G|x)=\log p(G|x)\) is the neural model log-likelihood, \(R_{0}(G)=\log p_{0}(G)\) is the symbolic prior log-likelihood, and \(\alpha(x)\) is the uncertainty-driven trade-off coefficient that balances between the optimistic MLE criterion \(R_{p}(G|x)\) and the pessimistic, prior-centered criterion \(R_{0}(G)\) centered around the symbolic prediction \(G_{0}\).
A key drawback of this approach is its lack of accounting for compositionality. This motivates us to consider synthesizing the multiple graph predictions \(\{G_{k}\}_{k=1}^{K}\) from the neural parser to form a _meta graph_ \(\mathcal{G}\)\({}^{1}\), where we can leverage the disentangled uncertainty of \(p(G|x)\) to perform fine-grained neural-symbolic inference for each graph component \(v\in G\) (i.e., nodes or edges). Specifically, we leverage the factorized graphical model likelihood \(p(G|x)=\prod_{v\in G}p(v|\operatorname{pa}(v),x)\) to decompose the overall decision criterion \(R(G|x)\) into that of individual components \(R(v|x)\):
Footnote 1: Given a group of candidate graphs \(\{G_{k}\}_{k=1}^{K}\), well-established algorithms exist to synthesize different graph predictions into a _meta_ graph \(\mathcal{G}\) (Cai and Knight, 2013; Hoang et al., 2021) (see Appendix F for a more detailed review).
\[R(v|x) =\alpha(v|x)*\log p(v|\operatorname{pa}(v),x)\] \[+(1-\alpha(v|x))*\log p_{0}(v), \tag{1}\]
and the overall criterion is written as \(R(G|x)=\sum_{v\in G}R(v|x)\). Here \(\operatorname{pa}(v)\) refers to the parents of \(v\) in \(G\), \(\alpha(v|x)=\operatorname{sigmoid}(-\frac{1}{T}H(v|x)+b)\) is the component-specific trade-off parameter driven by the model uncertainty \(H(v|x)=-\log p(v|\operatorname{pa}(v),x)\), and \((T,b)\) are scalar calibration hyperparameters that can be tuned on the dev set.
Following previous work (Lin et al., 2022), the symbolic prior \(p_{0}\) for each graph component \(v\) is defined as a Boltzmann distribution based on the graph output \(G_{0}\) from the symbolic parser, i.e., \(p_{0}(v=s)\propto\exp(I(s\in G_{0}))\), so that it is proportional to the empirical probability of whether a symbol \(s\) appears in \(G_{0}\). Notice that we have
ignored the normalizing constants since they do not impact optimization.
Algorithm 1 summarizes the full procedure. As shown, during inference, the method proceeds by starting from the root node \(v_{0}\) and selecting the optimal prediction \(\hat{v}_{0}=\arg\max_{c_{0}\in\text{Candidate}(v_{0})}R(c_{0}|x)\), where the \(c_{0}\) are different candidates for \(v_{0}\) given by the _meta graph_ \(\mathcal{G}\). The algorithm then recursively performs the same neural-symbolic inference procedure for the children of \(v_{0}\) (i.e., \(\text{ch}(v_{0})\)). The algorithm terminates when the optimal candidates for all graph variables \(v\in G\) are determined.
As a result, the algorithm is able to adaptively combine subgraph predictions across multiple beam candidates thanks to the meta graph \(\mathcal{G}\), and to appropriately weight the local neural and symbolic information thanks to the uncertainty-aware decision criterion \(R(v|x)\). Empirically, this also gives the algorithm the ability to synthesize novel graph predictions that are distinct from its base models (Section 5.4).
```
Inputs: Meta graph \(\mathcal{G}\); graphical model likelihood \(\log p(G|x)\); symbolic prior \(p_{0}\)
Output: Neural-symbolic graph prediction \(G\)
Initialize: \(v=\text{root}(\mathcal{G})\); \(G=\mathcal{G}\)
if \(G\) does not contain undecided candidates then
    return \(G\)
else
    for \(c_{v}\in\text{Candidate}(v)\) do
        Compute decision criterion \(R(c_{v}|x)\)  (Equation 1)
    Select optimal candidate \(\hat{v}=\arg\max_{c}R(c|x)\)
    Remove non-optimal candidates of \(v\) from \(G\)
    Recursively perform Algorithm 1 for all \(v^{\prime}\in\text{ch}(v)\)
```
**Algorithm 1** Compositional Neural-Symbolic Inference
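For concreteness, the following minimal Python sketch implements Equation 1 and the recursion of Algorithm 1 under simplifying assumptions: the meta graph is encoded as plain dictionaries, `cond_logp` is a hypothetical lookup for \(\log p(v=c|\operatorname{pa}(v),x)\) (computed as in Section 4.2), and the symbolic prior is the unnormalized Boltzmann prior described above. Names and signatures are illustrative, not our actual implementation.

```python
# A minimal sketch of Algorithm 1, assuming hypothetical helper structures.
import math

def alpha(log_p, T=1.0, b=0.0):
    """Uncertainty-driven trade-off alpha(v|x) = sigmoid(-H(v|x)/T + b),
    with H(v|x) = -log p(v | pa(v), x)."""
    return 1.0 / (1.0 + math.exp(-(log_p / T + b)))   # note H = -log_p

def criterion(log_p_neural, in_symbolic_graph, T=1.0, b=0.0):
    """Equation 1: R(v|x) = a * log p(v|pa(v),x) + (1 - a) * log p0(v)."""
    a = alpha(log_p_neural, T, b)
    # Unnormalized Boltzmann prior: log p0 = I(s in G0) + const; the constant
    # does not affect the arg max over candidates.
    log_p0 = 0.0 if in_symbolic_graph else -1.0
    return a * log_p_neural + (1.0 - a) * log_p0

def infer(v, candidates, children, cond_logp, symbolic_graph, decided=None):
    """Recursively pick the best candidate for v, then descend to its children."""
    decided = {} if decided is None else decided
    decided[v] = max(candidates[v],
                     key=lambda c: criterion(cond_logp(v, c, decided),
                                             c in symbolic_graph))
    for child in children.get(v, []):
        infer(child, candidates, children, cond_logp, symbolic_graph, decided)
    return decided
```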
### Compositional Uncertainty Quantification with Graph Autoregressive Process (GAP)
To properly model the uncertainty \(p(G|x)\) from a seq2seq model, we need an intermediate probabilistic representation to translate the raw token-level probability to the distribution over graph elements.
To this end, we introduce a simple probabilistic formalism termed _Graph Autoregressive Process_ (GAP), which is a probability distribution assigning seq2seq learned probability to the graph elements \(v\in G\). Specifically, as the seq2seq-predicted graph adopts both a sequence-based representation \(g=s_{1},...,s_{L}\) and a graph representation \(G=\langle\mathbf{N},\mathbf{E}\rangle\), the GAP model adopts both an autoregressive representation \(p(g|x)=\prod_{i}p(s_{i}|s_{<i},x)\) (Section 4.2.1), and also a probabilistic graphical model representation \(p(G|x)=\prod_{v\in G}p(v|\operatorname{pa}(v),x)\) (Section 4.2.2). Both representations share the same set of underlying probability measures (i.e., the graphical-model likelihood \(p(G|x)\) can be derived from the autoregressive probabilities \(p(s_{i}|s_{<i},x)\)) (Figure 3), rendering itself a useful medium for principled compositional neural-symbolic inference using seq2seq probabilities.
#### 4.2.1 Autoregressive Representation for Linearized Sequence \(g\)
Given an input sequence \(x\) and output sequence \(y=y_{1}y_{2}\cdots y_{N}\), the token-level autoregressive distribution from a seq2seq model is \(p(y|x)=\prod_{i=1}^{N}p(y_{i}|y_{<i},x)\). In the context of graph parsing, the output sequence describes a linearized graph \(g=s_{1}s_{2}\cdots s_{L}\), where each symbol \(s_{i}=\{y_{i_{1}}y_{i_{2}}\cdots y_{i_{N_{i}}}\}\) represents either a node \(n\in\mathbf{N}\) or an edge \(e\in\mathbf{E}\) of the graph and corresponds to a collection of beam-decoded tokens \(\{y_{i_{1}}y_{i_{2}}\cdots y_{i_{N_{i}}}\}\), e.g., the node __the_q in Figure 1(a) is represented by tokens {_, the _q}.
To this end, the _Graph Autoregressive Process_ (GAP) assigns probability to each linearized graph \(g=s_{1}s_{2}\cdots s_{L}\) autoregressively as \(p(g|x)=\prod_{i=1}^{L}p(s_{i}|s_{<i},x)\), where the conditional probability \(p(s_{i}|s_{<i},x)\) is computed by aggregating the token probabilities:
\[p(s_{i}|s_{<i},x)=p(\{y_{i_{1}}\cdots y_{i_{N_{i}}}\}|s_{<i},x)=\prod_{j=1}^{N_{i}}p(y_{i_{j}}|y_{i_{<j}},s_{<i},x)\]
**Marginal and Conditional Probability.** Importantly, GAP allows us to compute the marginal and (non-local) conditional probabilities for graph elements \(s_{i}\). Given the input \(x\), the marginal probability of \(s_{i}\) is computed as
\[p(s_{i}|x)=\int_{s_{<i}}p(s_{i}|s_{<i},x)p(s_{<i}|x)\text{d}s_{<i}\]
by integrating over the space of all possible sub-sequences \(s_{<i}\) prior to the symbol \(s_{i}\). Then, the (non-local) conditional probability between two graph elements \((s_{i},s_{j})\) with \(i<j\) is computed as
\[p(s_{j}|s_{i},x)=\int_{s_{i\to j},s_{<i}}p(s_{j},s_{i\to j}|s_{i},s_{<i},x)\,p(s_{i}|s_{<i},x)\,p(s_{<i}|x)\,\text{d}s_{i\to j}\,\text{d}s_{<i}\]
by integrating over the space of subsequences \(s_{i\to j}\) between \((s_{i},s_{j})\) and the subsequences \(s_{<i}\) before \(s_{i}\). Higher-order conditionals (e.g., \(p(s_{j}|(s_{i},s_{l}),x)\)) can be computed analogously. Notice that this gives us the ability to reason about long-range dependencies between non-adjacent symbols in the sequence. Furthermore, the conditional probability in the _reverse_ direction can also be computed using Bayes' rule: \(p(s_{i}|s_{j},x)=\frac{p(s_{j}|s_{i},x)p(s_{i}|x)}{p(s_{j}|x)}\).
**Efficient Estimation Using Beam Outputs.** In practice, we can estimate \(p(s_{i}|x)\) and \(p(s_{j}|s_{i},x)\) efficiently via importance sampling using the output from the beam decoding \(\{g_{k}\}_{k=1}^{K}\), where \(K\) is the beam size (Malinin and Gales, 2020). The marginal probability can be computed as
\[\hat{p}(s_{i}|x)=\sum_{k=1}^{K}\pi_{k}p(s_{i}|s_{k,<i},x) \tag{2}\]
where \(\pi_{k}=\frac{\exp(\frac{1}{t}\log p(g_{k}|x))}{\sum_{k=1}^{K}\exp(\frac{1}{t}\log p(g_{k}|x))}\) is the importance weight proportional to the beam candidate \(g_{k}\)'s log-likelihood, and \(t>0\) is a temperature parameter fixed to a small constant (e.g., \(t=0.1\); see Appendix C.1 for further discussion) (Malinin and Gales, 2020). If the symbol \(s_{i}\) does not appear in the \(k^{th}\) beam, we set \(p(s_{i}|s_{k,<i},x)=0\).
Then, for two symbols \((s_{i},s_{j})\) with \(i<j\), we can estimate the conditional probability as
\[\hat{p}(s_{j}|s_{i},x)=\sum_{k=1}^{K}\pi_{k}^{i}p(s_{j}|s_{i},s_{k,i\to j},s_{ k,<i},x) \tag{3}\]
where \(\pi_{k}^{i}=\frac{\exp(\frac{1}{t}\log p(g_{k}|x))*I(s_{i}\in g_{k})}{\sum_{k=1}^{K}\exp(\frac{1}{t}\log p(g_{k}|x))*I(s_{i}\in g_{k})}\) is the importance weight among the beam candidates that contain \(s_{i}\). Notice this is different from Equation 2, where \(\pi_{k}\) is computed over all beam candidates regardless of whether they contain \(s_{i}\).
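The following minimal Python sketch illustrates the estimators of Equations 2 and 3, assuming each beam candidate is summarized as a pair of its sequence log-likelihood and a map from graph-element symbols to their local conditional probabilities in that beam; these data structures are illustrative assumptions.

```python
# A minimal sketch of the importance-sampling estimators in Equations 2 and 3.
import math

def importance_weights(log_liks, t=0.1):
    """Softmax of beam log-likelihoods with temperature t (pi_k in Eq. 2)."""
    scaled = [l / t for l in log_liks]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]           # stabilized softmax
    z = sum(exps)
    return [e / z for e in exps]

def marginal_prob(beams, s, t=0.1):
    """Equation 2: p_hat(s|x) = sum_k pi_k * p(s | s_{k,<i}, x); beams that
    do not contain symbol s contribute probability 0."""
    pi = importance_weights([ll for ll, _ in beams], t)
    return sum(w * syms.get(s, 0.0) for w, (_, syms) in zip(pi, beams))

def conditional_prob(beams, s_j, s_i, t=0.1):
    """Equation 3: importance weights are renormalized over the beams that
    contain the conditioning symbol s_i."""
    kept = [(ll, syms) for ll, syms in beams if s_i in syms]
    if not kept:
        return 0.0
    pi = importance_weights([ll for ll, _ in kept], t)
    return sum(w * syms.get(s_j, 0.0) for w, (_, syms) in zip(pi, kept))
```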
#### 4.2.2 Probabilistic Graphical Model Representation for \(G\)
So far, we have focused on probability computation based on the graph's linearized representation \(p(g|x)=\prod_{i}p(s_{i}|s_{<i},x)\). To conduct the compositional neural-symbolic inference (Section 4.1), we also need to consider GAP's graphical model representation \(p(G|x)=\prod_{v\in G}p(v|\operatorname{pa}(v),x)\).
GAP's graphical model representation \(G\) depends on the _meta graph_ \(\mathcal{G}\) constructed from the \(K\) candidate graphs \(\{G_{k}\}_{k=1}^{K}\) (Section 4.1). Figure 3 shows an example, where \(n_{i}\) and \(e_{j}\) are the candidates for the node and edge predictions collected from the beam sequences. Compared to the sequence-based representation \(g\), \(\mathcal{G}\) provides two advantages: it (1) explicitly enumerates different candidates for each node and edge prediction (e.g., \(n_{2}\) vs. \(n_{3}\) for predicting the third element), and (2) provides an explicit account of the parent-child relationships between variables on the graph (e.g., \(e_{2}\) is a child node of \(n_{1}\) in the predicted graph, which is not reflected in the autoregressive representation). From the probabilistic learning perspective, \(\mathcal{G}\) describes the space of possible graphs (i.e., the _support_) of a graph distribution \(p(G|x):G\rightarrow[0,1]\).
To this end, GAP assigns proper graph-level probability \(p(G|x)\) to graphs \(G\) sampled from the meta graph \(\mathcal{G}\) via the graphical model likelihood:
\[p(G|x) =\prod_{v\in G}p(v|\operatorname{pa}(v),x)\] \[=\prod_{n\in\mathbf{N}}p(n|\operatorname{pa}(n),x)*\prod_{e\in\mathbf{E}}p(e|\operatorname{pa}(e),x)\]
where \(p(v|\operatorname{pa}(v),x)\) is the conditional probability of \(v\) with respect to its parents \(\operatorname{pa}(v)\) in \(G\). Given the candidate graphs \(\{G_{k}\}_{k=1}^{K}\), we can express the likelihood \(p(v|\operatorname{pa}(v),x)\) by writing down a multinomial likelihood enumerating over the different values of \(\operatorname{pa}(v)\) (Murphy, 2012). This in fact leads to a simple expression for the model likelihood as an average of the beam-sequence log-likelihoods:
\[\log p(n|\operatorname{pa}(n),x)\propto\frac{1}{K}\sum_{k=1}^{K}\log p(n|\operatorname{pa}(n)=c_{k},x) \tag{4}\]
where \(c_{k}\) is the value of \(\operatorname{pa}(n)\) in \(k^{\text{th}}\) beam sequence, and the conditional probabilities are computed using Equation (3). See Appendix D for a detailed derivation.
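A minimal sketch of Equation 4, assuming a hypothetical helper `cond_prob(n, c)` that returns the beam-estimated conditional probability \(p(n|\operatorname{pa}(n)=c,x)\) (e.g., via the estimator of Equation 3):

```python
# A minimal sketch of Equation 4; `cond_prob` is an illustrative assumption.
import math

def graphical_logp(n, parent_values_per_beam, cond_prob, eps=1e-12):
    """log p(n | pa(n), x) up to a constant: average the conditional
    log-probabilities over the parent values c_k observed in the K beams."""
    logs = [math.log(max(cond_prob(n, c_k), eps))   # guard against p = 0
            for c_k in parent_values_per_beam]
    return sum(logs) / len(logs)
```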
```
Inputs: Beam candidates with probabilities \(\{p(g_{k}|x)\}_{k=1}^{K}\); meta graph \(\mathcal{G}\)
Output: Marginal probabilities \(\{p(v|x)\}\); graphical model likelihood \(\log p(G|x)\)
for \(v\in G\) do
    Compute marginal likelihood \(p(v=s|x)\)  (Equation 2)
    Compute graphical model likelihood \(\log p(v=s|\operatorname{pa}(v),x)\)  (Equation 4)
return \(\{p(v|x)\}\), \(\log p(G|x)=\sum_{v\in G}\log p(v|\operatorname{pa}(v),x)\)
```
**Algorithm 2** Graph Autoregressive Process
In summary, for each graph element variable \(v\in G\), GAP allows us to compute the graphical-model conditional likelihood \(p(v|pa(v),x)\) via its graphical model representation, and also to compute the marginal probability \(p(v|x)\) via its autoregressive representation. The conditional likelihood is
crucial for neural-symbolic inference (Section 4.1), and the marginal probability is useful for sparsity regularization in global graph structure inference (Appendix E). Algorithm 2 summarizes the full GAP computation.
## 5 Experiments
### Experiment Setup
**Datasets.** Consistent with previous ERG work, we train the neural model on the DeepBank v1.1 annotation of the Wall Street Journal (WSJ), sections 00-21 (the same text annotated in the Penn Treebank), which corresponds to ERG version 1214.
For OOD evaluation, we select 7 diverse datasets from the Redwoods Treebank corpus: Wikipedia (Wiki), the Brown Corpus (Brown), the Eric Raymond Essay (Essay), customer emails (E-commerce), meeting/hotel scheduling (Verbmobil), Norwegian tourism (LOGON) and the Tanaka Corpus (Tanaka) (See Appendix G for more details).
**Model.** Following Lin et al. (2022), we train a T5\({}_{\text{large}}\) model using the official T5X finetuning pipeline2, and use beam search with size \(K=5\) at inference time. Further details are collected in Appendix H.
Footnote 2: [https://github.com/google-research/t5x/blob/main/t5x/train.py](https://github.com/google-research/t5x/blob/main/t5x/train.py)
**Evaluation.** We use the standard evaluation metric Smatch (Cai and Knight, 2013), which computes the maximum F1-score obtainable from an alignment between the predicted and gold graphs. We evaluate the models' average-case performance on all 8 in-domain and OOD datasets, and also conduct a fine-grained evaluation of the models' tail generalization performance across 19 important linguistic subcategories (Appendix J, Table 2).
**Baselines.** We compare with two recent state-of-the-art approaches from the neural-symbolic and ensemble graph parsing literature, respectively (see Section 6 for a review): (1) Lin et al. (2022), an uncertainty-aware neural-symbolic framework that attained state-of-the-art performance on the in-domain DeepBank test set, and (2) Hoang et al. (2021), a majority-voting-based graph ensemble method that uses a voting strategy based on beam sequences from the T5 model and predictions from the ACE parser 3. It does not exploit uncertainty.
Footnote 3: We tried several other variants for the voting candidates, e.g., the top-K predictions from the T5 parser, and the top-1 prediction + the ACE prediction. The best variant uses the top-K predictions from the T5 parser together with the ACE predictions.
### Results
The results are shown in Table 1. A detailed in-domain comparison with other previous work is in Appendix I. As shown, among the base models, the T5 and ACE parsers achieve similar overall performance, with T5 strongly outperforming on in-domain data but underperforming on the OOD data (see the last row in Table 1). Our approach achieves the best overall results.
\begin{table}
\begin{tabular}{l|c c c c c c|c} \hline \hline & **\#** & **T5** & **ACE** & **Vote** & **Collab.** & **Ours** & **ACE*** \\ \hline WSJ (in-domain) & 1,437 & 96.56 & 87.14 & 88.22 & **97.01** & 96.77 & 90.94 \\ Wiki & 1,307 & 90.12 & 80.25 & 80.55 & **98.58** & 90.04 & 90.42 \\ Brown & 2,182 & 92.05 & 91.74 & 83.66 & 93.84 & 93.11 & 93.20 \\ Essay & 591 & 92.19 & 92.64 & 83.72 & 93.57 & **93.76** & 93.52 \\ E-commerce & 1,141 & 93.15 & 92.57 & 83.78 & 95.41 & **97.37** & 93.36 \\ Verbmobil & 931 & 90.06 & 95.15 & 84.80 & 92.24 & **96.42** & 97.62 \\ LOGON & 1,895 & 87.13 & **93.58** & 80.11 & 92.88 & 93.33 & 94.17 \\ Tanaka & 2,796 & 95.24 & **98.38** & 90.13 & 90.79 & **98.14** & 98.55 \\ \hline Mean w/ in-domain & - & 92.06 & 92.02 & 85.16 & 94.01 & **94.86** & 94.60 \\ Mean w/o in-domain & - & 91.50 & 92.63 & 84.72 & 93.61 & **94.62** & 95.05 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Smatch for T5, ACE, and collaborative/compositional inference. # refers to the number of sentences in the dataset. ACE* refers to the evaluation results only for valid parse. Collab. refers to collaborative inference from Lin et al. (2022). Vote refers to voting strategy from Hoang et al. (2021). The **bold** and underlined refer to the best and the second best results.
Figure 3: Visual illustration of constructing the graphical model representation \(\mathcal{G}\) from the autoregressive representations \(\{g_{k}\}_{k=1}^{K}\). The example here represents the sentence “_The Cathedral and the Bazaar_” from the Eric Raymond Essay dataset. Note that here we have omitted the brackets in \(g\) for simplicity (see Figure 1(b)).
This corresponds to a \(\sim 35\%\) error reduction in the aggregated \(\mathtt{Smatch}\) score over the T5-based and symbolic approaches.
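As a quick sanity check of the headline numbers, the reported error reductions are consistent with computing the reduction of \((100-\mathtt{Smatch})\) relative to each base parser using the mean scores (with in-domain) from Table 1; the small sketch below verifies the arithmetic.

```python
# Error reduction of (100 - Smatch) relative to a base parser, using the
# "Mean w/ in-domain" row of Table 1 (92.06 for T5, 92.02 for ACE, 94.86 ours).
def error_reduction(base, ours):
    return 100.0 * (ours - base) / (100.0 - base)

print(f"vs T5 : {error_reduction(92.06, 94.86):.2f}%")   # ~35.26%
print(f"vs ACE: {error_reduction(92.02, 94.86):.2f}%")   # ~35.6%
```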
We now compare with the previous state-of-the-art methods. Though in-domain performance is not the focus of this work, our approach is still comparable to **Collab**, i.e., the neural-symbolic method from Lin et al. (2022). However, on the challenging out-of-domain eval sets (e.g., E-commerce and Verbmobil, whose topics and styles are significantly different from WSJ), the performance of **Collab** starts to deteriorate. In comparison, our neural-symbolic approach remains robust out-of-domain. Its performance stays competitive with, and sometimes even outperforms, the ACE parser on difficult domains, illustrating the advantage of compositionality.
We also notice that the voting-based ensemble method **Vote** (Hoang et al., 2021) performs poorly in the neural-symbolic setting, despite being based on a moderate number of beam sequences. This is likely because the majority-voting approach requires a large number of diverse predictions from distinct models. When there are only two models, the ability to quantify uncertainty becomes important.
### Fine-grained Linguistic Evaluation
ERG provides different levels of linguistic information that can benefit many NLP tasks, e.g., named entity recognition and semantic role labeling. This rich linguistic annotation provides an opportunity to evaluate model performance on meaningful population subgroups. A detailed description of those linguistic phenomena is in Appendix J.
Results are shown in Table 2. As shown, on the OOD datasets the T5 parser underperforms the ACE parser on most of the linguistic categories. Our approach outperforms both the neural model and the non-compositional neural-symbolic method, especially on long-tail categories (the gray colored rows in the table), attaining a \(>14\%\) average absolute gain compared to the base model. In some categories, our method even outperforms the ACE parser while all base models underperform, e.g., ARG3 of basic verbs on Verbmobil and ARG3 of verb-particles on E-commerce.
### Case Study: Synthesizing Novel Graphs
To test whether our method can generate optimal graph solutions that the base models fail to obtain, we further explore the percentage of novel graphs (graphs that are not identical to any of the candidate predictions of the neural or symbolic model) for each dataset, and compare the corresponding \(\mathtt{Smatch}\) scores on those novel cases. The results are shown in Table 3. We see that our method synthesizes novel graph parses that are in general of higher quality than those of the base models, thanks to the calibrated uncertainty (Section 4.2). This indicates that compositional neural-symbolic inference can synthesize evidence across neural and symbolic results and produce novel graphs that are closer to the ground truth.
\begin{table}
\begin{tabular}{l|c c c c c|c c c c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{Easy} & \multicolumn{3}{c|}{E-commerce} & \multicolumn{3}{c}{Vetrophobil} \\ Type & \# & ACE & T5 & Collab & Ours & \# & ACE & T5 & Collab & Ours & \# & ACE & T5 & Collab & Ours \\ \hline Compound & 671 & 83.76 & 73.39 & 76.75 & **80.26** & 844 & 95.50 & 76.96 & 83.22 & **94.94** & 308 & 86.36 & 67.41 & 68.13 & **87.50\({}^{\text{a}}\)** \\ \hline Nominal \(\nicefrac{{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \text{ \texttexttext{ }}}}}}}}}}}}}}}\) & 15 & 80.00 & **80.00** & 73.33 & **80.00** & 6 & 1000.00 & 72.78 & **100.00** & **1000.00** & & & & & & & & & & \\ \hline Nominal \(\nicefrac{{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{\text{ \texttext{ \texttext{ \texttexttext{ \texttexttexttext{ }}}}}}}}}}}}}}\) & 521 & 88.68 & 76.84 & 80.79 & **84.56** & 62 & 95.60 & 72.67 & 86.93 & **95.45** & 194 & 95.88 & 77.80 & 83.50 & **95.15** \\ \hline Normal & 18 & **722** & **73.89** & **73.68** & **78.58** & **78.95** & **78.95** & & & & & & & & & & & & & \\ \hline Named entity & 74 & 67.57 & **77.46** & **68.42** & 60.53 & 52 & 92.56 & 77.49 & 80.00 & **93.33** & 80 & 62.50 & 56.51 & 52.50 & **67.50\({}^{\text{a}}\)** \\ \hline Argument structure & 3,314 & 87.09 & 82.63 & **85.52** & 85.26 & 5,932 & 95.79 & 83.60 & 88.47 & **94.73** & 4,206 & 95.29 & 77.52 & 86.57 & **94.56** \\ Total verb & 1,616 & 83.66 & 81.11 & **83.78** & 82.66 & 4,504 & 95.12 & 83.48 & 87.36 & **93.90** & 2,330 & 95.19 & 82.25 & 89.36 & **94.35** \\ Basic verb & 895 & 83.35 & 82.01 & **84.71** & 83.50 & 2,910 & 94.58 & 87.20 & 91.44 & **92.77** & 1,206 & 94.36 & 89.15 & 91.48 & **94.48** \\ _ARG1_ & 694 & 89.80 & 88.26 & **86.61** & 86.24 & 9,679 & 95.64 & 97.08 & **93.71** & 1,168 & 96.75 & 95.40 & 95.27 & **96.90\({}^{\text{a}}\)** \\ _ARG2_ & 708 & 88.28 & 86.69 & **89.04** & **88.77** & 2,660 & 97.14 & 91.11 & 93.36 & **97.20** & 87 & 95.89 & 89.34 & 93.91 & **95.65** \\ \hline ARG3 & 69 & **89.36** & **75.87** & **85.87** & **80.34** & **83.29** & **80.00** & **90.01** & **81.33** & **78.00** & 62 & 93.85 & **67.56** & **87.50** & **96.885** \\ \hline Verb-particle & 721 & 84.05 & 79.99 & 65.15 & **81.41** & 1,592 & 95.61 & 76.94 & 82.31 & 95.95* & 1,12.94 & 90.69 & 74.14 & 87.07 & **94.22** \\ _ARG1_ & 620 & 87.90 & 84.39 & **86.53** & 85.58 & 1,448 & 96.27 & 80.77 & 84.73 & **96.62*** & 1,096 & 96.53 & 80.20 & 90.77 & **96.90\({}^{\text{a}}\)** \\ _ARG2_ & 498 & 86.14 & 84.96 & 86.52* & **88.77*** & 88.95 & 86.55 & 71.30 & 81.33 & 95.56 & 4.24 & 94.34 & 66.73 & 78.90 & **92.66** \\ \hline ARG3 & 62 & 79.03 & 65.15 & 65.15 & **74.24** & 208 & **03.27** & 69.05 & 83.02 & **69.23** & **24** & 83.33 & 47.17 & **88.33** & **88.33** \\ \hline Total noun & 189 & **91.53** & **82.90** & **86.01** & **86.49** & 90 & **1000.00** & **76.81** & **78.26** & **97.83** & **26** & **92.31** & 69.00 & **93.33** & **93.33** \\ \hline Total adjective & 1,336 & 90.64 & 84.56 & 87.13 & **83.39** & **11.16** & 95.77 & 85.42 & 93.07 & **97.34** & 1,833 & 95.43 & 72.54 & 82.75 & **94.81** \\ \hline Returnancy & 850 & 80.59 & 78.39 & **81.26*** & 77.01 & 1,686 & 95.73 & 75.83 & 81.59 & **84.76** & 800 & 39.25 & 60.23 & 72.77 & **89.20** \\ _passive_ & 173 & 86.71 & 83.33 & **88.89*** & 86.71 & 222 & 98.20 & 85.56 & 92.11 & **97.37** & 12 & 100.00 & 79.10 & **100.00** & **100.00** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparing the ACE, Collab. (Lin et al., 2022) and our parsers on fine-grained linguistic categories. All scores are reported in accuracy. The gray colored rows mark long-tail phenomena (\(<500\) cases in the training set). The **bold** indicates the best results among the neural approaches (T5, Collab. and Ours).
## 6 Related Work
In this section we introduce related work for neural-symbolic and ensemble learning for graph semantic parsing. For a broader context of graph semantic parsing, please refer to Appendix B.
**Neural-Symbolic Graph Semantic Parsing.** Though neural models excel at semantic parsing, they have been shown to struggle with out-of-distribution compositional generalization, while grammar- or rule-based approaches work relatively robustly. This has motivated work in neural-symbolic parsing where symbolic approaches are imported as inductive bias (Shaw et al., 2021; Kim, 2021; Cheng et al., 2019; Cole et al., 2021). For graph meaning representations, importing inductive bias into neural models has been difficult due to their much more complicated structure compared to pure syntactic rules or logical formalisms (Peng et al., 2015; Peng and Gildea, 2016). To address this, Lin et al. (2022) propose a collaborative framework with a decision criterion for beam search that incorporates the prior knowledge from a symbolic parser and accounts for model uncertainty, which achieves state-of-the-art results on the in-domain test set.
**Ensemble Learning for Graph Parsing.** Ensemble learning is a popular machine learning approach that combines predictions from multiple candidates to create a new one that is more robust and accurate than the individual predictions. Previous studies have explored various ensemble learning approaches for graph parsing (Green and Zabokrtsky, 2012; Barzdins and Gosko, 2016). Specifically, for graph semantic parsing at the subgraph level, Hoang et al. (2021) make use of checkpoints from models of different architectures, mining the largest graph that is most supported by a collection of graph predictions. They then propose a heuristic algorithm to approximate the optimal solution.
Compared to the previous ensemble work, our work differs in three ways: (1) Our decision rule is based on neural model confidence, so the decision is driven not by model consensus, but by model confidence, which indicates when the main (neural) result is untrustworthy and needs to be complemented by the symbolic result. Model consensus is effective when there exists a large number of candidate models. However, in the neural-symbolic setting, where there are only two models, the ability to quantify model uncertainty becomes important. (2) A secondary contribution of our work is to produce a parsing approach for the ERG community that not only exhibits strong average-case performance in in-domain and OOD environments, but also generalizes robustly in important categories of tail linguistic phenomena. Therefore, our investigation goes beyond average-case performance and evaluates tail generalization as well. (3) We reveal a more nuanced picture of neural models' OOD performance: a neural model's top K parses in fact often contain subgraphs that generalize well to OOD scenarios, but the vanilla MLE-based inference fails to select them (see Section 5.4 for more details).
## 7 Conclusions
We have shown how to perform accurate and robust semantic parsing across a diverse range of genres and linguistic categories for English Resource Grammar. We achieve this by taking advantage of both the symbolic parser (ACE) and the neural parser (T5) at a fine-grained subgraph level using compositional uncertainty, an aspect missing in previous neural-symbolic and ensemble parsing work. Our approach attains the best known result on the aggregated Smatch score across eight evaluation corpora from the Redwoods Treebank, attaining \(35.26\%\) and \(35.60\%\) error reduction over the neural and symbolic parser, respectively.
\begin{table}
\begin{tabular}{l|c c c c c c c c c} \hline \hline & **\%** & **Top 1** & **Top 2** & **Top 3** & **Top 4** & **Top 5** & **Collab.** & **ACE** & **Ours** \\ \hline In-domain & 31.25 & 94.95 & 93.01 & 91.91 & 89.92 & 89.58 & 95.10 & 82.80 & **98.44** \\ Wiki & 32.29 & 87.55 & 86.54 & 85.56 & 86.00 & 83.90 & 88.77 & 82.67 & **92.24** \\ Brown & 46.84 & 90.54 & 89.34 & 88.57 & 88.10 & 87.11 & 92.53 & 96.15 & **96.56** \\ Essay & 50.93 & 90.71 & 90.02 & 89.31 & 89.02 & 87.60 & 92.41 & 95.73 & **96.08** \\ E-commerce & 34.65 & 90.03 & 88.34 & 86.61 & 85.56 & 82.91 & 92.82 & **98.96** & 97.54 \\ Verbmobil & 39.96 & 85.45 & 83.06 & 81.54 & 79.30 & 78.27 & 88.42 & **97.78** & 96.70 \\ LOGON & 58.10 & 90.75 & 89.65 & 88.20 & 87.90 & 86.95 & 92.50 & 96.70 & **97.06** \\ Tanaka & 24.89 & 89.35 & 87.46 & 85.60 & 83.55 & 83.16 & 92.30 & 98.23 & **98.27** \\ \hline All & 38.76 & 90.57 & 89.18 & 88.01 & 87.24 & 86.13 & 92.29 & 93.93 & **96.28** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Smatch performance on novel graphs, where the results of our inference process are not identical to any of the candidates from the base model.
## Acknowledgement
Our work is sponsored in part by National Science Foundation Convergence Accelerator under award OIA-2040727 as well as generous gifts from Google, Adobe, and Teradata. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and should not be interpreted as necessarily representing the views, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for government purposes notwithstanding any copyright annotation hereon. We thank Du Phan, Panupong Pasupat, Jie Ren, Balaji Lakshminarayanan and Deepak Ramachandran for helpful discussion.
## Limitation
Here we discuss potential limitations of the current study:
**Problem domain.** In this work, we have selected English Resource Grammar as the target formalism. This is a deliberate choice based on the availability of (1) realistic out-of-distribution evaluation corpora, and (2) a well-established, high-quality symbolic parser. This is a common setting in industrial applications, where a practitioner is tempted to combine a large pre-trained neural model with expert-developed symbolic rules to improve performance in a new domain. Unfortunately, we are not aware of another popular meaning representation for which both resources are available. To overcome this challenge, we may consider studying collaborative inference between a standard seq2seq model and some indirect symbolic supervision, e.g., a syntactic parser or CCG parser (Steedman, 2001), which is an interesting direction for future work.
**Uncertainty estimation techniques.** The vanilla seq2seq model is known to under-estimate the true probability of the high-likelihood output sequences, wasting a considerable amount of probability mass on the space of improbable outputs (Ott et al., 2018; LeBrun et al., 2022). This systematic underestimation of the neural likelihood may lead to a conservative neural-symbolic procedure that implicitly favors the information from the symbolic prior. It may also negatively impact calibration quality, leading the model to under-detect wrong predictions. To this end, it is interesting to ask if a more advanced seq2seq uncertainty method (e.g., Monte Carlo dropout or a Gaussian process (Gal and Ghahramani, 2016; Liu et al., 2020)) can provide systematically better uncertainty quantification, and consequently improved downstream performance.
**Graphical model specification.** The GAP model presented in this work considers a classical graphical model likelihood \(p(G|x)=\prod_{v\in G}p(v|\operatorname{pa}(v),x)\), which leads to a clean factorization between graph elements \(v\) and fast probability computation. However, it also assumes a local Markov property, i.e., that \(v\) is conditionally independent of its ancestors given the parent \(\operatorname{pa}(v)\). In theory, the probability learned by a seq2seq model is capable of modeling higher-order conditionals between arbitrary elements on the graph. Therefore it is interesting to ask if a more sophisticated graphical model with higher-order dependency structure can lead to better performance in practice while maintaining reasonable computational complexity.
**Understanding different types of uncertainty.** Many different types of uncertainty occur in a machine learning system (Hüllermeier and Waegeman, 2021). This includes data uncertainty (e.g., erroneously annotated training labels, ill-formedness of the input sentence, or inherent ambiguity in the example-to-label mapping), and also model uncertainty, which occurs when the test example does not contain familiar patterns that the model learned from the training data. In this work, we quantify uncertainty using the mean log-likelihood, which broadly captures both types of uncertainty and does not make a distinction between these subtypes. As different sources of uncertainty may call for different strategies in neural-symbolic parsing, future work should look into more fine-grained uncertainty signals that can decompose these different sources of error and uncertainty, and propose adaptive strategies to handle the different scenarios.
## Ethical Consideration
This paper focused on neural-symbolic semantic parsing for the English Resource Grammar (ERG). Our architectures are built on open-source models and datasets (all available online). We do not anticipate any major ethical concerns.
|
2302.08458
|
Solid State Neuroscience: Spiking Neural Networks as Time Matter
|
We aim at building a bridge between two {\it a priori} disconnected fields:
Neuroscience and Material Science. We construct an analogy based on identifying
spikes events in time with the positions of particles of matter. We show that
one may think of the dynamical states of spiking neurons and spiking neural
networks as {\it time-matter}. Namely, a structure of spike-events in time
having analogue properties to that of ordinary matter. We can define for neural
systems notions equivalent to the equations of state, phase diagrams and their
phase transitions. For instance, the familiar Ideal Gas Law relation (P$v$ =
constant) emerges as analogue of the Ideal Integrate and Fire neuron model
relation ($I_{in}$ISI = constant). We define the neural analogue of the spatial
structure correlation function, that can characterize spiking states with
temporal long-range order, such as regular tonic spiking. We also define the
``neuro-compressibility'' response function in analogy to the lattice
compressibility. We show that similarly to the case of ordinary matter, the
anomalous behavior of the neuro-compressibility is a precursor effect that
signals the onset of changes in spiking states. We propose that the notion of
neuro-compressibility may open the way to develop novel medical tools for the
early diagnosis of diseases. It may make it possible to predict impending anomalous neural
states, such as Parkinson's tremors, epileptic seizures, electric
cardiopathies, and perhaps may even serve as a predictor of the likelihood of
regaining consciousness.
|
Marcelo J. Rozenberg
|
2023-02-16T18:04:10Z
|
http://arxiv.org/abs/2302.08458v1
|
# Solid State Neuroscience: Spiking Neural Networks as Time Matter
###### Abstract
We aim at building a bridge between two _a priori_ disconnected fields: Neuroscience and Material Science. We construct an analogy based on identifying spike events in time with the positions of particles of matter. We show that one may think of the dynamical states of spiking neurons and spiking neural networks as _time-matter_. Namely, a structure of spike-events in time having analogue properties to that of ordinary matter. We can define for neural systems notions equivalent to the equations of state, phase diagrams and their phase transitions. For instance, the familiar Ideal Gas Law relation (\(\mathrm{P}v=\mathrm{constant}\)) emerges as the analogue of the Ideal Integrate and Fire neuron model relation (\(I_{\mathrm{in}}\,\mathrm{ISI}=\mathrm{constant}\)). We define the neural analogue of the spatial structure correlation function, which can characterize spiking states with temporal long-range order, such as regular tonic spiking. We also define the "neuro-compressibility" response function in analogy to the lattice compressibility. We show that similarly to the case of ordinary matter, the anomalous behavior of the neuro-compressibility is a precursor effect that signals the onset of changes in spiking states. We propose that the notion of neuro-compressibility may open the way to develop novel medical tools for the early diagnosis of diseases. It may make it possible to predict impending anomalous neural states, such as Parkinson's tremors, epileptic seizures, electric cardiopathies, and perhaps may even serve as a predictor of the likelihood of regaining consciousness.
## I Introduction
The understanding of the mind is, arguably, the most mysterious scientific frontier. The ability of the mind to understand itself is puzzling. Nevertheless, it seems increasingly possible and within our reach. Neuroscience and Artificial Intelligence are making great progress in that regard, however following very different paths and driven by very different motivations. In the first, the focus is to address fundamental questions of biology, while in the second, it is to develop brain-inspired computational systems for practical applications in modern life. Evidently, there is also a large overlap between the two.
The basic units that constitute the physical support of the mind, namely the brain and the associated neuronal systems, are neurons. These are cells with electrical activity that interact via electric spikes, called action potentials [1]. In animals, neurons form networks of a wide range of complexity, from a few hundred units in jellyfish to a hundred billion in humans. A fundamental question to answer is _how and why_ nature has adopted this electric signaling system. Its functions are manifold: to sense and monitor the environment, to produce behavior and decision making, and finally to drive the motor actions required to ensure the survival of living beings. Neuroscience has already provided a good understanding of the electric behavior of individual neurons [2; 3]. A major milestone was the explanation by Hodgkin and Huxley of the physiological mechanism for the generation of the action potential [4]. At the other end, that of neural networks with large numbers of neurons, important contributions come from Artificial Intelligence. For instance, significant progress was made in the 80s following the pioneering work of Hopfield [5]. More recently, this area received a renewed boost of activity, enabled by the combination of new learning algorithms for Deep Convolutional Neural Networks [6] with the numerical power of modern computers [7]. However, the networks adopted in Artificial Intelligence overwhelmingly describe the neurons' activity by their firing rate and not by individual spikes. These are conceptually different: a spike is a discrete event, while the spiking rate is a continuous variable. Hence, modeling neurons in terms of the latter does not directly address the question posed above, namely, why and how Nature uses discrete spikes.
Here we propose to look at this problem under a different light, which to my knowledge has not been discussed before. Hopefully, this may bring new insights and perhaps help to develop our intuition for the challenging problem of understanding the mechanism of spiking neural networks. We shall postulate an analogy between states of matter, i.e., organized spatial patterns of particles (atoms or molecules), and neuronal states, i.e., organized patterns of spikes in time. Since we are attempting to build a bridge between disconnected disciplines, we shall keep our presentation pedagogical.
Ultimately, as with any definition, our analogy will be of value if it turns out to be useful beyond its intrinsic academic interest. With this in mind, we shall later discuss an exciting perspective that the present approach may open. Namely, to develop novel screening tools that could allow the detection of an enhanced risk of developing neural diseases that involve anomalous spiking states, such as Parkinson's, epilepsy, cardiopathies and loss of consciousness.
## II The analogy
As mentioned above, here we postulate an analogy between spatial matter states, such as solid, liquid and gas,
and the dynamical states of spiking neurons and neural networks. More specifically, an analogy between the organization of matter in space and that of spiking events in time, which we call _temporal-matter_ states.
To motivate this, in Fig.1 we show the familiar phase diagram of water as a function of pressure and temperature (\(p\),\(T\)). Next to it we show another phase diagram, that of an electronic bursting spiking neuron, which we introduced recently [8]. In the diagram, we observe various phases that correspond to qualitatively different states: tonic spiking (TS), fast spiking (FS), and two types of bursting (IB1, IB2). The phase diagram is obtained as a function of two parameters, the excitatory current \(I\) and a circuit time constant \(\tau_{s}\) (the circuit is shown in Fig.4).
To characterize the spiking states in the phase diagram, we need to consider the nature of the discrete spiking events in the time domain. For instance, tonic spiking is characterized by a sequence of spikes that occur at equally spaced time intervals. In Neuroscience, the time between two consecutive spike events is called the inter-spike interval (ISI). At the transition from the TS to the FS phase, one observes a sudden decrease of the ISI (i.e. a jump in the spiking frequency) as a function of the parameter \(\tau_{s}\).
The ISI(t) characterizes the organization of the spikes in the time domain and indicates the "time-distance" between spikes. If a sequence of spike events at times \(t_{i}\) is indicated by the function
\[s(t)=\sum_{i}\delta(t-t_{i}) \tag{1}\]
Then, it may be tempting to establish the analogy by simply identifying the time-position of spike events with the positions of particles in a matter state. The matter state is characterized by its particle density function,
\[n(x)=\sum_{i}\delta(x-x_{i}) \tag{2}\]
where \(x_{i}\) denotes the position of the \(i^{th}\) particle. For simplicity, we assume point particles and a one-dimensional space. Similarly, in Eq.1 we have assumed ideal spikes represented by \(\delta\)-functions, while in reality action potentials have a typical duration of \(\sim 1\,ms\). From the analogy, the simple tonic spiking case with equispaced spikes (i.e. constant ISI) would correspond to a perfect crystal of equally spaced particles (atoms). Thus, the familiar ISI of neuroscience would be the analogue of the familiar lattice constant \(a\) (or specific volume \(v\)) of material science.
If one applies a pressure \(P\) to matter, in general one observes a decrease of \(v\), as in the case of the Ideal Gas law \(Pv=\) constant. On the other hand, in neuroscience it is well known that the ISI can be reduced by increasing the excitatory input current. Hence, one may be tempted to extend our analogy by associating the pressure \(P\) with the input current \(I\) in neural systems. We can make this more precise by introducing the simplest theoretical model of an ideal spiking neuron, the Integrate and Fire (IF) model [2].
In a very schematic view, as shown in Fig.2, a biological neuron is composed of three main parts: the dendrites, the soma and the axon. The neuron is excited by electric signals arriving at the dendrites, which are called the synaptic currents. This input is integrated in the cell's body, leading to an increase of its electric potential with respect to its resting state. Under sufficiently intense excitation, the neuron eventually reaches a threshold potential value. At that point a dramatic event takes place: an electric spike is initiated, propagates down the axon and is eventually communicated to the dendrites of a downstream neuron. This phenomenon is called the emission of an action potential, which was first described by Hodgkin and Huxley [4]. This qualitative description can be represented by the simple (leaky) IF model [2; 3]
\[\frac{du}{dt}=-\frac{1}{\tau}u(t)+I;\ \ if\ u\geq u_{th}\ then\ spike\ and\ u=u_{rest} \tag{3}\]
where \(u(t)\) represents the potential of the soma, \(u_{th}\) is the threshold potential, \(u_{rest}\) is the resting potential value and \(I\) is the input (synaptic) current. The time constant \(\tau\) is a characteristic relaxation time of the neuron that represents the leakage of charge out of the soma. If the leakage is negligible, \(\tau\rightarrow\infty\), and the integration is perfect, so one has an ideal IF model. The electric circuit representation of the model is straightforward. The soma is represented by a capacitor \(C\) that accumulates the charge of the input current, and the leakage by a resistor \(R\) in parallel. The threshold voltage can be represented by a switch that closes, yielding the emission of the spike, which is the fast discharge of the charge accumulated in \(C\) as a delta function of current.
Figure 1: Left: Phase diagram of states of ordinary matter (water) showing the solid, liquid and gas phases. \(P\) and \(T\) determine the specific volume \(v\) of the state, which is the inverse of the density \(n\). Right: Measured phase diagram of _temporal-matter_ states of an electronic neuron model. The system is a two-compartment bursting neuron, showing tonic spiking (TS), fast spiking (FS) and two different intrinsic bursting states (IB1 and IB2). The input current \(I\) and the circuit time-constant \(\tau_{s}\) determine the inter-spike interval of the state, which corresponds to the inverse of the frequency.
The limit of zero leakage, i.e. the ideal Integrate and Fire model, is trivially solved. The potential \(u\) due to the charge integrated in \(C\) during an interval \(t\) is \(u(t)=Q/C=(I/C)t\). Thus, the spike fire time \(t_{f}\) is given by the condition \(u(t_{f})=u_{th}\), which leads to \(It_{f}=u_{th}C\), or \(I\,\mathrm{ISI}=\mathrm{constant}\).
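The ideal IF relation above is easy to verify numerically. The following minimal Python sketch integrates Eq. 3 with a simple Euler scheme, assuming unit capacitance (so the constant is \(u_{th}\)); all parameter values are illustrative assumptions rather than fits to the circuits discussed here.

```python
# A minimal numerical check of the ideal IF relation I * ISI = constant.
import numpy as np

def isi(I, tau=np.inf, u_th=1.0, u_rest=0.0, dt=1e-4, t_max=20.0):
    """Inter-spike interval of the (leaky) IF neuron of Eq. 3 under a
    constant input current I; tau = inf recovers the ideal IF model."""
    u, last_spike, intervals = u_rest, 0.0, []
    for step in range(int(t_max / dt)):
        u += dt * (-u / tau + I)           # Euler step of du/dt = -u/tau + I
        if u >= u_th:                       # threshold crossing: spike and reset
            t = (step + 1) * dt
            intervals.append(t - last_spike)
            last_spike, u = t, u_rest
    return np.mean(intervals) if intervals else np.nan

# Ideal IF: I * ISI stays constant (= u_th), the analogue of Pv = constant.
for I in (0.5, 1.0, 2.0, 4.0):
    print(f"I = {I:3.1f}   I * ISI = {I * isi(I):.3f}")
# A finite leak (tau = 1) bends the equation of state away from the ideal law.
print("leaky, I = 2.0:", round(2.0 * isi(2.0, tau=1.0), 3))
```

In the ideal limit the product \(I\cdot\)ISI stays pinned at \(u_{th}\), while a finite leak bends the equation of state away from the ideal law, much as a real gas departs from \(Pv=\) constant.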
Hence, we may extend our analogy by noting that the equation of state of the Ideal Gas has the same form as that of an ideal IF neuron, namely,
\[Pv=\mathrm{constant}\longleftrightarrow I\,\mathrm{ISI}=\mathrm{constant} \tag{4}\]
where
\[P\longleftrightarrow I;\quad v\longleftrightarrow\mathrm{ISI} \tag{5}\]
Just as the equation of state of a real gas departs from the ideal case, the equations of state of biological neurons and neuron models depart from the ideal IF "neuronal equation of state" above. Interestingly, the notion of a neuronal equation of state should not appear so strange. Indeed, it is nothing else than the familiar concept in neuroscience of the neuron's _activation function_, namely, the firing rate as a function of the excitatory input current, \(f=f(I)\). For instance, some popular models are: the rectified linear unit, or ReLU, where \(f(I)=max[0,I]\); the sigmoid activation \(f(I)=1/(1+e^{-I})\); etc. These are examples of _mathematical_ neuron models; however, we may also include here _physical_ neuron models, namely, models that are defined by an electronic circuit. In physical neuron models the equation of state \(f(I)\) can be measured, as in real gases. We should mention that while the relation \(Pv=\mathrm{constant}\) is familiar from the Ideal Gas law, it is in general valid for any liquid or solid in the linear regime.
Recently, we introduced a _minimal_ model of a physical spiking neuron, which is achieved by exploiting the memristive properties of an "old" conventional electronic component, the thyristor. The model is minimal because it provides a physical realization of the basic _leaky-integrate-and-fire_ (LIF) neuron model by associating exactly one component with each of the three functions: a capacitor to integrate, a resistor to leak and the thyristor to fire. Qualitatively, the thyristor acts as the switch in the circuit of Fig.2.
In Fig.3 we show the circuit that defines the physical neuron model just described, where we identify the role of each of the three components with the functions of the LIF. We call this artificial neuron model the Memristive Spiking Neuron (MSN), which is implicitly defined by its electronic circuit. In the right panel of the figure we show the experimental neuron equation of state, which is nothing other than the activation function, as noted above. We can observe that near the excitation threshold the equation of state is well represented by the functional form of the activation function of the LIF mathematical model (red fitting line in Fig.3). At intermediate input currents, the behavior approaches that of the ideal IF neuron, whose equation of state is \(f\propto I\) (see Eq.4), since \(f=1/\mathrm{ISI}\).
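As a simple illustration of these two regimes, the following sketch evaluates the LIF functional form quoted in Fig.3(b), \(f\propto-1/\log(1-I_{c}/I)\), against the ideal IF line \(f\propto I\); the constants are illustrative assumptions. Note that for \(I\gg I_{c}\) the LIF form approaches the ideal linear behavior, since \(\log(1-x)\approx-x\) for small \(x\).

```python
# A minimal sketch of the LIF activation function of Fig. 3(b); I_c and tau
# are illustrative constants, not fitted circuit parameters.
import numpy as np

def f_lif(I, I_c=1.0, tau=1.0):
    """LIF firing rate -1/(tau*log(1 - I_c/I)) for I > I_c; 0 below threshold."""
    I = np.asarray(I, dtype=float)
    rate = np.zeros_like(I)
    mask = I > I_c
    rate[mask] = -1.0 / (tau * np.log(1.0 - I_c / I[mask]))
    return rate

I = np.linspace(0.5, 20.0, 6)
print("I      :", np.round(I, 2))
print("f_LIF  :", np.round(f_lif(I), 3))   # steep onset near the threshold
print("f ideal:", np.round(I, 3))          # ideal IF: f proportional to I
```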
The same methodology allows us to consider more complex neuron models, which are also defined by their respective circuit implementations. For instance, we may consider the case of bursting neurons. From theoretical neuroscience we know that a requirement to obtain bursting behavior is to add a second dynamical variable, besides the potential of the cell body \(u(t)\) in Eq.3, represented by the capacitor in the circuits of Fig.2 and 3. Thus, we add a second \(RC\) pair to our basic MSN circuit. A simple option is to add a capacitor \(C_{L}\) in parallel to the (small) output resistor \(R_{L}\), introducing a new time constant \(\tau_{L}=R_{L}C_{L}\). The resulting circuit is shown in Fig.4a. In panel (b) we show the dynamical behavior that it now produces. We observe four qualitatively different spiking types: simple tonic spiking, fast spiking, and two bursting modes.
Figure 2: (a) Schematic biological neuron. (b) Electric circuit of the IF model (for the case \(u_{rest}=0\)). The membrane of the cell’s body (soma) is represented by the capacitor \(C_{m}\), which accumulates the charge of the input current. If the resistor \(R\rightarrow\infty\), the current integration is perfect; for a finite value the integration is “leaky” and the model is called LIF, for leaky-integrate-and-fire.
Figure 3: (a) Electric circuit that defines the physical neuron model based on the memristive properties of the thyristor. The function of the thyristor is that of a voltage-controlled switch, as shown in Fig.2 above. The small “load” resistor \(R_{L}\) is used to transform the output spike current into a voltage action-potential signal for measurement convenience. (b) The “neuron equation of state” or activation function \(f(I)\). Near the threshold, the equation of state closely follows the functional form of a LIF model (red line), \(-1/[\log(1-I_{c}/I)]\), where \(I_{c}\) is the minimal activation current. The blue line corresponds to the ideal IF behavior \(I\propto 1/\mathrm{ISI}=f\).
These four spiking types are realized in the respective regions of the phase diagram of Fig.1 presented before.
As done before for the basic MSN, we may also obtain the equation of state of the Memristor Bursting Neuron (MBN) model. In the present case, we can consider that the time constant \(\tau_{L}\) plays the role of a third parameter, similar to the temperature in the case of matter systems. Hence, in Fig.5 we show the curve \(f_{\tau_{L}}(I)\) measured at a fixed \(\tau_{L}\), indicated by the vertical purple line that crosses three phases. We observe jumps in the frequency as the current drives the system from one phase to the other. This is reminiscent of the changes in density when pressure drives phase transitions at a fixed \(T\) in the phase diagram of water in Fig.1.
It is interesting to mention that a biologically realistic theoretical model of bursting neurons introduced by Pinsky and Rinzel (PR) shows qualitatively similar behavior [9]. The PR model is an example of a two-compartment model, where both the soma and the dendrites are described. We may notice that in the MBN model the \(R_{L}C_{L}\) block (see Fig.4) can be considered as a second compartment, which is connected to the output of the first. In the right panel of Fig.5, we reproduce the activation function (i.e. the neuronal equation of state) of the PR model. It is interesting to observe that in the simpler limits of only one compartment (soma alone and dendrite alone) the behavior of the PR model is qualitatively the same as that of our basic MSN, which is also a single-compartment model (see the bottom curve of Fig.5a). More importantly, for the relevant case of two compartments (i.e. finite \(g_{c}\)) we observe a sudden change in the firing rate as a function of the excitatory input current, also in qualitative agreement with the MBN. In fact, both PR and MBN traverse the same sequence as the excitatory current is increased: initially quiescent below a critical current, then bursting, and finally a jump in firing rate to the fast spiking mode. Hence, the phase transitions are abrupt, through a steep or discontinuous change in the activation function. As we shall discuss in the next section, this feature may have interesting consequences. Moreover, we shall show that the \(f(I)\) anomalies may be considered as counterparts of certain phenomena occurring in phase transitions in matter systems.
## III Correlation and response functions
Correlation and response functions are useful concepts in material science, where they serve to characterize different states of matter. For instance, regularity in the arrangement of atomic positions is revealed by Bragg peaks in x-ray spectra, which are maxima of the structure factor. In real space, the regularity is revealed by the pair correlation function
\[g(x)=\frac{\int n(x+x^{\prime})n(x^{\prime})dx^{\prime}}{\int n^{2}(x^{\prime}) dx^{\prime}} \tag{6}\]
where \(n(x)\) indicates the particle density (of electrons, atoms, molecules, etc.) at position \(x\), and where we consider one dimension for simplicity. In the case of crystalline order, \(g(x)\) shows structure with peaks. For a simple arrangement of particles along one dimension with a lattice constant \(a\), the peaks will be at \(a\), \(2a\), \(3a\),... In contrast, for a disordered state, such as a gas or a liquid, \(g(x)\) is mostly featureless. The study of \(g(x)\) is routinely done in condensed matter physics for the study of phase transitions (see, for example, [10]).
In our analogy, spiking systems are thought of as temporal-matter states, so it is natural to explore the behavior of the correlation-function analogue to \(g(x)\). Since
Figure 5: (a) Activation function \(f(I)\) in semilog scale. The top curve corresponds to the MBN model along the right purple line at \(\tau_{L}\)=0.3ms indicated in the phase diagram in the inset. Following the definition of Pinsky and Rinzel, the frequency is defined as the inverse of the period between spike trains (bursts). The bottom \(f(I)\) curve is that of the MSN model discussed before, which corresponds to the limit \(\tau_{L}\to 0\) of the bursting neuron model (indicated with the left purple line in the inset). (b) Activation function of the Pinsky-Rinzel model reproduced from [9].
Figure 4: (a) Electric circuit that defines the physical model of a bursting neuron based on the MSN model and adding a second time constant \(\tau_{L}=R_{L}C_{L}\). (b) The various dynamical behaviors produced by the circuit. From top to bottom: tonic spiking (TS), fast spiking (FS) and two bursting types (IB1, IB2), which correspond to the four phases of the phase diagram shown in Fig.1.
the positions of particles correspond to the positions of spikes, by analogy we can define the neural correlation function \(g_{n}(t)\) as
\[g_{n}(t)=\frac{\int s(t+t^{\prime})s(t^{\prime})dt^{\prime}}{\int s^{2}(t^{\prime })dt^{\prime}} \tag{7}\]
where \(s(t)\) indicates a given spike trace. The function \(g_{n}(t)\) can characterize different spiking states of a neuron. In Fig.6 we provide a concrete example, which is realized in the basic MSN model described before (see Fig.3).
We can observe the two qualitatively different tonic spiking behaviors in the two left panels of the figure. They correspond to two constant current inputs (\(45.3\mu\)A and \(36.4\mu\)A). In the first case, at the higher input current (top panel), the spiking is perfectly regular. In contrast, for a smaller current close to the threshold, the trace changes dramatically: the intervals between spikes become very irregular. By the analogy between spikes and particles, we can think of the first case as that of a solid and of the second as the melting of the solid state. This qualitative description is made precise by the correlation function \(g_{n}(t)\), shown in the right-side panels of Fig.6.
The top panel shows a succession of delta functions equally spaced in time at multiples of the constant inter-spike interval, \(t_{k}\)=\(k\)ISI. This indicates long-range order in time: given the presence of a spike at time \(t=0\), there is a high probability (\(\approx\)1) of finding another spike event at times \(t_{k}\) (\(k\)=1, 2, 3,...). In contrast, the \(g_{n}(t)\) shown in the bottom panel is featureless, with a small, approximately constant value. This indicates a total lack of order, as the presence of a spike at time \(t=0\) does not allow one to predict subsequent spike events. The emission of spikes is as random as the positions of particles in a liquid or a gas.
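This qualitative contrast is easy to reproduce numerically. The sketch below (our illustration; synthetic spike trains, not the measured MSN traces) builds a perfectly periodic “solid” train and a Poisson “melted” train of the same mean rate, and estimates \(g_{n}(t)\) of Eq.(7) by direct autocorrelation of the binned trains; bin width and durations are arbitrary choices.

```python
import numpy as np

def neural_correlation(spike_times, t_max, dt=1e-3):
    """Estimate g_n(t) of Eq. (7): normalized autocorrelation of the binned spike train."""
    n_bins = int(t_max / dt)
    idx = (np.asarray(spike_times) / dt).astype(int)
    s = np.zeros(n_bins)
    s[idx[idx < n_bins]] = 1.0                 # binary spike indicator s(t)
    norm = np.sum(s * s)                       # denominator: integral of s^2
    lags = np.arange(1, n_bins // 2)
    g = np.array([np.sum(s[k:] * s[:-k]) for k in lags]) / norm
    return lags * dt, g

# "Solid" state: perfectly periodic train; "melted" state: Poisson train, same mean rate.
rng = np.random.default_rng(0)
solid = np.arange(0.0, 10.0, 0.05)
melted = np.cumsum(rng.exponential(0.05, size=200))

for name, train in [("solid", solid), ("melted", melted)]:
    t, g = neural_correlation(train, t_max=10.0)
    print(f"{name:6s}: max g_n = {g.max():.2f} at t = {t[np.argmax(g)]:.3f}")
```

The “solid” train yields sharp peaks of height \(\approx\)1 at multiples of the ISI, while the Poisson train yields only a small, featureless background, mirroring the two panels of Fig.6.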
The peaks of \(g_{n}(t)\) are very narrow, delta-function-like, because the spikes are very short compared with the duration of the ISI. In a solid, the atoms have a size that is smaller than, but of the same order as, the lattice spacing, so instead of narrow deltas one observes broad peaks in \(g(x)\)[10].
One may understand quite intuitively the physical origin of these dramatic changes in the time structure. The key point is to realize that the "melted" state occurs in a regime where the activation function \(f(I)\) is very steep, at the onset of neuronal excitability, i.e. near the threshold. Therefore, small variations of the input current translate into significant variations of the ISI. This observation motivates the following important insight.
This enhanced sensitivity to current fluctuations is due to the large slope \(df/dI\) of the activation function. What, then, is this feature related to, if one follows the analogy back to matter systems? We recall that the ISI plays the role of the lattice spacing, or specific volume (see Eqs.4 and 5), so \(f\)=1/ISI corresponds to the particle density \(n(x)=1/v(x)\). On the other hand, since the input current \(I\) is like the pressure, it follows that the slope \(df/dI\) corresponds to \(dn/dP\). This last quantity is closely related to the compressibility of matter systems
\[\beta=\frac{1}{n}\ (\frac{dn}{dP}) \tag{8}\]
which is the inverse of the bulk modulus. We can therefore follow the analogy and introduce the concept of "neuro-compressibility",
\[\beta_{n}=\frac{1}{f}\ (\frac{df}{dI}) \tag{9}\]
It is important to mention that this quantity may be measured using experimental methods such as dynamic clamp, where a controlled synaptic current can be injected into a neuron while its activity is monitored [11]. Moreover, this definition may turn out to have important consequences, as we discuss next.
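As a practical aside, \(\beta_{n}\) can be estimated from any measured activation curve by finite differences. The sketch below (our illustration) does this on synthetic \(f(I)\) data generated from the LIF-like form quoted in Fig.3, an assumption made purely for illustration; the estimate peaks near the threshold, where the activation function is steepest.

```python
import numpy as np

I_c = 0.1                                    # assumed minimal activation current
I = np.linspace(0.101, 0.5, 400)
f = -1.0 / np.log(1.0 - I_c / I)             # LIF-like activation function f(I)

# Neuro-compressibility of Eq. (9): beta_n = (1/f) * df/dI, by finite differences.
beta_n = (1.0 / f[1:-1]) * np.gradient(f, I)[1:-1]

k = np.argmax(beta_n)
print(f"beta_n peaks at I = {I[1:-1][k]:.3f}, near the threshold I_c = {I_c}")
```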
Anomalies in the compressibility of materials are precursor signatures of structural phase transitions. A sudden increase in the compressibility of a solid indicates the "softening" of a vibrational mode (a phonon mode), which leads to a change in the structure, or possibly a phase change. Then, the question is: what would the analogous phenomenon be for a neuronal system? For a single neuron, an enhancement of \(\beta_{n}\) would indicate the proximity to a qualitative change in the spiking mode of the neuron, i.e. a "bifurcation" in its dynamics. This can in fact be seen in the panels of Fig.5. There, we observe that all the changes in the spiking modes, for both the MSN and the MBN models, occur at current values where \(df/dI\) shows enhancements or jumps. Most notably, this is not only a feature of our artificial neuron circuits; it can also be clearly seen in the biologically realistic Pinsky-Rinzel activation functions that we reproduced in Fig.5[9]. The enhancements seen in the PR data occur at the onset of the change from quiescent to spiking, and also at the change from bursting to tonic spiking (black circles and black triangles), in very good qualitative agreement with our electronic neuron model.
Figure 6: (a) Measured spike traces \(s(t)\) of the MSN model at two values of the input current: a “solid” state measured at \(I=45.3\mu\)A (top) and a “melted” state at \(I=36.4\mu\)A. The states are indicated with the green and blue dots in the activation function of the neuron reproduced in the inset. (b) The neural correlation function \(g_{n}(t)\) computed for the respective traces shown in the left side panels (only a small portion of the measured traces is shown).
We may then speculate on an important implication of our observations. It would be interesting to explore whether neuro-compressibility anomalies are also found across the boundaries of qualitatively different states in _neuron networks_. If that is the case, an intriguing and exciting possibility would be to investigate whether anomalies in \(\beta_{n}\) are also detected (by small current stimulation) in animal models of epilepsy and Parkinson's disease. If this were the case, one may envision a pathway to a novel diagnostic tool for early detection, or a risk predictor, of mental diseases associated with abnormal spike patterns in humans. Speculating even further, one may also search for anomalies at the onset of regaining, or losing, consciousness, which is another challenging frontier of research [12].
## IV Bursting spikes as an analogue of the formation of defect clusters
Here we describe another interesting connection between common phenomena in spiking neurons and in material science. We shall show that missing spikes in the trace of a fast spiking state can be thought of as the analogue of missing atoms, i.e., defects, in a crystal structure.
From Fig.7 we observe that the proliferation of missing spikes in a fast spiking state is a route to generating bursting behavior. This is illustrated in the sequence of traces shown in the figure, which were obtained for a step-wise decreasing input current to the MBN. The thick purple arrow indicates the vertical path followed in the phase diagram (from blue to green to grey). In the top trace we indicate the missing spikes with small purple arrows, showing that one may understand the onset of bursting as the result of skipped spike events, which are initially few (i.e. dilute). As the current intensity is further reduced, the missing spikes become more numerous (i.e. dense) and occur in _clusters_ of inactivity, which give rise to the stuttering-mode bursts [13]. In our analogy, we think of spikes as atoms in a lattice; therefore, the initial continuous fast spiking state is like a perfect crystal. The missing spikes then play a role analogous to vacancy defects, i.e. missing atoms. Moreover, the missing spikes are the result of a decreasing current, which in the analogy represents pressure. It is then interesting to observe that in thin-film deposition, a topic in material science, the partial pressure of oxygen \(P(\mathrm{O}_{2})\) is a relevant parameter for the quality of the growth of crystalline oxides. Moreover, it is well known that reducing \(P(\mathrm{O}_{2})\) induces the creation of oxygen vacancy defects in the crystal structure [14; 15], which often cluster together forming dislocations [16]. This is in full qualitative analogy with the spiking traces in the stuttering bursting mode shown in Fig.7.
We would like to emphasize that the path of phase transformations, i.e. the evolution from fast spiking, to bursting, to quiescent, does not seem to be just a peculiarity of our MBN circuit model. In the lower panel of Fig.7 we illustrate the striking resemblance of the traces of the MBN to those measured in bursting neurons of rats [17]. Quite remarkably, the experimental traces were obtained by solely changing the intensity of the excitatory DC current.
## V Neural networks
We now consider one final, important aspect of our analogy that may eventually shed new light on the issue of how to think about inter-neuron coupling. So far we have considered essentially individual neurons, but we may ask what it would mean to extend the analogy to multi-neuron systems, i.e. to neural networks.
As a first glimpse into this question, we shall consider the simplest network case, namely just two neurons that are mutually excitatory or inhibitory. We focus first on a system of two identical neurons, each excited with equal input currents. The currents are above threshold, so the neurons are active, and their spikes are transformed via conductances into mutually injected synaptic currents, which are positive in the excitatory case and negative in the inhibitory one. This is shown schematically in Fig.8.
We shall see that our analogy takes an interesting twist, as the dynamical states of the two-neuron system can be considered as an analogue of a complex crystal, i.e. a crystal with two atoms in the unit cell. Moreover, the coupling between neurons is mediated by synaptic currents, which can be excitatory or inhibitory. Since in our analogy current plays the role of pressure, the synaptic currents should also play such a role. More precisely, an excitatory synaptic current should correspond to a repulsive inter-particle interaction (positive pressure), and an inhibitory synaptic current to an attractive interaction (negative pressure).
Figure 7: Top left: Measured spike traces of the MBN as a function of decreasing current in discrete steps (purple). The thin purple arrows indicate the missing spikes. Bottom left: Experimental trace of pre-Bötzinger bursting neurons from rats. The data is obtained by changing excitatory input current in discrete steps. Adapted from [17]. The circuit parameters may be easily adjusted to fit the experimental data [8]. Right: Phase diagram of the MBN where the purple arrow indicates the evolution from the fast spiking (blue) to the bursting type 1 phase (green) by decreasing current at constant \(\tau_{L}\).
We now consider the two cases, which we study using realistic electronic circuit simulations (LTspice).
We first consider two spiking neurons with mutual inhibition. When one neuron spikes it inhibits the other, and vice versa, so they avoid spiking in unison, since the synaptic currents are instantaneous. After a transient period, they expectedly find a stable dynamical state in which they alternate emitting spikes, as shown in the top panel of Fig.8. Using our analogy, this spiking pattern corresponds to a perfect molecular crystal where the unit cell has an A-B atom pair (or a basis).
In the bottom panel of the figure we consider the second case, that of mutually excitatory neurons. Again, after a transient time the system adopts a periodic spiking pattern. However, in contrast to the previous case, both neurons now fire in unison. This is also intuitive: the excitatory synaptic current emitted by a neuron that spiked promotes the spiking of the other one, and vice versa. So, naturally, they both spike at the same time, which is nothing else than the well-known fact that excitatory synapses promote synchrony in neural networks [18]. Following our analogy, spiking in unison corresponds to a "dimerization" of the lattice; namely, the distance between the A-B pair of atoms is reduced, as due to an attractive interaction between the A and B atomic species.
These two cases are consistent with our analogy, where current is interpreted as pressure. Indeed, the volume of the unit cell in the inhibitory case is large, as expected for positive effective pressure between A and B, while in the excitatory case the volume is fully collapsed to zero, as expected for a negative pressure within the unit cell.
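The two dynamical states can also be reproduced with a toy model of two mutually coupled LIF neurons and instantaneous pulse synapses. The following sketch uses illustrative parameters of our own choosing (it is not the LTspice circuit): a negative coupling yields stable anti-phase firing (last spikes offset by about half a period), while a positive one yields firing in unison (within one integration step).

```python
import numpy as np

def two_neurons(w, T=60.0, dt=1e-3):
    """Two identical LIF neurons with instantaneous mutual coupling of weight w
    (w < 0 inhibitory, w > 0 excitatory). Returns the two lists of spike times."""
    V = np.array([0.0, 0.31])            # slightly asymmetric initial conditions
    I_ext, R, V_th = 0.2, 10.0, 1.0      # equal supra-threshold input currents
    spikes = ([], [])
    for step in range(int(T / dt)):
        fired = V >= V_th
        syn = w * fired[::-1]            # each neuron receives the other's spike
        V = np.where(fired, 0.0, V)      # reset the neurons that fired
        V = V + dt * (I_ext - V / R) + syn
        for i in (0, 1):
            if fired[i]:
                spikes[i].append(step * dt)
    return spikes

for w, label in [(-0.5, "inhibitory"), (+0.5, "excitatory")]:
    s0, s1 = two_neurons(w)
    print(f"{label}: last spikes at t = {s0[-1]:.3f} and {s1[-1]:.3f}")
```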
It is an interesting perspective for future work to consider increasingly complex networks of several neurons (motifs). The periodic states that emerge constitute spiking _sequences_, which are of great relevance for automatic motor behavior. By virtue of the analogy introduced in the present work, those periodic sequences should correspond to a variety of molecular crystals. It would be exciting to explore whether new intuition for Neuroscience could be brought from those traditional areas of Condensed Matter Physics and Chemistry [19].
## VI Conclusion
In this work we introduced the idea that the dynamical states of neural networks may be thought of as realizations of "time matter" states.
We started from the notion that a trace of the spiking events of a neuron can be analogous to a snapshot of particles or atoms arranged in space. We then went on to explore and show that the analogy may be pushed far beyond that literal statement, and may provide new intuition on the challenging problem of understanding and designing spiking neural networks.
We identified analogous roles for basic quantities of Physics and of Neuroscience, such as pressure and volume with input currents and inter-spike intervals. We then built logically on this correspondence to show connections between correlation and response functions in both fields. Perhaps most significant was the finding that a _neuro-compressibility_ can be defined, with possibly far-reaching consequences, including medical ones, that may be experimentally tested.
An exciting new road of discovery may open ahead.
## VII Acknowledgments
We acknowledge support from the French ANR "MoMA" project ANR-19-CE30-0020.
|
2304.12956
|
Single-active-element demultiplexed multi-photon source
|
Temporal-to-spatial demultiplexing routes non-simultaneous events of the same
spatial mode to distinct output trajectories. This technique has now been
widely adopted because it gives access to higher-number multi-photon states
when exploiting solid-state quantum emitters. However, implementations so far
have required an always-increasing number of active elements, rapidly facing
resource constraints. Here, we propose and demonstrate a demultiplexing
approach that utilizes only a single active element for routing to, in
principle, an arbitrary number of outputs. We employ our device in combination
with a high-efficiency quantum dot based single-photon source, and measure up
to eight demultiplexed highly indistinguishable single photons. We discuss the
practical limitations of our approach, and describe in which conditions it can
be used to demultiplex, e.g., tens of outputs. Our results thus provide a path
for the preparation of resource-efficient larger-scale multi-photon sources.
|
Lena M. Hansen, Lorenzo Carosini, Lennart Jehle, Francesco Giorgino, Romane Houvenaghel, Michal Vyvlecka, Juan C. Loredo, Philip Walther
|
2023-04-25T16:01:00Z
|
http://arxiv.org/abs/2304.12956v1
|
# Single-active-element demultiplexed multi-photon source
###### Abstract
Temporal-to-spatial demultiplexing routes non-simultaneous events of the same spatial mode to distinct output trajectories. This technique has now been widely adopted because it gives access to higher-number multi-photon states when exploiting solid-state quantum emitters. However, implementations so far have required an always-increasing number of active elements, rapidly facing resource constraints. Here, we propose and demonstrate a demultiplexing approach that utilizes only a single active element for routing to, in principle, an arbitrary number of outputs. We employ our device in combination with a high-efficiency quantum dot based single-photon source, and measure up to eight demultiplexed highly indistinguishable single photons. We discuss the practical limitations of our approach, and describe under which conditions it can be used to demultiplex, e.g., tens of outputs. Our results thus provide a path for the preparation of resource-efficient larger-scale multi-photon sources.
## I Introduction
Advances in photonic quantum science [1; 2; 3; 4; 5] occur with increased complexity of the available sources of non-classical light. The main technologies to date for producing single-, and multi-photon states are based on either frequency-conversion in non-linear media [6; 7; 8], or atomic transitions of quantum emitters [9; 10; 11]. The former produces heralded single-, and entangled-photon statistics, or squeezed states of light, and the latter primarily results in deterministic single-photon emission at high efficiencies and rates. The most advanced multi-photon experiments thus far involved the interference of up to 14 particles using single-photon sources [12; 13], or the detection of hundreds of photons using squeezed light sources [14; 15].
Temporal-to-spatial demultiplexing has played a key role in enabling these levels of complexity. This technique allowed the preparation of the multi-photon sources in the cases of space-encoded interference [13], and enabled the measurement of--up to 16--consecutive temporal modes from time-bin interferometers [15]. This protocol deals with routing subsequent events, or time bins, from one spatial mode towards different locations, and it has been widely used to allow multi-photon experiments using quantum emitters [16; 17; 18; 19; 20]. Indeed, creating multiple indistinguishable photons from a single demultiplexed source is still technically more viable than fabricating many quantum emitters that produce indistinguishable photons, where the state-of-the-art remains in the demonstration of photon indistinguishability from two remote sources [21].
The standard approach for building a temporal-to-spatial demultiplexer starts by using an active element--e.g., an electro-optic modulator (EOM)--for producing orthogonal polarizations in two subsequent photons, later following different trajectories after traversing a polarization selective element, such as a polarizing beam splitter (PBS). By repeating this process at each new output, any number \(N\) of consecutive time bins can be demultiplexed, however at the increasing cost of using \(N\!-\!1\) active elements. At the output of the demultiplexer, photons originally separated in units of a temporal distance \(\tau\) are appropriately delayed such that they all travel simultaneously and can interfere. Most implementations to date have employed this method [16; 17; 18; 19], with the current record of using 19 high-voltage bulk EOMs for obtaining 20 photons [13], each occupying one spatial mode. Evidently, this approach becomes resource expensive and costly.
Here, we demonstrate a temporal-to-spatial demultiplexer that uses only one active element to produce, in principle, an arbitrary number of outputs. We combine our device with a highly-efficient single-photon source from a quantum dot, and demonstrate the generation of an eight-photon state, where each indistinguishable photon propagates in a separate spatial mode. Our scheme can be extended to larger multi-photon states; in our case, however, we are limited by the efficiency of the photon source. One interesting feature of our approach is that the demultiplexed time bins can be as close as a few nanoseconds apart, which is beneficial for maintaining high levels of photon indistinguishability from quantum emitters [22; 23]. Our implementation underlines the feasibility of our scheme, which can enable practical larger-scale multi-photon sources in the near future.
## II Source
A sample containing semiconductor quantum dots (QDs) in micropillar cavities is placed inside a cryostat at \(\sim\)4 K, see Fig. 1(a). A QD is resonantly driven with laser pulses at an 80 MHz repetition rate, spectrally tailored to match the QD cavity wavelength of 922.2 nm and linewidth (FWHM) of 120 pm. We use a 97:3 beam splitter (BS) to guide a fraction of the laser pump towards the sample, while maintaining high transmission for the emitted single photons. The pump pulse is then focused on the sample by an aspheric lens placed inside the cryostat. We use a cross-polarized configuration for optical excitation and collection. In the excitation path, we control the power of the laser light, and initialize its polarization with a quarter-, and a half-wave plate, \(Q_{1},H_{1}\), respectively, to one of the linear-polarization cavity modes. In the collection path, we place another set of wave plates, \(Q_{2},H_{2}\), and a Glan-Taylor polarizer (GT) to suppress the laser by more than seven orders-of-magnitude. Thus, only single photons are coupled into the fiber of the collection setup.
We use this setup to characterize our source. Figure 1(b) displays Rabi oscillations as a signature of the coherent driving of the system. At \(\pi\)-pulse excitation, we measure a maximum single-photon count rate of 17.1 MHz--that is, an end-to-end source efficiency of 21.4%--recorded with a superconducting nanowire single-photon detector of 85% system efficiency. We now characterize the single-photon purity by measuring the second-order auto-correlation function at zero time delay \(g^{(2)}(0)\) using a standard Hanbury Brown and Twiss setup. At \(\pi\)-pulse excitation, we retrieve a value of \(g^{(2)}(0)\)=1.57(2)%, as shown in Fig. 1(c). At these same conditions, a two-photon Hong-Ou-Mandel (HOM) interference experiment [24] reveals a photon indistinguishability [25]\(\mathcal{I}\)=95.35(3)%, see Fig. 1(d). Moreover, lifetime measurements reveal a single-photon decay time of 207.4(5) ps, as shown in the Supplementary Material.
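For concreteness, the peak-area bookkeeping behind a \(g^{(2)}(0)\) value of this kind can be sketched as follows. The histogram here is synthetic (our illustration: Gaussian peaks at the 12.5 ns spacing of the 80 MHz pump, with a suppressed central peak), not the measured data; only the 3 ns integration window follows the caption of Fig. 1.

```python
import numpy as np

def peak_area(hist, times, t0, window=3.0):
    """Integrate histogram counts in a `window`-ns region centred on delay t0."""
    return hist[np.abs(times - t0) <= window / 2].sum()

def g2_zero(hist, times, rep_period=12.5, n_side=5):
    """g2(0) = central-peak area / mean side-peak area (uncorrelated reference)."""
    central = peak_area(hist, times, 0.0)
    sides = [peak_area(hist, times, k * rep_period)
             for k in range(1, n_side + 1)] + \
            [peak_area(hist, times, -k * rep_period) for k in range(1, n_side + 1)]
    return central / np.mean(sides)

# Synthetic coincidence histogram: strong side peaks, antibunched central peak.
times = np.arange(-70.0, 70.0, 0.1)                  # delay axis in ns
hist = np.zeros_like(times)
for k in range(-5, 6):
    amp = 15.0 if k == 0 else 1000.0                 # centre suppressed ~1.5%
    hist += amp * np.exp(-0.5 * ((times - 12.5 * k) / 0.3) ** 2)
hist = np.random.default_rng(1).poisson(hist).astype(float)

print(f"g2(0) ~ {g2_zero(hist, times):.4f}")         # ~0.015 for this toy data
```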
## III Single-active-element demultiplexer
The second and main part of our experiment consists of the temporal-to-spatial demultiplexer. It employs a single electro-optical modulator placed at the center of a near-recurrent geometry, as shown in Fig. 2. The temporal sequence of single photons emitted by the QD is guided to the demultiplexing setup via single-mode fibers, where the photons' polarization is set to horizontal. The scheme consists of two phases, a loading phase and a release phase. During the loading part, the EOM is off, maintaining horizontal polarization. Here, photons follow trajectories slightly displaced from the center of a telescope of unity magnification made of two converging
Figure 1: **Single-photon source.** (a) Schematic figure of the excitation and collection setup with cross-polarized configuration to separate pump light from single-photon emission. (b) Rabi oscillations: detected single-photon count rate vs pump pulse area. (c) Second-order auto-correlation measurement at \(\pi\)-pulse excitation. The value at zero time delay is \(g^{(2)}(0)\)=1.57(2)%. (d) Hong-Ou-Mandel interference at \(\pi\)-pulse, resulting in photon indistinguishability \(\mathcal{I}\)=95.35(3)%. Both values in (c) and (d) are obtained by integrating peak areas in a 3 ns window. Measurement uncertainties are estimated following Poissonian counting statistics.
lenses, with the EOM placed in its center. After the PBS transmits the horizontal modes, the single photons follow a free-space delay line, matched to the separation between the input photons, and reenter the setup on a new parallel trajectory. This process is repeated several times, until a mirror \(M_{r}\) back-reflects the photons' trajectories, doubling the number of passes through the EOM. For the release phase, with all modes loaded into the setup, we turn on the EOM, switching the photons' polarization to vertical. Now the PBSs reflect the photons simultaneously into distinct spatial output modes. This geometry can in principle continue to increase the number of optical paths as long as the optical elements' clear apertures allow it. In practice, the initial single-photon source efficiency is one main limiting factor for building multi-photon sources, as it determines the exponentially decreasing multi-photon rates. In this work, our source efficiency enables us to demultiplex up to eight single photons.
We now operate and characterize our demultiplexer with the input single-photon source of Fig. 1. First, we insert photons separated by a shorter delay of \(\tau\)=6.25 ns, see Fig. 3(a), for which we passively increase the repetition rate of our laser [26] to \(f_{\text{L}}\)=\(\tau^{-1}\)=160 MHz. We use this reduced temporal distance to allow for a shorter free-space delay. We drive the EOM with a frequency of \(f_{\text{EOM}}\)=8 MHz phase-locked to the laser, a switching rise time of \(\sim\)5 ns, and an on-state duration of 25 ns, see Fig. 3(b). Note that although EOMs with faster repetition rates exist to date, reducing the modulation rise time is favored in the present architecture.
For as long as the EOM is off--that is, during the loading
Figure 3: **Time traces.** (a) Single-photon stream generated at 160 MHz. The red colored area indicates the on state of the EOM. The numbering denotes the demultiplexer channel at which they exit. (b) Time modulation of the EOM. The loading phase has a duration of about 40 ns, and a switching time (light-red colored area) of \(\sim\)5 ns to the release phase. Approximately 25 ns pass before the EOM returns to the off state. (c) Time trace of all demultiplexer outputs. Eight single photons (red peaks) are demultiplexed into different spatial modes in the same time bin. The intensity of the correctly routed peak of a given channel is decreased by the sum of incorrectly routed peaks appearing in the loading phase among all previous channels.
Figure 2: **Demultiplexer setup.** A stream of single photons emitted by the QD enters the demultiplexer with horizontal polarization. During the loading phase, polarization is maintained and photons follow the gray-colored trajectory. The temporal delay introduced by the free-space path matches the temporal separation of the single photons. For the release phase, the EOM switches the photons’ polarization to vertical, such that they are reflected at the PBSs. The input stream that initially contained \(N\) single photons in the same spatial mode and in consecutive time bins, is transformed into \(N\) distinct trajectories containing a single photon each, in the same time bin.
phase--ideally no signal should be observed at any demultiplexer output. Thereon, as a result of the synchronized modulation on the input single-photon stream, every time the EOM reaches its on state--every 125 ns--a number of accumulated single photons is released, ideally one photon at every demultiplexer output. Figure 3(c) displays such time traces for eight outputs.
In an ideal implementation, we expect peaks to appear only in the targeted spatial-temporal outputs, repeated every 125 ns, while no signal should be observed at all other times. In practice, we find additional peaks in other time bins. Here, we distinguish between three cases native to our architecture. In the first case, additional prominent peaks appear in the first output channel. They exist because the modulation pulse-width is longer than the separation between time bins. Thus, single photons that continue entering the setup after the start of the release phase are directly switched and reflected by the second PBS into the first output channel. However, these events have no impact on the resulting \(N\)-photon rates, and if necessary they can be suppressed prior to entering the demultiplexer. In a second case, incorrect routing of events occurs during the loading process, showing that the horizontal polarization of every path is not optimally maintained, such that photons reflect with a small probability when passing through the PBSs. This occurs because the trajectories followed by the photons are slightly misaligned from normal incidence towards the EOM central axis. This second case is a main reason for performance degradation in this scheme. Albeit small, the probability of incorrectly releasing a photon at any time bin of the loading phase accumulates, and thus it increases with a higher number of outputs. In the third and final case, the EOM imperfectly switches the photons' polarization during the release phase, i.e., the maximum probability of reflecting is smaller than unity. Thus, some photons continue on the delay path and are likely to be released at the next output channel in the following time bin.
To assess the performance of the single-active-element demultiplexer, we now estimate the channel efficiency \(\eta_{\mathrm{ch}}\) of every output. This denotes the probability that a given input time bin--labeled from 1 to 8 in Fig. 3(a)--is released in the targeted output--red-colored peaks in Fig. 3(c). We obtain efficiencies ranging from \(\eta_{\mathrm{ch1}}\)=0.73 for the first channel, to \(\eta_{\mathrm{ch8}}\)=0.14 for the eighth channel; see Supplementary Material for the list of channel efficiencies. These values are estimated as the ratio of the integrated counts of the main peak in a given output channel, to the counts of the corresponding input time bin. Note that this efficiency parameter is affected by anything that leads to reduced performance--for instance, losses of the optical elements, fiber-coupling losses, losses in connecting and mating fibers, as well as incorrect active switching of the EOM--the latter being the main contributing factor.
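The ratio just described amounts to a one-line computation per channel. The sketch below uses invented placeholder count numbers (they are not the measured dataset; the intermediate values are interpolations chosen only so that the first and last channels reproduce the quoted \(\eta_{\mathrm{ch1}}\)=0.73 and \(\eta_{\mathrm{ch8}}\)=0.14).

```python
# Hypothetical integrated counts: input time bins 1..8 and the correctly
# routed output peak of each demultiplexer channel (placeholder numbers).
input_counts  = [1.00e6, 1.00e6, 1.00e6, 1.00e6, 1.00e6, 1.00e6, 1.00e6, 1.00e6]
output_counts = [7.3e5, 6.1e5, 5.2e5, 4.4e5, 3.5e5, 2.8e5, 2.0e5, 1.4e5]

for ch, (n_in, n_out) in enumerate(zip(input_counts, output_counts), start=1):
    print(f"channel {ch}: eta_ch = {n_out / n_in:.2f}")
```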
We now report the main figure-of-merit of our demultiplexing scheme: the measured multi-fold coincidence rate \(R_{N}\)--that is, the simultaneous detection of \(N\) photons in distinct output modes, see Fig. 4. Here, the four-photon coincidence rate allows a direct comparison of our results to previous works. In our case, we detect a four-photon rate of \(\sim\)1.4 kHz, a minimum of three orders of magnitude higher than almost all previous implementations, with the exception of Ref. [13], to which our work compares similarly on the same count-rate scale. The values \(R_{N}\) decrease exponentially with the source efficiency, as well as with the channel efficiencies. In our work, the high levels of efficiency enable us to measure up to eight photons at a coincidence rate of \(\sim\)20 mHz. Our implementation, therefore, is at the state-of-the-art of active temporal-to-spatial demultiplexing schemes, notably using only a single active element.
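A crude rate model makes the exponential scaling explicit. The sketch below assumes an 8 MHz release rate, folds detection into the quoted end-to-end source efficiency, and reuses the placeholder channel efficiencies from the previous sketch; it lands within a factor of order unity of the measured \(R_{4}\sim\)1.4 kHz and \(R_{8}\sim\)20 mHz, but it is an estimate of ours, not the measurement.

```python
import numpy as np

f_release = 8e6                      # EOM release events per second (8 MHz)
eta_src = 0.214                      # end-to-end source efficiency, detection included
eta_ch = [0.73, 0.61, 0.52, 0.44, 0.35, 0.28, 0.20, 0.14]  # placeholder channel values

for N in range(2, 9):
    # R_N ~ f_release * prod over the first N channels of (eta_src * eta_ch):
    R_N = f_release * np.prod([eta_src * e for e in eta_ch[:N]])
    print(f"N = {N}: estimated R_N ~ {R_N:.2e} Hz")
```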
## IV Discussion and Conclusion
In this work we presented a resource-efficient scheme for temporal-to-spatial mode demultiplexing, and use it in combination with a highly-efficient quantum dot based single-photon source. Our architecture requires only one active element for obtaining, in principle, an arbitrary number of outputs, thus significantly reducing the amount of resources compared to former implementations. At present, our demultiplexer enables up to eight-photon coincidence events measured at a rate of \(\sim\)20 mHz. This constitutes a significant improvement in number of output channels and count rates compared to alternatives, locating our work at the state-of-the-art for multi-photon sources.
Moreover, count rates following this approach can still significantly improve--for the same source efficiency--by addressing the factors affecting the channel efficiencies. The main limiting factor is the leakage of photon events
Figure 4: **Multi-photon rates.** Measured multi-fold coincidence rates \(R_{N}\): simultaneous output \(N\)-photon events detected per second. We integrate over a coincidence window of 3 ns. Uncertainties are obtained from Poissonian counting statistics, and are too small to be visible for smaller \(N\).
in incorrect time bins during the loading process. This is due to the diagonal propagation of the single-photon modes relative to the central axes of the birefringent crystal, causing a small but accumulating polarization walk-off. While individually controlling the polarization of each path entering the EOM mitigates this effect to some extent, optimized channel efficiencies will require making use of compensation crystals, or modified trajectories that exploit near-recurrent geometries while imposing normal incidence along the birefringent element. Furthermore, reducing the path delay--consequently, using an EOM with faster rise time--within the demultiplexer is largely beneficial: smaller distances allow for smaller beam diameters and Rayleigh ranges, therefore many more trajectories can fit within limited optics' clear apertures. As proof-of-principle, we also built a demultiplexer unit that outputs sixteen modes using standard one-inch optical elements; see Supplementary Material for more information. With these modifications in mind, our approach can be used to build multi-photon sources at larger scales.
**Funding.** This research was funded in whole, or in part, from the European Union's Horizon 2020 and Horizon Europe research and innovation programme under grant agreement No 899368 (EPIQUS), the Marie Sklodowska-Curie grant agreement No 956071 (AppQInfo), and the QuantERA II Programme under Grant Agreement No 101017733 (PhoMentor); from the Austrian Science Fund (FWF) through [F7113] (BeyondC), and [FG5] (Research Group 5); from the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
**Acknowledgment.** The authors thank Patrik Zahalka for assistance with FPGA electronics and signal processing.
|
2306.15698
|
Physics over a finite field and Wick rotation
|
The paper develops an earlier proposition that the physical universe is a
finite system co-ordinatised by a very large finite field
$\mathrm{F}_\mathfrak{p}$ which looks like the field of complex numbers to an
observer.
We construct a place (homomorphism) $\mathrm{lm}$ from a pseudo-finite field
$\mathrm{F}_\mathfrak{p}$ onto the compactified field of complex numbers in
such a way that certain multiplicative subgroups $'\mathbb{R}'_+$ and
$'\mathbb{S}'$ correspond to the polar coordinate system $\mathbb{R}_+$ and
$\mathbb{S}$ of $\mathbb{C}.$ Thus $\mathrm{F}_\mathfrak{p},$ $'\mathbb{R}'_+$
and $'\mathbb{S}'$ provide co-ordinates for physical universe.
We show that the passage from the scale of units in $'\mathbb{R}'_+$ to the
scale of units of $'\mathbb{S}'$ corresponds to a multiplication (on the
logarithmic scale) by a very large integer $\mathfrak{i}$ equal approximately
to $\sqrt{\mathfrak{p}}.$ This provides an explanation of the phenomenon of
Wick rotation.
In the same model we explain the phenomenon of phase transition in a large
finite system.
|
Boris Zilber
|
2023-06-26T18:48:19Z
|
http://arxiv.org/abs/2306.15698v2
|
# Physics over a finite field and Wick rotation
###### Abstract
The paper develops some mathematics supporting an earlier hypothesis that the physical universe is a finite system co-ordinatised by a huge finite field \(\mathrm{F}_{\mathfrak{p}}\) which looks like the field of complex numbers to an observer.
Earlier we constructed a place ('limit' homomorphism) \(\mathsf{Im}\) from a pseudo-finite field \(\mathrm{F}_{\mathfrak{p}}\) onto the compactified field of complex numbers. In the current paper we construct \(\mathsf{Im}\) in a more concrete form. In particular, \(\mathsf{Im}\) sends certain multiplicative subgroups \({}^{\prime}\mathbb{R}_{+}^{\prime}\) and \({}^{\prime}\mathbb{S}^{\prime}\) of \(\mathrm{F}_{\mathfrak{p}}\) onto the non-negative reals \(\mathbb{R}_{+}\) and the unit circle \(\mathbb{S}\) in \(\mathbb{C}.\) Thus \(\mathrm{F}_{\mathfrak{p}},\)\({}^{\prime}\mathbb{R}_{+}^{\prime}\) and \({}^{\prime}\mathbb{S}^{\prime}\) provide co-ordinates for physical universe.
We introduce two systems of natural units corresponding to \({}^{\prime}\mathbb{R}_{+}^{\prime}\) and \({}^{\prime}\mathbb{S}^{\prime},\) respectively, on the logarithmic scale. The passage from the scale of units of \({}^{\prime}\mathbb{R}_{+}^{\prime}\) to the scale of units of \({}^{\prime}\mathbb{S}^{\prime}\) corresponds to a multiplication (on the logarithmic scale) by a 'huge' (non-standard) integer \(\mathfrak{i}\) equal approximately to \(\sqrt{\mathfrak{p}}.\) This provides an explanation of the phenomenon of Wick rotation.
In the same model we explain the phenomenon of phase transition in a large finite system.
## 1 Introduction
**1.1**: Whether the universe is infinite is an open question. This concerns both the size of the universe and the number of atoms or elements that comprise it. Since it is now accepted that there is a minimal length, the Planck length, the assumption of the spatial finiteness of the universe implies the assumption of the finiteness of the number of its elements.
In [1] we discussed the concept of approximation in physics and the suggestion that the physical universe is co-ordinatised by a _huge_1 finite field \(\mathrm{F}_{\mathfrak{p}}.\) It was proved (Proposition 5.2 of [1]) that the only metric field (locally compact field) that can be approximated by finite fields is the field \(\mathbb{C}\) of complex numbers. Thus, "seen from afar", the huge finite field looks like a field of complex numbers \(\mathbb{C}.\)
Footnote 1: We use _huge_ for numbers which some physics authors call “ridiculously large”, see the discussion in 2.7 below.
The current work was inspired by Hao Hu, an expert in Philosophy of Physics, who approached the author with the suggestion to apply the idea of [1] to statistical mechanics and to attack the well-discussed problem of phase transitions (which happen in large finite systems but require the assumption of infinity for their mathematical theory). We suggest here an answer to the problem.
Perhaps a more important outcome of the mathematical theory developed below is the interpretation of the phenomenon of _Wick rotation_ as the result of the change of scales in physics.
I would also like to note that a compatible attempt to develop the mathematical background for a theory of finite physics was presented in [2], [3] and [4].
**1.2**: Let us recall the notion of structural approximation suggested in [1], in the specific context of approximation by huge finite fields \(\mathrm{F}_{\mathfrak{p}}.\) A finite structure is discrete, but to see its grainy structure we should be able to detect a difference, an _inequality_, between neighbouring elements, which might be impossible with the instruments we use to observe the structure. However, if there is a shape within the \(n\)-space \(\mathrm{F}_{\mathfrak{p}}^{n}\) which is given by an algebraic _equation_, then the observer could see it as a shape in a continuous field, say \(\mathbb{R}\), \(\mathbb{C}\) or perhaps a p-adic field, given by the same equation. This brings us to the idea that such an approximation by an observer is a map (called a "limit") onto a continuous field \(\mathrm{K},\)
\[\mathsf{Im}:\mathrm{F}_{\mathfrak{p}}\to\mathrm{K}\]
which takes tuples \((x_{1},\ldots,x_{n})\in\mathrm{F}_{\mathfrak{p}}^{n}\) satisfying a polynomial equation \(f(x_{1},\ldots,x_{n})=0\) (with integer coefficients) to the tuple \((y_{1},\ldots,y_{n})\in\mathrm{K}^{n}\) satisfying the same equation. In other words, \(\mathsf{Im}\) is a ring-homomorphism.
In fact, for a finite \(\mathrm{F}_{p}\) this scheme is not going to work verbatim, but it works when we assume that \(\mathrm{F}_{\mathfrak{p}}\) is an infinite _pseudo-finite_ field, which for all intents and purposes replaces a huge finite structure; see 1.5 below for the definition.
However, as explained in [1], the requirement that \(\mathsf{Im}\) be defined on the whole of the discrete structure necessitates that the target structure \(\mathrm{K}\) be compact, which for a metric field can be achieved by adding a point \(\infty\) (so for \(\mathrm{K}=\mathbb{C}\) the compactification gives us the _extended complex numbers_ \(\bar{\mathbb{C}}:=\mathbb{C}\cup\{\infty\}\), equivalently, the Riemann sphere). In particular, there are non-zero elements \(x\in\mathrm{F}_{\mathfrak{p}}\) indistinguishable from \(0\) (that is, \(\mathsf{Im}(x)=0\)), and so for the inverses \(x^{-1}\in\mathrm{F}_{\mathfrak{p}}\), \(\mathsf{Im}(x^{-1})=\infty.\) Such a map \(\mathsf{Im}\) between fields is called a _place_ in algebra.
The above mentioned key Proposition 5.2 of [1] states:
_There exists a place_
\[\mathsf{Im}:\mathrm{F}_{\mathfrak{p}}\rightarrow\bar{\mathbb{C}} \tag{1}\]
_from any pseudo-finite field \(\mathrm{F}_{\mathfrak{p}}\) of characteristic \(0\) onto the compactification of the field of complex numbers \(\mathbb{C},\) and \(\mathbb{C}\) is the only metric (locally compact) field for which such a map exists._
**1.3 The relationship with the p-adic approach in physics**. The p-adic approach in physics, in particular string theory, has proved quite productive, see e.g. the survey [5]. The recourse to a prime \(p\) is motivated by the needs of discretisation and the field \(\mathbb{Q}_{p}\) of \(p\)-adic numbers has the advantage of bearing a nice metric and locally compact topology. There is no preferred prime but a very large prime like the above \(p\) seems to be a reasonable choice. We note that by definition there is a canonical place
\[\mathbb{Q}_{p}\rightarrow F_{p} \tag{2}\]
and thus, combining with (1) we get a place
\[\mathbb{Q}_{p}\rightarrow\bar{\mathbb{C}}.\]
In other words, p-adic calculations pass via (2) to \(F_{p}\) which, according to (1), can be reinterpreted as calculations in the complex numbers.
**1.4**: The main mathematical problem in making a practical use of the idea of a "physics over a finite field \(F_{p}\)" is to find a way of representing the common-sense real quantities of physics inside the huge finite field \(F_{p}\) and to find such
a representation that allows the standard physics calculations; that is, to explain how both \(\mathbb{R}\) and \(i\mathbb{R}\) emerge from a large finite field.
In this regard it is useful to invoke the notion of _feasible numbers_ discussed by philosophers of mathematics, mathematical logicians and computer scientists; see e.g. [9] for a mathematical treatment of the notion. Roughly speaking, \(1,2,\ldots,1000\), as well as their ratios such as \(1/5\) and \(0.203\), are feasible numbers, but the Avogadro number \(\sim 10^{23}\) is not feasible. We think of the latter as _very large_ but potentially observable numbers. This contrasts with _huge_ numbers, such as \(\mathfrak{p}\), the number of points in \(\mathrm{F}_{\mathfrak{p}}\), where \(\mathfrak{p}>>10^{23}\).
Thus, while we work with feasible numbers \(1,2,\ldots\) inside \(\mathrm{F}_{\mathfrak{p}}\) we can think of them as the usual integers, but when our integers become very large, yet still much less than \(\mathfrak{p}\), then \(\mathsf{Im}\) takes them to \(\infty\). Further along this scale, according to the approximation theorem of [1], the integers start to behave like complex numbers; e.g. if an integer \(\mathfrak{i}\) satisfies \(\mathfrak{i}^{2}+1=0\mod\mathfrak{p}\) (a huge number), it takes the role of \(\sqrt{-1}\).
All this is made precise in the formulation of the Main Theorem in 1.7 below.
* In the current work a "huge finite field" is a _pseudo-finite field_\(\mathrm{F}_{\mathfrak{p}}\) which can be obtained by considering a non-principal ultrafilter \(\mathcal{D}\) on the set of prime numbers \(P\subset\mathbb{N}\) (positive integers) and the ultraproduct \[\mathrm{F}_{\mathfrak{p}}=\prod_{p\in P}\mathrm{F}_{p}/\mathcal{D}.\] Such a construction sees \(\mathrm{F}_{\mathfrak{p}}\) as a logical limit of finite fields \(\mathrm{F}_{p}\) along the ultrafilter: a first order sentence \(\Phi\) is valid in \(\mathrm{F}_{\mathfrak{p}}\) if and only if it is valid in almost all, in the sense of \(\mathcal{D}\), finite fields \(\mathrm{F}_{p}\).
Model theory tells us that \(\mathrm{F}_{\mathfrak{p}}\) can equivalently be obtained as the quotient-ring of the ring \({}^{*}\mathbb{Z}\) of non-standard integers by the prime ideal \(\mathfrak{p}\,{}^{*}\mathbb{Z}\), where \(\mathfrak{p}\) is the respective non-standard prime number,
\[\mathrm{F}_{\mathfrak{p}}\cong{}^{*}\mathbb{Z}/\mathfrak{p}\text{ where }{}^{*} \mathbb{Z}=\mathbb{Z}^{P}/\mathcal{D}.\]
The interpretation of \(\mathrm{F}_{\mathfrak{p}}\) in non-standard integers allows us to apply, among others, the means of non-standard analysis, in particular the _standard part map_
\[\mathrm{st}:{}^{*}\mathbb{Q}\to\mathbb{R}\cup\{+\infty,\,-\infty\}\]
from non-standard rationals \(\frac{l}{m}\), \(l,m\in{}^{*}\mathbb{Z}\), to the compactification of the reals.
**1.6**: Along with \(\mathfrak{p}\) and the field \((F_{\mathfrak{p}};+,\cdot,0,1)\) we specify:
- a non-standard _highly divisible_ number \(\mathfrak{l}\) (each standard integer \(m\) divides \(\mathfrak{l}\)) satisfying some other assumptions below;
- a two-sorted pseudo-finite structure \((\mathbb{U}_{\mathfrak{p},\mathfrak{l}},\mathrm{F}_{\mathfrak{p}})\) with
\[\mathbb{U}_{\mathfrak{p},\mathfrak{l}}=({}^{*}\mathbb{Z}/_{(\mathfrak{p}-1)\mathfrak{l}};+,\hat{0},\hat{1})\]
a pseudo-finite additive cyclic2 group of order \((\mathfrak{p}-1)\mathfrak{l}\) with generator \(\hat{1}\);
Footnote 2: here and below “cyclic” means in the pseudo-finite sense, i.e. the ultraproduct of cyclic groups
- a surjective group homomorphism
\[\exp_{\mathfrak{p}}:\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\to\mathrm{F}_{\mathfrak{p}}^{\times};\quad n\cdot\hat{1}\mapsto\epsilon^{n}\]
where \(\epsilon\) is a generator of the (pseudo)-cyclic group \(F_{\mathfrak{p}}^{\times}\), \(n\in{}^{*}\mathbb{Z}\). It follows that
\[\ker\exp_{\mathfrak{p}}=(\mathfrak{p}-1)\cdot\mathbb{U}_{\mathfrak{p},\mathfrak{l}},\]
the subgroup generated by \((\mathfrak{p}-1)\cdot\hat{1}\) (which suggests that the element \(\mathfrak{p}-1\) of \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\) should be interpreted as \(2\pi i\); see also the toy sketch below);
- a pair of surjective "limit" homomorphisms (places) \(\mathsf{Im}\) which make the diagram commute
\[\begin{array}{ccc}\mathsf{Im}_{\mathbb{U}}:\ \mathbb{U}_{\mathfrak{p},\mathfrak{l}}&\rightarrow&\bar{\mathbb{C}}\\ \exp_{\mathfrak{p}}\big\downarrow&&\big\downarrow\exp\\ \mathsf{Im}_{\mathrm{F}}:\ \mathrm{F}_{\mathfrak{p}}&\rightarrow&\bar{\mathbb{C}}\end{array} \tag{3}\]
There is a natural cyclic order on \(U_{\mathfrak{p},I}\) in which \(u+\hat{1}>u\) and there is a related cyclic order on \(F_{\mathfrak{p}}^{\times}\) in which \(\epsilon^{n+1}>\epsilon^{n}\) for all \(n\in{}^{*}\mathbb{Z}\).
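The discrete exponential and its kernel can be made concrete with a toy feasible prime in place of the huge \(\mathfrak{p}\). The sketch below is our illustration (with \(p=101\), \(\mathfrak{l}\) replaced by the small number \(L=3\), the divisibility requirements relaxed for the toy, and the generator \(\epsilon=2\)); it checks that \(\exp_{\mathfrak{p}}\) is a homomorphism from \(\mathbb{Z}/(p-1)L\) onto \(\mathrm{F}_{p}^{\times}\) whose kernel is generated by \((p-1)\cdot\hat{1}\).

```python
# Toy model of exp_p : U -> F_p^x, with a small "feasible" prime in place of
# the huge p, and L standing in for the highly divisible number l.
p, L = 101, 3                  # |U| = (p - 1) * L = 300
eps = 2                        # 2 generates F_101^x: its order is 100
assert all(pow(eps, (p - 1) // q, p) != 1 for q in (2, 5))  # check eps is a generator

def exp_p(n):
    """exp_p(n * 1hat) = eps^n in F_p: a homomorphism Z/((p-1)L) -> F_p^x."""
    return pow(eps, n % ((p - 1) * L), p)

print(all(exp_p(k * (p - 1)) == 1 for k in range(L)))   # kernel = <(p-1) * 1hat>: True
print(exp_p(7) == exp_p(7 + (p - 1)))                   # True: values repeat mod p-1
```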
We treat \(\hat{1}\) as an infinitesimal and choose two "units of length" \(\mathbf{u}\) and \(\mathbf{v}\), elements of \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\),
\[\hat{1}<<\mathbf{u}<<\mathbf{v}.\]
More precisely,
\[\mathbf{u}=\frac{\mathfrak{p}-1}{\mathfrak{i}}\mbox{ and }\mathbf{v}=\mathfrak{p}-1 \tag{4}\]
for some \(\mathfrak{i}\in{}^{*}\mathbb{N}\) such that
\[\mathfrak{i}\mathfrak{l}\,|\,(\mathfrak{p}-1),\;\;\mathfrak{i}>>\mathfrak{l} \tag{5}\]
(in particular \(\mathfrak{i}\) divides \(\mathfrak{p}-1\)).
Thus, \(\mathbf{u}\) and \(\mathbf{v}\) are units of two very different scales (see 1.9 below for further comment).
We assume that
\[\mathfrak{i}=\iota^{2}\text{ and }\mathfrak{l}=\mu^{2} \tag{6}\]
for some \(\mu,\iota\in{}^{*}\mathbb{N}\).
We also need to assume
\[\mathfrak{i}^{2}+1=\mathfrak{p}\quad\text{or}\quad\mathfrak{i},\mathfrak{l} \text{ algebraically independent in }\mathrm{F}_{\mathfrak{p}} \tag{7}\]
(the first is the preferable and more elegant assumption but it is not known whether it is consistent with \(\mathfrak{p}\) being infinite (the Landau problem)).
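As a numeric aside (ours), the first alternative in (7) asks for primes of the form \(\mathfrak{i}^{2}+1\), which is exactly Landau's problem. The sketch below lists the first few such primes, with feasible numbers standing in for the huge ones and ignoring the additional square and divisibility conditions of (5)-(6), and checks that \(\mathfrak{i}\) then indeed acts as \(\sqrt{-1}\) modulo \(\mathfrak{p}\).

```python
def is_prime(n):
    """Trial-division primality test, adequate for this toy search."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Primes of the form i^2 + 1; whether infinitely many exist is Landau's problem.
for i in range(1, 41):
    p = i * i + 1
    if is_prime(p):
        assert pow(i, 2, p) == p - 1   # i^2 = -1 (mod p): i acts as sqrt(-1) in F_p
        print(f"i = {i:2d}  ->  p = i^2 + 1 = {p}")
```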
It is easy to check that our set of assumptions (5)-(7), along with the assumption of high divisibility of \(\mathfrak{l}\), is consistent. We are going to slightly extend these assumptions later; in particular, (24) assumes that \(\mathfrak{l}^{n}<\mathfrak{p}\) for all \(n\in\mathbb{N}\).
We also define additive subgroups of \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\) called suggestively \({}^{\prime}\mathbb{R}^{\prime}\) and \({}^{\prime}i\mathbb{R}^{\prime}\).
**1.7 Main Theorem**.: _There exists a surjective ring homomorphism (place)_
\[\mathsf{Im}_{\mathbb{F}}:\mathrm{F}_{\mathfrak{p}}\twoheadrightarrow\bar{ \mathbb{C}}\]
_and a surjective additive semigroup homomorphism_
\[\mathsf{Im}_{\mathbb{U}}:\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\twoheadrightarrow \bar{\mathbb{C}}\]
_such that:_
_the diagram (3) is commutative,_
\[\mathsf{Im}_{\mathbb{U}}:{}^{\prime}\mathbb{R}^{\prime}\twoheadrightarrow \mathbb{R}; \tag{8}\]
\[\mathsf{Im}_{\mathbb{U}}:{}^{\prime}i\mathbb{R}^{\prime}\twoheadrightarrow i \mathbb{R}; \tag{9}\]
_where \(i=\sqrt{-1}\)._
_For any \(l\in{}^{*}\mathbb{Z}\) such that \(0<l\leq\mathfrak{l}\),_
\[\mathsf{Im}_{\mathbb{F}}:l\cdot\mathfrak{l}^{-1}\mapsto\mathrm{st}\big{(} \frac{l}{\mathfrak{l}}\big{)}\in\mathbb{R} \tag{10}\]
\[\mathsf{Im}_{\mathbb{F}}:\exp_{\mathfrak{p}}(l\mathbf{u})\mapsto e^{-\pi z}, \quad\text{for }z:=\mathrm{st}\big{(}\frac{l}{\mathfrak{l}}\big{)}\in \mathbb{R} \tag{11}\]
\[{\sf Im}_{\rm F}:\exp_{\sf p}(l{\bf v})\mapsto e^{i\pi z},\quad\mbox{for }z:={\rm st}\big{(}\frac{l}{\mathfrak{l}}\big{)}\in\mathbb{R} \tag{12}\]
_where \({\rm st}\) is the standard part map._
_For \(a\in\mathbb{Q}_{+},\) a complete square and \(\mu\in{}^{*}\mathbb{Z}\) such that \(\mu^{2}=l:\)_
\[{\sf Im}_{\rm F}:\ \frac{1}{\mu}\sum_{-\mathfrak{i}\mathfrak{l}/a\leq n<\mathfrak{i}\mathfrak{l}/a}\exp_{\sf p}\big{(}a\frac{n^{2}}{2\mathfrak{l}}{\bf u}\big{)}\ \mapsto\ \int_{\mathbb{R}}e^{-a\pi x^{2}}dx \tag{13}\]
_and \(\int_{\mathbb{R}}e^{-ax^{2}}dx=\frac{1}{\sqrt{a}},\) according to the standard definition of the Gaussian integral;_
\[{\sf Im}_{\rm F}:\ \frac{1}{\mu}\sum_{-\mathfrak{l}/a\leq n<\mathfrak{l}/a}\exp_{\sf p}\big{(}a\frac{n^{2}}{2\mathfrak{l}}{\bf v}\big{)}\ \mapsto\ \int_{\mathbb{R}}e^{ia\pi x^{2}}dx \tag{14}\]
_where \(\int_{\mathbb{R}}e^{ia\pi x^{2}}dx=\frac{e^{\frac{\pi i}{4}}}{\sqrt{a}}\) according to the Quantum Mechanics calculus._
Note that (8) and (9) give us subgroups of \({\rm F}_{\sf p}^{\times}\)
\[{}^{\prime}\mathbb{R}^{\prime}{}_{+}:=\exp_{\sf p}({}^{\prime}\mathbb{R}^{ \prime})\mbox{ and }^{\prime}\mathbb{S}^{\prime}:=\exp_{\sf p}({}^{\prime}i \mathbb{R}^{\prime})\]
which furnish a good analogue of a polar coordinate system in \({\rm F}_{\sf p}.\)
**1.8 Remark.** Since \({\sf Im}\) is a place, (10) determines the values of \({\sf Im}(Q(l))\) for any rational function \(Q(x)\) over \(\mathbb{Z}.\)
**1.9 Discussion**. The theorem clarifies the relationship between two different scales in \((\mathbb{U}_{\sf p,l},{\rm F}_{\sf p})\) presented by the units \({\bf u}\) and \({\bf v}.\) These should be thought of as units for the physics of 'low energy' and 'high energy', respectively, the latter being physics at the quantum level and the former physics at the level of Brownian motion.
The statements (11) and (12) demonstrate that the action of the huge pseudo-finite integer \({\sf i}\), which changes the scale of units in \(\mathbb{U}_{\sf p,l}\) (recall that \({\bf v}={\sf i}\cdot{\bf u},\) and \({\sf i}\approx\sqrt{\sf p}\)), is seen in \(\mathbb{C}\) as the **Wick rotation** \(e^{-\pi z}\mapsto e^{i\pi z}.\)
The Gaussian integrals in (13) and (14) are mathematical manifestations of the same phenomenon, expressed by the summation formulae over \({\rm F}_{\sf p}\) on the left-hand side of \(\mapsto.\) The difference in the integral expressions on the right comes from the difference in the scale of units that measure the respective processes.
Thus the puzzling regularity of the transition from the Brownian motion integral (13) to the quantum mechanics integral (14), known and exploited by physicists as Wick rotation, has an explanation as a mathematical consequence of the change of scales.
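On the complex side of the correspondence, the two limiting integrals in (13) and (14) can be checked by direct Riemann summation, which also displays the Wick rotation numerically. The sketch below is pure complex arithmetic of our own (no finite field involved, and not the normalization of the Main Theorem); a small damping \(\epsilon\) is added to make the oscillatory Fresnel sum converge, at the cost of a bias of order \(\epsilon\).

```python
import numpy as np

a, h = 1.0, 5e-4
x = np.arange(-60.0, 60.0, h)

# Gaussian side (real scale, unit u): h * sum exp(-a*pi*x^2) -> 1/sqrt(a).
gauss = h * np.sum(np.exp(-a * np.pi * x**2))

# Fresnel side (imaginary scale, unit v), damped slightly for convergence:
eps = 1e-2
fresnel = h * np.sum(np.exp((-eps + 1j * a * np.pi) * x**2))

print(f"Gaussian sum = {gauss:.6f}, exact 1/sqrt(a) = {1/np.sqrt(a):.6f}")
print(f"Fresnel sum  = {fresnel:.4f}, exact e^(i*pi/4)/sqrt(a) = {np.exp(1j*np.pi/4)/np.sqrt(a):.4f}")
```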
**1.10 Discussion.** The "limit" \(\mathsf{Im}_{\mathsf{F}}\) maps the discrete field \(\mathrm{F}_{\mathfrak{p}}\) into (the compactification of) the field of complex numbers and thus endows the image with a metric. The Theorem determines \(\mathsf{Im}_{\mathsf{F}}\) on the subring generated by specific points (see (10), (11) and (12)) but leaves the rest free. This means that the observer, who sees \(\mathrm{F}_{\mathfrak{p}}\) with all the algebraic geometry over it through \(\mathsf{Im}_{\mathsf{F}}\), has some freedom in choosing the metric on algebraic varieties. In particular, if \(Z\subseteq\mathrm{F}_{\mathfrak{p}}^{m}\) is an algebraic subvariety, say a torus, then \(\mathsf{Im}_{\mathsf{F}}(Z)\) is a subset of the compactification of \(\mathbb{C}^{m}\), a compact complex variety. In general, such a compactification is far from being unique.
Thus the freedom in the choice of \(\mathsf{Im}_{\mathsf{F}}\) implies a respective degree of freedom in the choice of complex/metric version of physics.
## 2 Statistical physics and phase transition
**2.1 Physical units and dimensions**
In the formalism of the two sorts \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\) and \(\mathrm{F}_{\mathfrak{p}}\) introduced in 1.6, the sort \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\) is assumed to be the sort that holds all the physical units (dimensions). It is convenient to define, for each principal unit of measurement, a special sort \(\mathbb{D}_{i}\), \(i=1,\ldots,k\), which is naturally interpreted in terms of \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\).
Each sort \(\mathbb{D}_{i}\) is a subgroup of \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\) and so has a cyclic additive group structure with the unit (generator) \(\mathbf{d}_{i}\in\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\) (\(\mathbb{D}_{i}\)-unit). \(\mathbb{D}_{i}\) is isomorphic to \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}/\ker_{i}\), where
\[\ker_{i}=\frac{(\mathfrak{p}-1)\mathfrak{l}}{\mathbf{d}_{i}}\cdot\mathbb{U}_{ \mathfrak{p},\mathfrak{l}},\ \ \mathbf{d}_{i}|\,(\mathfrak{p}-1)\mathfrak{l}.\]
Thus the size of \(\mathbb{D}_{i}\),
\[|\mathbb{D}_{i}|=\frac{(\mathfrak{p}-1)\mathfrak{l}}{\mathbf{d}_{i}}.\]
Between some of the unit sorts there are bilinear maps
\[\mathbb{D}_{1}\times\mathbb{D}_{2}\twoheadrightarrow\mathbb{D}_{3};\quad(x_{ 1}\mathbf{d}_{1},x_{2}\mathbf{d}_{2})\mapsto x_{1}x_{2}\mathbf{d}_{3}\]
where we assume \(\ker_{3}=\ker_{1}\cap\ker_{2}\) and
\[x_{1}=u_{1}+\ker_{1},\ x_{2}=u_{2}+\ker_{2},\quad x_{1}\cdot x_{2}:=u_{1}\cdot u_{2}+\ker_{3}\]
in the ring \({}^{*}\mathbb{Z}/(\mathfrak{p}-1)\mathfrak{l}\).
Thus counting in units of \(\mathbb{D}_{3}\) gives
\[x_{3}=x_{1}x_{2}\]
and this can be equivalently written as
\[x_{1}=x_{2}^{-1}x_{3}.\]
The units \(\mathbf{u}\) and \(\mathbf{v}\) introduced in 1.6 are examples of \(\mathbb{D}\)-units.
**2.2**: The field sort \(\mathrm{F}_{\mathfrak{p}}\) is assumed to be dimensionless and the exponentiation map \(\exp_{\mathfrak{p}}\) restricted to a sort \(\mathbb{D}_{i}\) is a homomorphism
\[\exp_{\mathfrak{p}}:\mathbb{D}_{i}\twoheadrightarrow\mathrm{F}_{\mathfrak{p}} ^{\times};\ \ n\cdot\mathbf{d}_{i}\mapsto\exp_{\mathfrak{p}}(n\mathbf{d}_{i}).\]
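For a standard finite prime this homomorphism is just discrete exponentiation with a fixed generator, and its defining property can be checked directly. A minimal Python sketch (the prime \(p=97\) and the search for a generator are illustrative editorial choices, not taken from the text):

```python
# A toy check that exp_p: n -> g^n mod p is a homomorphism from the
# additive cyclic group Z/(p-1) onto the multiplicative group F_p^x.
# The prime p = 97 is an illustrative choice only.
p = 97

def is_generator(g, p):
    # g generates F_p^x iff its powers hit all p-1 nonzero residues
    return len({pow(g, k, p) for k in range(p - 1)}) == p - 1

g = next(x for x in range(2, p) if is_generator(x, p))

def exp_p(n):
    return pow(g, n % (p - 1), p)

# Homomorphism property: exp_p(a + b) = exp_p(a) * exp_p(b) in F_p
for a in range(0, p - 1, 7):
    for b in range(0, p - 1, 11):
        assert exp_p(a + b) == (exp_p(a) * exp_p(b)) % p
print("exp_p is a homomorphism; generator g =", g)
```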
**2.3**: In Statistical Mechanics the dimensions in \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\) are usually Energy (E), Temperature (T) and (in ferromagnets) magnetic moment (H).
According to this theory, the probability that the system at temperature \(T\) is in a state \(\sigma\) is equal to \(\exp(-\frac{E_{\sigma}}{kT})\), and the probability \(p_{n}\) that the system consists of exactly \(n\) atoms, out of possible \(N\), is
\[p_{n}=\frac{P_{n}}{Z_{N}},\ \ P_{n}=\sum_{\sigma\in\Sigma(n)}\exp(-\frac{E_{ \sigma}}{kT}),\]
where \(\Sigma(n)\) are all the states with exactly \(n\) atoms and
\[Z_{N}=\sum_{\sigma}\exp(-\frac{E_{\sigma}}{kT}),\]
where \(\sigma\) runs over all possible states with at most \(N\) atoms.
Setting (with some simplifications) \(y:=\exp(-\frac{H}{kT})\), the equilibrium state of the system of volume \(N\) (that is having up to \(N\) particles) is analysed via the polynomial
\[\mathcal{P}_{N}(y):=\sum_{n=0}^{N}p_{n}y^{n},\]
which is called the _grand partition function_ of the system.
Assuming frequentist probabilities we may regard \(P_{n}\) and \(Z_{N}\) as integers.
Since there are very few restrictions on states in the models, the number \(Z_{N}\) is close to the number of all possible subsets, that is
\[\sum_{n=0}^{N}P_{n}=Z_{N}\approx 2^{N}. \tag{15}\]
(The estimate (15) appears also in [8] on page 4, for \(N\) the Avogadro number, and the number \(Z_{N}\) is being characterised as a _ridiculously large number_.)
Also note that when \(n\) is near \(N/2\), \(P_{n}\) is near its maximum
\[P_{n}\approx\left(\begin{array}{c}N\\ \frac{N}{2}\end{array}\right)\approx\frac{2^{N}}{\sqrt{N}} \tag{16}\]
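The quality of the estimate (16) is easy to test for standard \(N\): the ratio \(\binom{N}{N/2}\sqrt{N}/2^{N}\) settles near the constant \(\sqrt{2/\pi}\approx 0.8\). A quick numerical sketch:

```python
# Numerical check of P_n ~ C(N, N/2) ~ 2^N / sqrt(N) up to a bounded constant:
# the printed ratio converges to sqrt(2/pi) ~ 0.7979 as N grows.
from math import comb, sqrt

for N in (10, 100, 1000):
    ratio = comb(N, N // 2) * sqrt(N) / 2 ** N
    print(N, ratio)
```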
**2.4** We note that since \(N\) is supposed to be a very large number, by (16) \(P_{n}\) can reach huge values and so modulo \(\mathfrak{p}\) will be outside the real part \({}^{\prime}\mathbb{R}^{\prime}\) of \(\mathrm{F}_{\mathfrak{p}}.\) The same is true for \(Z_{N}.\) Hence in general the \(p_{n}\) should be treated as **probability amplitudes** rather than classical probabilities, and \(\mathcal{P}_{N}\) should be treated as a polynomial with complex coefficients rather than a polynomial over \(\mathbb{R}.\)
**2.5** The seminal work of C.N.Yang and T.D.Lee, [6] - [7] (1952), laid the ground for the modern theory of critical points in the evolution of large finite systems such as the ideal gas.
The main theorem of [6] states that the phase transition in the system happens at the point \(y_{\mathrm{crit}}\) (a critical point) such that
\[\mathcal{P}_{N}(y_{\mathrm{crit}})=0\]
which obviously **cannot be a real point**. The paper analyses complex roots of the polynomial, which proves very important for the behaviour of the system near the critical point.
The modern theory resolves this paradox by assuming \(N\rightarrow\infty\), in which case \(\frac{1}{N}\ln\mathcal{P}_{N}(y)\) converges, away from the critical point, to an analytic function (the passage to the thermodynamical limit) and \(y_{\mathrm{crit}}\) converges to a real point. This solution of the paradox is not considered to be fully satisfactory, as the actual systems are always finite, although very large. In the passage to the thermodynamical limit some information is lost.
**2.6** The hypothesis of the _universe over finite field_ suggests a solution to the paradox. Under the hypothesis \(y_{\mathrm{crit}}\in\mathrm{F}_{\mathfrak{p}}\), that is \(y_{\mathrm{crit}}\) is an integer such that
\[\mathcal{P}_{N}(y_{\mathrm{crit}})=0\mod\mathfrak{p}. \tag{17}\]
One can explore the assumption further using the reduction (24) from [7] together with Theorem 3 therein, which states that after the reduction all the zeroes of \({\cal P}_{N}\) are on the unit circle and the limit of the zeroes is \(1.\) This leads us to the conclusion that
\[{\cal P}_{N}(1)=0\mod{\mathfrak{p}}\]
Equivalently,
\[\sum_{n=1}^{N}P_{n}=0\mod{\mathfrak{p}}.\]
Combining this with (15) and assuming that the number of atoms \(N\) is \(\approx 10^{23},\) the Avogadro number, we can give an upper bound on \({\mathfrak{p}}\):
\[{\mathfrak{p}}<2^{N}\approx 2^{10^{23}} \tag{18}\]
On the other hand the same argument proves that _there is a lower bound on the volume \(N\) of gas which allows a phase transition_, that is, has a critical point satisfying (17):
\[N>\log{\mathfrak{p}}.\]
**2.7 Discussion.** The assumption of physics over a finite field explains the necessity of extending the definition of grand partition function as the function of complex variable as well as explains the phase transition in a large finite system.
In a forthcoming paper we develop an _analytic theory_ on \({\mathbb{U}}_{{\mathfrak{p}},{\mathfrak{l}}}\) which, via \(\mathsf{Im}_{\mathbb{U}}\), corresponds to the analytic theory on \({\mathbb{C}}.\) In particular, expressions like
\[\frac{1}{N}\ln{\cal P}_{N}(y)\mbox{ and }\mathsf{Im}_{\mathbb{U}}\left\{\frac{1}{N}\ln{\cal P}_{N}(y)\right\}\]
become lawful objects of the theory and one can carry on the thermodynamic theory as usual, along with the discrete theory on \({\rm F}_{{\mathfrak{p}}}.\)
The rest of the paper is purely mathematical. It provides the construction of the limit map \(\mathsf{Im}\) and proofs.
## 3 The pseudo-finite exponentiation
Fix notation for a non-standard model of \({\mathbb{C}}\)
\[{}^{*}{\mathbb{C}}={\mathbb{C}}^{P}/{\cal D}\]
where \(P\) and the ultrafilter \(\mathcal{D}\) on \(P\) are defined in 1.5.
Note that by construction \({}^{*}\mathbb{Z}\subset{}^{*}\mathbb{C}\) and this allows us to identify elements \(l\in\mathrm{F}_{\mathfrak{p}}\) which are represented by \(l\in{}^{*}\mathbb{Z}\), \(0\leq l<\mathfrak{p}\), with \(l\in{}^{*}\mathbb{C}\) in the theorem below.
**3.1 Theorem**.: _There is a place \(\mathcal{I}:\mathrm{F}_{\mathfrak{p}}\to{}^{*}\bar{\mathbb{C}}\) such that \(\mathcal{I}\) maps:_
_for all \(l\in{}^{*}\mathbb{Z}\) satisfying \(-\mathfrak{l}\leq l\leq\mathfrak{l}\)_
\[l\mapsto l \tag{19}\]
\[\iota\mapsto e^{-\frac{\pi i}{4}}\text{ and }\mathfrak{i}\mapsto e^{-\frac{\pi i }{2}} \tag{20}\]
_For all \(a\in\mathbb{Q},\) for all \(l\in{}^{*}\mathbb{Z},\) \(-\mathfrak{l}<l\leq\mathfrak{l},\)_
\[\epsilon^{\frac{al(\mathfrak{p}-1)}{2\mathfrak{l}}}\mapsto e^{-\frac{al\pi i}{\mathfrak{l}}} \tag{21}\]
\[\epsilon^{\frac{al(\mathfrak{p}-1)}{2\mathfrak{il}}}\mapsto e^{-\frac{al\pi}{\mathfrak{l}}}. \tag{22}\]
The proof is by Lemmata below.
We consider linear equations of the form
\[\sum_{i=1}^{k}c_{i}\cdot X_{i}=1\]
where the variables \(X_{i}\) are assumed to be in a specific subset \(G\) of the field. A solution \(x_{1},\ldots,x_{k}\) is said to be **non-degenerate** if for any proper subset \(K\subset\{1,\ldots,k\}\)
\[\sum_{i\in K}c_{i}\cdot x_{i}\neq 1.\]
**3.2 Lemma**.: _There is a function \(f:\mathbb{N}\to\mathbb{N},\) an \(\eta\in{}^{*}\mathbb{N}\) and a highly divisible \(\nu\in{}^{*}\mathbb{N}\) such that_
\(2\nu\eta|(\mathfrak{p}-1),\)__
_for all \(n\in\mathbb{N}\)\(\nu^{n}|\eta,\)_
_and_
_for all \(k\in\mathbb{N}\), for all rational functions \(c(X,Y)=\langle c_{1}(X,Y),\ldots,c_{k}(X,Y)\rangle,\) for any \(0\leq l\leq\nu\)_
_any non-degenerate solution_ \(x_{1},\ldots,x_{k}\in\mathrm{F}_{\mathfrak{p}}\)__
_of_ \(\sum_{i=1}^{k}c_{i}(l,\eta)\cdot x_{i}=1\) & \(\bigwedge_{i}x_{i}^{\nu}=1\)__
_satisfies_ \(\bigwedge_{i}x_{i}^{f(k)}=1\). \((23)\)
**Proof.** Recall that \(\mathrm{F}_{\mathfrak{p}}\) is a field of characteristic \(0\). Thus the well-known Theorem of Mann about linear equations in roots of unity with rational coefficients is applicable. A consequence of Mann's Theorem is that there is a function \(f\) satisfying (23) for any \(\nu,\eta\in\mathbb{N}\) (see [14] for this and other consequences).
We treat the expression \(x^{\nu}\) as an arithmetic function of \(x,\nu\) defined in \(({}^{*}\mathbb{Z};+,\cdot,\mathfrak{p})\) along with the interpretation of the field \(\mathrm{F}_{\mathfrak{p}}\).
Let \(\mathcal{M}_{c}\subset{}^{*}\mathbb{N}^{2}\) be the set of \((\nu,\eta)\in{}^{*}\mathbb{N}^{2}\) such that (23) holds for given \(k\) and \(c(X,Y).\) Clearly \(\mathcal{M}_{c}\) is definable in \(({}^{*}\mathbb{Z};+,\cdot,\mathfrak{p}).\) By the above consequence of the Mann Theorem \(\mathbb{N}^{2}\subseteq\mathcal{M}_{c}\) and so
\[\mathbb{N}^{2}\subseteq\bigcap_{c}\mathcal{M}_{c}\]
where \(c\) runs in all \(k\)-tuples of rational functions \(c(X,Y)\).
Since each \(n\in\mathbb{N}\) divides \(\mathfrak{p}-1\) it follows that the countable type
\[\bigwedge_{c}(\nu,\eta)\in\mathcal{M}_{c}\ \&\ 2\nu\eta|(\mathfrak{p}-1)\ \&\ \bigwedge_{n\in\mathbb{N}}n|\nu\ \&\ \nu^{n}|\eta\]
is consistent, thus has a realisation in the \(\aleph_{0}\)-saturated structure \({}^{*}\mathbb{Z}.\)\(\square\)
Below we use notation
\[{}^{*}\mathbb{Z}[\mathfrak{l}]:=\{l\in{}^{*}\mathbb{Z}:-\mathfrak{l}\leq l \leq\mathfrak{l}\}.\]
Assuming \(\mathfrak{l}<<\mathfrak{p}\) we may equally treat \({}^{*}\mathbb{Z}[\mathfrak{l}]\) as a subset of \(\mathrm{F}_{\mathfrak{p}}\).
**3.3 Corollary**.: _We may assume that for all \(k\) and \(c(X,Y)\) (23) is satisfied when \(\nu:=\mathfrak{l}\) and, if \(\mathfrak{i}^{2}+1\neq\mathfrak{p},\)\(\eta:=\mathfrak{i}\). In particular,_
\[\begin{array}{l}\mathfrak{l}^{n}|\mathfrak{i},\ \text{for all $n\in\mathbb{N}$ and}\\ \mathfrak{i}^{2}+1=\mathfrak{p}\ \ \text{or $\mathfrak{i}$ is transcendental in $\mathrm{F}_{\mathfrak{p}}$ over ${}^{*}\mathbb{Z}[\mathfrak{l}]$}\end{array} \tag{24}\]
**3.4**: Set
\[{}^{\prime}i\mathbb{R}^{\prime}:=\{\frac{\kappa}{\mathfrak{l}}\mathbf{v}:\ -m\mathfrak{l}/2\leq\kappa\leq m\mathfrak{l}/2,\ m\in\mathbb{N}\}\subset\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\]
and let
\[{}^{\prime}\mathbb{S}^{\prime}:=\exp_{\mathfrak{p}}({}^{\prime}i\mathbb{R}^{ \prime})\subset\mathrm{F}_{\mathfrak{p}}.\]
Since \(\exp_{\mathfrak{p}}(\mathbf{v})=1,\)\({}^{\prime}\mathbb{S}^{\prime}\) is the group of all the elements \(\gamma\) of \(\mathrm{F}_{\mathfrak{p}}^{\times}\) satisfying \(\gamma^{\mathfrak{l}}=1\).
**3.5 Lemma.** _Let \(\gamma_{1},\ldots,\gamma_{n}\in{}^{\prime}\mathbb{S}^{\prime}\)._ [The remainder of the statement of 3.5, together with 3.6 and 3.7, is unrecoverable in this copy; as referenced in 3.9 and 3.11 below, 3.6 provides a place \(\mathcal{I}:\mathbb{Q}({}^{*}\mathbb{Z}[\mathfrak{l}],\mathfrak{i},{}^{\prime}\mathbb{S}^{\prime})\to{}^{*}\bar{\mathbb{C}}\) satisfying (19)–(21).]
**3.8 Lemma.**_Let \(k,l\in\mathbb{N}\) and \(\alpha=\alpha(\epsilon)=\epsilon^{\frac{\mathfrak{p}-1}{\mathfrak{il}}}.\) Let \(\bar{M}=\langle M_{1},\ldots,M_{k}\rangle\) in \({}^{*}\mathbb{Z}^{k},\) \(-l\mathfrak{l}<M_{i}<l\mathfrak{l}.\)_
_Let \(\bar{g}(\bar{Z})=\langle g_{1}(Z_{1},\ldots,Z_{k}),\ldots,g_{k}(Z_{1},\ldots,Z_{k})\rangle\) be a \(k\)-tuple of rational functions over \(\mathbb{Q}({}^{*}\mathbb{Z}[\mathfrak{l}],\mathfrak{i})\) and \(\bar{c}=\langle c_{1},\ldots,c_{k}\rangle,\) \(c_{i}\in\mathbb{Q}({}^{*}\mathbb{Z}[\mathfrak{l}],\mathfrak{i},{}^{\prime}\mathbb{S}^{\prime})\) of the form_
\[c_{i}=g_{i}(\bar{s}),\ \bar{s}:=\langle s_{1},\ldots,s_{k}\rangle,\text{ for }s_{1}, \ldots,s_{k}\in{}^{\prime}\!\mathbb{S}^{\prime}.\]
_Consider a non-standard Laurent polynomial in \(\mathrm{F}_{\mathfrak{p}}\) of the form_
\[P_{\bar{c},\bar{M}}(X):=\sum_{i=1}^{k}c_{i}X^{M_{i}}-1. \tag{25}\]
_Then there is a generator \(\epsilon\) of \(\mathrm{F}_{\mathfrak{p}}\) such that for all \(\bar{M}\) and \(\bar{s}\) as above_
\[P_{\bar{c},\bar{M}}(\alpha)\neq 0.\]
**Proof.** For a given \(\bar{M}\) and \(\bar{c},\) \(P_{\bar{c},\bar{M}}(X)\) has at most \(l\mathfrak{l}\) zeroes. There are at most \((l\mathfrak{l})^{k}\) possible tuples \(\bar{c}=\bar{g}(\bar{s})\) and \((l\mathfrak{l})^{k}\) tuples \(\bar{M}\), so at most \((l\mathfrak{l})^{2k}\) polynomials (25) altogether, and hence at most \((l\mathfrak{l})^{2k+1}\) zeroes of the polynomials.
On the other hand \(\alpha(\epsilon)\) takes any value of a primitive root of order \(\mathfrak{il}\) as \(\epsilon\) runs through generators of \(\mathrm{F}_{\mathfrak{p}}^{\times}.\) Thus there are \(\varphi(\mathfrak{il})\) (the Euler function) such \(\alpha\), and the well-known lower estimate gives us \(\varphi(\mathfrak{il})>\sqrt{\mathfrak{il}},\) which is bigger than \((l\mathfrak{l})^{2k+1}\) by (24). Thus there is an \(\epsilon\) such that \(\alpha(\epsilon)\) is as required. \(\square\)
**3.9 Corollary.**_There is \(\epsilon\) such that \(\alpha(\epsilon)\) is not a zero of any \(P_{\bar{c},\bar{M}}(X)\) for all \(l,\)\(k,\)\(\bar{M}\) and \(\bar{c}\in\mathbb{Q}({}^{*}\mathbb{Z}[\mathfrak{l}],\mathfrak{i},{}^{\prime}\! \mathbb{S}^{\prime})^{k}\) as in 3.8._
**Proof.** Note that the conclusion of 3.8 can be restated as a formal statement \(\exists\epsilon\,\Phi_{l,\bar{g}}(\alpha(\epsilon),\mathfrak{l},\mathfrak{i}, \mathfrak{p})\) in the language of arithmetic. The statement of (3.8) readily generalises to the statement that \(\alpha\) avoids zeroes of a finite number of polynomials of the form (25), by taking the product of the polynomials. Thus the type
\[\{\Phi_{l,\bar{g}}(\alpha(\epsilon),\mathfrak{l},\mathfrak{i},\mathfrak{p}):\,l,\bar{g}\text{ as in 3.8}\}\]
in variable \(\epsilon\) is consistent. Since \({}^{*}\mathbb{Z}\) is \(\aleph_{0}\)-saturated, there is an \(\epsilon\) such that \(\alpha(\epsilon)\in\mathrm{F}_{\mathfrak{p}}\) is not a zero of any polynomial (25). \(\square\)
**3.10 Lemma.**_There is a generator \(\epsilon\) such that any multiplicatively independent \(\gamma_{1},\ldots,\gamma_{m}\in{}^{\prime}\!\mathbb{R}_{+}^{\prime}\) are algebraically independent over \(\mathbb{Q}({}^{*}\mathbb{Z}[\mathfrak{l}],\mathfrak{i},{}^{\prime}\!\mathbb{S} ^{\prime}).\)_
**Proof.** Let \(\epsilon\) be as stated in 3.9. Suppose \(\gamma_{1},\ldots,\gamma_{m}\) are multiplicatively independent and satisfy a polynomial equation \(R(\gamma_{1},\ldots,\gamma_{m})=0.\) The equation can be rewritten as
\[\sum_{j}c_{j}\gamma^{\bar{m}_{j}}=1\]
for some monomials \(\gamma^{\bar{m}_{j}}=\prod_{i}\gamma_{i}^{m_{ji}}\) and \(c_{j}\in\mathbb{Q}({}^{*}\mathbb{Z}[\mathfrak{l}],\mathfrak{i},{}^{\prime}\mathbb{S}^{\prime}).\)
Clearly, \(\gamma^{\bar{m}_{j}}\) belong to the group \({}^{\prime}\mathbb{R}_{+}^{\prime}\) and so \(\gamma^{\bar{m}_{j}}=\alpha(\epsilon)^{M_{j}},\) for some \(M_{j},\)\(|M_{j}|<l\mathfrak{l},\) contradicting our choice of \(\epsilon.\)\(\square\)
**3.11 Corollary**.: _The place \(\mathcal{I}\) of 3.6 can be extended to_
\[\mathcal{I}:\mathbb{Q}({}^{*}\mathbb{Z}[\mathfrak{l}],\mathfrak{i},{}^{ \prime}\mathbb{S}^{\prime},{}^{\prime}\mathbb{R}_{+}^{\prime})\to{}^{*} \bar{\mathbb{C}}\]
_which satisfies (22)._
**Proof.** \(\mathcal{I}\) of (22) is an isomorphism of groups. Since the only relations in the language of rings between elements of \({}^{\prime}\mathbb{R}_{+}^{\prime}\) are multiplicative relations, \(\mathcal{I}\) is a place in the language of rings. \(\square\)
**3.12**: Since \({}^{*}\mathbb{C}\) is algebraically closed the \(\mathcal{I}\) of 3.11 can be extended to a place
\[\mathcal{I}:\mathrm{F}_{\mathfrak{p}}\to{}^{*}\bar{\mathbb{C}}.\]
This finishes the proof of Theorem 3.1.
\(\square\)
**3.13 Proof of the Main Theorem 1.7: (8) - (12)**.
Define
\[\mathsf{Im}_{\mathrm{F}}:=\mathrm{st}\circ\mathcal{I}.\]
In particular, taking into account that, for \(\mathrm{st}:{}^{*}\mathbb{C}\to\bar{\mathbb{C}},\)
\[\mathrm{st}(e^{x})=e^{\mathrm{st}(x)}\]
we get
\[\mathsf{Im}_{\mathrm{F}}:\quad\epsilon^{\frac{l}{2\mathfrak{l}}\mathbf{u}}\mapsto e^{-\pi\,\mathrm{st}(\frac{l}{\mathfrak{l}})};\quad\epsilon^{\frac{l}{2\mathfrak{l}}\mathbf{v}}\mapsto e^{-i\pi\,\mathrm{st}(\frac{l}{\mathfrak{l}})}.\]
It is easy to check that
\[{}^{\prime}\mathbb{R}^{\prime}\cap{}^{\prime}i\mathbb{R}^{\prime}=\{0\}.\]
For \(\frac{l}{\mathfrak{l}}\in{}^{*}\mathbb{Q}\) (non-standard rationals), such that \(\frac{l}{\mathfrak{l}}\mathbf{u}\in{}^{\prime}\mathbb{R}^{\prime}\), define
\[\mathsf{Im}_{\mathbb{U}}(\frac{l}{\mathfrak{l}}\mathbf{u}):=-2\pi\mathrm{st}(\frac{l}{\mathfrak{l}}). \tag{26}\]
This is an additive homomorphism
\[{}^{\prime}\mathbb{R}^{\prime}\twoheadrightarrow\mathbb{R}.\]
Then also \(\frac{l}{\mathfrak{l}}\mathbf{v}\in{}^{\prime}i\mathbb{R}^{\prime}\), and define
\[\mathsf{Im}_{\mathbb{U}}(\frac{l}{\mathfrak{l}}\mathbf{v}):=-i2\pi\,\mathrm{st}(\frac{l}{\mathfrak{l}}), \tag{27}\]
an additive homomorphism
\[{}^{\prime}i\mathbb{R}^{\prime}\twoheadrightarrow i\mathbb{R}.\]
This defines \(\mathsf{Im}_{\mathbb{U}}\) on \({}^{\prime}\mathbb{R}^{\prime}+{}^{\prime}i\mathbb{R}^{\prime}\) respecting the commutation with \(\exp.\) Moreover,
\[\mathsf{Im}_{\mathbb{U}}:\ ^{\prime}\mathbb{R}^{\prime}+{}^{\prime}i\mathbb{R}^{ \prime}\twoheadrightarrow\mathbb{C}.\]
Note that the definition of the homomorphism \(\mathsf{Im}_{\mathbb{U}}\) on \({}^{\prime}\mathbb{R}^{\prime}\) and \({}^{\prime}i\mathbb{R}^{\prime}\) above extends uniquely on their divisible hulls
\[\mathsf{Im}_{\mathbb{U}}:\frac{1}{n}{}^{\prime}\mathbb{R}^{\prime}\to \mathbb{R}\text{ and }\mathsf{Im}_{\mathbb{U}}:\frac{1}{n}{}^{\prime}i\mathbb{R}^{\prime}\to i \mathbb{R},\quad\text{for }n\in\mathbb{N}\]
The sum of divisible hulls \(\mathrm{H}({}^{\prime}\mathbb{R}^{\prime})+\mathrm{H}({}^{\prime}i\mathbb{R}^{\prime})\) is a divisible subgroup of \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}\) and so can be complemented by a subgroup \(\mathbb{U}_{\mathfrak{p},\mathfrak{l}}(\infty)\),
\[\mathbb{U}_{\mathfrak{p},\mathfrak{l}}=\mathrm{H}({}^{\prime}\mathbb{R}^{ \prime})\dot{+}\mathrm{H}({}^{\prime}i\mathbb{R}^{\prime})\dot{+}\mathbb{U}_{ \mathfrak{p},\mathfrak{l}}(\infty).\]
Define \(\mathsf{Im}_{\mathbb{U}}(u)=\infty\) for all \(u\in\mathbb{U}_{\mathfrak{p},\mathfrak{l}}(\infty).\) Using \(\exp_{\mathfrak{p}}\) and \(\exp\) as in the commuting diagram (3) this can be extended to \(\mathsf{Im}_{\mathbb{F}}\) so that (8), (9), (10), (11) and (12) are satisfied. \(\square\)
The rest of the Main Theorem will be proved in the next section.
## 4 Integration
Recall that \(\mathfrak{l}=\mu^{2}\) and \(\mathfrak{i}=\iota^{2}\) are elements of \({}^{*}\mathbb{N}\).
**4.1 Proposition**.: _Let \(a=\frac{d^{2}}{l^{2}}\) for some \(d,l\in\mathbb{N}.\)_
\[\sum_{-\mathfrak{il}/2a\leq n\leq\mathfrak{il}/2a}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{u})=\mu\iota\frac{\omega}{\sqrt{a}} \tag{28}\]
\[\sum_{-\mathfrak{l}/2a\leq n\leq\mathfrak{l}/2a}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{v})=\mu\frac{\omega}{\sqrt{a}} \tag{29}\]
_where \(\omega\in\mathrm{F}_{\mathfrak{p}}\) is a primitive root of \(1\) of order \(8\), and \(\mu,\iota\) and \(d,l\) should be seen as elements of \(\mathrm{F}_{\mathfrak{p}}\) corresponding to the respective integers (so that \(\sqrt{a}=d/l\) is well-defined in \(\mathrm{F}_{\mathfrak{p}}\))._
**Proof.** Let \(\nu\in{}^{*}\mathbb{N}\) be even and \(2\nu^{2}|(\mathfrak{p}-1).\) Let \(\xi\in\mathrm{F}_{\mathfrak{p}}\) be a primitive root of order \(2\nu^{2},\) equivalently, for some \(\epsilon\in\mathrm{F}_{\mathfrak{p}},\) a primitive root of order \(\mathfrak{p}-1,\)
\[\xi=\epsilon^{\frac{\mathfrak{p}-1}{2\nu^{2}}}.\]
Writing \(n=m\nu+k,\)\(0\leq m,k<\nu,\) we get
\[\sum_{0\leq n<\nu^{2}}\xi^{n^{2}}=\sum_{0\leq k<\nu}\xi^{k^{2}}\cdot\sum_{0 \leq m<\nu}\xi^{m^{2}\nu^{2}+2mk\nu}. \tag{30}\]
Now we use the fact that \(\frac{m^{2}-m}{2}\in{}^{*}\mathbb{Z}\) and get in \(\mathrm{F}_{\mathfrak{p}}\)
\[\sum_{0\leq m<\nu}\xi^{m^{2}\nu^{2}+2mk\nu}=\sum_{0\leq m<\nu}\xi^{m\nu^{2}+2 mk\nu}=\sum_{0\leq m<\nu}\xi^{2m\nu(\frac{\nu}{2}+k)}=\]
\[=\begin{cases}\nu,&\text{if }\frac{\nu}{2}+k\equiv 0\mod\nu,\\ 0,&\text{otherwise}\end{cases}\]
using that \(\xi^{2m\nu(\frac{\nu}{2}+k)}\) is a power of \(\xi^{2\nu^{2}}=1\) in the first line and that
\[\sum_{0\leq m<\nu}\zeta^{m}=0,\]
for \(\zeta:=\xi^{2\nu(\frac{\nu}{2}+k)},\)\(\zeta^{\nu}=1\) but \(\zeta\neq 1,\) in the second line.
Hence in the sum (30) only \(k=\frac{\nu}{2}\) contributes, and we get
\[\sum_{0\leq n<\nu^{2}}\xi^{n^{2}}=\nu\,\xi^{\frac{\nu^{2}}{4}}=\nu\epsilon^{ \frac{\mathfrak{p}-1}{8}}=\nu\cdot\omega \tag{31}\]
for \(\omega:=\epsilon^{\frac{\mathfrak{p}-1}{8}},\) a primitive root of \(1\) of order \(8.\)
Note that
\[\sum_{0\leq n<\nu^{2}}\xi^{n^{2}}=\sum_{-\nu^{2}/2\leq n<\nu^{2}/2}\xi^{n^{2}}\]
because of periodicity
\[\xi^{(n+\nu^{2})^{2}}=\xi^{n^{2}}.\]
Finally, set
\[\nu:=\frac{\mu\iota}{\sqrt{a}}\ \text{ for (28)}\quad\text{and}\quad\nu:=\frac{\mu}{\sqrt{a}}\ \text{ for (29)}\]
and (28) and (29) follow. \(\Box\)
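For standard primes the identity (31) behind the proof can be verified by direct computation. A minimal sketch, assuming the illustrative values \(p=97\) and \(\nu=4\) (so that \(2\nu^{2}=32\) divides \(p-1=96\)):

```python
# Check of (31): for xi a primitive root of order 2*nu^2 in F_p (nu even),
# sum_{0 <= n < nu^2} xi^(n^2) = nu * xi^(nu^2/4), a primitive 8th root times nu.
# p = 97 and nu = 4 are illustrative; any p with 2*nu^2 | p - 1 works.
p, nu = 97, 4
assert (p - 1) % (2 * nu * nu) == 0

def order(x):
    k, y = 1, x
    while y != 1:
        y = y * x % p
        k += 1
    return k

eps = next(g for g in range(2, p) if order(g) == p - 1)  # generator of F_p^x
xi = pow(eps, (p - 1) // (2 * nu * nu), p)               # order exactly 2*nu^2

S = sum(pow(xi, n * n, p) for n in range(nu * nu)) % p
omega = pow(xi, nu * nu // 4, p)                         # primitive 8th root of 1
assert order(omega) == 8
assert S == nu * omega % p
print("Gauss sum:", S, "= nu * omega =", nu * omega % p)
```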
**4.2 Corollary (proof of the Main Theorem, (13) and (14))**
\[\mathsf{Im}_{\mathrm{F}}:\ \frac{1}{\mu}\sum_{-\mathfrak{il}/2a\leq n\leq\mathfrak{il}/2a}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{u})\mapsto\frac{1}{\sqrt{a}}\] \[\mathsf{Im}_{\mathrm{F}}:\ \frac{1}{\mu}\sum_{-\mathfrak{l}/2a\leq n\leq\mathfrak{l}/2a}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{v})\mapsto\frac{e^{\frac{\pi i}{4}}}{\sqrt{a}}\]
This follows from the facts that \(\mathsf{Im}_{\mathrm{F}}(\iota)=e^{-\frac{\pi i}{4}}\) and that \(\mathsf{Im}_{\mathrm{F}}(\epsilon^{\frac{\mathfrak{p}-1}{8}})=e^{\frac{\pi i}{4}},\) see (26) and (27) together with Theorem 3.1.
**4.3**: Note that \(\frac{1}{\sqrt{a}}=\int_{\mathbb{R}}e^{-ax^{2}}dx,\) the classical Gaussian integral. Analogously, in quantum mechanics \(\int_{\mathbb{R}}e^{iax^{2}}dx:=\frac{e^{\frac{\pi i}{4}}}{\sqrt{a}},\) although the integral is not classically defined since \(e^{iax^{2}}\) is oscillating on the whole of \(\mathbb{R}.\) One of the ways of justifying the assignment of the value to the integral expression is by referring to the fact that the respective Fresnel integral \(\int_{-A}^{A}e^{iax^{2}}dx\) is well-defined for any \(A>0\) and
\[\lim_{A\to\infty}\int_{-A}^{A}e^{iax^{2}}dx=\frac{e^{\frac{\pi i}{4}}}{\sqrt{a }}.\]
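Numerically the oscillatory convergence is slow but visible. The sketch below uses the normalisation \(e^{ia\pi x^{2}}\) (the extra \(\pi\) in the exponent is an editorial assumption matching the conventions of Section 4), for which the limit is exactly \(e^{\frac{\pi i}{4}}/\sqrt{a}\):

```python
# Oscillatory convergence of int_{-A}^{A} e^{i a pi x^2} dx to e^{i pi/4}/sqrt(a).
# The pi in the exponent is our normalisation, chosen so that the limit matches
# the value 1/sqrt(a) of the Gaussian companion integral.
import numpy as np

a = 2.0
target = complex(np.exp(1j * np.pi / 4) / np.sqrt(a))
for A in (1, 5, 20, 80):
    x = np.linspace(-A, A, 400_000)
    val = np.sum(np.exp(1j * a * np.pi * x * x)) * (x[1] - x[0])  # Riemann sum
    print(f"A = {A:3d}: integral = {complex(val):.4f}, target = {target:.4f}")
```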
**4.4 The domains of integration and the domains of summation.**
For \(l\in\mathbb{N}\), let
\[\mathrm{I}_{l}=\{n\in{}^{*}\mathbb{Z}:\ -l\mu\leq n\leq l\mu\}\mbox{ and } \mathrm{I}=\bigcup_{l\in\mathbb{N}}\mathrm{I}_{l}.\]
Let \(a=\frac{m}{l}\) and
\[\mathrm{I}_{a,\mathbf{u}}=\{n\in{}^{*}\mathbb{Z}:-\mathfrak{il}/2a\leq n\leq\mathfrak{il}/2a\}\mbox{ and }\mathrm{I}_{a,\mathbf{v}}=\{n\in{}^{*}\mathbb{Z}:-\mathfrak{l}/2a\leq n\leq\mathfrak{l}/2a\}\]
the domains of summation of (28) and (29). Clearly, using the assumptions on \(\mathfrak{l}\) and \(\mu\),
\[\mathrm{I}\subset\mathrm{I}_{a,\mathbf{u}}\text{ and }\mathrm{I}\subset\mathrm{I}_{a,\mathbf{v}}\]
and
\[\frac{\mathbf{u}}{\mu}\cdot\mathrm{I}\subset{}^{\prime}\mathbb{R}^{\prime} \text{ and }\frac{\mathbf{v}}{\mu}\cdot\mathrm{I}\subset{}^{\prime}i\mathbb{R}^{\prime}.\]
The application of \(\mathsf{Im}_{\mathbb{U}}\) defined in 3.13 gives us
\[\mathsf{Im}_{\mathbb{U}}:\ \frac{\mathbf{u}}{\mu}\cdot\mathrm{I}_{l} \twoheadrightarrow\mathbb{R}\cap[-l\pi,l\pi]\text{ and }\frac{\mathbf{u}}{\mu}\cdot\mathrm{I} \twoheadrightarrow\mathbb{R},\]
\[\mathsf{Im}_{\mathbb{U}}:\ \frac{\mathbf{v}}{\mu}\cdot\mathrm{I}_{l} \twoheadrightarrow i(\mathbb{R}\cap[-l\pi,l\pi])\text{ and }\frac{\mathbf{v}}{\mu}\cdot\mathrm{I} \twoheadrightarrow i\mathbb{R},\]
that is, \(\frac{\mathbf{u}}{\mu}\cdot\mathrm{I}\) can be seen as a Riemann integration partition of the set \({}^{\prime}\mathbb{R}^{\prime}\) with the infinitesimal mesh \(\frac{\mathbf{u}}{\mu}\), and respectively \(\frac{\mathbf{v}}{\mu}\cdot\mathrm{I}\) of \({}^{\prime}i\mathbb{R}^{\prime}\) with mesh \(\frac{\mathbf{v}}{\mu}\).
Note that in (28) and (29)
\[\frac{n^{2}}{2\mathfrak{l}}=\frac{1}{2}(\frac{n}{\mu})^{2};\ \ \mathsf{Im}_{\mathbb{U}}(\frac{n^{2}}{2\mathfrak{l}}\mathbf{u})=-\pi\frac{x^{2}}{2},\ \ \mathsf{Im}_{\mathbb{U}}(\frac{n^{2}}{2\mathfrak{l}}\mathbf{v})=-i\pi\frac{x^{2}}{2}\]
for \(x=\mathrm{st}\big{(}\frac{n}{\mu}\big{)}\).
**4.5 Lemma**.: _For every \(l\in\mathbb{N}\)_
\[\mathsf{Im}_{\mathrm{F}}:\frac{1}{\mu}\sum_{n\in\mathrm{I}_{l}}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{u})\mapsto\int_{-l}^{\,l}e^{-ax^{2}}dx\]
\[\mathsf{Im}_{\mathrm{F}}:\frac{1}{\mu}\sum_{n\in\mathrm{I}_{l}}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{v})\mapsto\int_{-l}^{\,l}e^{-iax^{2}}dx\]
_and the integrals are well-defined._
Proof. Note that by (21) and (22) the map \(\mathcal{I}\) translates the respective elements of the sums from \(\mathrm{F}_{\mathfrak{p}}\) to the elements of the non-standard model \({}^{*}\mathbb{C}\) of complex numbers:
\[\mathcal{I}:\ \exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{u})\mapsto e^{-a\pi(\frac{n}{\mu})^{2}}\]
\[\mathcal{I}:\ \exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{v})\mapsto e^{-ia\pi(\frac{n}{\mu})^{2}}\]
and thus
\[\mathcal{I}\{\frac{1}{\mu}\sum_{n\in\mathrm{I}_{l}}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{u})\}\text{ and }\mathcal{I}\{\frac{1}{\mu}\sum_{n\in\mathrm{I}_{l}}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{v})\}\]
become non-standard Riemann sums with infinitesimal mesh \(\frac{1}{\mu}\)
\[\sum_{-l<\frac{n}{\mu}<l}\frac{1}{\mu}e^{-a\pi(\frac{n}{\mu})^{2}}\text{ and }\sum_{-l<\frac{n}{\mu}<l}\frac{1}{\mu}e^{-ia\pi(\frac{n}{\mu})^{2}}\]
Since the summation is over a compact interval \([-l,l]\subset{}^{*}\mathbb{R}\) the application of the standard part map gives us (see e.g. the integration via non-standard analysis in [12])
\[\mathrm{st}(\sum_{-l<\frac{n}{\mu}<l}\frac{1}{\mu}e^{-a\pi(\frac{n}{\mu})^{2} })=\int_{-l}^{l}e^{-a\pi x^{2}}dx\text{ and }\mathrm{st}(\sum_{-l<\frac{n}{\mu}<l} \frac{1}{\mu}e^{-ia\pi(\frac{n}{\mu})^{2}})=\int_{-l}^{l}e^{-ia\pi x^{2}}dx\]
**4.6 Corollary**.: \[\mathsf{Im}_{\mathrm{F}}:\frac{1}{\mu}\sum_{n\in\mathrm{I}}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{u})\mapsto\int_{\mathbb{R}}e^{-ax^{2}}dx=\frac{1}{\sqrt{a}}\]
\[\mathsf{Im}_{\mathrm{F}}:\frac{1}{\mu}\sum_{n\in\mathrm{I}}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{v})\mapsto\lim_{l\to\infty}\int_{-l}^{l}e^{-iax^{2}}dx=\frac{e^{\frac{\pi i}{4}}}{\sqrt{a}}\]
Proof. In both cases the right-hand side is the limit of the integrals in 4.5, since \(\mathrm{I}=\bigcup_{l}\mathrm{I}_{l}.\) In the first case the classical Gaussian integral over the whole of \(\mathbb{R}\) converges and is equal to the limit. In the second case the right-hand side, as the limit of Fresnel integrals, is well-defined, but the Riemann integral over the whole of \(\mathbb{R}\) is not.
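The first half of 4.6 can be imitated with standard numbers: replacing the infinite \(\mu\) by a large standard integer already reproduces \(1/\sqrt{a}\) to high accuracy. A sketch (\(\mu=10^{5}\) and \(l=5\) are illustrative choices):

```python
# Discrete stand-in for 4.6: (1/mu) * sum_n exp(-a*pi*(n/mu)^2) over |n| <= l*mu
# approximates int e^{-a pi x^2} dx = 1/sqrt(a).
from math import exp, pi, sqrt

a, mu, l = 2.0, 10**5, 5          # l = 5 already makes the tail negligible
s = sum(exp(-a * pi * (n / mu) ** 2) for n in range(-l * mu, l * mu + 1)) / mu
print(s, 1 / sqrt(a))             # agree to many digits
```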
**4.7 Discussion**. The left-hand sides of 4.2 and of 4.6 differ in the domains of summation but the right-hand sides are the same. This implies that our definition of \(\mathsf{Im}_{\mathrm{F}}\) is such that
\[\mathsf{Im}_{\mathrm{F}}\left(\frac{1}{\mu}\sum_{n\in\mathrm{I}_{a,\mathbf{u}}\setminus\mathrm{I}}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{u})\right)=0\text{ and }\mathsf{Im}_{\mathrm{F}}\left(\frac{1}{\mu}\sum_{n\in\mathrm{I}_{a,\mathbf{v}}\setminus\mathrm{I}}\exp_{\mathfrak{p}}(-a\frac{n^{2}}{2\mathfrak{l}}\mathbf{v})\right)=0\]
In these "tail domains" \(\mathrm{I}_{a,\mathbf{u}}\setminus\mathrm{I}\) and \(\mathrm{I}_{a,\mathbf{v}}\setminus\mathrm{I}\) the respective values under exponentiation are very large, non-feasible numbers, and according to the interpretation of \(\mathrm{F}_{\mathfrak{p}}\) in \(\mathbb{C}\) the application of \(\exp\) to such values oscillates uncontrollably. R.Feynman intuition was that for this reason the sum should be considered negligible, see e.g. [13], 2-3.
|
2304.03683
|
Quantum interference between distant creation processes
|
The search for macroscopic quantum phenomena is a fundamental pursuit in
quantum mechanics. It allows us to test the limits of quantum physics and provides
new avenues for exploring the interplay between quantum mechanics and
relativity. In this work, we introduce a novel approach to generate macroscopic
quantum systems by demonstrating that the creation process of a quantum system
can span a macroscopic distance. Specifically, we generate photon pairs in a
coherent superposition of two origins separated by up to 70 meters. This new
approach not only provides an exciting opportunity for foundational experiments
in quantum physics, but also has practical applications for high-precision
measurements of distributed properties such as pressure and humidity of air or
gases.
|
Johannes Pseiner, Manuel Erhard, Mario Krenn
|
2023-04-07T15:09:51Z
|
http://arxiv.org/abs/2304.03683v1
|
# Quantum interference between distant creation processes
###### Abstract
The search for macroscopic quantum phenomena is a fundamental pursuit in quantum mechanics. It allows us to test the limits of quantum physics and provides new avenues for exploring the interplay between quantum mechanics and relativity. In this work, we introduce a novel approach to generate macroscopic quantum systems by demonstrating that the creation process of a quantum system can span a macroscopic distance. Specifically, we generate photon pairs in a coherent superposition of two origins separated by up to 70 meters. This new approach not only provides an exciting opportunity for foundational experiments in quantum physics, but also has practical applications for high-precision measurements of distributed properties such as pressure and humidity of air or gases.
## I Introduction
In quantum mechanics, if two alternatives cannot be distinguished - even in principle - interference can occur. Feynman said that this property "has in it the heart of quantum mechanics" Feynman (1944). In 1994, Herzog et al. Herzog et al. (1994) demonstrated that this phenomenon can be observed not only for properties of individual or entangled photons, but for the creation process of photons themselves. Expanding an experiment by Zou, Wang and Mandel Wang and Mandel (1999), they overlapped the paths of a photon pair generated by one creation process with the paths generated by another creation process. The setup, depicted in Fig. 1, has been aligned in such a way that there is no information (not even in principle) to find out in which of the two creation processes the photon pair has been generated. Therefore the photon pair is in a coherent superposition of being created in the first or the second process. By adjusting a phase between the two processes, constructive and destructive interference can be observed. For constructive interference, the total number of generated photon pairs can be enhanced by a factor of four compared to a single crystal, while for destructive interference, the number of generated photon pairs is zero. A conceptual sketch of this experiment can be seen in Fig. 1.
This peculiar quantum phenomenon has been employed in recent years for numerous applications Zurek (2003), ranging from spectroscopy Zurek (2003) to sensing Zurek (2003) and entanglement generation Zurek (2003); Zurek (2003). So far, this process has only been observed with a very small spatial distance between the two creation processes. Either the two creation processes occur at the same location (for instance, by pumping the same nonlinear crystal from two directions Herzog et al. (1994); Zurek (2003)), or they are separated at the millimeter scale (for instance, on an integrated photonic chip Zurek (2003); Zurek (2003)).
Here, we observe coherent quantum superposition between the origins of two macroscopically separated photon creation processes. Specifically, we use two nonlinear crystals, spatially separated by up to 70 meters. Each crystal can create photon pairs. Importantly, by overlapping the paths of the photons from the two crystals, we create a scenario in which the generated photon pairs cannot reveal any distinguishing information about their origin.
Pushing the distance between two creation processes has multiple motivations. First, on a technical side, one can envision highly sensitive, quantum-enhanced sensing methods for large-scale properties such as air pressure or temperature fluctuations. Second, spatially separating the creation processes is necessary for demonstrating the nonlocal nature of new multiphoton quantum interference effects without explicit entanglement, which have been theoretically proposed in Zurek (2003); Zurek (2003) and observed at a local scale in Zurek (2003); Zurek (2003). Third, at a fundamental physics level, the unusual feature of our experiment is that the entire generation process of the quantum state extends over a macroscopic distance. Thus, our experiment is a first step towards a new way of testing the limits of ever larger and more complex quantum systems and the concept of _macroscopicity_ Han et al. (2005), complementary to quantum systems with large masses Zurek (2003), photon numbers Zurek (2003), angular momentum Zurek (2003) or large-scale entanglement distribution Zurek (2003).
## II Methods
In our work, we choose two interfering quantum processes as an experimental demonstration of macroscopically large quantum systems. The idea is simple: investigate how far we can extend a seemingly coherent quantum system without losing the associated quantum effects. Our system comprises two quantum processes that create pairs of photons in a nonlinear optical
process called spontaneous parametric down-conversion (SPDC). The reason for choosing such a system is manifold. It can be used for long-distance experiments and multi-photon experiments. Hence it is extendable in terms of space, the number of photons, and the dimensionality of information contained in the system. The principle of path identity allows for a conceptually simple implementation of the proposed idea. The SPDC process can be approximated by a power series expansion [19]
\[S_{ab}=1+g_{ab}(a^{\dagger}b^{\dagger})+g_{ab}^{2}/2(a^{\dagger}b^{\dagger})^{2} +O(g_{ab}^{3}), \tag{1}\]
where the generation rate \(g\) is proportional to the second-order nonlinear coefficient and the pump power, and \(a^{\dagger},b^{\dagger}\) refer to the creation operator of photons in the paths \(a\) and \(b\). We neglect all higher orders (\(\geq O(g^{2})\)) for further discussions by operating our experiment at low pump powers, see Supplementary for details.
Inserting another nonlinear crystal into the path of the first, according to the path identity principle, with a general phase \(U_{\phi}\) between the two processes leads to
\[S_{cd}U_{\phi}S_{ab} =[1+g_{cd}(c^{\dagger}d^{\dagger})][1+e^{i\phi}g_{ab}(a^{\dagger} b^{\dagger})]\] \[=1+g_{cd}c^{\dagger}d^{\dagger}+e^{i\phi}g_{ab}a^{\dagger}b^{ \dagger}+O(g^{2}),\]
with \(\phi\) denoting a phase between the two nonlinear processes. Finally, using the principle of path identity (i.e. overlaying the paths of the single photons such that \(c\to a\) and \(d\to b\)), we arrive at
\[S_{cd}U_{\phi}S_{ab}|vac\rangle \rightarrow[1+g_{cd}c^{\dagger}d^{\dagger}+e^{i\phi}g_{ab}a^{\dagger}b^{\dagger}]|vac\rangle\] \[\rightarrow|0,0\rangle_{a,b}+g_{ab}(1+e^{i\phi})|1,1\rangle_{a,b},\]
where in the first step the quantum system \(S_{cd}U_{\phi}S_{ab}\) acts on the vacuum mode \(|vac\rangle\), with \(|1,1\rangle_{a,b}\) and \(|0,0\rangle_{a,b}\) denoting one or no photon pair in the paths \(a\) and \(b\), respectively.
Therefore, the derived equation shows that our proposed system either produces pairs of photons or not, depending only on the relative phase \(\phi\) between the two processes. Most importantly, there is no theoretical evidence that the spatial distance between the two processes reduces the quantum behavior of the suppressed or enhanced emission of photons. The suppressed and enhanced photon pair emission also shows the fundamental difference between our system and other spatially extended interference or quantum experiments. Our entire setup consists of two spatially distant quantum processes, which, depending on their relative phase, either produce pairs of photons or not. In this case, it is not the products of a process that are measured at a large distance from one another, as is the case, for example, with loophole-free Bell experiments [20; 21; 22; 23]. Instead, the processes are separated far from one another such that the quantum system extends over a large distance. Theoretically, there is no upper limit to the distance between the two creation processes.
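The derived phase dependence is simply \(R(\phi)\propto|g_{ab}+g_{cd}e^{i\phi}|^{2}\), which for balanced amplitudes gives the factor-of-four enhancement at \(\phi=0\) and the complete suppression at \(\phi=\pi\) mentioned in the introduction. A minimal numerical sketch (the amplitude values are illustrative):

```python
# Two-photon pair rate from the superposed creation amplitudes
# g_ab and g_cd with relative phase phi: R(phi) ~ |g_ab + g_cd e^{i phi}|^2.
import numpy as np

def pair_rate(phi, g_ab=1.0, g_cd=1.0):
    return np.abs(g_ab + g_cd * np.exp(1j * phi)) ** 2

phi = np.array([0.0, np.pi / 2, np.pi])
print(pair_rate(phi))   # [4, 2, 0] for equal amplitudes: 4x enhancement
                        # at phi = 0, full suppression at phi = pi
```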
From an experimental perspective, the question arises: what does the coherence of the two processes depend on physically? Here, two conditions must be met. The first
Figure 1: A schematic picture of the simplified experimental arrangement is depicted. A continuous-wave pump laser was used to create a down-converted photon pair in a nonlinear crystal in the modes \(s_{1}\) and \(i_{1}\). Within the same modes the pump beam propagated collinearly to the second crystal so an additional possibility for creating photon pairs indicated by \(s_{2}\) and \(i_{2}\) arises. Aligning both signal and idler beams such that the which-crystal information is removed, interference fringes can be observed while scanning the phase difference between the pump and down-conversion beams. These effects shall be observed with increased spatial separation of the crystals. Taken from [18].
Figure 2: A schematic picture of the experimental setup, which contains the coherent pumping of two nonlinear crystals (NL I at the sending station and NL II at the receiving station) with pump and down-converted beams propagating collinearly, is shown. The phase difference \(\Delta\phi\) was introduced via a trombone system (TS) within a Mach-Zehnder interferometer splitting up the pump and the down-converted beam and combining them again with a dichroic mirror (DM). To avoid chromatic aberration of the lenses between pump and SPDC photons, two concave mirrors (CM I and II) for both sending and receiving the signals were used. To filter out the undesired pump signal as well as to narrow down the wavelength distribution, bandpass filters (BPF) were implemented in the detection system. The recorded detection events were labeled with a timestamp provided by a time-tagging module. Simultaneous clicks within a coincidence timing window \(t_{c}\), which was chosen to be 1.5 ns, were identified as coincidences. Taken from [18].
condition, which is analogous to Eq. (6), is that the optical path-length difference between the pump beam and the two down-converted photons must be smaller than the coherence length of the pump laser,
\[|L_{p}-L_{DC}^{a}-L_{DC}^{b}|\leq L_{p}^{coh-len}, \tag{2}\]
where \(L_{p}\) denotes the path length of the pump laser, \(L_{DC}^{a,b}\) describes the path length of the down-converted photon from the process a or b, and \(L_{p}^{coh-len}\) the coherence length of the pump laser.
The second condition is given by the following optical pathlength difference of the down-conversion photons and their coherence length:
\[|L_{DC}^{a}-L_{DC}^{b}|\leq L_{DC}^{coh-len}, \tag{3}\]
with \(L_{DC}^{coh-len}\) the coherence length of the down-converted photons.
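Both conditions are plain inequalities on optical path lengths and can be tracked as simple bookkeeping during alignment. A toy sketch (all lengths in metres; the numbers are illustrative, not the values of the actual setup):

```python
# Check the two coherence conditions (2) and (3) for illustrative path lengths.
def coherent(L_p, L_dc_a, L_dc_b, L_p_coh, L_dc_coh):
    cond_pump = abs(L_p - L_dc_a - L_dc_b) <= L_p_coh   # condition (2)
    cond_dc = abs(L_dc_a - L_dc_b) <= L_dc_coh          # condition (3)
    return cond_pump and cond_dc

# e.g. cm-scale pump coherence and ~100 um down-conversion coherence (assumed)
print(coherent(L_p=70.0, L_dc_a=35.0, L_dc_b=35.0, L_p_coh=0.01, L_dc_coh=1e-4))
```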
Additionally, all degrees of freedom of the down-converted photons must be identical to achieve perfect indistinguishability. Hence, to ensure path indistinguishability, we overlay the down-converted photons from the two processes and remove residual path information by coupling them into single-mode optical fibers. To ensure identical spectral properties, we utilize narrow-band optical bandpass filters. We use a quarter-half-quarter (QHQ) waveplate combination to align the polarisations of both down-converted photon pairs. Finally, we control the brightness of both photon creation processes by altering the polarisation of the pump beam before the second nonlinear crystal.
We split the down-conversion and pump beam using dichroic mirrors and introduce path length differences with a trombone system to change the phase between the two quantum processes.
## III Results
We confirm the quantumness of the system by measuring the interference between the two distant quantum pair-creation processes. The visibility of the interference is defined as \(V=(max-min)/(max+min)\), where \(max\) describes the two-photon count rate at phase setting \(\phi=0\), and \(min\) the two-photon count rate at \(\phi=\pi\).
For a classical, incoherent system, we would expect
Figure 3: The results of the quantum-interference experiment are depicted. (a) The coincident count rates of down-conversion photons while moving the trombone system and hence changing \(\Delta\mathcal{L}\) after a propagation distance of \(70\,\mathrm{m}\) are shown. (b) The scaling behavior of the system with increasing distance between the nonlinear crystals is shown in terms of the visibility. The red dots correspond to the peak value of the respective distribution and the red error bars equal the standard deviation of the estimated distributions. (c) A summary of all distributions of visibilities over the respective propagation distances is shown.
a vanishing visibility. For a perfect quantum system, we expect to observe interference patterns in the two-photon coincidence events by varying the relative phase between the pump and down-converted photons, according to equation (3). In principle, measuring two phase settings (\(\phi=0\) and \(\phi=\pi\)) over a sufficiently long time interval increases the statistical significance of the visibility measurement. However, we chose to alternate the relative phase \(\phi\) at a speed of 180 nm/second to minimize phase fluctuations from turbulent air and to ensure the experiment's proper functioning at all times. The point is that a "null result" - no counts detected - always yields perfect visibility, yet it could equally occur if no photons were produced at all, as would be the case for a malfunctioning experiment.
Figure 3 shows the observed interference pattern of two quantum processes separated by 70 m. The interference pattern is clearly visible, and we added a sinusoidal curve to guide the eye. The visibility was calculated by extracting the maxima and minima over a total period of 70 seconds. This resulted in approximately 70 measurement points per minimum and maximum setting. Calculating the average visibility from those measurement points results in \(83\%\pm 15\%\). This main result demonstrates the interference of two spatially distant quantum processes.
In addition, we also show two visibility measurements at shorter distances of 2 and 20 meters, respectively. The average visibilities are \(96\%\pm 3\%\) for 2 meters and \(92\%\pm 6\%\) for a 20-meter distance. The apparent drop in visibility with increasing distance between the two quantum processes can be explained in the following way. Due to the atmospheric turbulence caused by the air conditioning system in the laboratory, the pump beam (at 400 nm wavelength) experienced angle-of-arrival fluctuations at the second nonlinear crystal. These angle-of-arrival fluctuations resulted in a high variance of created photon pairs and hence different amplitudes of the two photon-creation processes. The different amplitudes effectively lead to decreasing visibility, analogously to standard single-photon interference experiments, see Supplementary for details.
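This mechanism can be made quantitative: for unequal amplitudes \(g_{1},g_{2}\) the coincidence rate is \(g_{1}^{2}+g_{2}^{2}+2g_{1}g_{2}\cos\phi\), giving \(V=2g_{1}g_{2}/(g_{1}^{2}+g_{2}^{2})<1\). The sketch below models the pump-pointing fluctuations as a randomly fluctuating second amplitude (the fluctuation strength is an illustrative assumption):

```python
# Visibility of two-process interference with amplitude imbalance:
# R(phi) = g1^2 + g2^2 + 2 g1 g2 cos(phi)  =>  V = 2 g1 g2 / (g1^2 + g2^2).
import numpy as np

def visibility(g1, g2):
    return 2 * g1 * g2 / (g1 ** 2 + g2 ** 2)

print(visibility(1.0, 1.0))          # 1.0: balanced amplitudes, perfect visibility

# Pump-pointing fluctuations at the second crystal modelled as a fluctuating
# amplitude g2; the mean visibility drops below 1, as in the 70 m measurement.
rng = np.random.default_rng(0)
g2 = np.abs(rng.normal(1.0, 0.4, 10_000))
print(visibility(1.0, g2).mean())    # < 1
```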
## IV Discussion
A summary of the visibilities of the coincident count rates in dependence on the distance between the crystals is displayed in Fig. 3. While the distribution of the coincident counts was relatively narrow for 2 m, for longer distances the error bars increased and the position of the peak moved to lower visibilities. Note that due to the high number of evaluated statistics, i.e. the high number of photons \(n\) during measuring (70 s total measurement time per measurement session) and the accumulated high sampling number in post-processing, the error of the mean value was negligibly small. The broadening of the visibility distributions can be explained by the fact that, since the measurement times were equal for the three measurements, the count rates over 70 m were lower in magnitude (few counts per integration time \(t_{int}=70\) ms, see Fig. 8) compared to the ones for the other distances (\(\sim 10^{2}\) counts per \(t_{int}\), see Fig. 6 and Fig. 7). Obviously, higher statistical significance could have been achieved by focusing on the single count rates, which, however, proved to be experimentally more challenging, as the visibility of the single count rates was highly dependent on the information about the partner photon. By definition, within the coincident count rates, no information was leaked to the environment and coherence was conserved.
## V Outlook
In our experiment we show that the birthplace of a quantum state can be spatially spread over a distance of 70 meters. With linear extrapolation, we expect that a visibility above 50% will be possible up to 250 meters (10% for 500 meters), without additional control mechanisms. An interesting future research question is how effects from special or general relativity affect the interference in these systems, for example by dephasing or decoherence, as investigated in related proposals [24].
The results presented here are a crucial first step for the observation of new non-local multi-photon interference effects, proposed in [4; 12]. There - similar to the experiment demonstrated here - a four-photon quantum state is generated in a coherent superposition of two locations. At each location, two photon-pair creations can lead to four photons, see Fig. 4. While the four-photon interference has been observed experimentally [9; 11], the observation of its non-local nature would require a spatial separation of the four crystals. Our experiment therefore can be seen as a pilot feasibility study for future studies
Figure 4: Proposed experimental setup for observing a nonlocal quantum interference phenomenon of the origins of a photon-quadruple. Here, four photons are created either in crystals I&III or II&IV, with a large distance L between the crystals, which could lead to the observation of the nonlocal features of a multipartite quantum system. The experiment has been proposed in [4; 12] and recently demonstrated for small L in [9; 11].
of non-local multiphoton quantum phenomena.
###### Acknowledgements.
The authors thank Anton Zeilinger for initiating and motivating this research, and Armin Hochreiner and Mayukh Lahiri for valueable discussions on the topic of path identity over the years.
|
2303.08771
|
Strong arboricity of graphs
|
An edge coloring of a graph $G$ is \emph{woody} if no cycle is monochromatic.
The \emph{arboricity} of a graph $G$, denoted by $\arb (G)$, is the least
number of colors needed for a woody coloring of $G$. A coloring of $G$ is
\emph{strongly woody} if after contraction of any single edge it is still
woody. In other words, not only any cycle in $G$ can be monochromatic but also
any \emph{broken cycle}, i.e., a simple path arising by deleting a single edge
from the cycle. The least number of colors in a strongly woody coloring of $G$
is denoted by $\zeta(G)$ and called the \emph{strong arboricity} of $G$.
We prove that $\zeta(G)\leqslant \chi_a(G)$, where $\chi_a(G)$ is the
\emph{acyclic chromatic number} of $G$ (the least number of colors in a proper
vertex coloring without a $2$-colored cycle). In particular, we get that
$\zeta(G)\leqslant 5$ for planar graphs and $\zeta(G)\leqslant 4$ for
outerplanar graphs. We conjecture that $\zeta(G)\leqslant 4$ holds for all
planar graphs. We also prove that $\zeta(G)\leqslant 4(\arb(G))^2$ holds for
arbitrary graph $G$. A natural generalization of strong arboricity to
\emph{matroids} is also discussed, with a special focus on cographic matroids.
|
Tomasz Bartnicki, Sebastian Czerwiński, Jarosław Grytczuk, Zofia Miechowicz
|
2023-03-15T17:12:34Z
|
http://arxiv.org/abs/2303.08771v1
|
# Strong Arboricity of Graphs
###### Abstract.
An edge coloring of a graph \(G\) is _woody_ if no cycle is monochromatic. The _arboricity_ of a graph \(G\), denoted by \(\operatorname{arb}(G)\), is the least number of colors needed for a woody coloring of \(G\). A coloring of \(G\) is _strongly woody_ if after contraction of any single edge it is still woody. In other words, no cycle in \(G\) may be monochromatic and neither may any _broken cycle_, i.e., a simple path arising by deleting a single edge from the cycle. The least number of colors in a strongly woody coloring of \(G\) is denoted by \(\zeta(G)\) and called the _strong arboricity_ of \(G\).
We prove that \(\zeta(G)\leqslant\chi_{a}(G)\), where \(\chi_{a}(G)\) is the _acyclic chromatic number_ of \(G\) (the least number of colors in a proper vertex coloring without a \(2\)-colored cycle). In particular, we get that \(\zeta(G)\leqslant 5\) for planar graphs and \(\zeta(G)\leqslant 4\) for outerplanar graphs. We conjecture that \(\zeta(G)\leqslant 4\) holds for all planar graphs. We also prove that \(\zeta(G)\leqslant 4(\operatorname{arb}(G))^{2}\) holds for an arbitrary graph \(G\). A natural generalization of strong arboricity to _matroids_ is also discussed, with a special focus on cographic matroids.
The third author was supported in part by Narodowe Centrum Nauki, grant 2020/37/B/ST1/03298.
Introduction
Let \(G\) be a graph with vertex set \(V\). [The opening paragraphs of the introduction are garbled in this copy; they introduce the arboricity \(\operatorname{arb}(G)\) and strongly woody colorings, as defined in the abstract.] In particular, in a strongly woody coloring no broken cycle may be
monochromatic. However, a cycle \(C\) (of size bigger than \(3\)) may be \(2\)-colored and still be safe, which happens exactly when each of the two colors occurs at least twice on \(C\) (see Fig. 1).
### Planar graphs
We start by showing that the strong arboricity is at most \(\chi_{a}(G)\). Recall that \(\chi_{a}(G)\) is the least number of colors in a proper vertex coloring of \(G\) in which no cycle is \(2\)-colored. So, there is no problem with odd cycles, as they must be \(3\)-colored, but one has to place the third color also on every even cycle.
**Theorem 1**.: _Every simple graph \(G\) satisfies \(\zeta(G)\leqslant\chi_{a}(G)\)._
Proof.: Assume that \(\chi_{a}(G)=k\) and fix any acyclic coloring \(f:V(G)\to\mathbb{Z}_{k}\). Consider a derived coloring of the edges of \(G\) defined for any edge \(e=uv\) by \(g(e)=f(u)+f(v)\) in the group \(\mathbb{Z}_{k}\). We claim that in this coloring no broken cycle is monochromatic. Indeed, suppose that \(C-e\) is monochromatic for some cycle \(C\). Denote the vertices of \(C\) in the cyclic order as \(v_{1},v_{2},\ldots,v_{r}\), with \(r\geqslant 3\). Assume that \(e=v_{1}v_{r}\). Since \(C-e\) is monochromatic, we have
\[g(v_{1}v_{2})=g(v_{2}v_{3})=\cdots=g(v_{r-1}v_{r}), \tag{2.1}\]
which implies that
\[f(v_{1})+f(v_{2})=f(v_{2})+f(v_{3})=\cdots=f(v_{r-1})+f(v_{r}). \tag{2.2}\]
But this implies that
\[f(v_{1})=f(v_{3})=\cdots \tag{2.3}\]
and
\[f(v_{2})=f(v_{4})=\cdots. \tag{2.4}\]
Figure 1. An example of a graph \(G\) with \(\operatorname{arb}(G)=3\) and \(\zeta(G)=4\) (a woody \(3\)-coloring (left) and a strongly woody \(4\)-coloring (right)).
So, \(C\) is a \(2\)-colored cycle, which contradicts the acyclicity of the coloring \(f\) (for odd \(r\), (2.3) and (2.4) already give \(f(v_{1})=f(v_{r})\) on the edge \(e\), contradicting the properness of \(f\)).
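The derived coloring from the proof can be tested exhaustively on small graphs. A brute-force sketch for \(K_{4}\), where the identity coloring \(f(v)=v\) is proper and hence acyclic (every cycle of \(K_{4}\) sees at least three colors); the cycle enumeration is naive and meant only for tiny graphs:

```python
# Derived edge coloring g(uv) = f(u) + f(v) mod k from an acyclic vertex
# coloring f, with a brute-force check that no broken cycle is monochromatic.
from itertools import combinations, permutations

V = range(4)                                  # vertices of K4
E = set(frozenset(e) for e in combinations(V, 2))
f = {v: v for v in V}                         # proper (hence acyclic) 4-coloring
k = 4
g = {e: sum(f[v] for v in e) % k for e in E}  # derived edge coloring

def all_cycles():
    # all simple cycles, as edge lists (brute force over vertex orders)
    seen, out = set(), []
    for r in range(3, len(V) + 1):
        for per in permutations(V, r):
            if per[0] != min(per):
                continue                      # fix the starting vertex
            edges = [frozenset((per[i], per[(i + 1) % r])) for i in range(r)]
            key = frozenset(edges)
            if key not in seen and all(e in E for e in edges):
                seen.add(key)
                out.append(edges)
    return out

for cyc in all_cycles():
    for drop in cyc:                          # every broken cycle C - e
        cols = {g[e] for e in cyc if e != drop}
        assert len(cols) > 1, "monochromatic broken cycle found"
print("no monochromatic broken cycle in the derived coloring of K4")
```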
A famous result of Borodin [6] asserts that \(\chi_{a}(G)\leqslant 5\) for every planar graph \(G\), which is optimal in this class of graphs (see Fig. 2).
**Corollary 1**.: _Every planar graph \(G\) satisfies \(\zeta(G)\leqslant 5\)._
We do not know if this bound is optimal. There exist planar graphs with \(\zeta(G)=4\) (see Fig. 1), but we have not found one demanding the fifth color (see the final section for a discussion).
**Corollary 2**.: _Every outerplanar graph \(G\) satisfies \(\zeta(G)\leqslant 3\)._
Proof.: It suffices to notice that \(\chi_{a}(G)\leqslant 3\).
Notice that this bound is optimal provided \(G\) contains a triangle. In general, if a planar graph \(G\) on \(n\) vertices is triangle-free, then by Euler's formula it has at most \(2n-4\) edges, so \(\operatorname{arb}(G)\leqslant 2\). This leads to the following simple result.
**Theorem 2**.: _Every triangle-free planar graph \(G\) satisfies \(\zeta(G)\leqslant 4\)._
Proof.: First notice that any forest \(F\) has a \(2\)-coloring of the edges such that every path of length \(3\) is non-monochromatic. Indeed, this can be done by coloring the edges alternately, according to the parity of the distance to the root (see the sketch after this proof). Now, since \(\operatorname{arb}(G)\leqslant 2\), \(G\) has an edge decomposition into two forests, \(F_{1}\) and \(F_{2}\). We may color each of these forests by two disjoint pairs of colors, as above, and then there is no monochromatic path of length \(3\) in the whole graph \(G\). Since there are no broken cycles of size two in \(G\), the proof is complete.
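The alternating coloring used in the proof is a two-line breadth-first search: the edge joining \(v\) to its parent receives the parity of the depth of \(v\), so a monochromatic path has at most two edges. A sketch on an illustrative rooted tree:

```python
# Depth-parity 2-coloring of a forest: the edge from v to its parent gets
# color depth(v) mod 2, so three consecutive edges can never all agree.
from collections import deque

def parity_coloring(children, root=0):
    color, depth = {}, {root: 0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in children.get(u, []):
            depth[v] = depth[u] + 1
            color[(u, v)] = depth[v] % 2
            queue.append(v)
    return color

# a small path 0-1-2-3-4 plus a branch 1-5 (illustrative)
tree = {0: [1], 1: [2, 5], 2: [3], 3: [4]}
print(parity_coloring(tree))  # colors alternate 1,0,1,0 along the path
```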
Figure 2. A planar graph \(G\) satisfying \(\chi(G)=3\) and \(\chi_{a}(G)=5\) (a proper \(3\)-coloring (left) and an acyclic \(5\)-coloring (right)).
A natural intuition is that one should be able to reduce the number of colors further when the girth of a graph is sufficiently large. Indeed, in [5] we proved that \(\operatorname{arb}_{2}(G)\leqslant 3\) if the girth of a planar graph \(G\) is at least \(8\). Hence, for such graphs we have \(\zeta(G)\leqslant 3\). The next result shows that we may attain the best possible bound for graphs of bounded genus if the girth is sufficiently large.
**Theorem 3**.: _If \(G\) is a planar graph of girth at least \(13\), then \(\zeta(G)\leqslant 2\). More generally, for every genus \(\gamma>0\) there exists \(g(\gamma)\) such that every graph \(G\) with genus \(\gamma\) and girth at least \(g(\gamma)\) satisfies \(\zeta(G)\leqslant 2\)._
Proof.: Let \(G\) be a graph with genus \(\gamma\). It is well known that if the girth of \(G\) is sufficiently large, then the vertices of \(G\) can be split into two subsets, \(V(G)=A\cup F\), such that \(F\) induces a forest, while \(A\) is \(2\)-independent, which means that every pair of vertices in \(A\) is at distance at least \(3\). In other words, the edges of \(G\) between \(A\) and \(F\) form a star forest with star centers in \(A\). Clearly, coloring the edges of the forest \(G[F]\) by one color and the rest of the edges (the star forest between \(A\) and \(F\)) by the other color gives a coloring with no monochromatic broken cycle. The first part of the theorem follows from the result in [7].
### Graphs of bounded arboricity
In the next result we obtain a simple upper bound on \(\zeta(G)\) in terms of the arboricity and the chromatic number. Since the latter parameter is bounded in terms of the former, we get that \(\zeta(G)\) is bounded for graphs of bounded \(\operatorname{arb}(G)\).
**Theorem 4**.: _Every graph \(G\) satisfies \(\zeta(G)\leqslant 2\chi(G)\operatorname{arb}(G)\). Moreover, if \(G\) is triangle-free, then \(\zeta(G)\leqslant 2\operatorname{arb}(G)\)._
Proof.: Assume that \(\chi(G)=k\) and \(\operatorname{arb}(G)=\ell\). Let \(f\) be a proper vertex coloring of \(G\) by \(k\) colors taken from the group \(\mathbb{Z}_{k}\). Let \(g\) be a derived coloring of the edges, defined as in the previous proof by \(g(uv)=f(u)+f(v)\). Notice that in coloring \(g\), every triangle is rainbow. Hence, there are no monochromatic broken cycles of size \(2\) in coloring \(g\).
To handle longer broken cycles we use the arboricity. Let \(h\) be any woody coloring of the edges of \(G\) with \(\ell\) colors. So, each color class is a forest and we may color every tree with two shades of the color of the forest it belongs to so that every path with at least three edges is not monochromatic. It follows that every cycle with at least \(4\) edges is either \(3\)-colored or it is a \(2\)-colored \(C_{4}\) with exactly two edges in each of the two colors. So, in coloring \(h\) there are no monochromatic broken cycles of size at least \(3\).
To complete the proof of the first assertion it suffices to construct the product coloring \(p\) of the edges of \(G\) defined by \(p(e)=(g(e),h(e))\). For the second assertion the coloring \(h\) alone is sufficient.
**Corollary 3**.: _Every graph \(G\) satisfies \(\zeta(G)\leqslant 4(\operatorname{arb}(G))^{2}\)._
Proof.: Assume that \(\operatorname{arb}(G)=\ell\). Then \(G\) is \((2\ell-1)\)-degenerate, which implies that \(\chi(G)\leqslant 2\ell\). By the above theorem we get \(\zeta(G)\leqslant 4\ell^{2}\).
### Graphs of bounded degree
Let \(G\) be a graph of maximum degree \(\Delta\). By the Nash-Williams theorem [13], \(\operatorname{arb}(G)\leqslant\left\lceil\frac{\Delta+1}{2}\right\rceil\). For instance, for a clique \(K_{\Delta+1}\) we have \(\operatorname{arb}(K_{\Delta+1})=\left\lceil\frac{\Delta+1}{2}\right\rceil\). On the other hand, any strongly woody coloring of the clique must be proper in the usual sense. Thus, we have \(\zeta(K_{\Delta+1})=\chi^{\prime}(K_{\Delta+1})=\Delta\) or \(\Delta+1\), which is roughly twice the arboricity of \(K_{\Delta+1}\). We shall demonstrate, however, that for graphs of maximum degree \(\Delta\) and sufficiently large girth the strong arboricity attains the minimum possible value and equals \(\operatorname{arb}(G)\).
We will derive this fact as a consequence of the following result of Alon, Ding, Oporowski, and Vertigan [2], concerning graph partitions into parts having small connected components. Indeed, suppose that in a graph \(G\) whose edges are colored with \(k\) colors, the maximum size of any monochromatic connected subgraph is at most \(C\). In particular, there is no monochromatic path of length \(C+1\). Then, assuming that the girth of \(G\) is at least \(C+2\), we get that there is no monochromatic broken cycle in \(G\), and therefore \(\zeta(G)\leqslant k\) (a computational check of this component-size condition is sketched after Corollary 4).
**Theorem 5** (Alon, Ding, Oporowski, and Vertigan, [2]).: _Every graph of maximum degree \(\Delta\geqslant 2\) has an edge \(\left\lceil\frac{\Delta+1}{2}\right\rceil\)-coloring such that every monochromatic component has at most \(60\Delta-63\) edges._
Using this theorem we get immediately the following result.
**Corollary 4**.: _Every graph of maximum degree \(\Delta\geqslant 2\) and girth at least \(60\Delta-61\) satisfies \(\zeta(G)=\operatorname{arb}(G)\)._
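The component-size hypothesis in the argument above is easy to test for a concrete edge coloring. The following Python sketch (our own; the union-find encoding and the alternately colored 6-cycle are illustrative assumptions) computes the largest number of edges in a monochromatic connected subgraph.

```python
from collections import defaultdict

def max_mono_component_edges(edges, coloring):
    """Largest edge count of a monochromatic connected subgraph.
    `coloring` maps frozenset({u, v}) -> color."""
    by_color = defaultdict(list)
    for e in edges:
        by_color[coloring[frozenset(e)]].append(tuple(e))
    best = 0
    for es in by_color.values():
        parent = {}
        def find(v):                    # union-find with path halving
            parent.setdefault(v, v)
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in es:
            parent[find(u)] = find(v)   # union the endpoints
        edge_count = defaultdict(int)
        for u, v in es:
            edge_count[find(u)] += 1
        best = max(best, max(edge_count.values()))
    return best

# A 2-edge-coloring of the 6-cycle: alternating colors make every
# monochromatic component a single edge.
cycle = [(i, (i + 1) % 6) for i in range(6)]
col = {frozenset(e): i % 2 for i, e in enumerate(cycle)}
assert max_mono_component_edges(cycle, col) == 1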
Notice that in a special case of \(\Delta=3\) we may get a better bound on the girth, by using a celebrated result of Thomassen [17], asserting that every cubic graph has a \(2\)-edge-coloring in which every monochromatic component is a path with at most five edges.
**Corollary 5**.: _Every cubic graph \(G\) of girth at least \(7\) satisfies \(\zeta(G)=2\)._
### Minor-closed classes of graphs
It is clear that large girth alone is not sufficient for bounded arboricity. This follows from the celebrated result of Erdos (see [3]) establishing the existence of graphs with arbitrarily large girth and chromatic number. However, if we restrict to a proper minor-closed class of graphs, then the situation looks different.
**Theorem 6** (Thomassen [16]).: _Let \(\mathcal{M}\) be a proper minor-closed class of graphs. Then there exists a constant \(g=g(\mathcal{M})\) such that every graph \(G\in\mathcal{M}\) of girth at least \(g\) is \(2\)-degenerate (in consequence, \(\operatorname{arb}(G)\leqslant 2\))._
By Theorem 4 it follows that graphs from proper minor-closed classes with sufficiently large girth satisfy \(\zeta(G)\leqslant 4\). However, this bound does not seem optimal. It is perhaps true that two colors are sufficient for strongly woody coloring of \(2\)-degenerate graphs with sufficiently large girth.
## 3. Final remarks and open problems
We conclude this short note with a collection of open problems. The most intriguing is the question concerning the strong arboricity of planar graphs.
**Conjecture 1**.: _Every planar graph \(G\) satisfies \(\zeta(G)\leqslant 4\)._
By the celebrated Four Color Theorem one gets easily a \(3\)-edge coloring of any planar graph \(G\) with rainbow triangles. Indeed, one may start with a proper vertex coloring by the group \(\mathbb{Z}_{2}\times\mathbb{Z}_{2}\) and then color every edge by the sum of colors on its ends. It seems plausible that including the fourth color one may also take care of longer broken cycles.
It is also natural to wonder about the best possible upper bound on \(\zeta(G)\) in terms of the arboricity. As mentioned before, for every clique \(K_{n}\) we have \(\zeta(K_{n})=\chi^{\prime}(K_{n})\), while \(\operatorname{arb}(K_{n})=\lceil n/2\rceil\). Thus, we have \(\zeta(K_{n})=2\operatorname{arb}(K_{n})-1\). This suggests the following supposition.
**Conjecture 2**.: _Every graph \(G\) satisfies \(\zeta(G)\leqslant 2\operatorname{arb}(G)\)._
It is known that every graph \(G\) with \(\operatorname{arb}(G)=k\) is at most \((2k-1)\)-degenerate. Recall that the _coloring number_\(\operatorname{col}(G)\) is the least integer \(r\) for which there is a vertex ordering with maximum back-degree equal to \(r-1\). This leads to the following stronger conjecture.
**Conjecture 3**.: _Every graph \(G\) satisfies \(\zeta(G)\leqslant\operatorname{col}(G)\)._
Finally, we formulate a conjecture expressing a natural guess that large girth allows for strong arboricity to be as low as possible.
**Conjecture 4**.: _For every integer \(k\) there is an integer \(g(k)\) such that every graph \(G\) with \(\operatorname{arb}(G)\leqslant k\) and girth at least \(g(k)\) satisfies \(\zeta(G)=\operatorname{arb}(G)\)._
Let us mention at the end that the notion of strong arboricity can be defined and studied for arbitrary _matroids_ in much the same way as it is with the usual arboricity. Indeed, broken cycles can be defined _mutatis mutandis_ in the matroid setting. It seems plausible that the corresponding parameter \(\zeta(M)\) is bounded in terms of \(\operatorname{arb}(M)\) for an arbitrary matroid \(M\).
|
2310.06664
|
Spin attributes of structured vector fields constructed by Hertz
potentials
|
In this paper, we use the Hertz vector potential to define the
electromagnetic vectors of different structured wavefields and analyze the
spin properties of these wavefields. We show that for single evanescent
waves the total spin is provided by the transverse spin and originates from
the spatial inhomogeneity of the momentum density of the field. However, for
non-single evanescent waves there may be an extraordinary spin component sE,
whose direction is also perpendicular to the wave propagation direction. In
other words, it is transverse, but it does not originate from the curl of
the wave-field momentum density. In addition, we also calculate the spins of
non-planar propagating waves and analyze the spin characteristics of these
wave fields.
|
Zhikang Xiong, Zhenlai Wang, Liu. Y, Bin Zhou
|
2023-10-10T14:40:06Z
|
http://arxiv.org/abs/2310.06664v1
|
# Spin attributes of structured vector fields constructed by Hertz potentials
###### Abstract
In this paper, we use the Hertz vector potential to define the electromagnetic vectors of different structured wavefields and analyze the spin properties of these wavefields. We show that for single evanescent waves the total spin is provided by the transverse spin and originates from the spatial inhomogeneity of the momentum density of the field. However, for non-single evanescent waves there may be an "extraordinary" spin component \(\mathbf{s}_{E}\), whose direction is also perpendicular to the wave propagation direction. In other words, it is transverse, but it does not originate from the curl of the wave-field momentum density. In addition, we also calculate the spins of non-planar propagating waves and analyze the spin characteristics of these wave fields.
## I Introduction
Light has spin angular momentum, which is related to the electromagnetic field distribution of the wavefields [1]. For a plane wave in free space or a wave under the paraxial approximation, the spin angular momentum \(\mathbf{s}\propto\sigma\hat{k}\), where \(\sigma=\pm 1\) corresponds to left-handed and right-handed circular polarization, respectively [1; 2; 3; 4; 5; 6]. This result also reflects the two quantum spin states in quantum optics [7; 8]. Here \(\hat{k}\) is the unit propagation direction of the wave; the direction of the spin angular momentum \(\mathbf{s}\) is parallel to the wave propagation direction, in other words, the spin is longitudinal.
In recent years, the emergence of the transverse spin [9; 10; 11; 12] \(\mathbf{s}_{t}\), perpendicular to the wave propagation direction, has enriched our understanding of spin angular momentum. This type of spin is ubiquitous in structured wave fields, such as evanescent waves, focused beams, two-wave interference, etc. [9; 10; 13; 14; 15; 16]. In addition, compared with the longitudinal spin \(\mathbf{s}_{l}\), the transverse spin \(\mathbf{s}_{t}\) has extraordinary properties. For example, it is independent of the helicity of the wave and can appear even for linearly polarized waves. This special property makes many important phenomena and applications possible [17; 18; 19]. This raises a natural question: what causes this particular transverse spin?
A generally accepted explanation [9; 14; 15] is that the transversality constraint \(\nabla\cdot\mathbf{E}=0\) results in an imaginary longitudinal electric field component parallel to the wave propagation direction. Because of this imaginary longitudinal component, the transverse and longitudinal field components have a \(\pi/2\) phase difference, which causes the real-valued electric field vector \(\mathbf{E}(\mathbf{r},t)=\Re[\mathbf{E}(\mathbf{r})\exp(-i\omega t)]\) (where the symbol \(\Re\) represents the real part) to rotate in the propagation plane, resulting in a transverse spin perpendicular to the propagation direction.
In 2021, Shi _et al._[20] proposed that the transverse spin of linearly polarized evanescent waves can be characterized by the wave-field Poynting vector \(\mathbf{P}\) (or the kinetic momentum \(\mathbf{p}=\mathbf{P}/c^{2}\) of the wavefields, where c is the speed of light in vacuum), as \(\mathbf{s}_{t}\propto\nabla\times\mathbf{P}\), which resembles the spin momentum \(\mathbf{p}_{s}\) locked to the spin angular momentum \(\mathbf{s}\), i.e. \(\mathbf{p}_{s}\propto\nabla\times\mathbf{s}\). That is, the spin momentum of light originates from the curl of the electromagnetic spin density; similarly, the transverse spin of evanescent wave fields originates from the spatial inhomogeneity of the momentum density of the field. Indeed, this approach is beautiful and ingenious, and it can even be extended to other propagating wave fields [21; 22]. However, it does not always seem to be complete.
In this paper, we use the Hertz vector potential [23; 24; 25; 26] to define the electromagnetic fields of different structured waves, and calculate the momentum and spin angular momentum density distributions of these wave fields. We find that non-single evanescent waves of H-type or E-type [27] possess intrinsic transverse and longitudinal spin attributes. Only when the main polarization direction [28] of the evanescent wave is aligned with the decaying direction is the total spin provided by the transverse spin and does it originate from the spatial inhomogeneity of the momentum density of the field, as \(\mathbf{s}=\mathbf{s}_{t}\propto\nabla\times\mathbf{p}\). Otherwise, there will be an "extraordinary" spin component \(\mathbf{s}_{E}\), whose direction is also perpendicular to the wave propagation direction. In other words, it is transverse, but it does not originate from the curl of the wave-field momentum density. In addition, we also calculate the spins of non-planar propagating waves, such as the Bessel beam, the Airy beam and the Gaussian beam, and analyze the spin characteristics of these wave fields. In particular, wavefields with evanescent tail-wave characteristics, such as Airy beams, can also exhibit evanescent-wave-like spin-momentum locking. Our results provide further insights into the understanding of the spin angular momentum of optical fields.
## II Main content
### Basic Equations
Hertz showed that in a source-free uniform linear isotropic medium, the electromagnetic field can be derived from a single vector potential \(\mathbf{\Pi}\)[29]. The Hertz vector potential is an efficient mathematical formalism for solving electromagnetic problems. Assuming a time dependence \(e^{-i\omega t}\), the Hertz vector wave equation in Cartesian coordinates can be expressed as:
\[\nabla^{2}\mathbf{\Pi}+k^{2}\mathbf{\Pi}=0 \tag{1}\]
where the Hertz vector potential \(\mathbf{\Pi}=\Pi(x,y,z)\hat{n}\), \(k^{2}=\omega^{2}\epsilon\mu\) defines the wave number in the medium, and \(\epsilon\) and \(\mu\) are the permittivity and permeability of the medium. We emphasize that \(\hat{n}\) may be taken along the x-, y- or z-direction, although the selection of direction has no actual physical meaning. In isotropic media, the Hertz vector wave equation Eq.(1) has two independent solutions \(\mathbf{\Pi}_{e}\) and \(\mathbf{\Pi}_{m}\), called the electric and magnetic Hertz potentials respectively [30], which lead to E-type and H-type waves (in some references [21; 22; 26] these are also called TM and TE waves) as [27]
\[\mathrm{E\text{-}type:}\qquad\mathbf{E}=\nabla(\nabla\cdot\mathbf{\Pi}_{e})+k^{2}\mathbf{\Pi}_{e}, \tag{2}\]
\[\mathbf{H}=-i\omega\epsilon\nabla\times\mathbf{\Pi}_{e}, \tag{3}\]
\[\mathrm{H\text{-}type:}\qquad\mathbf{E}=i\omega\mu\nabla\times\mathbf{\Pi}_{m}, \tag{4}\]
\[\mathbf{H}=\nabla(\nabla\cdot\mathbf{\Pi}_{m})+k^{2}\mathbf{\Pi}_{m}. \tag{5}\]
In a source-free homogeneous linear isotropic medium, the results obtained for E-type and H-type waves are equivalent, and we only need to consider one of them to define the electromagnetic field. Here, we choose \(\mathbf{\Pi}_{m}\) to define H-type waves.
For a vector wave field, the kinetic momentum density \(\mathbf{p}\) can be divided into orbital momentum \(\mathbf{p}_{o}\) and spin momentum \(\mathbf{p}_{s}\)[14; 31; 32; 16] i.e. \(\mathbf{p}=\mathbf{p}_{o}+\mathbf{p}_{s}\), where
\[\mathbf{p}_{o} = \frac{1}{4\omega}\Im[\epsilon\mathbf{E}^{*}\cdot(\nabla)\mathbf{ E}+\mu\mathbf{H}^{*}\cdot(\nabla)\mathbf{H}] \tag{6}\] \[\mathbf{p}_{s} = \frac{1}{2}\nabla\times\mathbf{s} \tag{7}\]
with
\[\mathbf{s} = \frac{1}{4\omega}\Im[\epsilon\mathbf{E}^{*}\times\mathbf{E}+\mu \mathbf{H}^{*}\times\mathbf{H}] \tag{8}\]
where the symbol \(\Im\) represents the imaginary part and \(\mathbf{s}\) is the spin angular momentum density. In addition, the kinetic momentum density \(\mathbf{p}\) can also be represented by the Poynting vector [1], namely
\[\mathbf{p} = \frac{\epsilon\mu}{2}\Re[\mathbf{E}^{*}\times\mathbf{H}] \tag{9}\]
According to Eqs.(1-9), we can obtain the spin and momentum density distributions corresponding to different wave fields, and then analyze their properties.
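As a quick numerical companion to Eqs.(8-9), the following Python sketch (ours; the unit amplitudes and \(\epsilon=\mu=\omega=1\) are arbitrary choices) evaluates the spin and kinetic momentum densities for complex field vectors stored as numpy arrays, and checks the purely longitudinal spin of a circularly polarized plane wave.

```python
import numpy as np

def spin_density(E, H, eps, mu, omega):
    """Eq. (8): s = (1/4w) Im(eps E* x E + mu H* x H)."""
    return (eps * np.cross(np.conj(E), E).imag
            + mu * np.cross(np.conj(H), H).imag) / (4.0 * omega)

def momentum_density(E, H, eps, mu):
    """Eq. (9): p = (eps mu / 2) Re(E* x H)."""
    return 0.5 * eps * mu * np.cross(np.conj(E), H).real

# Left-circular plane wave along z (eps = mu = 1): E = (x + i y)/sqrt(2),
# and H = z_hat x E for a plane wave.
E = np.array([1.0, 1.0j, 0.0]) / np.sqrt(2)
H = np.cross([0.0, 0.0, 1.0], E)
s = spin_density(E, H, eps=1.0, mu=1.0, omega=1.0)
p = momentum_density(E, H, eps=1.0, mu=1.0)
assert np.allclose(s, [0.0, 0.0, 0.5])   # purely longitudinal spin
assert np.allclose(p, [0.0, 0.0, 0.5])   # momentum along the propagation axis
```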
### Calculation and Discussion
#### ii.2.1 Spin properties of elliptically polarized plane waves
Firstly, we consider an elliptically polarized single evanescent wave, which is known to contain a spin component perpendicular to the direction of wave propagation, called the _transverse spin_ [9]. However, there is a different understanding of this transverse spin in [21].
Assuming an elliptically polarized single evanescent wave propagating along the z-direction and decaying in the x-direction, the wave vector of the evanescent wave can be written as \(\mathbf{k}=k_{z}\hat{z}+i\kappa\hat{x}\), where \(k_{z}\) is the longitudinal wave number and \(\kappa\) is the decay coefficient of the evanescent wave. We can define the Hertz vector potential of the field as
\[\mathbf{\Pi}_{m} = \frac{A}{\sqrt{1+|m|^{2}}}(\hat{x}+m\frac{k_{z}}{k}\hat{y})e^{ik _{z}z-\kappa x} \tag{10}\]
which is a solution of the Hertz vector wave equation (1). Here A is the wave amplitude and m is a complex number describing the elliptical polarization state, with \(\sigma=\frac{2\Im m}{1+|m|^{2}}\)[9]; \(k\equiv|\mathbf{k}|\) is the wave number. Substituting Eq.(10) into Eqs.(4-5), we obtain the electromagnetic field of this evanescent wave. We can then get the momentum and spin angular momentum by substituting the electromagnetic vectors into Eqs.(6-9), as
\[\mathbf{p} = \frac{W}{\omega}(\sigma\frac{k\kappa}{k_{z}}\hat{y}+\frac{k^{2}} {k_{z}}\hat{z}) \tag{11}\] \[\mathbf{s} = \frac{W}{\omega}(\frac{\kappa}{k_{z}}\hat{y}+\sigma\frac{k}{k_{z} }\hat{z}) \tag{12}\]
where the energy density of the wave field is \(W=\frac{\mu}{2}k_{z}^{4}A^{2}e^{-2\kappa x}\). Eq.(12) shows that there is a special spin component \(s_{y}\) in this evanescent wave field, which is called the _transverse spin_ in [9], while the z-component is called the _longitudinal spin_. However, there is no physical meaning in distinguishing longitudinal and transverse directions solely based on the direction of the spin vector. So, defining the transverse spin in this case according to the approach in [20; 21] might be more reasonable, as
\[\mathbf{s}_{t} = \frac{W}{\omega}[\frac{\kappa}{k_{z}}\hat{y}-\sigma\frac{\kappa^{ 2}}{kk_{z}}\hat{z}] \tag{13}\]
Figure 1: The spin characteristics of the single evanescent wave of Eq.(10): (a) linear polarization (m=0, \(\sigma=0\)); (b) circular polarization (m=i, \(\sigma=1\)).
Although there is a spin component parallel to the direction of wave propagation in Eq.(13), it can still be seen as transverse spin [21]. In addition, the longitudinal spin \({\bf s}_{l}\) can be obtained by subtracting the transverse spin \({\bf s}_{t}\) from the total spin \({\bf s}\), as
\[{\bf s}_{l} = {\bf s}-{\bf s}_{t} \tag{14}\] \[= \hbar\sigma\hat{k}\]
This is logical in physics because the longitudinal spin \({\bf s}_{l}\) of a given kind of wave packet is an elementary, constant feature. That is, the transverse spin defined in [21] is reasonable. FIG. 1 shows the spin vector distribution of the evanescent wave of Eq.(10) for linear and circular polarization.
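The closed forms of Eqs.(11-12) can be checked symbolically. The sketch below (ours; it simply prints the resulting densities instead of asserting the final simplified forms) builds Eq.(10) for circular polarization, derives \(\mathbf{E}\) and \(\mathbf{H}\) through Eqs.(4-5), and evaluates the spin density of Eq.(8).

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
kz, kap, w, mu, A = sp.symbols('k_z kappa omega mu A', positive=True)
k = sp.sqrt(kz**2 - kap**2)        # |k|, since k.k = k_z^2 - kappa^2 here
eps = k**2 / (w**2 * mu)           # from k^2 = omega^2 eps mu
m = sp.I                           # circular polarization, sigma = 1

phase = sp.exp(sp.I * kz * z - kap * x)
Pi = [A / sp.sqrt(2) * phase, A / sp.sqrt(2) * m * kz / k * phase, sp.S(0)]
r = (x, y, z)

curl = lambda F: [sp.diff(F[2], y) - sp.diff(F[1], z),
                  sp.diff(F[0], z) - sp.diff(F[2], x),
                  sp.diff(F[1], x) - sp.diff(F[0], y)]

div = sum(sp.diff(Pi[i], r[i]) for i in range(3))
E = [sp.I * w * mu * c for c in curl(Pi)]                       # Eq. (4)
H = [sp.diff(div, r[i]) + k**2 * Pi[i] for i in range(3)]       # Eq. (5)

def im_cross(F):                   # Im(F* x F), componentwise
    Fc = [sp.conjugate(c) for c in F]
    return [sp.im(sp.expand(Fc[(i + 1) % 3] * F[(i + 2) % 3]
                            - Fc[(i + 2) % 3] * F[(i + 1) % 3]))
            for i in range(3)]

s = [sp.simplify((eps * e + mu * h) / (4 * w))
     for e, h in zip(im_cross(E), im_cross(H))]
print(s)    # expect s_x = 0 and the y, z components of Eq. (12)
```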
When the evanescent-wave decay parameter \(\kappa=0\), i.e. \(k=k_{z}\), corresponding to the case of a propagating wave, Eq.(10) transforms into
\[{\bf\Pi}_{m} = \frac{A}{\sqrt{1+|m|^{2}}}(\hat{x}+m\hat{y})e^{ikz} \tag{15}\]
Similarly, we can get the spin characteristics of the plane wave field,
\[{\bf s}_{t}=0,\ {\bf s}_{l}=\hbar\sigma\hat{k} \tag{16}\]
As we know, the momentum density of a propagating plane wave is uniformly distributed in the propagation plane, so the result obtained by calculating its curl is of course 0. That is to say, its transverse spin \({\bf s}_{t}=0\), and the total spin \({\bf s}\) is provided by the longitudinal spin \({\bf s}_{l}\), i.e. \({\bf s}={\bf s}_{l}\propto\hbar\sigma\).
From the above, the definition of the transverse spin \({\bf s}_{t}\) in [20; 21] is always valid for plane waves, whether evanescent or propagating. However, for some non-single evanescent waves and some structured propagating waves, it seems insufficient to define the transverse spin using the inhomogeneity of the wave-field momentum density alone. As we shall show, there may be an "extraordinary" spin component \({\bf s}_{E}\) whose direction is perpendicular to the local wave vector and which does not originate from the inhomogeneity of the wave-field momentum density.
#### ii.2.2 H-type evanescent waves
In Sec. II **A**, we showed that the electromagnetic field can be defined by a single Hertz vector. Here, we use the magnetic Hertz vector potential to define the electromagnetic vectors of H-type waves. We consider the evanescent-wave case first: for an evanescent wave decaying along the z-direction, we can obtain the magnetic Hertz potential by solving the Hertz vector wave equation in Cartesian coordinates
\[\nabla^{2}{\bf\Pi}_{m}+k^{2}{\bf\Pi}_{m}=0 \tag{17}\]
The Hertz vector wave equation for a source-free homogeneous isotropic medium can be reduced to a scalar wave equation [33], so Eq.(17) can be simplified to
\[\nabla^{2}\Pi_{m}+k^{2}\Pi_{m}=0 \tag{18}\]
A feasible solution is
\[{\bf\Pi}_{m}=F(x,y)e^{-\kappa z}\hat{n} \tag{19}\]
where \(F(x,y)\) fulfills \(\nabla_{\perp}^{2}F+\beta^{2}F=0\) with \(\nabla_{\perp}^{2}=\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{ \partial y^{2}},\beta^{2}=k^{2}+\kappa^{2}\), \(\kappa\) is the decay coefficient of the evanescent wave, and \(\hat{n}\) can be along either the x-, y- or z-direction.
Consider now \(\hat{n}=\hat{z}\); in this case the Hertz vector potential can be written as
\[{\bf\Pi}_{m}=F(x,y)e^{-\kappa z}\hat{z} \tag{20}\]
Taking Eq.(20) into Eqs.(4-5), we can get the electromagnetic field of the evanescent wave in this case
\[{\bf E} = i\omega\mu(\frac{\partial F}{\partial y}\hat{x}-\frac{\partial F }{\partial x}\hat{y}+0\hat{z})e^{-\kappa z} \tag{21}\] \[{\bf H} = (-\kappa\frac{\partial F}{\partial x}\hat{x}-\kappa\frac{\partial F }{\partial y}\hat{y}+\beta^{2}F\hat{z})e^{-\kappa z} \tag{22}\]
The above formulas, Eqs.(21-22), naturally satisfy Maxwell's equations and are consistent with the electromagnetic field defined in [20]. The final result is naturally the same, as
\[{\bf s}_{t} = \frac{1}{2k^{2}}\nabla\times{\bf p} \tag{23}\] \[= \frac{\mu}{2\omega}\Im[\kappa\beta^{2}F^{*}\frac{\partial F}{ \partial y}\hat{x}+\kappa\beta^{2}F\frac{\partial F^{*}}{\partial x}\hat{y}+ \beta^{2}\frac{\partial F}{\partial y}\frac{\partial F^{*}}{\partial x}\hat{z}]\] \[= {\bf s}\]
where the symbol \(*\) represents the complex conjugate. The result of Eq.(23) shows that the wave-field spin angular momentum \({\bf s}\) is caused solely by the spatial inhomogeneity of the wave-field momentum density, and the total spin angular momentum satisfies the spin-momentum locking [20; 34], which can be used as an example to demonstrate the quantum spin Hall effect of light [35; 36].
As we emphasized earlier, the Hertz vector can be taken along the x-, y- or z-direction. The selection of a particular direction has no actual physical meaning, but it leads to different field configurations. Here, we still assume an evanescent wave decaying along the z-direction, but define the Hertz vector along the x-direction; the polarization of the wave field is then changed.
For the case of \({\bf\Pi}_{m}=F(x,y)e^{-\kappa z}\hat{x}\), the electromagnetic field of the evanescent wave can be expressed as
\[{\bf E} = i\omega\mu(0\hat{x}-\kappa F\hat{y}-\frac{\partial F}{\partial y }\hat{z})e^{-\kappa z} \tag{24}\] \[{\bf H} = [(\frac{\partial^{2}F}{\partial x^{2}}+k^{2}F)\hat{x}+\frac{ \partial^{2}F}{\partial y\partial x}\hat{y}-\kappa\frac{\partial F}{\partial x }\hat{z}]e^{-\kappa z} \tag{25}\]
which still satisfies Maxwell equations. Taking Eqs.(24-25) into Eqs.(8-9), one can obtain
\[{\bf s}=\frac{\mu}{2\omega}\Im\left(\begin{array}{c}k^{2}\kappa F^{*}\frac{\partial F}{\partial y}+\kappa\frac{\partial^{2}F}{\partial y\partial x}\frac{\partial F^{*}}{\partial x}\\ \kappa\frac{\partial F}{\partial x}(\frac{\partial^{2}F^{*}}{\partial x^{2}}+k^{2}F^{*})\\ \frac{\partial^{2}F}{\partial y\partial x}(\frac{\partial^{2}F^{*}}{\partial x^{2}}+k^{2}F^{*})\end{array}\right)e^{-2\kappa z},\ {\bf p}=\frac{\mu}{2\omega}k^{2}\Im\left(\begin{array}{c}\kappa^{2}F^{*}\frac{\partial F}{\partial x}+\frac{\partial^{2}F}{\partial y\partial x}\frac{\partial F^{*}}{\partial y}\\ -\frac{\partial F^{*}}{\partial y}(\frac{\partial^{2}F}{\partial x^{2}}+k^{2}F)\\ \kappa F^{*}(\frac{\partial^{2}F}{\partial x^{2}}+k^{2}F)\end{array}\right)e^{-2\kappa z} \tag{26}\]
According to the definition of transverse spin in [20; 21], one can obtain
\[{\bf s}_{t} = \frac{\mu}{4\omega}\Im\left(\begin{array}{c}\kappa F^{*}\frac{\partial^{3}F}{\partial y\partial x^{2}}-\kappa\frac{\partial F^{*}}{\partial y}\frac{\partial^{2}F}{\partial x^{2}}+2k^{2}\kappa F^{*}\frac{\partial F}{\partial y}\\ -(\kappa\frac{\partial F^{*}}{\partial x}\frac{\partial^{2}F}{\partial x^{2}}+\kappa F^{*}\frac{\partial^{3}F}{\partial x^{3}}+2\kappa^{3}F^{*}\frac{\partial F}{\partial x}+2\kappa\frac{\partial^{2}F}{\partial y\partial x}\frac{\partial F^{*}}{\partial y})\\ -(\frac{\partial^{2}F^{*}}{\partial y\partial x}\frac{\partial^{2}F}{\partial x^{2}}+\frac{\partial^{2}F}{\partial y\partial x}\frac{\partial^{2}F^{*}}{\partial x^{2}}+k^{2}F\frac{\partial^{2}F^{*}}{\partial y\partial x}+\kappa^{2}F^{*}\frac{\partial^{2}F}{\partial y\partial x}+\frac{\partial F^{*}}{\partial y}\frac{\partial^{3}F}{\partial x^{3}}+\frac{\partial F^{*}}{\partial y}\frac{\partial^{3}F}{\partial y^{2}\partial x}+\beta^{2}\frac{\partial F^{*}}{\partial y}\frac{\partial F}{\partial x})\end{array}\right)e^{-2\kappa z} \tag{27}\]
\[\neq {\bf s}\]
We can see that Eq.(27) and Eq.(23) show very different results. In this case, the total spin is not entirely provided by the transverse spin caused by the inhomogeneous momentum density; there is also an "extraordinary" transverse spin \({\bf s}_{E}\), as
\[{\bf s}_{E} = {\bf s}-{\bf s}_{t} \tag{28}\]
\[= \frac{\mu}{4\omega}\Im\left(\begin{array}{c}\kappa\frac{\partial F^{*}}{\partial y}\frac{\partial^{2}F}{\partial x^{2}}-\kappa F^{*}\frac{\partial^{3}F}{\partial x^{2}\partial y}+2\kappa\frac{\partial F^{*}}{\partial x}\frac{\partial^{2}F}{\partial y\partial x}\\ \kappa\frac{\partial F}{\partial x}\frac{\partial^{2}F^{*}}{\partial x^{2}}+\kappa F^{*}\frac{\partial^{3}F}{\partial x^{3}}+2\kappa\frac{\partial F^{*}}{\partial y}\frac{\partial^{2}F}{\partial y\partial x}+2\kappa\beta^{2}F^{*}\frac{\partial F}{\partial x}\\ 0\end{array}\right)e^{-2\kappa z}\]
Certainly, in the case of a single evanescent wave, the result of Eq.(28) is zero, i.e. \({\bf s}_{E}=0\) and \({\bf s}={\bf s}_{t}\). However, for a non-single evanescent wave, there may be a non-zero spin \({\bf s}_{E}\) that does not originate from the inhomogeneous momentum density, and the direction of \({\bf s}_{E}\) may be perpendicular to the local wave vector. That is, \({\bf s}_{E}\) is also transverse, so the definition of transverse spin in [20; 21] does not seem to be enough. In fact, for non-single evanescent waves, only when the main polarization direction [28] of the evanescent wave is aligned with the decaying direction is the total spin provided by the transverse spin, originating from the spatial inhomogeneity of the momentum density of the field, as shown in Eq.(23). Otherwise, there is an "extraordinary" spin \({\bf s}_{E}\), which is also transverse.
Here we give an example: for a cosine wave [1; 20] propagating in the y-direction and decaying in the z-direction, the wave vector is \({\bf k}=k_{x}\hat{x}+k_{y}\hat{y}+i\kappa\hat{z}\), and its Hertz vector can be expressed as
\[{\bf\Pi}_{m}=\cos(k_{x}x)e^{ik_{y}y-\kappa z}\hat{x} \tag{29}\]
Taking Eq.(29) into Eqs.(8-9) and Eq.(28), we can obtain
\[{\bf s}_{E} = \frac{\mu}{2\omega}k_{x}^{2}k_{y}\kappa e^{-2\kappa z}\hat{x} \tag{30}\]
The momentum density and spin angular momentum density distributions of this cosine field are shown in FIG. 2. Eq.(30) shows that there is an "extraordinary" spin \({\bf s}_{E}\); it is obviously perpendicular to the direction of wave propagation, so it should be a transverse spin, but it does not originate from the inhomogeneous momentum density. In other words, the inhomogeneity of the momentum density is not enough to characterize the transverse spin in this case.
In addition, we also calculate the case of \({\bf\Pi}_{m}=\cos(k_{x}x)e^{ik_{y}y-\kappa z}\hat{y}\), where there is also a non-zero \({\bf s}_{E}\). When we change the direction of wave propagation, the results are similar. That is, for a single evanescent wave, the definition of transverse spin in [20; 21] is absolutely valid; for a non-single evanescent wave, the definition remains valid only when the main polarization direction [28] of the evanescent wave is aligned with the decaying direction. Otherwise, it is not sufficient to rely solely on spatial inhomogeneities of the momentum density to define the transverse spin.
#### ii.2.3 H-type propagating waves
We have discussed H-type evanescent waves and analyzed their spin characteristics. In addition, we know that propagating waves also have non-trivial spin properties. For a wave propagating along a guided structure, propagation occurs in only one direction [33]; we assume that the wave propagates along the z-direction. Therefore, waves along a uniform guided structure have only an \(e^{ik_{z}z}\) dependence in that direction. This means that Hertz vector potentials for two-dimensional uniform guided structures are of the form
\[\mathbf{\Pi}_{m}=\psi(x,y)e^{ik_{z}z}\hat{n} \tag{31}\]
where \(\psi(x,y)\) fulfills \(\nabla_{\perp}^{2}\psi+\zeta^{2}\psi=0\) with \(\zeta^{2}=k^{2}-k_{z}^{2}\), \(k\equiv|\mathbf{k}|\) is the wave number, and \(\hat{n}\) can be along either the x-, y- or z-direction.
Here, we still define \(\hat{n}=\hat{z}\), so Eq.(31) can be written as
\[\mathbf{\Pi}_{m}=\psi(x,y)e^{ik_{z}z}\hat{z} \tag{32}\]
Taking Eq.(32) into Eqs.(4-5), we can get the electromagnetic field of the propagating wave as
\[\mathbf{E} = i\omega\mu(\frac{\partial\psi}{\partial y}\hat{x}-\frac{ \partial\psi}{\partial x}\hat{y}+0\hat{z})e^{ik_{z}z} \tag{33}\] \[\mathbf{H} = (ik_{z}\frac{\partial\psi}{\partial x}\hat{x}+ik_{z}\frac{ \partial\psi}{\partial y}\hat{y}+\zeta^{2}\psi\hat{z})e^{ik_{z}z} \tag{34}\]
Taking Eqs.(33-34) into Eqs.(8-9), we can get the momentum and spin angular momentum in this case. Here, we again use the definition of transverse spin in [21]; assuming it is valid, we can get
\[\mathbf{s}_{E} = \frac{\mu k_{z}}{2\omega}\Re\left(\begin{array}{c}\frac{\partial ^{2}\psi^{*}}{\partial y^{2}}\frac{\partial\psi}{\partial y}-\frac{\partial^ {2}\psi}{\partial x\partial y}\frac{\partial\psi^{*}}{\partial x}-\zeta^{2} \psi\frac{\partial\psi^{*}}{\partial y}\\ \\ \frac{\partial^{2}\psi^{*}}{\partial x\partial y}\frac{\partial\psi}{ \partial y}-\frac{\partial^{2}\psi}{\partial x^{2}}\frac{\partial\psi^{*}}{ \partial x}+\zeta^{2}\psi\frac{\partial\psi^{*}}{\partial x}\\ \\ 0\end{array}\right) \tag{35}\]
Similar to Sec. II B.2, for the non-planar propagating wave there can also be a non-zero spin \(\mathbf{s}_{E}\), and it is obviously transverse. So, the definition of transverse spin in [21] is not complete. When we change the direction of the Hertz vector along the x-, y- and z-directions, the results are similar: there are non-zero \(\mathbf{s}_{E}\).
Here, we exemplify the zero-order Bessel beam [37; 38; 26; 39]. The reason for considering the zero-order is to avoid the influence of spin-orbit coupling of high-order Bessel beams [40; 41]. The Hertz vector potential of the Bessel beam can be written as
\[\mathbf{\Pi}_{m} = J_{0}(rk_{r})e^{ik_{z}z}\hat{z} \tag{36}\]
which is the solution of the Hertz vector wave equation in cylindrical coordinates; the wave numbers fulfill \(k^{2}=k_{r}^{2}+k_{z}^{2}\). Taking Eq.(36) into Eqs.(4-5), Eqs.(8-9) and Eq.(35), where \(\psi=J_{0}(rk_{r})\), we can obtain the momentum density and spin density distributions of the
Figure 3: Momentum density \(\mathbf{p}=p_{z}\hat{z}\), total spin angular momentum density \(\mathbf{s}_{\phi}\), transverse spin \(\mathbf{s}_{t\phi}\) and extraordinary spin \(\mathbf{s}_{E\phi}\) angular momentum density distributions of the Bessel beam in Eq.(36).
zero-order Bessel field as shown in FIG. 3.
\[{\bf p} = \frac{\mu}{2\omega}k^{2}k_{r}^{2}k_{z}J_{1}(rk_{r})^{2}\hat{z} \tag{37}\] \[{\bf s} = -\frac{\mu}{2\omega}k_{r}^{3}k_{z}J_{0}(rk_{r})J_{1}(rk_{r})\hat{\phi}\] (38) \[{\bf s}_{t} = -\frac{\mu}{2\omega}k_{r}^{2}k_{z}[k_{r}J_{0}(rk_{r})J_{1}(rk_{r} )-\frac{J_{1}(rk_{r})^{2}}{r}]\hat{\phi}\] (39) \[{\bf s}_{E} = -\frac{\mu}{2\omega}k_{r}^{2}k_{z}\frac{J_{1}(rk_{r})^{2}}{r}\hat {\phi} \tag{40}\]
From Eq.(37) we can see that the Bessel field momentum only contains the longitudinal z-component, while the total spin angular momentum only contains the azimuthal component; that is, the spin is azimuthally polarized [42; 43], and there is an "extraordinary" transverse spin \({\bf s}_{E}\) along the azimuthal direction. In addition, we also calculate the cases of \({\bf\Pi}_{m}=J_{0}(rk_{r})e^{ik_{z}z}\hat{x}\) and \({\bf\Pi}_{m}=J_{0}(rk_{r})e^{ik_{z}z}\hat{y}\), and there are still non-zero \({\bf s}_{E}\).
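Eqs.(37-40) can be sanity-checked numerically; in particular, \(\mathbf{s}=\mathbf{s}_{t}+\mathbf{s}_{E}\) holds identically. A short Python sketch (ours; the parameter values are arbitrary):

```python
import numpy as np
from scipy.special import j0, j1

mu, w, kr, kz = 1.0, 1.0, 2.0, 3.0
r = np.linspace(0.05, 5.0, 500)          # keep away from r = 0 in the 1/r terms

J0, J1 = j0(kr * r), j1(kr * r)
s_phi = -(mu / (2 * w)) * kr**3 * kz * J0 * J1                      # Eq. (38)
s_t   = -(mu / (2 * w)) * kr**2 * kz * (kr * J0 * J1 - J1**2 / r)   # Eq. (39)
s_E   = -(mu / (2 * w)) * kr**2 * kz * J1**2 / r                    # Eq. (40)

assert np.allclose(s_phi, s_t + s_E)     # total = transverse + extraordinary
```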
#### ii.2.4 The solution of the Hertz vector wave equation under the paraxial approximation
Previously, we mainly discussed exact solutions of the Hertz vector wave equation, solving for the Hertz vectors of different wave fields and then for the electromagnetic vectors corresponding to them. In this section, we obtain the Hertz vector by solving the Hertz vector wave equation under the paraxial approximation [44; 45; 1; 46], and analyze the spin vector distributions of the resulting wave fields.
Under the paraxial approximation \(|\partial_{z}^{2}\psi|\ll|2k\partial_{z}\psi|\), we can obtain the paraxial wave equation [47; 48],
\[i\frac{\partial\psi}{\partial\xi}+\frac{1}{2}\frac{\partial^{2} \psi}{\partial s^{2}}=0 \tag{41}\]
where \(\psi\) is the field envelope, \(s=x/x_{0}\) represents a dimensionless transverse coordinate, \(x_{0}\) is an arbitrary transverse scale, \(\xi=z/kx_{0}^{2}\) is a normalized propagation distance, and k is the wave number. We can also replace the envelope \(\psi\) in Eq.(41) with the Hertz vector potential \({\bf\Pi}_{m}\), turning it into a Hertz-potential wave equation under the paraxial approximation, as
\[i\frac{\partial{\bf\Pi}_{m}}{\partial\xi}+\frac{1}{2}\frac{ \partial^{2}{\bf\Pi}_{m}}{\partial s^{2}}=0 \tag{42}\]
Obviously, one of the accelerating solutions of Eq.(42) is the well-known Airy function with the characteristic infinite oscillatory tail, which can be written as
\[{\bf\Pi}_{m}=Ai(s-\frac{\xi^{2}}{4})e^{\frac{is\xi}{2}-\frac{i\xi^{3}}{12}}\hat{n} \tag{43}\]
Similarly, the direction \(\hat{n}\) can be chosen along the x-, y- or z-direction. For convenience of calculation, here we choose \(\hat{n}=\hat{y}\), so the Hertz vector of the propagating Airy beam is \({\bf\Pi}_{m}=Ai(s-\frac{\xi^{2}}{4})e^{\frac{is\xi}{2}-\frac{i\xi^{3}}{12}}\hat{y}\). Taking it into Eqs.(4-5), one obtains the electromagnetic vector of the H-type wave, as
\[{\bf E} = \omega\mu\Big[\Big(\frac{x{\cal A}}{2kx_{0}^{3}}-\frac{z^{2}{\cal A}}{4k^{3}x_{0}^{6}}+\frac{iz{\cal A}^{\prime}}{2k^{2}x_{0}^{4}}\Big)\hat{x}+\Big(-\frac{z{\cal A}}{2kx_{0}^{3}}+\frac{i{\cal A}^{\prime}}{x_{0}}\Big)\hat{z}\Big]e^{\frac{ixz}{2kx_{0}^{3}}-\frac{iz^{3}}{12k^{3}x_{0}^{6}}} \tag{44}\]
\[{\bf H} = k^{2}{\cal A}\,\hat{y}\,e^{\frac{ixz}{2kx_{0}^{3}}-\frac{iz^{3}}{12k^{3}x_{0}^{6}}} \tag{45}\]
where \({\cal A}=Ai(\frac{x}{x_{0}}-\frac{z^{2}}{4k^{2}x_{0}^{4}})\) and \({\cal A}^{\prime}=Ai^{\prime}(\frac{x}{x_{0}}-\frac{z^{2}}{4k^{2}x_{0}^{4}})\) represents the first derivative of the Airy function. We can then get the momentum and spin angular momentum properties of this Airy beam by taking the electromagnetic field into Eqs.(8-9); the spin angular momentum density distribution is shown in FIG. 4.
\[{\bf s} = -\frac{\mu}{4\omega}\frac{xk}{x_{0}^{4}}Ai(\frac{x}{x_{0}}-\frac{ z^{2}}{4k^{2}x_{0}^{4}})Ai^{{}^{\prime}}(\frac{x}{x_{0}}-\frac{z^{2}}{4k^{2}x_{0}^{4 }})\hat{y} \tag{46}\] \[{\bf p} = \frac{\mu}{2\omega}k^{4}[\frac{z}{2kx_{0}^{3}}Ai^{2}(\frac{x}{x_ {0}}-\frac{z^{2}}{4k^{2}x_{0}^{4}})\hat{x}\] (47) \[- (\frac{z^{2}}{4k^{3}x_{0}^{6}}-\frac{x}{2kx_{0}^{3}})Ai^{2}( \frac{x}{x_{0}}-\frac{z^{2}}{4k^{2}x_{0}^{4}})\hat{z}]\] \[{\bf s}_{t} = \frac{1}{2k^{2}}\nabla\times{\bf p}={\bf s},\quad{\bf s}_{E}=0 \tag{48}\]
where \(Ai^{\prime}\) is the first derivative of the Airy function. Eq.(46) shows the special result that an Airy beam propagating in the x-z plane has a spin angular momentum \({\bf s}\) along the y-direction, and the spin is completely provided by the transverse spin \({\bf s}_{t}\) caused by the spatial inhomogeneity of the beam momentum density. That is, the definition of transverse spin in [20; 21] also applies in this case.
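Eq.(48) can also be verified numerically: a finite-difference curl of the momentum density of Eq.(47) reproduces the spin density of Eq.(46). A Python sketch (ours; the grid and parameter values are arbitrary, and only interior points are compared to avoid one-sided-difference edge errors):

```python
import numpy as np
from scipy.special import airy

mu, w, k, x0 = 1.0, 1.0, 10.0, 1.0
x = np.linspace(-8.0, 2.0, 801)
z = np.linspace(0.5, 4.0, 801)
X, Z = np.meshgrid(x, z, indexing='ij')
Ai, Aip, _, _ = airy(X / x0 - Z**2 / (4 * k**2 * x0**4))

C = mu * k**4 / (2 * w)
p_x = C * Z / (2 * k * x0**3) * Ai**2                                  # Eq. (47)
p_z = -C * (Z**2 / (4 * k**3 * x0**6) - X / (2 * k * x0**3)) * Ai**2
s_y = -(mu / (4 * w)) * (X * k / x0**4) * Ai * Aip                     # Eq. (46)

# s_t,y = (1/2k^2)(curl p)_y = (1/2k^2)(dp_x/dz - dp_z/dx), Eq. (48)
s_t = (np.gradient(p_x, z, axis=1) - np.gradient(p_z, x, axis=0)) / (2 * k**2)
inner = (slice(1, -1), slice(1, -1))
assert np.allclose(s_t[inner], s_y[inner], atol=1e-2 * np.abs(s_y).max())
```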
However, when we choose \(\hat{n}=\hat{x}\), the form of the wave field changes, so we may get different results. In the case of \({\bf\Pi}_{m}=Ai(s-\frac{\xi^{2}}{4})e^{\frac{is\xi}{2}-\frac{i\xi^{3}}{12}}\hat{x}\), we can get the electromagnetic vector as
\[{\bf E} = \omega\mu\Big(-\frac{x{\cal A}}{2kx_{0}^{3}}+\frac{z^{2}{\cal A}}{4k^{3}x_{0}^{6}}-\frac{iz{\cal A}^{\prime}}{2k^{2}x_{0}^{4}}\Big)\hat{y}\,e^{\frac{ixz}{2kx_{0}^{3}}-\frac{iz^{3}}{12k^{3}x_{0}^{6}}} \tag{49}\]
\[{\bf H} = \Big[\Big(\frac{x{\cal A}}{x_{0}^{3}}-\frac{z^{2}{\cal A}}{2k^{2}x_{0}^{6}}+\frac{iz{\cal A}^{\prime}}{kx_{0}^{4}}+k^{2}{\cal A}\Big)\hat{x}+\Big(\frac{i{\cal A}}{2kx_{0}^{3}}-\frac{3xz{\cal A}}{4k^{2}x_{0}^{6}}+\frac{z^{3}{\cal A}}{4k^{4}x_{0}^{9}}-\frac{iz^{2}{\cal A}^{\prime}}{2k^{3}x_{0}^{7}}+\frac{ix{\cal A}^{\prime}}{2kx_{0}^{4}}\Big)\hat{z}\Big]e^{\frac{ixz}{2kx_{0}^{3}}-\frac{iz^{3}}{12k^{3}x_{0}^{6}}} \tag{50}\]
We can also obtain the phase distributions of the electric field and the phase difference between the two electric field components, as shown in FIG. 5. Then we can get the spin and momentum densities by taking the electromagnetic vectors into Eqs.(8-9), as shown in FIG. 4.
\[{\bf s} = \frac{\mu}{2\omega}[\frac{-x{\cal A}^{2}}{2kx_{0}^{6}}+\frac{z^{2}{ \cal A}^{2}}{4k^{3}x_{0}^{9}}-\frac{k{\cal A}^{2}}{2x_{0}^{3}}+\frac{z^{2}{\cal A }{\cal A}^{{}^{\prime}}}{2kx_{0}^{7}}-\frac{x^{2}{\cal A}{\cal A}^{{}^{\prime}} }{2kx_{0}^{7}}-\frac{kx{\cal A}{\cal A}^{{}^{\prime}}}{2x_{0}^{4}}]\hat{y} \tag{51}\] \[{\bf p} = \frac{\mu}{2\omega}\left(\begin{array}{c}\frac{3x^{2}{\cal A}^ {2}}{8kx_{0}^{2}}+\frac{z^{5}{\cal A}^{2}}{16k^{3}x_{0}^{9}}-\frac{5xz^{3}A^{2 }}{16k^{3}x_{0}^{2}}-\frac{z{\cal A}{\cal A}^{{}^{\prime}}}{4kx_{0}^{7}}+\frac {z^{3}{\cal A}^{{}^{\prime}2}}{4k^{3}x_{0}^{1}}-\frac{xz{\cal A}^{{}^{\prime }2}}{4kx_{0}^{8}}\\ \\ \frac{kx^{2}{\cal A}^{2}}{2x_{0}^{6}}-\frac{xz^{2}{\cal A}^{2}}{2kx_{0}^{2}}+ \frac{z^{4}{\cal A}^{2}}{8k^{3}x_{0}^{1}}+\frac{rk^{3}{\cal A}^{2}}{2x_{0}^{ 5}}-\frac{kz^{2}{\cal A}^{2}}{4x_{0}^{6}}+\frac{z^{2}{\cal A}^{{}^{\prime}2}}{ 2kx_{0}^{6}}\end{array}\right)\] (53) \[{\bf s}_{t} = \frac{1}{2k^{2}}\nabla\times{\bf p}=s_{t}\hat{y}\neq 0,\ {\bf s}_{E}={\bf s}-{\bf s}_{t}=s_{E}\hat{y}\neq 0 \tag{54}\]
Eq.(54) shows a very different result from Eq.(48): there is a non-zero extraordinary transverse spin in this case, as shown in FIG. 4(d). Similarly, we also calculate the case of \({\bf\Pi}_{m}=Ai(s-\frac{\xi^{2}}{4})e^{\frac{is\xi}{2}-\frac{i\xi^{3}}{12}}\hat{z}\), and there is a similar result.
In addition, another typical solution of the wave equation in the paraxial approximation is the Gaussian beam [14; 44; 45; 46]. The Hertz vector wave equation Eq.(1) under the paraxial approximations \(|\partial_{z}^{2}\Pi|\ll|2k\partial_{z}\Pi|\) and \(|\partial_{z}^{2}\Pi|\ll|\partial_{x}^{2}\Pi|,|\partial_{y}^{2}\Pi|\) can be expressed as
\[\nabla_{\perp}^{2}{\bf\Pi}+2ik\partial_{z}{\bf\Pi}=0 \tag{55}\]
A feasible solution of Eq.(55) is the Gaussian beam, as
\[{\bf\Pi}_{m}=A_{0}\frac{z_{R}}{z-iz_{R}}e^{\frac{ik\rho^{2}}{2(z-iz_{R})}}\hat{n} \tag{56}\]
where \(A_{0}\) is a constant field amplitude, \(z_{R}=\frac{kw_{0}^{2}}{2}\) is the Rayleigh diffraction length, and \(w_{0}\) is the beam waist [49].
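Eq.(56) can be checked directly against Eq.(55). A symbolic sketch (ours) confirms that the \(ik\rho^{2}/2(z-iz_{R})\) exponent satisfies the paraxial equation:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
k, zR, A0 = sp.symbols('k z_R A_0', positive=True)
q = z - sp.I * zR
Pi = A0 * zR / q * sp.exp(sp.I * k * (x**2 + y**2) / (2 * q))   # Eq. (56)

# Paraxial equation (55): transverse Laplacian + 2ik d/dz
paraxial = sp.diff(Pi, x, 2) + sp.diff(Pi, y, 2) + 2 * sp.I * k * sp.diff(Pi, z)
assert sp.simplify(paraxial) == 0
```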
When we choose \(\hat{n}=\hat{x}\), we can get the electromagnetic field by taking Eq.(56) into Eqs.(4-5), as
\[{\bf E} = \omega\mu A_{0}\Big[0\hat{x}+\Big(\frac{k\rho^{2}z_{R}}{2(z-iz_{R})^{3}}-\frac{iz_{R}}{(z-iz_{R})^{2}}\Big)\hat{y}+\frac{kyz_{R}}{(z-iz_{R})^{2}}\hat{z}\Big]e^{\frac{ik\rho^{2}}{2(z-iz_{R})}} \tag{57}\]
\[{\bf H} = A_{0}\Big[\Big(-\frac{k^{2}x^{2}z_{R}}{(z-iz_{R})^{3}}+\frac{ikz_{R}}{(z-iz_{R})^{2}}+\frac{k^{2}z_{R}}{z-iz_{R}}\Big)\hat{x}-\frac{k^{2}xyz_{R}}{(z-iz_{R})^{3}}\hat{y}+\Big(\frac{k^{2}\rho^{2}xz_{R}}{2(z-iz_{R})^{4}}-\frac{2ikxz_{R}}{(z-iz_{R})^{3}}\Big)\hat{z}\Big]e^{\frac{ik\rho^{2}}{2(z-iz_{R})}} \tag{58}\]
we can get the phase distributions of the electric field near the focal plane (we choose z=0.001), as shown in FIG. 6, and then we can further get the momentum and spin density distributions of the Gaussian beam by Eqs.(8-9), as
\[{\bf s} = s_{x}\hat{x}+s_{y}\hat{y}+s_{z}\hat{z}\] (59) \[s_{x} = \frac{\mu A_{0}^{2}}{2\omega}[\frac{k^{3}yz_{R}^{2}}{(z^{2}+z_{R} ^{2})^{2}}-\frac{k^{4}\rho^{2}yz_{R}^{3}}{2(z^{2}+z_{R}^{2})^{3}}\] (60) \[+\frac{2k^{3}x^{2}yz_{R}^{2}}{(z^{2}+z_{R}^{2})^{3}}-\frac{k^{4} \rho^{2}x^{2}yz_{R}^{3}}{2(z^{2}+z_{R}^{2})^{4}}]e^{-\frac{k\rho^{2}z_{R}}{z^{2} +z_{R}}}\] \[s_{y} = \frac{\mu A_{0}^{2}}{2\omega}[\frac{k^{4}x^{3}\rho^{2}z_{R}^{2}} {2(z^{2}+z_{R}^{2})^{4}}+\frac{k^{3}x\rho^{2}z_{R}^{2}}{2(z^{2}+z_{R}^{2})^{4}} -\frac{k^{3}x^{2}\rho^{2}z_{R}^{4}}{2(z^{2}+z_{R}^{2})^{4}}\] \[-\frac{3k^{4}x\rho^{2}z^{2}z_{R}^{2}}{2(z^{2}+z_{R}^{2})^{4}}+ \frac{k^{4}x\rho^{2}z_{R}^{2}}{2(z^{2}+z_{R}^{2})^{4}}-\frac{2k^{3}x^{3}z_{R}^{2} }{(z^{2}+z_{R}^{2})^{3}}\] \[+\frac{2k^{2}xz_{R}^{3}}{(z^{2}+z_{R}^{2})^{3}}+\frac{2k^{3}xz^{2 }z_{R}^{2}}{(z^{2}+z_{R}^{2})^{3}}-\frac{2k^{3}x^{4}}{(z^{2}+z_{R}^{2})^{3}}]e^{- \frac{k\rho^{2}}{2(z_{R})}}\] \[s_{z} = \frac{\mu A_{0}^{2}}{2\omega}[\frac{k^{3}xyz_{R}^{2}}{(z^{2}+z_{R} ^{2})^{3}}-\frac{2k^{4}xyz_{R}^{3}}{(z^{2}+z_{R}^{2})^{3}}]e^{-\frac{k\rho^{2}}{ 2(z^{2}+z_{R}^{2})}}\] (61) \[{\bf p} = p_{x}\hat{x}+p_{y}\hat{y}+p_{z}\hat{z}\] (62) \[p_{x} = \frac{\mu k^{2}A_{0}^{2}}{2\omega}[-\frac{k^{2}x\rho^{2}zz_{R}^{3}} {(z^{2}+z_{R}^{2})^{4}}+\frac{2kzzzz_{R}^{2}}{(z^{2}+z_{R}^{2})^{3}}\] (63) \[+\frac{k^{3}x^{2}\rho^{2}zz_{R}^{2}}{4(z^{2}+z_{R}^{2})^{4}}+\frac {k^{3}xy^{2}zz_{R}^{2}}{(z^{2}+z_{R}^{2})^{3}}]e^{-\frac{k\rho^{2}}{2(z^{2}+z_{R} ^{2})}}\] \[p_{y} = \frac{\mu k^{2}A_{0}^{2}}{2\omega}[-\frac{k^{3}x^{2}yz_{R}^{2}}{(z ^{2}+z_{R}^{2})^{3}}+\frac{k^{3}yzz_{R}^{2}}{(z^{2}+z_{R}^{2})^{2}}]e^{-\frac{k \rho^{2}}{z^{2}+z_{R}^{2}}}\] (64) \[p_{z} = \frac{\mu k^{2}A_{0}^{2}}{2\omega}[-\frac{k^{2}x^{2}z_{R}^{2}}{(z ^{2}+z_{R}^{2})^{3}}+\frac{kz_{R}^{2}}{(z^{2}+z_{R}^{2})^{2}}-\frac{k^{2}z_{R}^{ 2}}{(z^{2}+z_{R}^{2})^{2}}\] (65) \[+\frac{k^{3}x^{2}\rho^{2}z_{R}^{2}}{2(z^{2}+z_{R}^{
Then we can obtain the transverse and extraordinary spin components of the Gaussian beam, as shown in FIG. 7:
\[\mathbf{s}_{t} = \frac{1}{2k^{2}}\nabla\times\mathbf{p} \tag{66}\] \[= s_{tx}\hat{x}+s_{ty}\hat{y}\] \[s_{tx} = \frac{\mu A_{0}^{2}}{2\omega}[\frac{2k^{3}x^{2}y}{z_{R}^{4}}- \frac{k^{4}x^{2}y}{2z_{R}^{3}}-\frac{k^{4}x^{2}y^{3}}{2z_{R}^{5}}+\frac{k^{3}y ^{3}}{2z_{R}^{4}}\] (67) \[-\frac{k^{4}y^{3}}{2z_{R}^{3}}-\frac{3k^{2}y}{2z_{R}^{3}}+\frac{3 k^{3}y}{2z_{R}^{2}}-\frac{k^{4}x^{4}y}{2z_{R}^{5}}]e^{-\frac{k^{2}}{2\hbar}}\] \[s_{ty} = \frac{\mu A_{0}^{2}}{2\omega}[\frac{k^{4}x^{3}y^{2}}{2z_{R}^{5}}- \frac{5k^{3}x^{3}}{2z_{R}^{4}}+\frac{k^{4}x^{3}}{2z_{R}^{3}}-\frac{k^{3}xy^{2} }{z_{R}^{4}}\] (68) \[+\frac{k^{4}xy^{2}}{2z_{R}^{3}}+\frac{5k^{2}x^{3}}{2z_{R}^{3}}- \frac{3k^{3}x}{2z_{R}^{2}}+\frac{k^{4}x^{5}}{2z_{R}^{5}}]e^{-\frac{k^{2}}{2 \hbar}}\] \[\mathbf{s}_{E} = \mathbf{s}-\mathbf{s}_{t}\] (69) \[= \frac{\mu A_{0}^{2}}{2\omega}[(-\frac{k^{3}y}{2z_{R}^{3}}-\frac{ k^{3}y^{3}}{2z_{R}^{4}}+\frac{3k^{2}y}{2z_{R}^{3}})\hat{x}\] \[+(\frac{k^{3}xy^{2}}{2z_{R}^{4}}-\frac{k^{3}x}{2z_{R}^{2}}-\frac{ k^{2}x}{2z_{R}^{3}})\hat{y}]e^{-\frac{k^{2}}{4\hbar}}\]
Similarly, we can also choose \(\hat{n}=\hat{y}\); after a series of tedious calculations, we can obtain the momentum and spin angular momentum density distributions in this case, and there is still a non-zero \(\mathbf{s}_{E}\) as
\[\mathbf{s}_{E} = \mathbf{s}-\mathbf{s}_{t} \tag{70}\] \[= \frac{\mu A_{0}^{2}}{2\omega}[(\frac{k^{3}y}{2z_{R}^{2}}+\frac{k ^{2}y}{2z_{R}^{3}}-\frac{k^{3}x^{2}y}{2z_{R}^{3}})\hat{x}\] \[+(\frac{k^{3}x}{2z_{R}^{2}}-\frac{3k^{2}x}{2z_{R}^{3}}+\frac{k^{ 3}x^{3}}{2z_{R}^{4}})\hat{y}]e^{-\frac{k^{2}}{2\hbar}}\]
## III Conclusion
We have carried out an expansion on the basis of [21]. The transverse spin is an intrinsic property of all structured wave fields, and it is necessary to explore its physical origin. Judging whether a spin is transverse solely by whether the spin vector is perpendicular to the direction of wave propagation carries no physical meaning. It seems reasonable instead to characterize the transverse spin by the spatial inhomogeneity of the momentum density of the wave field, because for a structured wave field the momentum density is always non-uniformly distributed, so a transverse spin is produced. However, it is not enough to define the transverse spin by the momentum density alone. In addition to the transverse spin caused by the inhomogeneous momentum density, there may be another "extraordinary" spin component, which is also transverse. In particular, for single evanescent waves, and for non-single evanescent waves whose decay direction coincides with the main polarization direction, the "extraordinary" spin vanishes. For non-planar propagating waves, the transverse spin is usually provided by both the inhomogeneous momentum density and a non-zero "extraordinary" spin.
## Appendix
In a general scenario, for any wave, either evanescent or propagating, we derive the following curl of the momentum from Maxwell's equations in Gaussian units,
\[\nabla\times\mathbf{p} = \nabla\times\mathbf{p}_{\mathrm{O}}+\frac{1}{2}\nabla\times \nabla\times\mathbf{s}=-\frac{1}{2}\nabla^{2}\mathbf{s}+\nabla\times\mathbf{p} _{\mathrm{O}} \tag{72}\] \[= k^{2}\mathbf{s}-\frac{g}{\omega}\nabla\times\mathbf{A_{1}}+ \nabla\times\mathbf{p}_{\mathrm{O}}.\]
Here
\[\nabla\times\mathbf{A}_{1}=\Im\left(\begin{array}{c}\nabla E_{y}^{*}\cdot \nabla E_{z}+\nabla H_{y}^{*}\cdot\nabla H_{z}\\ \nabla E_{z}^{*}\cdot\nabla E_{x}+\nabla H_{z}^{*}\cdot\nabla H_{x}\\ \nabla E_{x}^{*}\cdot\nabla E_{y}+\nabla H_{x}^{*}\cdot\nabla H_{y}\end{array} \right). \tag{73}\]
According to Eq. (72), we obtain two wave equations,
\[\nabla^{2}\mathbf{s}+2k^{2}\mathbf{s}=\frac{2g}{\omega}\nabla \times\mathbf{A}_{1}, \tag{74}\] \[\nabla^{2}\mathbf{p}_{\mathrm{S}}+2k^{2}\mathbf{p}_{\mathrm{S}}=- \frac{g}{\omega}\nabla^{2}\mathbf{A}_{1}. \tag{75}\]
We may choose \(\mathbf{A}_{1}\) as
\[\mathbf{A}_{1}=\frac{\omega}{g}\left(\frac{1}{2}\nabla^{2}+k^{2}\right)\mathbf{ A}_{\mathbf{s}}, \tag{76}\]
where \(\mathbf{A}_{\mathbf{s}}\) serves as the potential of \(\mathbf{s}\).
\[\mathbf{s}=\nabla\times\mathbf{A}_{\mathbf{s}}, \tag{77}\] \[\mathbf{A}_{\mathbf{s}}=\frac{1}{2\pi}\int_{V}\mathrm{d}\tau^{ \prime}\frac{\mathbf{p}_{S}}{|\mathbf{r}-\mathbf{r}^{\prime}|}. \tag{78}\]
###### Acknowledgements.
Z. X. and Y. L. are supported by the Young Scientist Fund [NSFC11804087], the National Natural Science Foundation of China [NSFC12047501], the Science and Technology Department of Hubei Province [2022CFB553, 2018CFB148], and the Educational Commission of Hubei Province of China [Q20211008]. Y. L. and B. Z. are supported by the National Natural Science Foundation of China [NSFC12074107], the Science and Technology Department of Hubei Province [2022CFA012], and the Educational Commission of Hubei Province [T2020001].
|
2301.10747
|
$h(1) \oplus su(2)$ vector algebra eigenstates with eigenvalues in the
matrix domain
|
A new set of $ h(1) \oplus su(2)$ vector algebra eigenstates on the matrix
domain is obtained by defining them as eigenstates of a generalized
annihilation operator formed from a linear combination of the generators of
this algebra, whose eigenvalues are distributed as the elements of a square
complex normal matrix. A combined method is used to compute these eigenstates,
namely, the method of exponential operators and that of a system of first-order
linear differential equations. We compute these states for all possible
combination of generators and classify them in different categories according
to a generalized commutation relation as well as according to the value of a
characteristic parameter related to the $su(2)$ algebra eigenvalues. Proceeding
in this way, we found a subset of generalized vector coherent states in the
matrix domain which can be easily separated from the general set of
Schr\"odinger-Robertson minimum uncertainty intelligent states. In particular,
for a special choice of the matrix eigenvalue parameters we found the so-called
vector coherent states with matrices associated to the Heisenberg-Weyl group as
well as a generalized version of them, and also a direct connection with the
coherent state quantization of quaternions.
|
Nibaldo-Edmundo Alvarez-Moraga
|
2023-01-25T18:10:01Z
|
http://arxiv.org/abs/2301.10747v1
|
# \(h(1)\oplus su(2)\) vector algebra eigenstates with eigenvalues in the matrix domain
###### Abstract
A new set of \(h(1)\oplus su(2)\) vector algebra eigenstates on the matrix domain is obtained by defining them as eigenstates of a generalized annihilation operator formed from a linear combination of the generators of this algebra, whose eigenvalues are distributed as the elements of a square complex normal matrix. A combined method is used to compute these eigenstates, namely, the method of exponential operators and that of a system of first-order linear differential equations. We compute these states for all possible combinations of generators and classify them in different categories according to a generalized commutation relation as well as according to the value of a characteristic parameter related to the \(su(2)\) algebra eigenvalues. Proceeding in this way, we find a subset of generalized vector coherent states in the matrix domain which can be easily separated from the general set of Schrödinger-Robertson minimum-uncertainty intelligent states. In particular, for a special choice of the matrix eigenvalue parameters we find the so-called vector coherent states with matrices associated to the Heisenberg-Weyl group as well as a generalized version of them, and also a direct connection with the coherent state quantization of quaternions.
## 1 Introduction
Coherent states [1], expressed as eigenvectors of the harmonic oscillator lowering operator and verifying important physical properties, were first introduced by J. R. Klauder [2]. From there, several definitions of coherent states related to this and other quantum physical systems were introduced in the literature, and generalizations of this concept of theoretical interest in physics and mathematics became known, all of them ensuring some desired properties such as minimum uncertainty, resolution of the identity, temporal stability or the action-identity property. Thus, some consider them as states generated from the action of a unitary irreducible representation of the group on a fixed state of the Hilbert space representation, while others define these states as eigenstates of ladder operators generating a representation of the associated Lie algebra, or as states associated to physical Hamiltonians having a non-degenerate spectrum [3]-[11]. The concept of algebra eigenstates associated to an arbitrary Lie group was introduced by C. Brif, who defined them as eigenstates of elements of the corresponding complex Lie algebra [12, 13]. He used this method, among others, to compute the algebra eigenstates of the \(SU(2)\) and \(SU(1,1)\) Lie groups and showed that these states include different subsets of Perelomov's generalized coherent states as well as of the so-called intelligent states that minimize the Schrödinger-Robertson uncertainty relation. An adaptation of this last concept to the \(h(1)\oplus su(2)\) Lie algebra allowed us to generate new sets of generalized coherent and squeezed states associated with physical systems whose Hamiltonians were constructed as the product of well-defined creation and annihilation operators extracted from the set of elements of this algebra [14]. Then, a new generalization of the concept of vector coherent states to the matrix domain arose by substituting the characteristic complex parameter of the canonical Heisenberg-Weyl coherent states by a normal matrix and requiring that the resulting
states to verify the resolution of the identity [15]. The same successful substitution was done in the \(SU(1,1)\) Gilmore-Perelomov and Barut-Girardello type coherent states. An important consequence of this extension is that it led to the coherent state quantization of quaternions [16]. In this article we extend the definition of algebra eigenstates with eigenvalues in the complex domain, in which the eigenvectors are states in the Hilbert space of representation of the algebra, to eigenvalues in the complex matrix domain, that is, complex matrix eigenvalues, where now the associated eigenvectors, which from now on we will call vector algebra eigenstates, are vectors whose components are themselves states of the Hilbert space of representation of the algebra. We use this definition to compute the vector algebra eigenstates of the \(h(1)\oplus su(2)\) Lie algebra, but it will be clear from the subsequent development that it can be used to compute the vector algebra eigenstates of other Lie algebras.
This article is organized as follows. In section 2 we introduce the \(h(1)\oplus su(2)\) Lie algebra generators, their commutation relations and their action on a chosen irreducible representation space of this algebra, and establish the nomenclature used to describe the vector states. In section 3 we give the definition of, and compute, the vector algebra eigenstates of the annihilation operator associated to the quantum harmonic oscillator, and compare the resulting expressions with the set of Heisenberg-Weyl vector coherent states on the matrix domain that can be found in the literature. Also, we choose the entries of the matrix eigenvalue to be equal to the matrix elements of the elements of the \(su(2)\oplus\hat{I}_{j}\) algebra in the \(j\) irreducible representation. This choice allows us, for a particular selection of the parameters, to generalize the quaternionic vector coherent states to an arbitrary \(su(2)\) irreducible representation space. Moreover, we show there the connection of our general method with the coherent state quantization of quaternions. In section 4, we give the definition of vector algebra eigenstates for the \(h(1)\oplus su(2)\) Lie algebra and compute these states for all possible linear combinations of generators of this algebra in the case when the matrix eigenvalue is complex and normal. In appendices A and B, we recalculate completely the \(h(1)\oplus su(2)\) algebra eigenstates by combining the exponential-operator and linear-differential-equation-system methods, and prepare the expressions appropriately for use in the main body of this article.
## 2 \(h(1)\oplus su(2)\) Lie algebra and its generators
The \(h(1)\oplus su(2)\) Lie algebra is the direct sum of the Heisenberg-Weyl algebra, whose generators are \(\hat{a},\hat{a}^{\dagger}\) and the identity \(\hat{I}\), and the \(su(2)\) algebra, generated by \(\hat{J}_{-},\hat{J}_{+}\) and \(\hat{J}_{3}\). These sets of operators satisfy the commutation relations
\[\left[\hat{a},\hat{a}^{\dagger}\right]=\hat{I},\quad\left[\hat{I},\hat{a} \right]=\hat{0}\quad\text{and}\quad\left[\hat{I},\hat{a}^{\dagger}\right]=\hat{0}, \tag{2.1}\]
where \(\hat{0}\) is the null operator, and
\[\left[\hat{J}_{+},\hat{J}_{-}\right]=2\hat{J}_{3}\quad\text{and}\quad\left[\hat{J}_{3},\hat{J}_{\pm}\right]=\pm\hat{J}_{\pm}, \tag{2.2}\]
respectively.
The action of the former on the orthonormal basis vectors \(\mid n\rangle,n=0,1,2,..\) spanning the infinite dimensional Hilbert space \(\mathcal{H}\) is given by
\[\hat{a}\mid n\rangle=\sqrt{n}\mid n-1\rangle,\quad\hat{a}^{\dagger}\mid n \rangle=\sqrt{n+1}\mid n+1\rangle \tag{2.3}\]
and
\[\hat{I}\mid n\rangle=\mid n\rangle, \tag{2.4}\]
with \(n=0,1,2...\) and the action of the latter on the orthonormal basis vectors \(\mid j,m\rangle\) of the irreducible representation \(j\), of dimension \(2j+1\), is given by
\[\hat{J}_{\pm}\mid j,m\rangle=\sqrt{(j\mp m)(j\pm m+1)}\mid j,m\pm 1\rangle\quad\text{and}\quad\hat{J}_{3}\mid j,m\rangle=m\mid j,m\rangle, \tag{2.5}\]
where \(j\) is a non-negative integer or half-integer and \(m=-j,-j+1,...,j-1,j\).
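A compact numerical realization of these actions (a sketch of ours; the oscillator truncation level is arbitrary) builds the truncated ladder operators and the spin-j matrices, and checks the commutation relations (2.1)-(2.2):

```python
import numpy as np

def osc_ops(N):
    """Truncated ladder operators of Eq. (2.3) on span{|0>, ..., |N-1>}."""
    a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)
    return a, a.conj().T

def su2_ops(j):
    """J+, J-, J3 of Eq. (2.5) in the basis |j,-j>, ..., |j,j>."""
    m = np.arange(-j, j + 1.0)
    Jp = np.diag(np.sqrt((j - m[:-1]) * (j + m[:-1] + 1)), k=-1)  # raises m
    return Jp, Jp.conj().T, np.diag(m)

a, ad = osc_ops(6)
Jp, Jm, J3 = su2_ops(1)
assert np.allclose(Jp @ Jm - Jm @ Jp, 2 * J3)       # [J+, J-] = 2 J3
assert np.allclose(J3 @ Jp - Jp @ J3, Jp)           # [J3, J+] = +J+
# [a, a+] = I holds below the truncation edge of the Fock space:
comm = a @ ad - ad @ a
assert np.allclose(comm[:-1, :-1], np.eye(5))
```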
The orthonormal basis vectors of the \(h(1)\oplus su(2)\) algebra, for fixed \(j\), can be written as the direct product
\[\mid n\rangle\otimes\mid j,m\rangle=\mid n;j,m\rangle,\quad n=0,1,2...;\quad m =-j,....,j. \tag{2.6}\]
Then, for fixed \(j\), a general state \(|\,\psi\rangle^{j}\) of the algebra \(h(1)\oplus su(2)\) can be expressed in the form
\[|\,\psi\rangle^{j}=\sum_{n=0}^{\infty}\sum_{m=-j}^{j}C_{n,m}^{j}\,|\,n;j,m\rangle, \tag{2.7}\]
where \(C_{n,m}^{j}\) are complex coefficients for all \(n,m\).
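Continuing the sketch above, the tensor-product basis of Eq. (2.6) is realized with Kronecker products, and a general state (2.7) is just a coefficient array \(C[n,m]\) flattened onto that basis (our illustration; the random coefficients are arbitrary):

```python
# Operators on the product basis |n> (x) |j,m> of Eq. (2.6)
N, dj = a.shape[0], J3.shape[0]
a_full  = np.kron(a, np.eye(dj))    # acts only on the oscillator factor
J3_full = np.kron(np.eye(N), J3)    # acts only on the su(2) factor
assert np.allclose(a_full @ J3_full, J3_full @ a_full)  # sectors commute

# A general state (2.7): coefficients C[n, m] flattened to length N*(2j+1)
rng = np.random.default_rng(1)
Cnm = rng.normal(size=(N, dj)) + 1j * rng.normal(size=(N, dj))
psi = Cnm.ravel()
psi /= np.linalg.norm(psi)
```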
### Realization of \(h(1)\) generators on analytic functions space
Another realization of the oscillator algebra can be found in the Fock-Bargmann representation space \(\mathcal{H}\) of analytic functions \(f(\zeta),\zeta\in\mathbb{C}\), equipped with the scalar product
\[(f_{1},f_{2})=\int_{\mathbb{C}}f_{1}^{*}(\zeta)\,f_{2}(\zeta)\,e^{-|\zeta|^{2}}\,\frac{d\zeta\,d\zeta^{*}}{2\pi i},\quad\zeta\in\mathbb{C}. \tag{2.8}\]
In this space, an arbitrary analytic function \(f(\zeta)\) can be expressed as a linear combination of the elementary analytic functions \(\langle\zeta\mid n\rangle=\varphi_{n}(\zeta)=\frac{\zeta^{n}}{\sqrt{n!}},n=0,1,2,\ldots\), which span \(\mathcal{H}\) and verify the orthonormality property
\[(\varphi_{r}(\zeta),\varphi_{s}(\zeta))=\delta_{rs},\,\forall r,s\in\mathbb{ N}_{0}. \tag{2.9}\]
In terms of these states the function \(f(\zeta)\) writes
\[f(\zeta)=\sum_{n=0}^{\infty}c_{n}\varphi_{n}(\zeta)=\sum_{n=0}^{\infty}c_{n} \frac{\zeta^{n}}{\sqrt{n!}}. \tag{2.10}\]
In this representation the \(h(1)\) algebra generators take the form:
\[\hat{a}=\frac{d}{d\zeta},\quad\hat{a}^{\dagger}=\zeta,\quad\text{and}\ \hat{I}=1. \tag{2.11}\]
As we can see above, the elementary analytic states can be thought of as projections of the harmonic oscillator number eigenstates \(|\,n\rangle\) onto the \(|\,\zeta\rangle\) state. Using this fact, the projection of the general state (2.7) onto the \(|\,\zeta\rangle\otimes\hat{I}^{j}\) state, where \(\hat{I}^{j}=\sum_{m=-j}^{j}|\,j,m\rangle\langle j,m\,|\) is the identity operator in the \((2j+1)\)-dimensional \(j\) space, is given by
\[|\,\psi(\zeta)\rangle^{j}=\hat{I}^{j}\otimes\langle\zeta\mid\psi\rangle^{j}= \sum_{m=-j}^{j}\psi_{m}^{j}(\zeta)\otimes|\,j,m\rangle, \tag{2.12}\]
where
\[\psi_{m}^{j}(\zeta)=\sum_{n=0}^{\infty}C_{n,m}^{j}\frac{\zeta^{n}}{\sqrt{n!}},\quad m=-j,\cdots,j, \tag{2.13}\]
are analytic functions, which can also be interpreted as the coefficients of the expansion of the state \(|\,\psi(\zeta)\rangle^{j}\) in terms of the basis states \(|\,j,m\rangle\).
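Before moving on, the realization (2.11) can be checked directly; the following sympy sketch, an illustration under our own naming conventions, verifies that \(\hat{a}=d/d\zeta\) and \(\hat{a}^{\dagger}=\zeta\) close the commutator and act on the \(\varphi_{n}\) as in (2.3).

```python
# Check of the Fock-Bargmann realization (2.11) with sympy.
import sympy as sp

zeta = sp.symbols('zeta')
f = sp.Function('f')(zeta)

# [a, a^dagger] f = d/dzeta (zeta f) - zeta df/dzeta = f
assert sp.simplify(sp.diff(zeta * f, zeta) - zeta * sp.diff(f, zeta) - f) == 0

# a phi_n = sqrt(n) phi_{n-1} for phi_n = zeta**n / sqrt(n!), cf. (2.3)
n = 3
phi_n = zeta**n / sp.sqrt(sp.factorial(n))
phi_nm1 = zeta**(n - 1) / sp.sqrt(sp.factorial(n - 1))
assert sp.simplify(sp.diff(phi_n, zeta) - sp.sqrt(n) * phi_nm1) == 0
```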
## 3 Quantum oscillator annihilation operator vector eigenstates with matrix eigenvalues
In this section we define and solve the matrix eigenvalue equation for the quantum harmonic oscillator annihilation operator. We will use the basis vectors associated with the \(h(1)\oplus su(2)\) algebra representation to express the states, and we will see that the set of solutions of this matrix eigenvalue equation contains the subset of vector coherent states over the matrix domain studied in the literature [15]. We note that, in this particular case, the use of the \(su(2)\) sector representation space is optional; it could be replaced by another algebra representation space.
Let us start with a preliminary definition of the vector algebra eigenstates of \(\hat{a}\) as those states that satisfy the matrix eigenvalue equation
\[\hat{a}\,|\,\Psi\rangle_{K}^{j}=\bar{M}\,|\,\Psi\rangle_{K}^{j}, \tag{3.1}\]
where \(|\,\Psi\rangle_{K}^{j}\) is the \(K\) component vector state given by
\[|\,\Psi\rangle_{K}^{j}=\left(\begin{array}{c}|\,\psi\rangle_{\{1\}}^{j}\\ |\,\psi\rangle_{\{2\}}^{j}\\ \vdots\\ |\,\psi\rangle_{\{K-1\}}^{j}\\ |\,\psi\rangle_{\{K\}}^{j}\end{array}\right), \tag{3.2}\]
where each state \(|\,\psi\rangle_{\{s\}}^{j}=\sum_{n=0}^{\infty}\sum_{m=-j}^{j}C_{nm}^{[s]j}|\,n\rangle\otimes|\,j,m\rangle\), \(s=1,\cdots,K\), is an unknown state of the \(h(1)\oplus su(2)\) representation space, to be determined, and \(\bar{M}\) is a diagonalizable complex square matrix of dimension \(K\times K\).
### When \(\bar{M}\) is a normal matrix
In general, when \(\bar{M}\) is normal, i.e., when \(\bar{M}\bar{M}^{\dagger}=\bar{M}^{\dagger}\bar{M}\), a similarity transformation of \(\bar{M}\) by a unitary matrix \(\bar{U}\) can be performed that leads (3.1) to the simplified form
\[\hat{a}\,|\,\bar{\Psi}\rangle_{K}^{j}=\bar{D}\,|\,\bar{\Psi}\rangle_{K}^{j}, \tag{3.3}\]
where
\[\bar{D}=\bar{U}^{\dagger}\bar{M}\bar{U},\quad\text{and}\quad|\,\bar{\Psi} \rangle_{K}^{j}=\bar{U}^{\dagger}\,|\,\Psi\rangle_{K}^{j}, \tag{3.4}\]
where \(\bar{D}\) is a diagonal matrix whose entries are the eigenvalues of \(\bar{M}\) and \(|\,\bar{\Psi}\rangle_{K}^{j}\) is the corresponding transformed state. Thus, each component of the matrix equation (3.3) verifies the eigenvalue equation
\[\hat{a}\,|\,\bar{\psi}\rangle_{\{s\}}^{j}=\lambda_{\{s\}}^{j}\,|\,\bar{\psi} \rangle_{\{s\}}^{j},\quad s=1,\cdots,K. \tag{3.5}\]
By solving these equations we realize that each component \(|\,\bar{\psi}\rangle_{\{s\}}^{j}\) can be written as a direct product between a canonical oscillator coherent state and a generic state of the \(j\) representation space of the \(su(2)\) Lie algebra, i.e.,
\[|\,\bar{\psi}\rangle_{\{s\}}^{j}=e^{\lambda_{\{s\}}^{j}\hat{a}^{\dagger}}\,|\,0\rangle\otimes\sum_{m=-j}^{j}\,\bar{\psi}_{\{s\}m}^{j}(0)\,|\,j,m\rangle,\quad s=1,\cdots K, \tag{3.6}\]
then we have
\[|\,\Psi(\bar{M})\rangle_{K}^{j}=\bar{U}\,|\,\bar{\Psi}(\Lambda)\rangle_{K}^{j}=\mathcal{N}^{-1/2}\bar{U}e^{\bar{D}\hat{a}^{\dagger}}\,|\,\bar{\Psi}(0)\rangle_{K}^{j}=\mathcal{N}^{-1/2}e^{\bar{M}\hat{a}^{\dagger}}\,|\,\Psi(0)\rangle_{K}^{j}, \tag{3.7}\]
where \(\mathcal{N}\) is a normalization constant depending on the \(\lambda_{\{s\}}^{j}\) values but also on the choice of the \(K\times(2j+1)\) arbitrary constants \(\bar{\psi}_{\{s\}m}^{j}(0)\), \(s=1,\cdots,K\), \(m=-j,\cdots,j\), and
\[|\,\bar{\Psi}(0)\rangle_{K}^{j}=\left(\begin{array}{c}|\,0\rangle\otimes| \,\bar{\psi}(0)\rangle_{\{1\}}^{j}\\ |\,0\rangle\otimes|\,\bar{\psi}(0)\rangle_{\{2\}}^{j}\\ \vdots\\ |\,0\rangle\otimes|\,\bar{\psi}(0)\rangle_{\{K-1\}}^{j}\\ |\,0\rangle\otimes|\,\bar{\psi}(0)\rangle_{\{K\}}^{j}\end{array}\right), \tag{3.8}\]
with \(|\,\bar{\psi}(0)\rangle^{j}_{\{s\}}=\sum_{m=-j}^{j}\bar{\psi}^{j}_{\{s\}m}(0)\,|\,j,m\rangle\), \(s=1,\cdots,K\).
We notice that (3.8) represents a set of \(K\times(2j+1)\) vector eigenstates of \(\hat{a}\) with associated eigenvalues equal to zero, just like the states \(|\,\Psi(0)\rangle^{j}_{K}=\bar{U}\,|\,\bar{\Psi}(0)\rangle^{j}_{K}\). Then, by using the Baker-Campbell-Hausdorff formula \(\exp(A+B)=\exp(A)\exp(B)\exp(-\frac{1}{2}[A,B])\), valid for two operators \(A\) and \(B\) whose commutator commutes with both \(A\) and \(B\), we get
\[|\,\Psi(\bar{M})\rangle^{j}_{K}=\mathcal{N}^{-1/2}\bar{U}\,e^{\frac{1}{2}\bar{D}\bar{D}^{\dagger}}e^{(\bar{D}\hat{a}^{\dagger}-\bar{D}^{\dagger}\hat{a})}\,|\,\bar{\Psi}(0)\rangle^{j}_{K}=\mathcal{N}^{-1/2}e^{\frac{1}{2}\bar{M}\bar{M}^{\dagger}}e^{(\bar{M}\hat{a}^{\dagger}-\bar{M}^{\dagger}\hat{a})}\,|\,\Psi(0)\rangle^{j}_{K}. \tag{3.9}\]
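The unitary-diagonalization step used in (3.4)-(3.9) is easy to check numerically. The sketch below, an illustration with an arbitrary random normal matrix and with the Schur decomposition as our chosen tool, confirms that a normal \(\bar{M}\) is unitarily diagonalizable and that \(e^{\bar{M}}=\bar{U}e^{\bar{D}}\bar{U}^{\dagger}\).

```python
# For a normal M: M = U D U^dagger with U unitary, hence exp(M) = U exp(D) U^dagger.
import numpy as np
from scipy.linalg import expm, schur

rng = np.random.default_rng(0)
K = 4
Q, _ = np.linalg.qr(rng.normal(size=(K, K)) + 1j * rng.normal(size=(K, K)))
D0 = np.diag(rng.normal(size=K) + 1j * rng.normal(size=K))
M = Q @ D0 @ Q.conj().T                     # normal by construction

assert np.allclose(M @ M.conj().T, M.conj().T @ M)
T, U = schur(M, output='complex')           # for normal M, T is diagonal
assert np.allclose(T, np.diag(np.diag(T)), atol=1e-10)
assert np.allclose(expm(M), U @ np.diag(np.exp(np.diag(T))) @ U.conj().T)
```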
#### 3.1.1 A special choice of basis vectors
The choice of the vector \(|\,\bar{\Psi}(0)\rangle^{j}_{K}\) influences the normalization constant \(\mathcal{N}\). Choosing it as a vector whose components form an orthonormal basis in the fixed \(j\) representation subspace makes \(\mathcal{N}\) easier to compute. Choosing, for example, \(|\,\bar{\psi}(0)\rangle^{j}_{\{s\}}=\,|\,0\rangle\otimes|\,j,m_{s}\rangle\), \(s=1,\cdots,K\), with \(\langle j,m_{\bar{s}}\,|\,j,m_{s}\rangle=\delta_{\bar{s}s}\), we get the normalized vector coherent states
\[|\,\Psi(\Lambda)\rangle^{j}_{K}=\frac{1}{\sqrt{\sum_{s=1}^{K}e^{\|\lambda^{j}_{[s]}\|^{2}}}}\bar{U}e^{\bar{D}\hat{a}^{\dagger}}\,|\,\bar{\Psi}(0)\rangle^{j}_{K}, \tag{3.10}\]
which in matrix form look like
\[|\,\Psi(\Lambda)\rangle^{j}_{K} = \frac{1}{\sqrt{\sum_{s=1}^{K}e^{\|\lambda^{j}_{[s]}\|^{2}}}} \tag{3.11}\] \[\times \bar{U}\left(\begin{array}{cccccc}e^{\lambda^{j}_{[1]}\hat{a}^{\dagger}}&0&0&\cdots&\cdots&0\\ 0&e^{\lambda^{j}_{[2]}\hat{a}^{\dagger}}&0&\cdots&\cdots&0\\ 0&0&\ddots&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&\cdots&\cdots&0&e^{\lambda^{j}_{[K-1]}\hat{a}^{\dagger}}&0\\ 0&\cdots&\cdots&0&0&e^{\lambda^{j}_{[K]}\hat{a}^{\dagger}}\end{array}\right)\left(\begin{array}{c}|\,0\rangle\otimes|\,j,m_{1}\rangle\\ |\,0\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,m_{K-1}\rangle\\ |\,0\rangle\otimes|\,j,m_{K}\rangle\end{array}\right).\]
On the other hand, if we choose these states in such a way that
\[|\,\bar{\Psi}(0)\rangle^{j}_{K}=\bar{U}^{\dagger}\left(\begin{array}{c}|\,0\rangle\otimes|\,j,m_{1}\rangle\\ |\,0\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,m_{K-1}\rangle\\ |\,0\rangle\otimes|\,j,m_{K}\rangle\end{array}\right), \tag{3.12}\]
the corresponding vector coherent state in (3.7) reads
\[|\,\Psi(\bar{M})\rangle^{j}=\mathcal{N}^{-1/2}e^{\bar{M}\hat{a}^{\dagger}}\left(\begin{array}{c}|\,0\rangle\otimes|\,j,m_{1}\rangle\\ |\,0\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,m_{K-1}\rangle\\ |\,0\rangle\otimes|\,j,m_{K}\rangle\end{array}\right)=\mathcal{N}^{-1/2}\sum_{n=0}^{\infty}\frac{\bar{M}^{n}}{\sqrt{n!}}\left(\begin{array}{c}|\,n\rangle\otimes|\,j,m_{1}\rangle\\ |\,n\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |\,n\rangle\otimes|\,j,m_{K-1}\rangle\\ |\,n\rangle\otimes|\,j,m_{K}\rangle\end{array}\right), \tag{3.13}\]
which, up to a slight difference in the choice of the basis vectors, belong to the class of vector coherent states on the matrix domain studied in [15]. Now the normalization constant \(\mathcal{N}\) is more difficult to calculate; its general expression is
\[\mathcal{N}={}^{j}\langle\Psi(0)\,|\,e^{\bar{M}\bar{M}^{\dagger}}\,|\,\Psi(0)\rangle^{j}. \tag{3.14}\]
In this way, the multiple possibilities of choosing the ground state vectors show us the high level of degeneracy [17] of the energy eigenvalues of the harmonic oscillator Hamiltonian \(H=\hat{a}^{\dagger}\hat{a}\). Indeed, the construction of the energy eigenstates \(|\,\mathbb{E}_{n}\rangle^{j},n=0,1,\cdots,\) of \(H\), associated with the eigenvalue \(n\), which can be found in the usual way
\[|\,\mathbb{E}_{n}\rangle^{j}=\frac{(\hat{a}^{\dagger})^{n}}{\sqrt{n!}}\,|\,\bar{\Psi}(0)\rangle^{j}, \tag{3.15}\]
makes this phenomenon evident. In the remainder of this section, except in section 3.1.4, in order to illustrate the theory, we will use the choice made in equation (3.11) for the fundamental vector state.
#### 3.1.2 Vector coherent states linked to the matrix elements of the \(su(2)\) algebra generators
In this section we will study the special case when the matrix elements of \(\tilde{M}\) are given by the matrix elements of the operator \(\beta\hat{I}^{j}-\beta_{+}\hat{J}_{-}+\beta_{-}\hat{J}_{+}+\beta_{3}\hat{J}_{3}\), where \(\beta,\beta_{\pm}\) and \(\beta_{3}\) are complex numbers, in the usual basis spanning the \(j\) irreducible representation space of the \(su(2)\) algebra, i.e.,
\[\tilde{M}_{m\ell}=\langle j,m\,|\,\left[\beta\hat{I}^{j}-\beta_{+}\hat{J}_{-}+\beta_{-}\hat{J}_{+}+\beta_{3}\hat{J}_{3}\right]|\,j,\ell\rangle,\quad m,\ell=-j,-j+1,\cdots,j-1,j. \tag{3.16}\]
The explicit form of this matrix is given by
\[M=\begin{pmatrix}\beta+j\beta_{3}&-\sqrt{2j}\beta_{+}&0&0&\cdots&0\\ -\sqrt{2j}\beta_{-}&\beta+(j-1)\beta_{3}&-\sqrt{(2j-1)2}\beta_{+}&0&\cdots&0\\ 0&-\sqrt{(2j-1)2}\beta_{-}&\beta+(j-2)\beta_{3}&-\sqrt{(2j-2)3}\beta_{+}&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&\ddots&\vdots\\ 0&0&-\sqrt{3(2j-2)}\beta_{-}&\beta-(j-2)\beta_{3}&-\sqrt{2(2j-1)}\beta_{+}&0\\ 0&0&0&-\sqrt{2(2j-1)}\beta_{-}&\beta-(j-1)\beta_{3}&-\sqrt{2j}\beta_{+}\\ 0&0&0&0&-\sqrt{2j}\beta_{-}&\beta-j\beta_{3}\end{pmatrix} \tag{3.17}\]
This matrix becomes normal when the \(\beta_{\pm}\) and \(\beta_{3}\) parameters verify equation (A.32), i.e., \(\|\beta_{-}\|=\|\beta_{+}\|\) and \(\beta_{3}\beta_{-}^{*}=\beta_{3}^{*}\beta_{+}\). Under these conditions the eigenvalues of this matrix, \(\lambda_{[m]}^{j}=\beta+mb\), \(m=-j,\cdots,j\), where \(b=\sqrt{4\beta_{+}\beta_{-}+\beta_{3}^{2}}\neq 0\), are all different. Thus, with the help of equation (3.11), and by spanning the vector component states in the same \(su(2)\) basis vectors that we used to compute the matrix elements of \(\tilde{M}\), we can build the vector coherent states associated to the \(su(2)\) algebra:
\[|\,\Psi(\Lambda)\rangle^{j} = \frac{1}{\sqrt{\sum_{m=-j}^{j}e^{\|(\beta+mb)\|^{2}}}} \tag{3.18}\] \[\times \tilde{U}\left(\begin{array}{cccccc}e^{(\beta+jb)\hat{a}^{\dagger}}&0&0&\cdots&\cdots&0\\ 0&e^{(\beta+(j-1)b)\hat{a}^{\dagger}}&0&\cdots&\cdots&0\\ 0&0&\ddots&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&\cdots&\cdots&0&e^{(\beta-(j-1)b)\hat{a}^{\dagger}}&0\\ 0&\cdots&\cdots&0&0&e^{(\beta-jb)\hat{a}^{\dagger}}\end{array}\right)\left(\begin{array}{c}|\,0;j,-j\rangle\\ |\,0;j,-j+1\rangle\\ \vdots\\ |\,0;j,j-1\rangle\\ |\,0;j,j\rangle\end{array}\right),\]
where the entries of the unitary matrix \(\tilde{U}\) are given in appendix B, more precisely in equation (B9):
\[U_{m\ell}=T_{m\ell}^{j}[\beta_{+},\beta_{-},\beta_{3}]. \tag{3.19}\]
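The eigenvalue statement \(\lambda^{j}_{[m]}=\beta+mb\) can be verified numerically; the sketch below does so for \(j=1\) with generic complex parameters (the choice \(j=1\) and the random parameter values are illustrative assumptions of ours).

```python
# Eigenvalues of the j = 1 instance of (3.17) are beta + m*b, m = -1, 0, 1.
import numpy as np

rng = np.random.default_rng(1)
beta, bp, bm, b3 = rng.normal(size=4) + 1j * rng.normal(size=4)
s2 = np.sqrt(2.0)

M = np.array([[beta + b3, -s2 * bp,  0.0],
              [-s2 * bm,  beta,      -s2 * bp],
              [0.0,       -s2 * bm,  beta - b3]])

b = np.sqrt(4 * bp * bm + b3**2)     # principal branch suffices for the check
expected = np.sort_complex(np.array([beta - b, beta, beta + b]))
assert np.allclose(expected, np.sort_complex(np.linalg.eigvals(M)))
```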
#### 3.1.3 Vector coherent states in the \(j=\frac{1}{2}\) representation
For example, in the special case when \(j=\frac{1}{2}\), the unitary matrix \(\tilde{U}\) is given by
\[U=\begin{pmatrix}\sqrt{\frac{b+\beta_{3}}{2b}}&\frac{2\beta_{+}}{\sqrt{2b(b+\beta_{3})}}\\ \frac{-2\beta_{-}}{\sqrt{2b(b+\beta_{3})}}&\sqrt{\frac{b+\beta_{3}}{2b}}\end{pmatrix} \tag{3.20}\]
and the normal vector coherent states in (3.18) become
\[|\,\Psi(\Lambda)\rangle^{\frac{1}{2}}=\frac{1}{\sqrt{e^{\|(\beta+\frac{b}{2})\|^{2}}+e^{\|(\beta-\frac{b}{2})\|^{2}}}}\begin{pmatrix}\sqrt{\frac{b+\beta_{3}}{2b}}&\frac{2\beta_{+}}{\sqrt{2b(b+\beta_{3})}}\\ \frac{-2\beta_{-}}{\sqrt{2b(b+\beta_{3})}}&\sqrt{\frac{b+\beta_{3}}{2b}}\end{pmatrix}\begin{pmatrix}e^{(\beta+b/2)\hat{a}^{\dagger}}&0\\ 0&e^{(\beta-b/2)\hat{a}^{\dagger}}\end{pmatrix}\begin{pmatrix}|\,0;\frac{1}{2},-\frac{1}{2}\rangle\\ |\,0;\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix}, \tag{3.21}\]
where \(b=\sqrt{4\beta_{+}\beta_{-}+\beta_{3}^{2}}\), or better, after a few manipulations on the normalization constant:
\[|\,\Psi[\beta,b(\beta_{+},\beta_{-},\beta_{3})]\rangle^{\frac{1}{2}}=\frac{e^{-\frac{1}{2}\left[\|\beta\|^{2}+\frac{1}{4}\|b\|^{2}\right]}}{\sqrt{2\cosh\left[\frac{1}{2}(\beta b^{*}+\beta^{*}b)\right]}}\begin{pmatrix}\sqrt{\frac{b+\beta_{3}}{2b}}&\frac{2\beta_{+}}{\sqrt{2b(b+\beta_{3})}}\\ \frac{-2\beta_{-}}{\sqrt{2b(b+\beta_{3})}}&\sqrt{\frac{b+\beta_{3}}{2b}}\end{pmatrix}\begin{pmatrix}e^{(\beta+\frac{1}{2}b)\hat{a}^{\dagger}}&0\\ 0&e^{(\beta-\frac{1}{2}b)\hat{a}^{\dagger}}\end{pmatrix}\begin{pmatrix}|\,0;\frac{1}{2},-\frac{1}{2}\rangle\\ |\,0;\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix}. \tag{3.22}\]
These last states are eigenstates of \(\hat{a}\) with matrix-type eigenvalues:
\[\bar{M}=\begin{pmatrix}\beta+\frac{1}{2}\beta_{3}&-\beta_{+}\\ -\beta_{-}&\beta-\frac{1}{2}\beta_{3}\end{pmatrix}, \tag{3.23}\]
where \(\beta_{\pm}\) and \(\beta_{3}\) verify (A.32).
#### 3.1.4 Quaternionic canonical vector coherent states from the \(su(2)\) algebra
For the special choice of parameters
\[\beta=r\cos\theta\qquad\qquad\beta_{3}=2ir\sin\theta\cos\phi\] \[\beta_{+}=-ir\sin\theta\sin\phi\, e^{i\psi}\quad\beta_{-}=-ir\sin\theta\sin\phi\, e^{-i\psi},\quad r\neq 0, \tag{3.24}\]
the parameter \(b=2ir\sin\theta\) and the \(\bar{M}\) matrix in (3.23) becomes
\[\bar{M}=r\cos\theta\ \sigma_{0}+i\,r\sin\theta\begin{pmatrix}\cos\phi&\sin\phi\, e^{i\psi}\\ \sin\phi\, e^{-i\psi}&-\cos\phi\end{pmatrix}, \tag{3.25}\]
which is a complex representation of quaternions by \(2\times 2\) matrices in polar coordinates. Inserting the values of (3.24) into equation (3.22) and simplifying the result, we get the normalized vector coherent states
\[|\,\Psi[r,\theta,\phi,\psi]\rangle^{\frac{1}{2}} = \begin{pmatrix}\cos\frac{\phi}{2}&-\sin\frac{\phi}{2}\,e^{i\psi}\\ \sin\frac{\phi}{2}\,e^{-i\psi}&\cos\frac{\phi}{2}\end{pmatrix} \tag{3.26}\] \[\times \frac{e^{-\frac{r^{2}}{2}}}{\sqrt{2}}\begin{pmatrix}e^{(re^{i\theta})\hat{a}^{\dagger}}&0\\ 0&e^{(re^{-i\theta})\hat{a}^{\dagger}}\end{pmatrix}\begin{pmatrix}|\,0;\frac{1}{2},-\frac{1}{2}\rangle\\ |\,0;\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix}.\]
These states are eigenstates of the annihilation operator associated to the quantum harmonic oscillator with matrix eigenvalues given by \(\bar{M}\) of equation (3.25). These states are the so-called quaternionic canonical coherent states that we can find in the literature [15]. The only difference here is that our states, by virtue of our particular choice of the ground state, mix the basis states of the \(su(2)\) sector of the \(h(1)\oplus su(2)\) algebra. Indeed, these states can be written in the form
\[|\,\Psi(r,\theta,\phi,\psi)\rangle^{\frac{1}{2}}=\frac{e^{-\frac{r^{2}}{2}}}{\sqrt{2}}\begin{pmatrix}\cos(\frac{\phi}{2})\,|\,re^{i\theta};\frac{1}{2},-\frac{1}{2}\rangle-\sin(\frac{\phi}{2})\,e^{i\psi}\,|\,re^{-i\theta};\frac{1}{2},+\frac{1}{2}\rangle\\ \sin(\frac{\phi}{2})\,e^{-i\psi}\,|\,re^{i\theta};\frac{1}{2},-\frac{1}{2}\rangle+\cos(\frac{\phi}{2})\,|\,re^{-i\theta};\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix}, \tag{3.27}\]
that shows us explicitly that mixture.
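The property \(\bar{M}\bar{M}^{\dagger}=r^{2}I\), invoked in the next paragraph, follows directly from (3.25); a short sympy check (illustrative only) is:

```python
# The quaternionic matrix (3.25) satisfies M M^dagger = r**2 I.
import sympy as sp

r, theta, phi, psi = sp.symbols('r theta phi psi', real=True)
M = r * sp.cos(theta) * sp.eye(2) + sp.I * r * sp.sin(theta) * sp.Matrix(
    [[sp.cos(phi),                       sp.sin(phi) * sp.exp(sp.I * psi)],
     [sp.sin(phi) * sp.exp(-sp.I * psi), -sp.cos(phi)]])

# .H is the conjugate transpose in sympy
assert sp.simplify(M * M.H - r**2 * sp.eye(2)) == sp.zeros(2, 2)
```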
On the other hand, if we choose the fundamental state in the form shown in (3.12) and calculate the factor \(\mathcal{N}\) with the help of (3.14), then, as in this case \(\bar{M}\bar{M}^{\dagger}=r^{2}I\), we get \(\mathcal{N}=2\,e^{r^{2}}\). Finally, if we insert this last result in equation (3.13), adapted to the particular case being studied here, we obtain
\[|\,\Psi(\tilde{M})\rangle^{\frac{1}{2}}=\frac{e^{-\frac{r^{2}}{2}}}{\sqrt{2}} \sum_{n=0}^{\infty}\frac{\tilde{M}^{n}}{\sqrt{n!}}\begin{pmatrix}|\,n\rangle \otimes|\,\frac{1}{2},-\frac{1}{2}\rangle\\ |\,n\rangle\otimes|\,\frac{1}{2},+\frac{1}{2}\rangle,\end{pmatrix}, \tag{3.28}\]
which reproduces the results studied in [15], again up to the mixing of the states in the \(su(2)\) sector, due to the particular choice of the fundamental state we are using here.
### Annihilation operator vector algebra eigenstates with a non-normal but diagonalizable eigenvalue matrix
When \(\tilde{M}\) in (3.17) is diagonalizable but not normal, the process of obtaining the algebra eigenstates associated to \(\hat{a}\) is identical to the one we followed when \(\tilde{M}\) was normal, except for the fact that now the passing matrix \(P\) that brings \(\tilde{M}\) to its diagonal form is not unitary. Indeed, the diagonalized form of \(\tilde{M}\) is reached by performing the similarity transformation \(\tilde{D}=P^{-1}\tilde{M}P\), where \(P^{-1}\) denotes the inverse of \(P\). Then, following the same reasoning of section 3.1, we can establish that the vector eigenstates are given by
\[|\,\Psi(\Lambda)\rangle^{j} = \tilde{\mathcal{N}}^{-\frac{1}{2}} \tag{3.29}\] \[\times P\left(\begin{array}{cccccc}e^{\lambda^{j}_{[j]}\hat{a}^{\dagger}}&0&0&\cdots&\cdots&0\\ 0&e^{\lambda^{j}_{[j-1]}\hat{a}^{\dagger}}&0&\cdots&\cdots&0\\ 0&0&\ddots&0&\cdots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&\cdots&\cdots&0&e^{\lambda^{j}_{[-j+1]}\hat{a}^{\dagger}}&0\\ 0&\cdots&\cdots&0&0&e^{\lambda^{j}_{[-j]}\hat{a}^{\dagger}}\end{array}\right)\left(\begin{array}{c}|\,0;j,-j\rangle\\ |\,0;j,-j+1\rangle\\ \vdots\\ |\,0;j,j-1\rangle\\ |\,0;j,j\rangle\end{array}\right),\]
where \(\lambda^{j}_{[s]}\), \(s=-j,\cdots,j\), are the eigenvalues of \(\tilde{M}\) and \(\tilde{\mathcal{N}}\) is a normalization constant to be determined.
As \(P\) is in general not unitary, the computation of the normalization constant of these vector states is now more difficult than in the previous case, where \(\tilde{U}\) was unitary. Indeed, the eigenvectors composing the columns of the matrix \(P\) are in general not orthogonal to each other.
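A small numeric example of this situation (our own, for illustration): a non-normal but diagonalizable matrix whose eigenvector matrix \(P\) diagonalizes it without being unitary.

```python
# Non-normal but diagonalizable: P^{-1} M P = D holds, yet P is not unitary.
import numpy as np

M = np.array([[1.0, 2.0],
              [0.5, -1.0 + 0.3j]])
w, P = np.linalg.eig(M)                      # distinct eigenvalues here

assert np.allclose(np.linalg.inv(P) @ M @ P, np.diag(w))
assert not np.allclose(M @ M.conj().T, M.conj().T @ M)   # not normal
assert not np.allclose(P.conj().T @ P, np.eye(2))        # columns not orthonormal
```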
#### 3.2.1 Annihilation operator algebra eigenstates with a non-normal but diagonalizable \(su(2)\) eigenvalue matrix
Let us return to the \(su(2)\) eigenvalue matrix \(\tilde{M}=M\) given by (3.17). When that matrix is diagonalizable but no special conditions on the beta parameters are imposed, in certain cases the matrix \(P\) can still be computed with the formulas of appendix B. Indeed, that is the case when \(\beta_{\pm}\neq 0\), whatever the value of \(\beta_{3}\), provided that the \(b\) parameter is different from zero, or when \(\beta_{+}\) or \(\beta_{-}\) is equal to zero, but not both, and \(\beta_{3}\neq 0\). In all these cases the matrix elements of \(P\), for fixed \(j\), can be extracted from the matrix elements of \(\tilde{T}\) given in (B9) in the following way
\[P_{m\ell}=\begin{cases}T^{j}_{m\ell}[\beta_{+},\beta_{-},\beta_{3}]&\text{when $\beta_{+}\neq 0,\beta_{-}\neq 0$ and $\beta_{3}\neq 0$, with $b\neq 0$}\\ T^{j}_{m\ell}[0,\beta_{-},\beta_{3}]&\text{when $\beta_{+}=0,\beta_{-}\neq 0$ and $\beta_{3}\neq 0$}\\ T^{j}_{m\ell}[\beta_{+},0,\beta_{3}]&\text{when $\beta_{+}\neq 0,\beta_{-}=0$ and $\beta_{3}\neq 0$}\\ T^{j}_{m\ell}[\beta_{+},\beta_{-},0]&\text{when $\beta_{+}\neq 0,\beta_{-}\neq 0$ and $\beta_{3}=0$}\end{cases}. \tag{3.30}\]
For example, for \(j=\frac{1}{2}\), when \(\beta_{+}\neq 0\), \(\beta_{-}\neq 0\) and \(\beta_{3}\neq 0\), with \(b\neq 0\), the normalized vector states are given by
\[\begin{split}\mid\Psi(\Lambda)\rangle^{\frac{1}{2}}&=\frac{\sqrt{2\|b\|\|b+\beta_{3}\|}}{\sqrt{(4\|\beta_{-}\|^{2}+\|b+\beta_{3}\|^{2})e^{\|(\beta+\frac{b}{2})\|^{2}}+(4\|\beta_{+}\|^{2}+\|b+\beta_{3}\|^{2})e^{\|(\beta-\frac{b}{2})\|^{2}}}}\\ &\times\begin{pmatrix}\sqrt{\frac{b+\beta_{3}}{2b}}&\frac{2\beta_{+}}{\sqrt{2b(b+\beta_{3})}}\\ \frac{-2\beta_{-}}{\sqrt{2b(b+\beta_{3})}}&\sqrt{\frac{b+\beta_{3}}{2b}}\end{pmatrix}\begin{pmatrix}e^{(\beta+\frac{1}{2}b)\hat{a}^{\dagger}}&0\\ 0&e^{(\beta-\frac{1}{2}b)\hat{a}^{\dagger}}\end{pmatrix}\begin{pmatrix}|\,0;\frac{1}{2},-\frac{1}{2}\rangle\\ |\,0;\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix},\end{split}\]
which is an eigenstate of \(\hat{a}\) with eigenvalues on the matrix domain given by (3.23), but now without any constraint on the beta parameters regarding the normality of that matrix.
Furthermore, when for example, \(\beta_{+}\neq 0,\beta_{-}=0\) and \(\beta_{3}\neq 0\), we get
\[\begin{split}\mid\Psi(\Lambda)\rangle^{\frac{1}{2}}&=\frac{\|\beta_{3}\|}{\sqrt{\|\beta_{3}\|^{2}e^{\|(\beta+\frac{\beta_{3}}{2})\|^{2}}+(\|\beta_{+}\|^{2}+\|\beta_{3}\|^{2})e^{\|(\beta-\frac{\beta_{3}}{2})\|^{2}}}}\\ &\times\begin{pmatrix}1&\frac{\beta_{+}}{\beta_{3}}\\ 0&1\end{pmatrix}\begin{pmatrix}e^{(\beta+\frac{1}{2}\beta_{3})\hat{a}^{\dagger}}&0\\ 0&e^{(\beta-\frac{1}{2}\beta_{3})\hat{a}^{\dagger}}\end{pmatrix}\begin{pmatrix}|\,0;\frac{1}{2},-\frac{1}{2}\rangle\\ |\,0;\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix}\end{split} \tag{3.31}\]
which is an eigenstate of \(\hat{a}\) with matrix eigenvalues
\[\begin{pmatrix}\beta+\frac{\beta_{3}}{2}&-\beta_{+}\\ 0&\beta-\frac{\beta_{3}}{2}\\ \end{pmatrix}, \tag{3.32}\]
where no restrictions are assumed on the beta parameters with respect to the normality of this matrix.
#### 3.2.2 An example with a non-diagonalizable \(su(2)\) eigenvalue matrix
When \(\tilde{M}\) in (3.17) is not diagonalizable, the process of obtaining the algebra eigenstates of \(\hat{a}\) is, in general, more elaborate than in the diagonalizable case; sometimes we can use the exponential form of the matrix \(M\) and the operator algebra, or we can solve the eigenvalue equation in a systematic way, component by component. Let us illustrate the solving techniques with a simple example, using the \(j=\frac{1}{2}\) basis vector representation for calculating the matrix elements of \(\tilde{M}\) and leaving free the choice of the dimension of the irreducible representation used to span the components of the vector states. Let us take for this purpose the case when \(\beta_{+}\neq 0\) and \(\beta_{-}=\beta_{3}=0\), and consequently \(b=0\). Thus, the eigenvalue equation we have to solve is
\[\hat{a}\begin{pmatrix}|\,\psi\rangle^{1}\\ |\,\psi\rangle^{2}\end{pmatrix}=\begin{pmatrix}\beta&-\beta_{+}\\ 0&\beta\end{pmatrix}\begin{pmatrix}|\,\psi\rangle^{1}\\ |\,\psi\rangle^{2}\end{pmatrix}, \tag{3.33}\]
where the states \(\mid\psi\rangle^{k}=\sum_{n=0}^{\infty}\sum_{m=-j}^{j}C^{k}_{nm}\mid n\rangle\otimes\mid j,m\rangle\), \(k=1,2\). By projecting both sides of this equation on the analytic basis states \(\langle\zeta\mid\) and then using the realization (2.11) for the annihilation operator \(\hat{a}\), we get a system of ordinary linear differential equations for the components of \(\psi^{k}(\zeta)=\langle\zeta\mid\psi\rangle^{k}\), \(k=1,2\), that is,
\[\frac{d}{d\zeta}\begin{pmatrix}\psi_{m}^{1}(\zeta)\\ \psi_{m}^{2}(\zeta)\end{pmatrix}=\begin{pmatrix}\beta&-\beta_{+}\\ 0&\beta\end{pmatrix}\begin{pmatrix}\psi_{m}^{1}(\zeta)\\ \psi_{m}^{2}(\zeta)\end{pmatrix},\quad m=-j,\cdots,j, \tag{3.34}\]
which explicitly corresponds to
\[\frac{d}{d\zeta}\psi_{m}^{1}(\zeta) = \beta\psi_{m}^{1}(\zeta)-\beta_{+}\psi_{m}^{2}(\zeta)\] \[\frac{d}{d\zeta}\psi_{m}^{2}(\zeta) = \beta\psi_{m}^{2}(\zeta),\quad m=-j,\cdots,j. \tag{3.35}\]
The integration of \(\psi_{m}^{2}(\zeta)\) is direct: we get \(\psi_{m}^{2}(\zeta)=e^{\beta\zeta}\,\psi_{m}^{2}(0),\;m=-j,\cdots,j.\) Inserting this last result into the first of equations (3.35) and then integrating the resulting non-homogeneous first-order linear differential equation for each function \(\psi_{m}^{1}(\zeta),\;m=-j,\cdots,j,\) we get \(\psi_{m}^{1}(\zeta)=e^{\beta\zeta}\,\psi_{m}^{1}(0)-\beta_{+}\zeta e^{\beta\zeta}\,\psi_{m}^{2}(0),\;m=-j,\cdots,j.\) Finally, returning to the Fock basis of the number eigenstates \(|\,n\rangle\) we obtain the vector component state
\[|\,\psi\rangle^{1}=e^{\beta\hat{a}^{\dagger}}\left[\sum_{m=-j}^{j}\psi_{m}^{1 }(0)\,|\,0\rangle\otimes|\,j,m\rangle-\beta_{+}\hat{a}^{\dagger}\sum_{m=-j}^{ j}\psi_{m}^{2}(0)\,|\,0\rangle\otimes|\,j,m\rangle\right]\]
and the vector component state
\[|\,\psi\rangle^{2}=e^{\beta\hat{a}^{\dagger}}\sum_{m=-j}^{j}\psi_{m}^{2}(0)\,| \,0\rangle\otimes|\,j,m\rangle,\]
which represent the super-coherent states of the supersymmetric harmonic oscillator introduced by Aragone and Zypman [18]. Let us recall that these same states are also included in the set of algebra eigenstates of the generalized annihilation operator \(\hat{a}+\beta_{+}\hat{J}_{-}\); the difference is that here these states arise as the components of a vector state, that is,
\[|\,\Psi\rangle_{2}^{\frac{1}{2}}=\begin{pmatrix}|\,\psi\rangle^{1}\\ |\,\psi\rangle^{2}\end{pmatrix}=e^{\beta\hat{a}^{\dagger}}\begin{pmatrix}1&- \beta_{+}\hat{a}^{\dagger}\\ 0&1\end{pmatrix}\begin{pmatrix}\sum_{m=-j}^{j}\psi_{m}^{1}(0)\,|\,0\rangle \otimes|\,j,m\rangle\\ \sum_{m=-j}^{j}\psi_{m}^{2}(0)\,|\,0\rangle\otimes|\,j,m\rangle\end{pmatrix}= e^{M\hat{a}^{\dagger}}\begin{pmatrix}\sum_{m=-j}^{j}\psi_{m}^{1}(0)\,|\,0\rangle \otimes|\,j,m\rangle\\ \sum_{m=-j}^{j}\psi_{m}^{2}(0)\,|\,0\rangle\otimes|\,j,m\rangle\end{pmatrix}, \tag{3.36}\]
and are eigenstates of the annihilator \(\hat{a}\) with matrix eigenvalues given by \(\begin{pmatrix}\beta&-\beta_{+}\\ 0&\beta\end{pmatrix}\).
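The matrix factor appearing in (3.36) is just the exponential of this non-diagonalizable eigenvalue matrix. Modelling \(\hat{a}^{\dagger}\) by a commuting scalar \(t\), which is legitimate for this check since only \(\hat{a}^{\dagger}\) appears in the exponent, sympy reproduces it:

```python
# exp(M t) for the Jordan-type M = [[beta, -beta_p], [0, beta]] equals
# exp(beta t) [[1, -beta_p t], [0, 1]], cf. the matrix factor in (3.36).
import sympy as sp

beta, beta_p, t = sp.symbols('beta beta_p t')
M = sp.Matrix([[beta, -beta_p], [0, beta]])

lhs = (M * t).exp()                                   # matrix exponential
rhs = sp.exp(beta * t) * sp.Matrix([[1, -beta_p * t], [0, 1]])
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```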
## 4 \(h(1)\oplus su(2)\) vector algebra eigenstates with matrix eigenvalues
In this section we propose a generalization of the concept of vector coherent states associated to the annihilation operator \(\hat{a}\), to include those states which are eigenvectors of a class of generalized \(h(1)\oplus su(2)\) type annihilation operators. To start, let us consider the problem of finding the vector states that satisfy the following eigenvalue equation system:\(^{1}\)
Footnote 1: In fact, we could define this equation as \(\left[\alpha_{-}\hat{a}+\alpha_{+}\hat{a}^{\dagger}+\alpha_{3}\hat{I}+\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}\right]|\,\Psi\rangle_{[K]}^{j}=\tilde{M}\,|\,\Psi\rangle_{[K]}^{j}\), but by performing a suitable squeezed state transformation and then absorbing the identity operator coefficient into the \(\beta\) parameter we get (4.1), see appendix A.
\[\left[\hat{a}+\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}\right]|\,\Psi\rangle_{[K]}^{j}=\tilde{M}\,|\,\Psi\rangle_{[K]}^{j}, \tag{4.1}\]
where \(|\,\Psi\rangle_{[K]}^{j}\) is the \(K\) component vector state
\[|\,\Psi\rangle_{[K]}^{j}=\begin{pmatrix}|\,\psi\rangle_{[1]}^{j}\\ |\,\psi\rangle_{[2]}^{j}\\ \vdots\\ |\,\psi\rangle_{[K-1]}^{j}\\ |\,\psi\rangle_{[K]}^{j}\end{pmatrix}, \tag{4.2}\]
where each state \(|\;\psi\rangle^{j}_{[s]}=\sum_{m=-j}^{j}\sum_{n=0}^{\infty}C^{[s]j}_{nm}\;|\;n\rangle\otimes|\,j,m\rangle\), \(1\leq s\leq K\), is an unknown state of the \(h(1)\oplus su(2)\) representation space to be determined.
As before, if \(\tilde{M}\) is diagonalizable, then there is a \(P\) matrix of dimension \(K\times K\) such that \(P^{-1}\tilde{M}P=\tilde{D}\) is a diagonal matrix whose non-null entries are given by the eigenvalues of \(\tilde{M}\). Therefore, firstly, defining
\[|\;\Psi\rangle^{j}_{[K]}=P\;|\;\tilde{\Psi}\rangle^{j}_{[K]},\quad\mbox{or equivalently}\quad|\;\tilde{\Psi}\rangle^{j}_{[K]}=P^{-1}\;|\;\Psi\rangle^{j}_{[K]}, \tag{4.3}\]
and then by acting with \(P^{-1}\) from the left on both sides of equation (4.1), we get
\[\left[\hat{a}+\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}\right]|\;\tilde{\Psi}\rangle^{j}_{[K]}=\tilde{D}\;|\;\tilde{\Psi}\rangle^{j}_{[K]}. \tag{4.4}\]
If we denote by \(\tilde{\lambda}_{[s]}\), \(s=1,2,\cdots,K\), the eigenvalues of \(\tilde{M}\), distributed in this same order, from top to bottom, on the principal diagonal of \(\tilde{D}\), we obtain a system of \(K\) independent \(h(1)\oplus su(2)\) algebra eigenvalue equations, that is
\[\left[\hat{a}+\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}\right]|\;\tilde{\psi}\rangle^{j}_{[s]}=\tilde{\lambda}_{[s]}\;|\;\tilde{\psi}\rangle^{j}_{[s]},\quad s=1,2,\cdots,K. \tag{4.5}\]
As discussed in section 3.2.1, the \(h(1)\oplus su(2)\) algebra eigenstates can be obtained for all different values of the beta parameters. That means a wide variety of situations to be analyzed, each of which provides us with an interesting set of vector states verifying the original equation system. For example, when \(b\neq 0\), the general expression for the \(h(1)\oplus su(2)\) algebra eigenstates \(|\;\tilde{\psi}\rangle^{j}_{[s]}\) is given by equation (A.46), with the corresponding \(\tilde{\lambda}_{[s]}\) in place of the \(\beta\) parameter, that is
\[|\;\tilde{\psi}\rangle^{j}_{[s]} = N^{-1/2}_{[s]} \tag{4.6}\] \[\times \left(\tilde{\psi}^{j}_{[s]-j}(0)e^{\lambda^{[s]j}_{j}\hat{a}^{\dagger}}\quad\tilde{\psi}^{j}_{[s]-j+1}(0)e^{\lambda^{[s]j}_{j-1}\hat{a}^{\dagger}}\quad\cdots\quad\tilde{\psi}^{j}_{[s]j-1}(0)e^{\lambda^{[s]j}_{-j+1}\hat{a}^{\dagger}}\quad\tilde{\psi}^{j}_{[s]j}(0)e^{\lambda^{[s]j}_{-j}\hat{a}^{\dagger}}\right)\] \[\times V^{t}\left(\begin{array}{c}|\;0;j,-j\rangle\\ |\;0;j,-j+1\rangle\\ \vdots\\ |\;0;j,j\rangle\end{array}\right),\quad s=1,2,\cdots,K,\]
where \(\lambda^{[s]j}_{m}=\tilde{\lambda}_{[s]}+mb\), \(s=1,2,\cdots,K\) and \(V\) is an invertible matrix whose matrix elements are given explicitly in equation (B9), then
\[V_{m\ell}=T^{j}_{m\ell}[\beta_{+},\beta_{-},\beta_{3}],\quad b\neq 0. \tag{4.7}\]
Finally, using (4.3), we can return to the original vector states which are solutions of equation (4.1); thus we get the \(K\) components of the \(h(1)\oplus su(2)\) vector state (4.2) that verifies (4.1), namely:
\[|\;\psi\rangle^{j}_{[u]}=\sum_{s=1}^{K}P_{us}\;|\;\tilde{\psi}\rangle^{j}_{[s ]},\quad u=1,2,\cdots,K. \tag{4.8}\]
We recall that, for each \(s\), equation (4.6) represents the general solution of equation (4.5). It corresponds to a state formed from a set of \(2j+1\) linearly independent states which are by themselves solutions of (4.5). Then, for each \(s\), we can choose an arbitrary combination of these states and insert it in equation (4.8) to obtain the final structure of the vector states which verify equation (4.1). This freedom of choice implies a very large variety of linearly independent solutions, more precisely \(K\times(2j+1)\) independent solutions.
### Generalized \(h(1)\oplus su(2)\) vector coherent states on the matrix domain
In this article we are interested in states that could be interpreted as vector coherent states of a given physical system. Then, instead of proceeding with a case-by-case analysis, let us select the parameters according to the necessary conditions they have to fulfill at the moment of constructing a suitable generalized annihilation operator extracted from the linear combination of generators shown in equation (4.5). Proceeding in this way, let us define the operator
\[\mathbb{A}=\hat{a}+\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_ {3}, \tag{4.9}\]
whose commutator with \(\mathbb{A}^{\dagger}\) is given by
\[\left[\mathbb{A},\mathbb{A}^{\dagger}\right]=\hat{I}+2\left(\|\beta_{-}\|^{2} -\|\beta_{+}\|^{2}\right)\hat{J}_{3}+(\beta_{3}\beta_{+}^{*}-\beta_{3}^{*}\beta _{-})\hat{J}_{+}+(\beta_{3}^{*}\beta_{+}-\beta_{3}\beta_{-}^{*})\hat{J}_{-}. \tag{4.10}\]
From this last equation we can see that when \(\|\beta_{+}\|=\|\beta_{-}\|\) and \(\beta_{3}\beta_{+}^{*}=\beta_{3}^{*}\beta_{-}\), which are the same conditions that make the matrix \(M\) in (3.17) normal, and which certainly include the special case \(\beta_{+}=\beta_{-}=0\) with \(\beta_{3}\neq 0\), the commutator becomes
\[\left[\mathbb{A},\mathbb{A}^{\dagger}\right]=\hat{I}, \tag{4.11}\]
i.e., the corresponding set of vector states which are solutions of equation (4.1) can be interpreted as vector coherent states in the context of the physical system whose Hamiltonian is given by \(\mathbb{H}=\mathbb{A}^{\dagger}\mathbb{A}\). If we write the beta parameters in the form \(\beta_{\pm}=Re^{i\theta_{\pm}}\) and \(\beta_{3}=R_{3}e^{i\theta_{3}}\), with \(R\) and \(R_{3}\) being non-negative real numbers, the new operator thus defined takes the form
\[\mathbb{A}=\hat{a}+R[e^{i\theta_{-}}\hat{J}_{+}+e^{i\theta_{+}}\hat{J}_{-}]+R_{3}e^{i\frac{(\theta_{+}+\theta_{-})}{2}}\hat{J}_{3}, \tag{4.12}\]
and the above newly defined Hamiltonian then reads
\[\mathbb{H} = \hat{a}^{\dagger}\hat{a}+R\hat{a}^{\dagger}[e^{i\theta_{-}}\hat{J}_{+}+e^{i\theta_{+}}\hat{J}_{-}]+R[e^{-i\theta_{-}}\hat{J}_{-}+e^{-i\theta_{+}}\hat{J}_{+}]\hat{a}+R_{3}[e^{i\frac{(\theta_{+}+\theta_{-})}{2}}\hat{a}^{\dagger}+e^{-i\frac{(\theta_{+}+\theta_{-})}{2}}\hat{a}]\hat{J}_{3} \tag{4.13}\] \[+ R^{2}[\hat{J}_{-}\hat{J}_{+}+\hat{J}_{+}\hat{J}_{-}]+R^{2}[e^{i(\theta_{+}-\theta_{-})}\hat{J}_{-}^{2}+e^{-i(\theta_{+}-\theta_{-})}\hat{J}_{+}^{2}]+R_{3}^{2}\hat{J}_{3}^{2}\] \[+ RR_{3}e^{-i\frac{(\theta_{+}-\theta_{-})}{2}}[\hat{J}_{+}\hat{J}_{3}+\hat{J}_{3}\hat{J}_{+}]+RR_{3}e^{i\frac{(\theta_{+}-\theta_{-})}{2}}[\hat{J}_{-}\hat{J}_{3}+\hat{J}_{3}\hat{J}_{-}].\]
We can see, by the way this last Hamiltonian was constructed and in accordance with the commutation relation (4.11), that this Hamiltonian is isospectral with the standard harmonic oscillator Hamiltonian, although its eigenstates are very different. Nevertheless, if we define \(|\,\bar{0}\rangle\) as a suitable ground state of this system such that \(\mathbb{A}\,|\,\bar{0}\rangle=0\), we can be sure that the associated vector coherent states, which are eigenstates of \(\mathbb{A}\) with the entries of \(\bar{M}\) as its eigenvalues, are given by \(e^{[\bar{M}\mathbb{A}^{\dagger}-\bar{M}^{\dagger}\mathbb{A}]}\,|\,\bar{0}\rangle\), as long as \(\bar{M}\) is normal. We will now prove this last assertion in general, and we will show explicitly the technique used in doing so at the end of the following section, with an example for the special case \(K=2\) and \(j=\frac{1}{2}\).
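The commutator (4.10) and the closure conditions leading to (4.11) can be confirmed in the \(j=\frac{1}{2}\) representation; the representation choice and the numerical parameter values below are illustrative assumptions of ours.

```python
# su(2) part of (4.10): [B, B^dagger] for B = beta_m J_+ + beta_p J_- + beta_3 J_3,
# and its vanishing under ||beta_p|| = ||beta_m||, theta_3 = (theta_+ + theta_-)/2.
import numpy as np

comm = lambda A, B: A @ B - B @ A
J3 = np.diag([0.5, -0.5]).astype(complex)
Jp = np.array([[0, 1], [0, 0]], dtype=complex)
Jm = Jp.conj().T

rng = np.random.default_rng(2)
bp, bm, b3 = rng.normal(size=3) + 1j * rng.normal(size=3)
B = bm * Jp + bp * Jm + b3 * J3
rhs = (2 * (abs(bm)**2 - abs(bp)**2) * J3
       + (b3 * np.conj(bp) - np.conj(b3) * bm) * Jp
       + (np.conj(b3) * bp - b3 * np.conj(bm)) * Jm)
assert np.allclose(comm(B, B.conj().T), rhs)               # matches (4.10)

# normality conditions: equal moduli and theta_3 = (theta_+ + theta_-)/2
bp, bm, b3 = 0.7 * np.exp(0.3j), 0.7 * np.exp(0.9j), 1.1 * np.exp(0.6j)
B = bm * Jp + bp * Jm + b3 * J3
assert np.allclose(comm(B, B.conj().T), np.zeros((2, 2)))  # hence (4.11)
```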
#### 4.1.1 Generalized matrix displacement operator
Using the operator algebra described in appendix B, we can show that a generic algebra eigenstate \(|\,\bar{\psi}\rangle^{j}_{[s]m}\) verifying (4.5), in the case that \(b\neq 0\), can be written in the form
\[|\,\bar{\psi}\rangle^{j}_{[s]m}=|\,\bar{\lambda}_{[s]}-mb\rangle\otimes\hat{ T}\,|\,j,m\rangle=\hat{D}(\bar{\lambda}_{[s]}-mb)\,|\,0\rangle\otimes\hat{T}\,|\,j,m\rangle, \tag{4.14}\]
where \(\hat{T}\) is the exponential operator defined in (B2), whose parameters are defined in (B3). We notice that when \(\beta_{+}=\beta_{-}=0\) and \(\beta_{3}\neq 0\), the operator \(\hat{T}=\hat{I}\).
Using the displacement operator property \(\hat{D}(z_{1}+z_{2})=\hat{D}(z_{1})\hat{D}(z_{2})\exp\frac{1}{2}[z_{1}^{*}z_{2} -z_{1}z_{2}^{*}]\), we get
\[|\,\bar{\psi}\rangle^{j}_{[s]m}=\hat{D}(\bar{\lambda}_{[s]})\exp\frac{m}{2}[ \bar{\lambda}_{[s]}b^{*}-\bar{\lambda}_{[s]}^{*}b]\,|\,-mb\rangle\otimes\hat{ T}\,|\,j,m\rangle. \tag{4.15}\]
Defining \(\vec{\beta}\cdot\vec{\hat{J}}=\beta_{+}\hat{J}_{-}+\beta_{-}\hat{J}_{+}+\beta_{3} \hat{J}_{3}\) and using the fact that in this particular case
\[(\vec{\beta}\cdot\vec{\hat{J}})\hat{T}\,|\,j,m\rangle=mb\,\hat{T}\,|\,j,m \rangle,\quad\text{and}\quad(\vec{\beta}\cdot\vec{\hat{J}})^{\dagger}\hat{T}\,| \,j,m\rangle=mb^{*}\hat{T}\,|\,j,m\rangle, \tag{4.16}\]
equation (4.15) becomes
\[|\ \bar{\psi}\rangle^{j}_{[s]m}=\hat{D}(\bar{\lambda}_{[s]})\exp\left(\frac{1}{2}\left[\bar{\lambda}_{[s]}(\vec{\beta}\cdot\vec{\hat{J}})^{\dagger}-\bar{\lambda}^{*}_{[s]}\,\vec{\beta}\cdot\vec{\hat{J}}\right]\right)|-mb\rangle\otimes\hat{T}\ |\ j,m\rangle. \tag{4.17}\]
Thus the normalized general solution of (4.5) is given by
\[|\ \bar{\psi}\rangle^{j}_{[s]}=\frac{\sum_{m=-j}^{j}\varphi^{j}_{[s]m}(0)\ |\ \bar{\psi} \rangle^{j}_{[s]m}}{\sqrt{\sum_{m=-j}^{j}\|\varphi^{j}_{[s]m}(0)\|^{2}}}, \tag{4.18}\]
or more explicitly by
\[|\ \bar{\psi}\rangle^{j}_{[s]}=\hat{D}(\bar{\lambda}_{[s]})\exp\left(\frac{1}{2}\left[\bar{\lambda}_{[s]}(\vec{\beta}\cdot\vec{\hat{J}})^{\dagger}-\bar{\lambda}^{*}_{[s]}\,\vec{\beta}\cdot\vec{\hat{J}}\right]\right)\frac{\sum_{m=-j}^{j}\varphi^{j}_{[s]m}(0)\ |-mb\rangle\otimes\hat{T}\ |\ j,m\rangle}{\sqrt{\sum_{m=-j}^{j}\|\varphi^{j}_{[s]m}(0)\|^{2}}}. \tag{4.19}\]
Inserting this last result into equation (4.8), which gives us the components of the general vector coherent states verifying (4.1), we obtain the matrix realization of these states:
\[|\ \Psi\rangle^{j}_{[K]} = \bar{U}\begin{pmatrix}\hat{D}(\bar{\lambda}_{[1]})&0&\cdots&0&0\\ 0&\hat{D}(\bar{\lambda}_{[2]})&0&\cdots&0\\ \vdots&0&\ddots&0&\vdots\\ 0&\cdots&0&\hat{D}(\bar{\lambda}_{[K-1]})&0\\ 0&0&\cdots&0&\hat{D}(\bar{\lambda}_{[K]})\end{pmatrix} \tag{4.20}\] \[\times \begin{pmatrix}\hat{B}(\bar{\lambda}_{[1]})&0&\cdots&0&0\\ 0&\hat{B}(\bar{\lambda}_{[2]})&0&\cdots&0\\ \vdots&0&\ddots&0&\vdots\\ 0&\cdots&0&\hat{B}(\bar{\lambda}_{[K-1]})&0\\ 0&0&\cdots&0&\hat{B}(\bar{\lambda}_{[K]})\end{pmatrix}\left(\begin{array}{c}\frac{\sum_{m=-j}^{j}\varphi^{j}_{[1]m}(0)\ |-mb\rangle\otimes\hat{T}\ |\ j,m\rangle}{\sqrt{\sum_{m=-j}^{j}\|\varphi^{j}_{[1]m}(0)\|^{2}}}\\ \vdots\\ \frac{\sum_{m=-j}^{j}\varphi^{j}_{[K]m}(0)\ |-mb\rangle\otimes\hat{T}\ |\ j,m\rangle}{\sqrt{\sum_{m=-j}^{j}\|\varphi^{j}_{[K]m}(0)\|^{2}}}\end{array}\right),\]
where \(\hat{B}(\bar{\lambda}_{[s]})=\exp\left(\frac{1}{2}\left[\bar{\lambda}_{[s]}(\vec{\beta}\cdot\vec{\hat{J}})^{\dagger}-\bar{\lambda}^{*}_{[s]}\,\vec{\beta}\cdot\vec{\hat{J}}\right]\right)\), \(s=1,\cdots,K\). Equation (4.20) represents a set of \((2j+1)^{K}\) linearly independent vector coherent states associated to the generalized harmonic oscillator system defined here.
By setting the integration parameters appropriately we can change the form of these states to a more familiar one, for example, by choosing
\[\varphi^{j}_{[s]m}(0)=\sum_{u=1}^{K}\sum_{r=1}^{K}\bar{U}^{\dagger}_{[s][u]}\exp\left(\frac{m}{2}\left[\bar{M}b^{*}-\bar{M}^{\dagger}b\right]\right)_{[u][r]}\delta_{m_{[r]}m}, \tag{4.21}\]
where \(\delta_{m_{[r]}m}\) is the Kronecker delta, (4.20) writes
\[|\,\Psi\rangle^{j}_{[K]} = \bar{U}\begin{pmatrix}\hat{D}(\bar{\lambda}_{[1]})&0&\cdots&0&0\\ 0&\hat{D}(\bar{\lambda}_{[2]})&0&\cdots&0\\ \vdots&0&\ddots&0&\vdots\\ 0&\cdots&0&\hat{D}(\bar{\lambda}_{[K-1]})&0\\ 0&0&\cdots&0&\hat{D}(\bar{\lambda}_{[K]})\end{pmatrix} \tag{4.22}\] \[\times \begin{pmatrix}\hat{B}(\bar{\lambda}_{[1]})&0&\cdots&0&0\\ 0&\hat{B}(\bar{\lambda}_{[2]})&0&\cdots&0\\ \vdots&0&\ddots&0&\vdots\\ 0&\cdots&0&\hat{B}(\bar{\lambda}_{[K-1]})&0\\ 0&0&\cdots&0&\hat{B}(\bar{\lambda}_{[K]})\end{pmatrix}\bar{U}^{\dagger}\] \[\times \exp\left[\frac{1}{2}\left(\bar{M}(\vec{\beta}\cdot\vec{\hat{J}})^{\dagger}-\bar{M}^{\dagger}(\vec{\beta}\cdot\vec{\hat{J}})\right)\right]\begin{pmatrix}|-m_{1}b\rangle\otimes\hat{T}\;|\;j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes\hat{T}\;|\;j,m_{2}\rangle\\ \vdots\\ |-m_{K}b\rangle\otimes\hat{T}\;|\;j,m_{K}\rangle\end{pmatrix}.\]
Finally, using the fact that the exponential of a diagonal square matrix is a diagonal square matrix whose elements are the exponentials of the diagonal elements of the starting matrix, and vice versa, and that \(\bar{U}\bar{D}\bar{U}^{\dagger}=\bar{M}\), we can put (4.22) in the compact form
\[|\,\Psi\rangle^{j}_{[K]}=\exp\left[\bar{M}\mathbb{A}^{\dagger}-\bar{M}^{\dagger}\mathbb{A}\right]\begin{pmatrix}|-m_{1}b\rangle\otimes\hat{T}\;|\;j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes\hat{T}\;|\;j,m_{2}\rangle\\ \vdots\\ |-m_{K}b\rangle\otimes\hat{T}\;|\;j,m_{K}\rangle\end{pmatrix}. \tag{4.23}\]
This result can evidently be obtained in a more heuristic way. Indeed, as here \([\mathbb{A},\mathbb{A}^{\dagger}]=\hat{I}\), we can try a solution of (4.5) in the exponential form \(|\,\Psi\rangle=\exp\left(\bar{M}\mathbb{A}^{\dagger}\right)|\,\Omega\rangle\). Then, with the help of the relation \(\mathbb{A}\exp\left(\bar{M}\mathbb{A}^{\dagger}\right)=\exp\left(\bar{M}\mathbb{A}^{\dagger}\right)\mathbb{A}+\bar{M}\exp\left(\bar{M}\mathbb{A}^{\dagger}\right)\), we can reduce the problem to simply calculating the fundamental state, that is, the eigenstate of \(\mathbb{A}\) with matrix eigenvalue equal to zero, \(\mathbb{A}\;|\,\Omega\rangle=0\).
This set of vector coherent states verifies all the properties of the standard harmonic oscillator coherent states with respect to completeness, resolution of the identity and minimum uncertainty. They are obtained by acting with a generalized matrix displacement operator on a zero energy eigenstate.
#### 4.1.2 Vector coherent states of two components
When \(K=2\), the \(\bar{M}\) matrix has the general form
\[\bar{M}=\begin{pmatrix}\bar{m}_{11}&\bar{m}_{12}\\ \bar{m}_{21}&\bar{m}_{22}\end{pmatrix}, \tag{4.24}\]
and its associated eigenvalues are
\[\bar{\lambda}_{[\pm]}=\frac{1}{2}(\bar{m}_{11}+\bar{m}_{22})\pm\frac{\bar{b}}{2},\quad\text{where}\quad\bar{b}=\sqrt{4\bar{m}_{12}\bar{m}_{21}+\left(\bar{m}_{11}-\bar{m}_{22}\right)^{2}}. \tag{4.25}\]
Assuming that \(\bar{b}\neq 0\), we can show that the passing matrices take the form
\[P=\begin{pmatrix}2\bar{m}_{12}&\bar{m}_{11}-\bar{m}_{22}-\bar{b}\\ \bar{m}_{22}-\bar{m}_{11}+\bar{b}&2\bar{m}_{21}\end{pmatrix},\quad P^{-1}=\frac{\begin{pmatrix}2\bar{m}_{21}&\bar{m}_{22}-\bar{m}_{11}+\bar{b}\\ \bar{m}_{11}-\bar{m}_{22}-\bar{b}&2\bar{m}_{12}\end{pmatrix}}{2\bar{b}(\bar{b}+\bar{m}_{22}-\bar{m}_{11})}, \tag{4.26}\]
and we can verify that
\[P^{-1}\bar{M}P=\bar{D}=\left(\begin{matrix}\bar{\lambda}_{+}&0\\ 0&\bar{\lambda}_{-}\end{matrix}\right). \tag{4.27}\]
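A symbolic verification of (4.25)-(4.27) with sympy (illustrative; the symbol names are ours):

```python
# M P = P D with P from (4.26) and eigenvalues (4.25).
import sympy as sp

m11, m12, m21, m22 = sp.symbols('m11 m12 m21 m22')
bbar = sp.sqrt(4 * m12 * m21 + (m11 - m22)**2)

M = sp.Matrix([[m11, m12], [m21, m22]])
P = sp.Matrix([[2 * m12,           m11 - m22 - bbar],
               [m22 - m11 + bbar,  2 * m21]])
D = sp.diag((m11 + m22) / 2 + bbar / 2, (m11 + m22) / 2 - bbar / 2)

assert sp.simplify(M * P - P * D) == sp.zeros(2, 2)
```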
Then the two component vector coherent state reads
\[|\,\Psi\rangle^{j}_{[2]}=\mathcal{N}^{-1/2}P\,|\,\bar{\Psi}\rangle^{j}_{[2]}= \mathcal{N}^{-1/2}\begin{pmatrix}2\bar{m}_{12}&\bar{m}_{11}-\bar{m}_{22}-\bar{ b}\\ \bar{m}_{22}-\bar{m}_{11}+\bar{b}&2\bar{m}_{21}\end{pmatrix}\begin{pmatrix}|\, \bar{\psi}\rangle^{j}_{[1]}\\ |\,\bar{\psi}\rangle^{j}_{[2]}\end{pmatrix} \tag{4.28}\]
where \(\mathcal{N}\) is a normalization constant to be determined by the condition \({}^{j}_{[2]}\langle\Psi\ |\ \Psi\rangle^{j}_{[2]}=1\), or more explicitly
\[|\,\Psi\rangle^{j}_{[2]}=\mathcal{N}^{-1/2}\begin{pmatrix}2\bar{m}_{12}\,|\,\bar{\psi}\rangle^{j}_{[1]}+[\bar{m}_{11}-\bar{m}_{22}-\bar{b}]\,|\,\bar{\psi}\rangle^{j}_{[2]}\\ [\bar{m}_{22}-\bar{m}_{11}+\bar{b}]\,|\,\bar{\psi}\rangle^{j}_{[1]}+2\bar{m}_{21}\,|\,\bar{\psi}\rangle^{j}_{[2]}\end{pmatrix}. \tag{4.29}\]
When \(\bar{M}\) in (4.24) is normal, i.e., when its entries verify \(\bar{m}_{12}(\bar{m}_{11}-\bar{m}_{22})^{*}=\bar{m}_{21}^{*}(\bar{m}_{11}-\bar{m}_{22})\) and \(\|\bar{m}_{12}\|=\|\bar{m}_{21}\|\), the \(P\) matrix in (4.26) becomes unitary, and we rename it \(\bar{U}\); the normalization constant in (4.28) is then easier to compute.
#### 4.1.3 Generalized quaternionic vector coherent states
When we choose the entries of \(\bar{M}\) in equation (4.24) as
\[\bar{m}_{11} = r[\cos\theta+i\sin\theta\cos\phi],\quad\bar{m}_{12}=ir\sin\theta\sin\phi\, e^{i\psi}\] \[\bar{m}_{21} = ir\sin\theta\sin\phi\, e^{-i\psi}\quad\text{and}\quad\bar{m}_{22}=r[\cos\theta-i\sin\theta\cos\phi], \tag{4.30}\]
then \(\bar{b}=2ir\sin\theta\), \(\bar{\lambda}_{[\pm]}=re^{\pm i\theta}\), and \(\bar{M}\) becomes normal and takes the form given in equation (3.25), which corresponds to a complex representation of quaternions by square matrices of dimension 2. Also, the matrix \(P\) becomes unitary and the states in equation (4.29) turn into the following generalized quaternionic vector coherent states:
\[|\,\Psi\rangle^{j}_{[2]}=\mathcal{N}^{-1/2}\begin{pmatrix}\cos\left(\frac{\phi}{2}\right)e^{i\psi}\,|\,\bar{\psi}\rangle^{j}_{[1]}-\sin\left(\frac{\phi}{2}\right)|\,\bar{\psi}\rangle^{j}_{[2]}\\ \sin\left(\frac{\phi}{2}\right)|\,\bar{\psi}\rangle^{j}_{[1]}+\cos\left(\frac{\phi}{2}\right)e^{-i\psi}\,|\,\bar{\psi}\rangle^{j}_{[2]}\end{pmatrix}, \tag{4.31}\]
where some multiplicative factors have been absorbed by the normalization constant \(\mathcal{N}\).
For example, in the special case \(j=\frac{1}{2}\), the states
\[|\,\bar{\psi}\rangle^{\frac{1}{2}}_{[s]}=N^{-1/2}_{[s]}\begin{pmatrix}\bar{\psi}^{\frac{1}{2}}_{[s]-\frac{1}{2}}(0)\,e^{\lambda^{[s]\frac{1}{2}}_{+\frac{1}{2}}\hat{a}^{\dagger}}&\bar{\psi}^{\frac{1}{2}}_{[s]+\frac{1}{2}}(0)\,e^{\lambda^{[s]\frac{1}{2}}_{-\frac{1}{2}}\hat{a}^{\dagger}}\end{pmatrix}\,V^{t}\begin{pmatrix}|\,0;\frac{1}{2},-\frac{1}{2}\rangle\\ |\,0;\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix},\quad s=1,2, \tag{4.32}\]
where \(\lambda^{[s]\frac{1}{2}}_{+\frac{1}{2}}=\bar{\lambda}_{[s]}+\frac{1}{2}b\) and \(\lambda^{[s]\frac{1}{2}}_{-\frac{1}{2}}=\bar{\lambda}_{[s]}-\frac{1}{2}b\), \(s=1,2\), with \(b=\sqrt{4\beta_{+}\beta_{-}+\beta_{3}^{2}}\), and
\[V=\begin{pmatrix}\sqrt{\frac{b+\beta_{3}}{2b}}&\frac{2\beta_{+}}{\sqrt{2b(b+ \beta_{3})}}\\ \frac{-2\beta_{-}}{\sqrt{2b(b+\beta_{3})}}&\sqrt{\frac{b+\beta_{3}}{2b}}\end{pmatrix}, \tag{4.33}\]
where \(\beta_{\pm}\) and \(\beta_{3}\) satisfy the necessary conditions for \([\mathbb{A},\mathbb{A}^{\dagger}]=\hat{I}\). Explicitly, the eigenvalues of the \(h(1)\oplus su(2)\) sector, depending on the eigenvalues of the \(\bar{M}\) matrix, are given by
\[\lambda^{[1]\frac{1}{2}}_{+\frac{1}{2}}=re^{i\theta}+\frac{b}{2}, \quad\lambda^{[1]\frac{1}{2}}_{-\frac{1}{2}}=re^{i\theta}-\frac{b}{2},\] \[\lambda^{[2]\frac{1}{2}}_{+\frac{1}{2}}=re^{-i\theta}+\frac{b}{2}, \quad\lambda^{[2]\frac{1}{2}}_{-\frac{1}{2}}=re^{-i\theta}-\frac{b}{2}, \tag{4.34}\]
with \(b=e^{i\theta_{3}}\sqrt{4R^{2}+R_{3}^{2}}\), where \(e^{i\theta_{3}}=e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\), and the unitary matrix \(V\) reads
\[V=\sqrt{\frac{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}{2\sqrt{4R^{2}+R_{3}^{2}}}}\left(\begin{array}{cc}1&\frac{2R\,e^{\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\\ \frac{-2R\,e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}&1\end{array}\right). \tag{4.35}\]
Finally using (4.6) for the particular case being treated here we get the set of states
\[|\,\bar{\psi}\rangle_{[1]}^{\frac{1}{2}} = \sqrt{\frac{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}{2\sqrt{4R^{2}+R_{3}^{2}}}} \tag{4.36}\] \[\times \left\{\bar{\psi}_{[1]-\frac{1}{2}}^{\frac{1}{2}}(0)\left[|\,re^{i\theta}+\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle-\frac{2R\,e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|\,re^{i\theta}+\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\right]\right.\] \[+ \left.\bar{\psi}_{[1]+\frac{1}{2}}^{\frac{1}{2}}(0)\left[\frac{2R\,e^{\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|\,re^{i\theta}-\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle+|\,re^{i\theta}-\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\right]\right\}\]
and
\[|\,\bar{\psi}\rangle_{[2]}^{\frac{1}{2}} = \sqrt{\frac{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}{2\sqrt{4R^{2}+R_{3}^{2}}}} \tag{4.37}\] \[\times \left\{\bar{\psi}_{[2]-\frac{1}{2}}^{\frac{1}{2}}(0)\left[|\,re^{-i\theta}+\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle-\frac{2R\,e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|\,re^{-i\theta}+\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\right]\right.\] \[+ \left.\bar{\psi}_{[2]+\frac{1}{2}}^{\frac{1}{2}}(0)\left[\frac{2R\,e^{\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|\,re^{-i\theta}-\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle+|\,re^{-i\theta}-\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\right]\right\}.\]
Thus, if we insert these last results into equation (4.29) for fixed \(j=1/2\), with the corresponding unitary matrix \(\tilde{U}\) given by
\[\tilde{U}=\begin{pmatrix}\cos\left(\frac{\phi}{2}\right)e^{i\psi}&-\sin\left( \frac{\phi}{2}\right)\\ \sin\left(\frac{\phi}{2}\right)&\cos\left(\frac{\phi}{2}\right)e^{-i\psi} \end{pmatrix}, \tag{4.38}\]
we finally obtain the generalized quaternionic vector coherent states based on the \(h(1)\oplus su(2)\) algebra.
\[|\,\Psi\rangle_{[2]}^{\frac{1}{2}}=\mathcal{N}^{-\frac{1}{2}}\tilde{U}\begin{pmatrix}|\,\bar{\psi}\rangle_{[1]}^{\frac{1}{2}}\\ |\,\bar{\psi}\rangle_{[2]}^{\frac{1}{2}}\end{pmatrix}=\mathcal{N}^{-\frac{1}{2}}\begin{pmatrix}\cos\left(\frac{\phi}{2}\right)e^{i\psi}\,|\,\bar{\psi}\rangle_{[1]}^{\frac{1}{2}}-\sin\left(\frac{\phi}{2}\right)|\,\bar{\psi}\rangle_{[2]}^{\frac{1}{2}}\\ \sin\left(\frac{\phi}{2}\right)|\,\bar{\psi}\rangle_{[1]}^{\frac{1}{2}}+\cos\left(\frac{\phi}{2}\right)e^{-i\psi}\,|\,\bar{\psi}\rangle_{[2]}^{\frac{1}{2}}\end{pmatrix}. \tag{4.39}\]
These last states are then eigenstates of the generalized annihilation operator \(\mathbb{A}\) with eigenvalues on a \(2\times 2\) complex matrix representing the quaternions. We emphasize that equation (4.39) represents a set of four linearly independent vector coherent states associated to the quantum system described by the Hamiltonian (4.13), in the \(j=\frac{1}{2}\) representation. We also note that in this case the vector coherent states (4.39) can be written in the normalized form
\[|\,\Psi\rangle^{\frac{1}{2}}_{[2]} = \begin{pmatrix}\cos\left(\frac{\phi}{2}\right)e^{i\psi}&-\sin\left(\frac{\phi}{2}\right)\\ \sin\left(\frac{\phi}{2}\right)&\cos\left(\frac{\phi}{2}\right)e^{-i\psi}\end{pmatrix}\begin{pmatrix}e^{(re^{i\theta}\hat{a}^{\dagger}-re^{-i\theta}\hat{a})}&0\\ 0&e^{(re^{-i\theta}\hat{a}^{\dagger}-re^{i\theta}\hat{a})}\end{pmatrix} \tag{4.40}\] \[\times \begin{pmatrix}e^{\frac{1}{2}(re^{i\theta}(\mathbb{A}-\hat{a})^{\dagger}-re^{-i\theta}(\mathbb{A}-\hat{a}))}&0\\ 0&e^{\frac{1}{2}(re^{-i\theta}(\mathbb{A}-\hat{a})^{\dagger}-re^{i\theta}(\mathbb{A}-\hat{a}))}\end{pmatrix}\] \[\times \sqrt{\frac{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}{4\sqrt{4R^{2}+R_{3}^{2}}}}\begin{pmatrix}\frac{\bar{\psi}^{\frac{1}{2}}_{[1]-\frac{1}{2}}(0)}{\sqrt{\|\bar{\psi}^{\frac{1}{2}}_{[1]-\frac{1}{2}}(0)\|^{2}+\|\bar{\psi}^{\frac{1}{2}}_{[1]+\frac{1}{2}}(0)\|^{2}}}&\frac{\bar{\psi}^{\frac{1}{2}}_{[1]+\frac{1}{2}}(0)}{\sqrt{\|\bar{\psi}^{\frac{1}{2}}_{[1]-\frac{1}{2}}(0)\|^{2}+\|\bar{\psi}^{\frac{1}{2}}_{[1]+\frac{1}{2}}(0)\|^{2}}}\\ \frac{\bar{\psi}^{\frac{1}{2}}_{[2]-\frac{1}{2}}(0)}{\sqrt{\|\bar{\psi}^{\frac{1}{2}}_{[2]-\frac{1}{2}}(0)\|^{2}+\|\bar{\psi}^{\frac{1}{2}}_{[2]+\frac{1}{2}}(0)\|^{2}}}&\frac{\bar{\psi}^{\frac{1}{2}}_{[2]+\frac{1}{2}}(0)}{\sqrt{\|\bar{\psi}^{\frac{1}{2}}_{[2]-\frac{1}{2}}(0)\|^{2}+\|\bar{\psi}^{\frac{1}{2}}_{[2]+\frac{1}{2}}(0)\|^{2}}}\end{pmatrix}\] \[\times \begin{pmatrix}|\,\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle-\frac{2R\,e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|\,\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\\ \frac{2R\,e^{\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|-\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle+|-\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix}.\]
Choosing now the integration constants as follows
\[\begin{pmatrix}\frac{\bar{\psi}^{\frac{1}{2}}_{[1]-\frac{1}{2}}(0)}{\sqrt{\|\bar{\psi}^{\frac{1}{2}}_{[1]-\frac{1}{2}}(0)\|^{2}+\|\bar{\psi}^{\frac{1}{2}}_{[1]+\frac{1}{2}}(0)\|^{2}}}&\frac{\bar{\psi}^{\frac{1}{2}}_{[1]+\frac{1}{2}}(0)}{\sqrt{\|\bar{\psi}^{\frac{1}{2}}_{[1]-\frac{1}{2}}(0)\|^{2}+\|\bar{\psi}^{\frac{1}{2}}_{[1]+\frac{1}{2}}(0)\|^{2}}}\\ \frac{\bar{\psi}^{\frac{1}{2}}_{[2]-\frac{1}{2}}(0)}{\sqrt{\|\bar{\psi}^{\frac{1}{2}}_{[2]-\frac{1}{2}}(0)\|^{2}+\|\bar{\psi}^{\frac{1}{2}}_{[2]+\frac{1}{2}}(0)\|^{2}}}&\frac{\bar{\psi}^{\frac{1}{2}}_{[2]+\frac{1}{2}}(0)}{\sqrt{\|\bar{\psi}^{\frac{1}{2}}_{[2]-\frac{1}{2}}(0)\|^{2}+\|\bar{\psi}^{\frac{1}{2}}_{[2]+\frac{1}{2}}(0)\|^{2}}}\end{pmatrix}=\tilde{U}^{\dagger}\,\exp\left[-\frac{1}{2}\left(\tilde{M}\frac{b^{*}}{2}-\tilde{M}^{\dagger}\frac{b}{2}\right)\sigma_{3}\right] \tag{4.41}\]
and using the Baker-Campbell-Hausdorff formula for splitting the displacement operators, together with the fact that \(\tilde{U}\exp\left(\begin{smallmatrix}re^{i\theta}&0\\ 0&re^{-i\theta}\end{smallmatrix}\right)\tilde{U}^{\dagger}=\exp\tilde{M}\), and then recomposing the resulting expression by the inverse process, we can put (4.40) in the compact form
\[|\,\Psi(\tilde{M})\rangle^{\frac{1}{2}}_{[2]} = e^{(\tilde{M}\mathbb{A}^{\dagger}-\tilde{M}^{\dagger}\mathbb{A})}\sqrt{\frac{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}{4\sqrt{4R^{2}+R_{3}^{2}}}}\begin{pmatrix}|\,\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle-\frac{2R\,e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|\,\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\\ \frac{2R\,e^{\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|-\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle+|-\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix}, \tag{4.42}\]
from where we can extract one version of the fundamental state of the system governed by the Hamiltonian (4.13), that is
\[|\,\Psi(0)\rangle^{\frac{1}{2}}_{[2]}=\sqrt{\frac{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}{4\sqrt{4R^{2}+R_{3}^{2}}}}\begin{pmatrix}|\,\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle-\frac{2R\,e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|\,\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\\ \frac{2R\,e^{\frac{i}{2}(\theta_{+}-\theta_{-})}}{R_{3}+\sqrt{4R^{2}+R_{3}^{2}}}\,|-\frac{b}{2};\frac{1}{2},-\frac{1}{2}\rangle+|-\frac{b}{2};\frac{1}{2},+\frac{1}{2}\rangle\end{pmatrix}. \tag{4.43}\]
To conclude this section, let us say a few words about the system we are dealing with. The fundamental state (4.43) can be used for finding the energy eigenstates of the Hamiltonian (4.13). By the way the latter was conceived, we know that it is isospectral with the harmonic oscillator Hamiltonian. The normalized energy eigenstates of this system are given by
\[|\,\bar{n}\rangle^{\frac{1}{2}}=\frac{(\mathbb{A}^{\dagger})^{n}}{\sqrt{n!}}\,|\,\Psi(0)\rangle^{\frac{1}{2}}_{[2]}. \tag{4.44}\]
The vector coherent states (4.42) based on the \(h(1)\oplus su(2)\) algebra constitute an interesting generalization of the quaternionic vector coherent states studied in the literature. They certainly depend on the choice of the integration constants, which reminds us of the highly degenerate system we are studying.
### Extended canonical commutation relation and intelligent states
Let us now choose the beta parameters in such a way that the commutation relation (4.10) verifies
\[\left[\mathbb{A},\mathbb{A}^{\dagger}\right]=\hat{I}+2x\hat{f}_{3}, \tag{4.45}\]
where \(x\in\mathbb{R}\), with \(x>0\) or \(x<0\). Here four scenarios are possible, all of them requiring \(\beta_{3}=0\):
\[\begin{array}{llll}(i)&x>0;&\beta_{+}=\sqrt{x}\sinh\alpha\, e^{i\theta_{+}},\quad\beta_{-}=\sqrt{x}\cosh\alpha\, e^{i\theta_{-}};\quad\alpha\neq 0\Rightarrow b\neq 0\\ (ii)&x>0;&\beta_{+}=0,\quad\beta_{-}=\sqrt{x}e^{i\theta_{-}};\Rightarrow\ b=0\\ (iii)&x<0;&\beta_{+}=\sqrt{-x}\cosh\alpha\, e^{i\theta_{+}},\quad\beta_{-}=\sqrt{-x}\sinh\alpha\, e^{i\theta_{-}};\quad\alpha\neq 0\Rightarrow b\neq 0\\ (iv)&x<0;&\beta_{+}=\sqrt{-x}e^{i\theta_{+}},\quad\beta_{-}=0;\Rightarrow b=0\end{array}. \tag{4.46}\]
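Before examining these cases, a quick sympy check (illustrative; the overall phases drop out since only moduli enter) confirms that the hyperbolic choices (i) and (iii) indeed give \(\|\beta_{-}\|^{2}-\|\beta_{+}\|^{2}=x\), which is what turns (4.10) into (4.45) once \(\beta_{3}=0\).

```python
# |beta_m|^2 - |beta_p|^2 = x for the hyperbolic parametrizations in (4.46).
import sympy as sp

x, alpha = sp.symbols('x alpha', positive=True)
# case (i): x > 0
assert sp.simplify((sp.sqrt(x) * sp.cosh(alpha))**2
                   - (sp.sqrt(x) * sp.sinh(alpha))**2 - x) == 0
# case (iii): x < 0, represented here by the positive symbol x -> -x
assert sp.simplify((sp.sqrt(x) * sp.sinh(alpha))**2
                   - (sp.sqrt(x) * sp.cosh(alpha))**2 + x) == 0
```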
In the cases listed as (i) and (iii), the \(b\) parameter is different from zero and hence, in the process of solving the eigenvalue equation (4.5), which is necessary to solve the matrix eigenvalue equation (4.1), we find that the matrix formed from the matrix elements of the linear combination of the generators of the \(su(2)\) algebra is diagonalizable, but the passing matrix \(V\) is not unitary. However, the technique used to obtain the generalized vector states does not differ from the one followed in the above section, except for the fact that the computation of the normalization constants is now more difficult. We will not deal with these cases now; we will leave them for last and treat them together with all similar cases. The cases listed as (ii) and (iv) are very special because in both the parameter \(b\) is equal to zero and the aforementioned matrix diagonalization process does not apply. In appendix A we show explicitly the method to solve the corresponding \(h(1)\oplus su(2)\) eigenvalue type equations appearing after performing the diagonalization of the \(\tilde{M}\) matrix in (4.1), in the special case where \(\beta_{-}=\beta_{3}=0\) and \(\beta_{+}\neq 0\), and we also adapt the results of the latter for obtaining those in the case \(\beta_{+}=\beta_{3}=0\) and \(\beta_{-}\neq 0\).
Thus, in the case listed as (iv), inserting (A.55) into (A.57) and the latter into (4.8), after first substituting the \(\beta\) parameter by the corresponding eigenvalues of \(\tilde{M}\), i.e., \(\tilde{\lambda}_{[s]},s=1,2,\cdots,K\), we get the components of the vector state verifying (4.1):
\[\mid\psi\rangle_{[u]}^{j}=\sum_{s=1}^{K}P_{us}\mid\Psi[\tilde{\lambda}_{[s]},\beta_{+}]\rangle^{j}=\sum_{s=1}^{K}\sum_{m=-j}^{j}P_{us}\tilde{\varphi}_{[s]m}^{j}(0)\mid\psi[\tilde{\lambda}_{[s]},\beta_{+}]\rangle_{m}^{j},\quad u=1,2,\cdots,K, \tag{4.47}\]
or better, using (A.58),
\[\mid\psi\rangle_{[u]}^{j}=\sum_{s=1}^{K}\sum_{m=-j}^{j}P_{us}\tilde{\varphi}_{[s]m}^{j}(0)\hat{D}(\tilde{\lambda}_{[s]})\mid\tilde{\psi}[\tilde{\lambda}_{[s]},\beta_{+}]\rangle_{m}^{j},\quad u=1,2,\cdots,K, \tag{4.48}\]
which in matrix form look like
\[\mid\Psi\rangle_{[K]}^{j}=P\begin{pmatrix}\hat{D}(\tilde{\lambda}_{[1]})&0&\cdots&0&0\\ 0&\hat{D}(\tilde{\lambda}_{[2]})&0&\cdots&0\\ \vdots&0&\ddots&0&\vdots\\ 0&\cdots&0&\hat{D}(\tilde{\lambda}_{[K-1]})&0\\ 0&0&\cdots&0&\hat{D}(\tilde{\lambda}_{[K]})\end{pmatrix}\begin{pmatrix}\sum_{m=-j}^{j}\tilde{\varphi}_{[1]m}^{j}(0)\mid\tilde{\psi}[\tilde{\lambda}_{[1]},\beta_{+}]\rangle_{m}^{j}\\ \sum_{m=-j}^{j}\tilde{\varphi}_{[2]m}^{j}(0)\mid\tilde{\psi}[\tilde{\lambda}_{[2]},\beta_{+}]\rangle_{m}^{j}\\ \vdots\\ \sum_{m=-j}^{j}\tilde{\varphi}_{[K]m}^{j}(0)\mid\tilde{\psi}[\tilde{\lambda}_{[K]},\beta_{+}]\rangle_{m}^{j}\end{pmatrix}. \tag{4.49}\]
When \(\tilde{M}\) is normal, \(P=\tilde{U}\) is unitary, and the vector states (4.49) can be written in the suggestive form
\[\mid\Psi\rangle_{[K]}^{j}=\mathcal{N}^{-\frac{1}{2}}\exp\left[\tilde{M}\hat{ a}^{\dagger}-\tilde{M}^{\dagger}\hat{a}\right]\tilde{U}\begin{pmatrix}\sum_{m=-j}^{j} \tilde{\sigma}_{[1]m}^{j}(0)\mid\tilde{\psi}[\tilde{\lambda}_{[1]},\beta_{+}] \rangle_{m}^{j}\\ \sum_{m=-j}^{j}\tilde{\sigma}_{[2]m}^{j}(0)\mid\tilde{\psi}[\tilde{\lambda}_{ [2]},\beta_{+}]\rangle_{m}^{j}\\ \vdots\\ \sum_{m=-j}^{j}\tilde{\sigma}_{[K]m}^{j}(0)\mid\tilde{\psi}[\tilde{\lambda}_{ [K]},\beta_{+}]\rangle_{m}^{j}\end{pmatrix}. \tag{4.50}\]
We note that we cannot do the same with the vector states (4.49), due to the fact that in that case \(\tilde{M}\) is not normal. Indeed, in such a case we can use the Baker-Campbell-Hausdorff formula to separate the harmonic oscillator type displacement operators, but we cannot reconstitute them in a closed unitary exponential form by the inverse process, because \(\tilde{M}\) does not commute with \(\tilde{M}^{\dagger}\).
We remark that
\[\begin{pmatrix}\sum_{m=-j}^{j}\tilde{\varphi}_{[1]m}^{j}(0)\,|\,\tilde{\psi}[\tilde{\lambda}_{[1]},\beta_{+}]\rangle_{m}^{j}\\ \sum_{m=-j}^{j}\tilde{\varphi}_{[2]m}^{j}(0)\,|\,\tilde{\psi}[\tilde{\lambda}_{[2]},\beta_{+}]\rangle_{m}^{j}\\ \vdots\\ \sum_{m=-j}^{j}\tilde{\varphi}_{[K]m}^{j}(0)\,|\,\tilde{\psi}[\tilde{\lambda}_{[K]},\beta_{+}]\rangle_{m}^{j}\end{pmatrix} \tag{4.51}\]
is an eigenstate of \(\hat{a}+\beta_{+}\hat{J}_{-}\) with matrix eigenvalue equal to \(0\). Then, using the displacement operator property \(\exp\left[-(\tilde{M}\hat{a}^{\dagger}-\tilde{M}^{\dagger}\hat{a})\right](\hat{a}+\beta_{+}\hat{J}_{-})\exp\left[\tilde{M}\hat{a}^{\dagger}-\tilde{M}^{\dagger}\hat{a}\right]=\hat{a}+\beta_{+}\hat{J}_{-}+\tilde{M}\), we can also show that (4.50) is effectively an eigenstate of \(\hat{a}+\beta_{+}\hat{J}_{-}\) with matrix eigenvalue equal to \(\tilde{M}\).
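As an aside, the scalar (\(K=1\)) version of this displacement property is easy to verify numerically. The following minimal sketch of ours checks it on a truncated Fock space; the truncation size and test eigenvalue are arbitrary choices, and the truncation only spoils the last few levels, so we compare the upper-left block.

```python
import numpy as np
from scipy.linalg import expm

# Minimal check of the scalar displacement property D(l)^dag a D(l) = a + l
# on a truncated Fock space; N and l are arbitrary test values.
N, l = 40, 0.8 - 0.3j
a = np.diag(np.sqrt(np.arange(1, N)), k=1)         # truncated annihilation operator
D = expm(l * a.conj().T - np.conj(l) * a)          # displacement operator
lhs = D.conj().T @ a @ D
rhs = a + l * np.eye(N)
assert np.allclose(lhs[:N - 10, :N - 10], rhs[:N - 10, :N - 10], atol=1e-8)
print("displacement property verified on the truncated space")
```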
The vector states in (4.50) represent a large set of different states depending on the choice of the integration constants \(\tilde{\varphi}_{[s]m}^{j}(0)\). In general, they cannot be considered as vector coherent states associated to the Hamiltonian \(\mathrm{H}=\mathbb{A}^{\dagger}\mathbb{A}=\hat{a}^{\dagger}\hat{a}+\beta_{+}\hat{a}^{\dagger}\hat{J}_{-}+\beta_{+}^{*}\hat{a}\hat{J}_{+}+\|\beta_{+}\|^{2}\hat{J}_{+}\hat{J}_{-}\), but rather as intelligent states, in the sense that they minimize the generalized Schrodinger-Robertson uncertainty relation \((\Delta\hat{\mathcal{X}})^{2}(\Delta\hat{\mathcal{P}})^{2}\geq\frac{1}{4}\left[\langle\hat{C}\rangle^{2}+\langle\hat{F}\rangle^{2}\right]\), where \(i\hat{C}=[\hat{\mathcal{X}},\hat{\mathcal{P}}]\) and \(\langle\hat{F}\rangle=\langle\{\hat{\mathcal{X}},\hat{\mathcal{P}}\}\rangle-2\langle\hat{\mathcal{X}}\rangle\langle\hat{\mathcal{P}}\rangle\), with \(\hat{\mathcal{X}}=\frac{\mathbb{A}+\mathbb{A}^{\dagger}}{\sqrt{2}}\) and \(\hat{\mathcal{P}}=\frac{\mathbb{A}-\mathbb{A}^{\dagger}}{\sqrt{2}i}\).
#### 4.2.1 Heisenberg-Weyl vector coherent states on the matrix domain
A special choice of the integration constants in (4.50) is given by
\[\tilde{\varphi}_{[s]m}^{j}(0)=\delta_{-j,m}^{[s]}\sum_{u=1}^{K}\tilde{U}_{su} ^{\dagger},\quad s=1,\cdots,K;\quad m=-j,\cdots,j, \tag{4.52}\]
where \(\delta_{-j,m}^{[s]}\) is the Kronecker delta which is equal to \(1\) if \(m=-j\) and \(0\) otherwise. With this choice the states (4.50) become
\[|\,\Psi[\tilde{M}]\,\rangle_{[K]}^{j}=\frac{1}{\sqrt{K}}\exp\left[\tilde{M}\hat{a}^{\dagger}-\tilde{M}^{\dagger}\hat{a}\right]\begin{pmatrix}|\,0\rangle\otimes|\,j,-j\rangle\\ |\,0\rangle\otimes|\,j,-j\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,-j\rangle\end{pmatrix}, \tag{4.53}\]
which are standard Heisenberg-Weyl vector coherent states on the matrix domain, obtained here by applying the matrix extension of the harmonic oscillator displacement operator to the simpler vector eigenstate of \(\hat{a}+\beta_{+}\hat{J}_{-}\) with zero matrix eigenvalue.
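For orientation, it may help to note (our remark) that for \(K=1\) the matrix \(\tilde{M}\) reduces to a complex number \(\tilde{\lambda}\) and (4.53) collapses to

\[|\,\Psi[\tilde{\lambda}]\,\rangle^{j}=\exp\left[\tilde{\lambda}\hat{a}^{\dagger}-\tilde{\lambda}^{*}\hat{a}\right]|\,0\rangle\otimes|\,j,-j\rangle=|\,\tilde{\lambda}\rangle\otimes|\,j,-j\rangle,\]

i.e., an ordinary canonical coherent state tensored with the lowest-weight \(su(2)\) state.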
#### 4.2.2 Vector algebra eigenstates associated to the standard \(h(1)\oplus su(2)\) annihilation operator
Looking at equation (4.50), we can see that we can go even further in changing the structure of these states and express them in a more compact form, where the contribution of each subalgebra is more easily visualized. To do this, we insert equation (A.60) into (4.50) and proceed with the inverse diagonalization process in the same way as in the previous subsections; doing all that, we get
\[|\,\Psi\rangle_{[K]}^{j}=\mathcal{N}^{-\frac{1}{2}}\exp\left[\tilde{M}\hat{a}^{\dagger}-\tilde{M}^{\dagger}\hat{a}\right]\exp\left[-(\hat{a}^{\dagger}\hat{I}+\tilde{M}^{\dagger})\beta_{+}\hat{J}_{-}\right]\tilde{U}\begin{pmatrix}\sum_{m=-j}^{j}\frac{\varphi_{[1]m}^{j}(0)}{\sqrt{\tilde{N}_{m}^{j}[\tilde{\lambda}_{[1]},\beta_{+}]}}\,|\,0\rangle\otimes|\,j,m\rangle\\ \vdots\\ \sum_{m=-j}^{j}\frac{\varphi_{[K]m}^{j}(0)}{\sqrt{\tilde{N}_{m}^{j}[\tilde{\lambda}_{[K]},\beta_{+}]}}\,|\,0\rangle\otimes|\,j,m\rangle\end{pmatrix}. \tag{4.54}\]
Choosing now the set of arbitrary parameters in the form
\[\frac{\varphi_{[s]m}^{j}(0)}{\sqrt{\tilde{N}_{m}^{j}[\tilde{\lambda}_{[s]},\beta_{+}]}}=\sum_{r=1}^{K}\tilde{U}_{[s][r]}^{\dagger}\delta_{m_{[r]}m}, \tag{4.55}\]
we arrive at the special set of \((2j+1)^{K}\) linearly independent vector states labeled by \(m_{1},\cdots,m_{K}\), that is,
\[|\,\Psi\rangle_{[K]}^{j}=\mathcal{N}^{-\frac{1}{2}}\exp\left[\tilde{M}\hat{a}^{\dagger}-\tilde{M}^{\dagger}\hat{a}\right]\exp\left[-(\hat{a}^{\dagger}\hat{I}+\tilde{M}^{\dagger})\beta_{+}\hat{J}_{-}\right]\begin{pmatrix}|\,0\rangle\otimes|\,j,m_{1}\rangle\\ |\,0\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix}. \tag{4.56}\]
Thus, for example, if \(m_{1}=m_{2}=\cdots=m_{K}=-j\), we recover the states (4.53). On the other hand, if we take \(m_{1}=m_{2}=\cdots=m_{K}=j\) and use the standard disentangled form of the exponential operator
\[\exp\left[-\frac{\bar{\theta}}{2}\left(e^{-i\bar{\phi}}\hat{J}_{+}-e^{i\bar{\phi}}\hat{J}_{-}\right)\right]=\exp\left[\tan\left(\frac{\bar{\theta}}{2}\right)e^{i\bar{\phi}}\hat{J}_{-}\right]\exp\left[-\ln\left(\sec^{2}\left(\frac{\bar{\theta}}{2}\right)\right)\hat{J}_{3}\right]\exp\left[-\tan\left(\frac{\bar{\theta}}{2}\right)e^{-i\bar{\phi}}\hat{J}_{+}\right],\]
we get formally the symmetric form
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\exp\left[\tilde{M}\hat{a}^{\dagger}-\tilde{M}^{\dagger}\hat{a}\right]\exp\left[-\hat{I}\hat{a}^{\dagger}\beta_{+}\hat{J}_{-}\right]\\ &\times\exp\left[\frac{\arctan\left(\|\beta_{+}\|\sqrt{\tilde{M}\tilde{M}^{\dagger}}\right)}{\|\beta_{+}\|\sqrt{\tilde{M}\tilde{M}^{\dagger}}}\left(\tilde{M}\beta_{+}^{*}\hat{J}_{+}-\tilde{M}^{\dagger}\beta_{+}\hat{J}_{-}\right)\right]\begin{pmatrix}|\,0\rangle\otimes|\,j,j\rangle\\ |\,0\rangle\otimes|\,j,j\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,j\rangle\end{pmatrix},\end{split} \tag{4.57}\]
which interpolates between the Heisenberg-Weyl vector coherent states and the vector eigenstates of the generalized annihilation operator \(\hat{a}+\beta_{+}\hat{J}_{-}\).
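The \(su(2)\) disentangling identity used above can be checked numerically in the \(j=\frac{1}{2}\) representation. The following minimal sketch of ours does so; the values of \(\bar{\theta}\) and \(\bar{\phi}\) are arbitrary real test choices.

```python
import numpy as np
from scipy.linalg import expm

# Check of the su(2) disentangling formula for j = 1/2 (Pauli-matrix realization).
Jp = np.array([[0, 1], [0, 0]], dtype=complex)   # J_+
Jm = Jp.conj().T                                  # J_-
J3 = np.diag([0.5, -0.5]).astype(complex)         # J_3

theta, phi = 0.9, 0.3                             # arbitrary test parameters
t = np.tan(theta / 2)
lhs = expm(-(theta / 2) * (np.exp(-1j * phi) * Jp - np.exp(1j * phi) * Jm))
rhs = (expm(t * np.exp(1j * phi) * Jm)
       @ expm(-np.log(1 / np.cos(theta / 2) ** 2) * J3)   # -ln(sec^2(theta/2)) J_3
       @ expm(-t * np.exp(-1j * phi) * Jp))
assert np.allclose(lhs, rhs)
print("disentangling identity verified for j = 1/2")
```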
### Non-canonical commutation relation
Let us now choose the beta parameters in such a way that the commutation relation (4.10) verify
\[\left[\mathbb{A},\mathbb{A}^{\dagger}\right]=\hat{I}+\rho e^{iv}\hat{J}_{+}+ \rho e^{-iv}\hat{J}_{-} \tag{4.58}\]
where \(\rho\in\mathbb{R}\) with \(\rho>0\), and \(0\leq v\leq 2\pi\). Here two scenarios are possible depending on the value of \(b\); both require \(\|\beta_{+}\|=\|\beta_{-}\|=R>0\) and \(\beta_{3}\neq 0\), with \(\theta_{3}\neq\frac{\theta_{+}+\theta_{-}}{2}+k\pi,\ k=0,1,\cdots\); they are
\[\begin{split}(i)&\beta_{+}=Re^{i\theta_{+}},\quad\beta_{-}= Re^{i\theta_{-}};\quad\beta_{3}=R_{3}e^{i\theta_{3}}\\ & R_{3}\neq 2R\quad\text{or}\quad\theta_{3}\neq\frac{\theta_{+}+ \theta_{-}}{2}+(k+\frac{1}{2})\pi,\;k=0,1,\cdots,\Rightarrow\;b\neq 0,\\ &\rho e^{i\nu}=2iRR_{3}\sin\left(\theta_{3}-\frac{\theta_{+}+ \theta_{-}}{2}\right)e^{-\frac{i}{2}(\theta_{+}-\theta_{-})},\\ (ii)&\beta_{+}=Re^{i\theta_{+}},\quad\beta_{-}=Re^{i \theta_{-}};\quad\beta_{3}=R_{3}e^{i\theta_{3}}\\ & R_{3}=2R\quad\text{and}\quad\theta_{3}=\frac{\theta_{+}+\theta_{-}}{2 }+(k+\frac{1}{2})\pi,\;k=0,1,\cdots,\Rightarrow\;b=0,\\ &\rho e^{i\nu}=4R^{2}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\;e^{i(k +\frac{1}{2})\pi},\;k=1,2,\cdots.\end{split} \tag{4.59}\]
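It is instructive to check (our remark) that the parameter \(b\) indeed vanishes in the case listed as (ii): assuming the appendix B expression \(b=\sqrt{4\beta_{+}\beta_{-}+\beta_{3}^{2}}\) (cf. the analogous formula in (4.87)), one finds

\[4\beta_{+}\beta_{-}+\beta_{3}^{2}=4R^{2}e^{i(\theta_{+}+\theta_{-})}+4R^{2}e^{i(\theta_{+}+\theta_{-})}e^{i(2k+1)\pi}=4R^{2}e^{i(\theta_{+}+\theta_{-})}\left(1-1\right)=0.\]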
As before, we will leave the case listed in (i), where \(b\neq 0\), for last, and analyze here the case listed in (ii), where \(b=0\). Then, if we insert the algebra eigenstates (A.69) into (4.8), which gives us the vector components of the state solving the matrix eigenvalue equation (4.1), and suppose that \(\bar{M}\) is normal, we get
\[|\,\psi\rangle_{[u]}^{j}=\mathcal{N}_{[u]}^{-\frac{1}{2}}\sum_{s=1}^{K}\tilde{U}_{us}\sum_{m=-j}^{j}\tilde{\varphi}_{[s]m}^{j}(0)\hat{D}(\tilde{\lambda}_{[s]})\,|\,\tilde{\psi}[\tilde{\lambda}_{[s]},\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^{j}\quad u=1,2,\cdots,K, \tag{4.60}\]
with the states \(|\,\tilde{\psi}[\tilde{\lambda}_{[s]},\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^{j}\) given by equation (A.71):
\[\begin{split}|\,\tilde{\psi}[\tilde{\lambda}_{[s]},\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^{j}=&\ \frac{1}{\sqrt{\tilde{N}_{m}^{j}[\tilde{\lambda}_{[s]},\beta_{+},\beta_{-},\beta_{3}]}}\sum_{n=0}^{j+m}\ \sum_{\ell=0}^{j+n-m}\frac{(\beta\hat{J}_{+})^{\ell}}{\ell!}\\ &\times(-1)^{n}\,\frac{\left((\hat{a}^{\dagger}+\tilde{\lambda}_{[s]}^{*})\beta_{+}\hat{J}_{-}\right)^{n}}{n!}\,|\,0\rangle\otimes|\,j,m\rangle,\quad m=-j,\cdots,j,\end{split} \tag{4.61}\]
where in this special case \(\beta=\frac{\beta_{3}}{2\beta_{+}}=-\frac{2\beta_{-}}{\beta_{3}}=e^{i(\theta_{3}-\theta_{+})}=(-1)^{k}ie^{-\frac{i}{2}(\theta_{+}-\theta_{-})},\ k=0,1\). In a first step we can write equation (4.60) in matrix form, in the same way we did in the last section,
\[|\,\Psi\rangle_{[K]}^{j}=\mathcal{N}^{-\frac{1}{2}}\exp\left[\bar{M}\hat{a}^{\dagger}-\bar{M}^{\dagger}\hat{a}\right]\tilde{U}\begin{pmatrix}\sum_{m=-j}^{j}\tilde{\varphi}_{[1]m}^{j}(0)\,|\,\tilde{\psi}[\tilde{\lambda}_{[1]},\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^{j}\\ \sum_{m=-j}^{j}\tilde{\varphi}_{[2]m}^{j}(0)\,|\,\tilde{\psi}[\tilde{\lambda}_{[2]},\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^{j}\\ \vdots\\ \sum_{m=-j}^{j}\tilde{\varphi}_{[K]m}^{j}(0)\,|\,\tilde{\psi}[\tilde{\lambda}_{[K]},\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^{j}\end{pmatrix}. \tag{4.62}\]
We could do better, but the states (4.61) are not easy to disentangle and write as a product of exponential factors acting on the basis states, except for the case \(m=-j\), which we study in the next subsection. To achieve this last goal, we will try another method in the subsequent subsection.
#### 4.3.1 Vector coherent states from non-canonical commutation relations
When \(m=-j\), from (4.61) we obtain the normalized state
\[\begin{split}|\,\tilde{\psi}[\tilde{\lambda}_{[s]},\beta_{+},\beta_{-},\beta_{3}]\rangle_{-j}^{j}=&\ \frac{\sum_{\ell=0}^{2j}\frac{(\beta\hat{J}_{+})^{\ell}}{\ell!}}{\sqrt{\tilde{N}_{-j}^{j}[\tilde{\lambda}_{[s]},\beta_{+},\beta_{-},\beta_{3}]}}\,|\,0\rangle\otimes|\,j,-j\rangle\\ =&\ \frac{\exp[\beta\hat{J}_{+}]}{\sqrt{\tilde{N}_{-j}^{j}[\tilde{\lambda}_{[s]},\beta_{+},\beta_{-},\beta_{3}]}}\,|\,0\rangle\otimes|\,j,-j\rangle\\ =&\ \exp\left[(-1)^{k}\frac{i\pi}{4}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]|\,0\rangle\otimes|\,j,-j\rangle,\quad k=0,1,\end{split} \tag{4.63}\]
which is independent of \(\tilde{\lambda}_{[s]}\).
Thus, choosing in (4.62) the integration constants as follows:
\[\tilde{\varphi}_{[s]m}^{j}(0)=\sum_{r=1}^{K}\tilde{U}_{[s][r]}^{\dagger}\delta_{m_{[r]}m},\quad m_{[r]}=-j,\ \forall r=1,\cdots,K, \tag{4.64}\]
we finally reach the vector coherent states
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \frac{1}{\sqrt{K}}\exp\left[\bar{M}\hat{a}^{\dagger}-\bar{M}^{\dagger}\hat{a}\right]\\ &\times\exp\left[\frac{i\pi}{4}(-1)^{k}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\begin{pmatrix}|\,0\rangle\otimes|\,j,-j\rangle\\ |\,0\rangle\otimes|\,j,-j\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,-j\rangle\end{pmatrix},\quad k=0,1.\end{split} \tag{4.65}\]
These last states, certainly, are eigenstates of the annihilation operator
\[\mathbb{A}=\hat{a}+Re^{i\theta_{+}}\hat{J}_{-}+Re^{i\theta_{-}}\hat{J}_{+}+2iR(-1)^{k}e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\hat{J}_{3},\quad k=0,1, \tag{4.66}\]
with matrix eigenvalue equal to \(\bar{M}\).
#### 4.3.2 Disentangling through a unitary transformation
The commutator (4.58) can be written in the form
\[\left[\mathbb{A},\mathbb{A}^{\dagger}\right]=\hat{I}-2\rho\hat{\mathfrak{J}}_{3}, \tag{4.67}\]
where
\[\hat{\mathfrak{J}}_{3}=-\frac{1}{2}\left[e^{iv}\hat{J}_{+}+e^{-iv}\hat{J}_{-}\right]=(-1)^{k+1}\frac{i}{2}\left[e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right],\quad k=0,1. \tag{4.68}\]
On the other hand, the operator \(\mathbb{A}\) in (4.66) can also be rewritten in a different way, that is:
\[\mathbb{A}=a+\mathcal{B}_{+}\hat{\mathfrak{J}}_{-}, \tag{4.69}\]
with
\[\mathcal{B}_{+}=2Re^{\frac{i}{2}(\theta_{+}+\theta_{-})}\quad\text{and}\quad\hat{\mathfrak{J}}_{-}=\frac{1}{2}\left[e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}+e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}\right]+(-1)^{k}i\hat{J}_{3},\quad k=0,1. \tag{4.70}\]
Thus, by defining \(\hat{\mathfrak{J}}_{+}=\hat{\mathfrak{J}}_{-}^{\dagger}\), we observe that the set of generators \(\hat{\mathfrak{J}}_{+},\hat{\mathfrak{J}}_{-}\) and \(\hat{\mathfrak{J}}_{3}\) verifies the \(su(2)\) Lie algebra commutation relations (2.2), for all \(k=0,1\). Hence, the problem of finding the algebra eigenstates of (4.66) reduces to the one already solved in the last section; the only difference is a change of the \(su(2)\) representation basis vectors at the moment of spanning the states. We will denote these new basis vectors as \(|\,\tilde{j},\bar{m}\rangle,\ \bar{m}=-\tilde{j},\cdots,\tilde{j}\). Here, it is important to mention that the Casimir operator is invariant under the transformation, so the label \(\tilde{j}\) denoting the representation coincides with the label \(j\). Using then equation (4.56), but taking into account the new operators and basis states, the solution of the eigenvalue equation (4.1) in this particular case is given by
\[|\ \Psi\rangle_{[K]}^{j}=\mathcal{N}^{-\frac{1}{2}}\exp\left[\tilde{M}\hat{a}^ {\dagger}-\tilde{M}^{\dagger}\hat{a}\right]\exp\left[-(\hat{a}^{\dagger}I+ \tilde{M}^{\dagger})\mathcal{B}_{+}\hat{\mathfrak{J}}_{-}\right]\left(\begin{array} []{c}|\ 0\rangle\otimes|\ \tilde{j},\bar{m}_{1}\rangle\\ |\ 0\rangle\otimes|\ \tilde{j},\bar{m}_{2}\rangle\\ \vdots\\ |\ 0\rangle\otimes|\ \tilde{j},\bar{m}_{K}\rangle\end{array}\right), \tag{4.71}\]
where \(|\,\tilde{j},\bar{m}_{r}\rangle,\ r=1,\cdots,K\), are eigenstates of \(\hat{\mathfrak{J}}_{3}\) associated to the eigenvalues \(\bar{m}_{r}\), respectively.
To know the connection of these last states with the regular states \(|j,m\rangle\), we need to solve the following \(su(2)\) algebra eigenstate equation for \(\hat{\mathfrak{J}}_{3}\) given in (4.68):
\[[\alpha_{-}\hat{J}_{+}+\alpha_{+}\hat{J}_{-}]\,|\,\tilde{j},\bar{m}\rangle=\bar{m}\,|\,\tilde{j},\bar{m}\rangle, \tag{4.72}\]
where \(\alpha_{-}=-\frac{1}{2}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}e^{i\left(k+ \frac{1}{2}\right)\pi}\), \(k=0,1\), \(\alpha_{+}=-\frac{1}{2}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}e^{-i\left(k+ \frac{1}{2}\right)\pi}\), \(k=0,1\), and \(\alpha_{3}=0\). Then using the results of appendix B, in particular, equations (B2) and (B3), we find the parameter \(b=\sqrt{4\alpha_{+}\alpha_{-}+\alpha_{3}^{2}}=1\), which implies \(\bar{m}=mb=m\),
\[\frac{\bar{\theta}}{2}=\arctan\left(\sqrt{\frac{b-\alpha_{3}}{b+\alpha_{3}}} \right)=\arctan(1)=\frac{\pi}{4},\quad\text{and}\quad e^{i\bar{\phi}}=\sqrt{ \frac{\alpha_{+}}{\alpha_{-}}}=e^{\frac{i}{2}(\theta_{+}-\theta_{-})}e^{-i \left(k+\frac{1}{2}\right)\pi},\quad k=0,1, \tag{4.73}\]
from where we extract the unitary operator
\[T=\exp\left[\frac{i\pi}{4}(-1)^{k}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right],\quad k=0,1. \tag{4.74}\]
Thus, following the reasoning of appendix B, we finally conclude that the transformed states \(|\tilde{j},\bar{m}\rangle=T|\ j,m\rangle,\ m=-j,\cdots,j\), are eigenstates of \(\hat{\mathfrak{J}}_{3}\), corresponding to the eigenvalue \(\bar{m}=m\).
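As a small worked example of ours, take \(j=\frac{1}{2}\), \(k=0\) and \(\theta_{+}=\theta_{-}\). Then (4.74) gives \(T=\exp\left[\frac{i\pi}{4}(\hat{J}_{+}+\hat{J}_{-})\right]=\frac{1}{\sqrt{2}}\begin{pmatrix}1&i\\ i&1\end{pmatrix}\) in the basis \(\{|\frac{1}{2},\frac{1}{2}\rangle,|\frac{1}{2},-\frac{1}{2}\rangle\}\), while (4.68) reduces to \(\hat{\mathfrak{J}}_{3}=\hat{J}_{2}\). One checks directly that

\[T\,|\tfrac{1}{2},-\tfrac{1}{2}\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}i\\ 1\end{pmatrix},\qquad\hat{J}_{2}\,T\,|\tfrac{1}{2},-\tfrac{1}{2}\rangle=-\frac{1}{2}\,T\,|\tfrac{1}{2},-\tfrac{1}{2}\rangle,\]

in agreement with \(\bar{m}=m=-\frac{1}{2}\).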
At this point, before concluding the section, let us return briefly to the case \(m=-j\). As \(\hat{\mathfrak{J}}_{-}\ |\ \tilde{j},-j\rangle=0\), then when in (4.71) we choose \(\bar{m}_{r}=-\tilde{j}=-j,\forall r=1,\cdots,K\) and we use the results we have just obtained, we regain the vector coherent states (4.65).
Continuing now with the line of argument, using the recently obtained results, we can return to the original \(su(2)\) representation basis states and write (4.71) in the disentangled form
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=\mathcal{N}^{-\frac{1}{2}}&\exp\left[\bar{M}\hat{a}^{\dagger}-\bar{M}^{\dagger}\hat{a}\right]\exp\left[-(\hat{a}^{\dagger}I+\bar{M}^{\dagger})\mathcal{B}_{+}\hat{\mathfrak{J}}_{-}\right]\\ &\times\exp\left[\frac{i\pi}{4}(-1)^{k}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)I\right]\begin{pmatrix}|\,0\rangle\otimes|\,j,m_{1}\rangle\\ |\,0\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix},\quad k=0,1. \tag{4.75}\end{split}\]
#### 4.3.3 Disentangled form of the vector states
On the other hand, when we choose \(\bar{m}_{r}=\tilde{j}=j,\forall r=1,\cdots,K\) the vector states in (4.71) can be written in the form
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\exp\left[\bar{M}\hat{a}^{\dagger}-\bar{M}^{\dagger}\hat{a}\right]\\ &\times\exp\left[-I\hat{a}^{\dagger}\mathcal{B}_{+}\hat{\mathfrak{J}}_{-}\right]\exp\left[\frac{\arctan\left(\|\mathcal{B}_{+}\|\sqrt{\bar{M}\bar{M}^{\dagger}}\right)}{\|\mathcal{B}_{+}\|\sqrt{\bar{M}\bar{M}^{\dagger}}}\left(\bar{M}\mathcal{B}_{+}^{*}\hat{\mathfrak{J}}_{+}-\bar{M}^{\dagger}\mathcal{B}_{+}\hat{\mathfrak{J}}_{-}\right)\right]\\ &\times\exp\left[\frac{i\pi}{4}(-1)^{k}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\begin{pmatrix}|\,0\rangle\otimes|\,j,j\rangle\\ |\,0\rangle\otimes|\,j,j\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,j\rangle\end{pmatrix},\quad k=0,1,\end{split} \tag{4.76}\]
which interpolates between the vector coherent states over the matrix domain associated to the quantum harmonic oscillator and the intelligent vector states associated to the generalized annihilation operator \(\hat{a}+\mathcal{B}_{+}\hat{\mathfrak{J}}_{-}\), whose matrix eigenvalues are given by \(\bar{M}\).
### More general sets of vector algebra eigenstates
When we are interested in physical systems whose Hamiltonian is formed from a suitable product of the generalized annihilation operator (4.9) and its adjoint, both verifying the commutation relation (4.10), and we want all coefficients on the left side of this last expression to be different from \(0\), we again need to distinguish between several possibilities, all of them requiring \(\beta_{3}\neq 0\). Indeed, in this case we need \(\|\beta_{-}\|^{2}-\|\beta_{+}\|^{2}\neq 0\) and \(\beta_{3}\beta_{+}^{*}-\beta_{3}^{*}\beta_{-}\neq 0\). Thus, we have
\[\begin{array}{ccccc}(i)&\beta_{+}\neq 0,&\beta_{-}=0,&\beta_{3}\neq 0,& \Rightarrow b\neq 0\\ (ii)&\beta_{+}=0,&\beta_{-}\neq 0,&\beta_{3}\neq 0,&\Rightarrow b\neq 0\\ (iii)&\beta_{+}=R_{+}e^{i\theta_{+}}\neq 0,&\beta_{-}=R_{-}e^{i\theta_{-}}\neq 0,& \beta_{3}=R_{3}e^{i\theta_{3}}\neq 0,&R_{+}\neq R_{-}\\ \text{and}&R_{3}\neq 2\sqrt{R_{+}R_{-}}&\text{or}&\theta_{3}\neq\frac{ \theta_{+}+\theta_{-}}{2}+\left(k+\frac{1}{2}\right)\pi,&k=0,1,&\Rightarrow b \neq 0\\ (iv)&\beta_{+}=R_{+}e^{i\theta_{+}}\neq 0,&\beta_{-}=R_{-}e^{i\theta_{-}}\neq 0,& \beta_{3}=R_{3}e^{i\theta_{3}}\neq 0,&R_{+}\neq R_{-}\\ &R_{3}=2\sqrt{R_{+}R_{-}}&\text{and}&\theta_{3}=\frac{\theta_{+}+\theta_{-}}{2 }+\left(k+\frac{1}{2}\right)\pi,&k=0,1,&\Rightarrow b=0\\ \end{array} \tag{4.77}\]
\[\rho e^{iv}= R_{3}(R_{+}+R_{-})e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\ e^{i(k+\frac{1}{2})\pi},\quad k=0,1.\]
The cases listed in (i), (ii) and (iii), where the parameter \(b\neq 0\), will be treated in the next subsection together with the similar ones mentioned above. The case listed in (iv), where \(b=0\), will be solved here using the technique of \(su(2)\) operator algebra transformations used in the last subsection.
As we did in section 4.3.2, the commutator (4.58) can be written in the form
\[\left[\mathbb{A},\mathbb{A}^{\dagger}\right]=\hat{I}-2\rho\frac{R_{+}+R_{-}}{R _{3}}\hat{\mathbb{J}}_{3}, \tag{4.78}\]
where now
\[\begin{split}\hat{\mathbb{J}}_{3}=&\ \frac{R_{3}(R_{+}-R_{-})}{\rho}\hat{J}_{3}-\frac{R_{3}}{2(R_{+}+R_{-})}\left[e^{iv}\hat{J}_{+}+e^{-iv}\hat{J}_{-}\right]\\ =&\ \frac{(R_{+}-R_{-})}{(R_{+}+R_{-})}\hat{J}_{3}-\frac{\sqrt{R_{+}R_{-}}}{(R_{+}+R_{-})}\left[e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}e^{i\left(k+\frac{1}{2}\right)\pi}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}e^{-i\left(k+\frac{1}{2}\right)\pi}\hat{J}_{-}\right],\quad k=0,1.\end{split} \tag{4.79}\]
On the other hand, the operator \(\mathbb{A}\) can again be written in the form:
\[\mathbb{A}=a+\mathcal{B}_{+}\hat{\mathbb{J}}_{-}, \tag{4.80}\]
where now
\[\mathcal{B}_{+}=(R_{+}+R_{-})e^{\frac{i}{2}(\theta_{+}+\theta_{-})} \tag{4.81}\]
and
\[\hat{\mathbb{J}}_{-}=\frac{1}{(R_{+}+R_{-})}\left[R_{+}\,e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}+R_{-}\,e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+2\sqrt{R_{+}R_{-}}\,e^{i\left(k+\frac{1}{2}\right)\pi}\hat{J}_{3}\right],\quad k=0,1. \tag{4.82}\]
Thus, as before, by defining \(\hat{\mathbb{J}}_{+}=\hat{\mathbb{J}}_{-}^{\dagger}\), we realize that the transformed operators verify the \(su(2)\) algebra. Then the problem of computing the algebra eigenstates reduces to that of the previous sections.
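This closure is easy to confirm numerically. The following minimal sketch of ours checks the \(su(2)\) commutation relations for the transformed generators (4.79) and (4.82) in the \(j=\frac{1}{2}\) representation; the values of \(R_{+},R_{-}\), the phases and \(k\) are arbitrary test choices, not quantities fixed by the text.

```python
import numpy as np

# j = 1/2 matrices of the original su(2) generators.
Jp = np.array([[0, 1], [0, 0]], dtype=complex)   # J_+
Jm = Jp.conj().T                                  # J_-
J3 = np.diag([0.5, -0.5]).astype(complex)         # J_3

Rp, Rm, tp, tm, k = 1.3, 0.4, 0.7, -0.2, 0        # arbitrary test values
d = (tp - tm) / 2
S, g = Rp + Rm, np.sqrt(Rp * Rm)
ph = np.exp(1j * (k + 0.5) * np.pi)               # e^{i(k+1/2)pi}

# Transformed lowering operator, eq. (4.82), and transformed J_3, eq. (4.79).
Fm = (Rp * np.exp(1j * d) * Jm + Rm * np.exp(-1j * d) * Jp + 2 * g * ph * J3) / S
Fp = Fm.conj().T
F3 = ((Rp - Rm) / S) * J3 - (g / S) * (np.exp(-1j * d) * ph * Jp
                                       + np.exp(1j * d) * np.conj(ph) * Jm)

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(Fp, Fm), 2 * F3)   # [J_+, J_-] = 2 J_3
assert np.allclose(comm(F3, Fp), Fp)       # [J_3, J_+] = +J_+
assert np.allclose(comm(F3, Fm), -Fm)      # [J_3, J_-] = -J_-
print("su(2) commutation relations verified")
```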
Using again equation (4.56), but taking into account the new operators and basis states, the solution of the eigenvalue equation (4.1), in this particular case, is given by
\[\mid\Psi\rangle_{[K]}^{j}=\mathcal{N}^{-\frac{1}{2}}\exp\left[\bar{M}\hat{a}^{\dagger}-\bar{M}^{\dagger}\hat{a}\right]\exp\left[-(\hat{a}^{\dagger}I+\bar{M}^{\dagger})\mathcal{B}_{+}\hat{\mathbb{J}}_{-}\right]\left(\begin{array}{c}\mid 0\rangle\otimes\mid\bar{j},\bar{m}_{1}\rangle\\ \mid 0\rangle\otimes\mid\bar{j},\bar{m}_{2}\rangle\\ \vdots\\ \mid 0\rangle\otimes\mid\bar{j},\bar{m}_{K}\rangle\end{array}\right), \tag{4.83}\]
where \(\mid\bar{j},\bar{m}_{r}\rangle\), \(r=1,\cdots,K\), are eigenstates of \(\hat{\mathbb{J}}_{3}\) associated to the eigenvalues \(\bar{m}_{r}\), respectively.
To know the connection of these last states with the original states \(\mid j,m\rangle\), we have to solve the following \(su(2)\) algebra eigenstate equation for \(\hat{\mathbb{J}}_{3}\) given in (4.79):
\[[\alpha_{-}\hat{J}_{+}+\alpha_{+}\hat{J}_{-}+\alpha_{3}\hat{J}_{3}]\mid\bar{j},\bar{m}\rangle=\bar{m}\mid\bar{j},\bar{m}\rangle, \tag{4.84}\]
where
\[\alpha_{-}=-\frac{\sqrt{R_{+}R_{-}}}{(R_{+}+R_{-})}e^{-\frac{i}{2}(\theta_{+ }-\theta_{-})}e^{i\left(k+\frac{1}{2}\right)\pi},\quad\alpha_{+}=-\frac{\sqrt{ R_{+}R_{-}}}{(R_{+}+R_{-})}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}e^{-i \left(k+\frac{1}{2}\right)\pi},\quad k=0,1, \tag{4.85}\]
and
\[\alpha_{3}=\frac{R_{+}-R_{-}}{R_{+}+R_{-}}. \tag{4.86}\]
Then using the results of appendix B, in particular, equations (B2) and (B3), we find
\[b=\sqrt{4\alpha_{+}\alpha_{-}+\alpha_{3}^{2}}=1,\quad\mbox{ which implies}\quad\bar{m}=m, \tag{4.87}\]
\[\frac{\bar{\theta}}{2}=\arctan\left(\sqrt{\frac{b-\alpha_{3}}{b+\alpha_{3}}} \right)=\arctan\left(\sqrt{\frac{R_{-}}{R_{+}}}\right)\quad\mbox{and}\quad e^ {i\bar{\phi}}=\sqrt{\frac{\alpha_{+}}{\alpha_{-}}}=e^{\frac{i}{2}(\theta_{+}- \theta_{-})}e^{-i\left(k+\frac{1}{2}\right)\pi},\quad k=0,1, \tag{4.88}\]
from where we extract the unitary operator
\[T=\exp\left[i\arctan\left(\sqrt{\frac{R_{-}}{R_{+}}}\right)(-1)^{k}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right],\quad k=0,1. \tag{4.89}\]
Thus, following the reasoning of appendix B, we finally conclude that the transformed states \(\mid\bar{j},\bar{m}\rangle=T\mid j,m\rangle\), \(m=-j,\cdots,j\), are eigenstates of \(\hat{\mathbb{J}}_{3}\), corresponding to the eigenvalue \(\bar{m}=m\).
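The value \(b=1\) quoted in (4.87) can be checked in one line (our remark): the phase factors in \(\alpha_{+}\alpha_{-}\) cancel, and

\[4\alpha_{+}\alpha_{-}+\alpha_{3}^{2}=\frac{4R_{+}R_{-}}{(R_{+}+R_{-})^{2}}+\frac{(R_{+}-R_{-})^{2}}{(R_{+}+R_{-})^{2}}=\frac{(R_{+}+R_{-})^{2}}{(R_{+}+R_{-})^{2}}=1,\]

so the transformed label \(\bar{m}\) indeed coincides with \(m\), as stated.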
Finally, inserting these last states into (4.83) and using (4.89) for \(T\), we arrive at the generalized vector states
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\exp\left[\bar{M}\hat{a}^{\dagger}-\bar{M}^{\dagger}\hat{a}\right]\exp\left[-(\hat{a}^{\dagger}I+\bar{M}^{\dagger})\mathcal{B}_{+}\hat{\mathbb{J}}_{-}\right]\\ &\times\exp\left[i\arctan\left(\sqrt{\frac{R_{-}}{R_{+}}}\right)(-1)^{k}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)I\right]\begin{pmatrix}|\,0\rangle\otimes|\,j,m_{1}\rangle\\ |\,0\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix},\quad k=0,1,\end{split} \tag{4.90}\]
which constitute a set of \((2j+1)^{K}\) linearly independent vector states.
#### 4.4.1 Generalized disentangled form of new vector coherent states
If we choose \(\bar{m}_{r}=-\bar{j}=-j,\forall r=1,\cdots,K\) the vector states in (4.83) can be written in the form
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \frac{1}{\sqrt{K}}\exp\left[\bar{M}\hat{a}^{\dagger}-\bar{M}^{\dagger}\hat{a}\right]\\ &\times\exp\left[i\arctan\left(\sqrt{\frac{R_{-}}{R_{+}}}\right)(-1)^{k}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)I\right]\begin{pmatrix}|\,0\rangle\otimes|\,j,-j\rangle\\ |\,0\rangle\otimes|\,j,-j\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,-j\rangle\end{pmatrix},\quad k=0,1,\end{split} \tag{4.91}\]
that generalize (4.65), and represent the product of the canonical harmonic oscillator coherent states and the Perelomov type \(su(2)\) coherent states, in the matrix domain.
#### 4.4.2 Generalized disentangled form of the vector states II
On the other hand, if we choose \(\bar{m}_{r}=\bar{j}=j,\forall r=1,\cdots,K\) the vector states in (4.83) can be written in the form
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\exp\left[\bar{M}\hat{a}^{\dagger}-\bar{M}^{\dagger}\hat{a}\right]\\ &\times\exp\left[-I\hat{a}^{\dagger}\mathcal{B}_{+}\hat{\mathbb{J}}_{-}\right]\exp\left[\frac{\arctan\left(\|\mathcal{B}_{+}\|\sqrt{\bar{M}\bar{M}^{\dagger}}\right)}{\|\mathcal{B}_{+}\|\sqrt{\bar{M}\bar{M}^{\dagger}}}\left(\bar{M}\mathcal{B}_{+}^{*}\hat{\mathbb{J}}_{+}-\bar{M}^{\dagger}\mathcal{B}_{+}\hat{\mathbb{J}}_{-}\right)\right]\\ &\times\exp\left[i\arctan\left(\sqrt{\frac{R_{-}}{R_{+}}}\right)(-1)^{k}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\begin{pmatrix}|\,0\rangle\otimes|\,j,j\rangle\\ |\,0\rangle\otimes|\,j,j\rangle\\ \vdots\\ |\,0\rangle\otimes|\,j,j\rangle\end{pmatrix},\quad k=0,1,\end{split} \tag{4.92}\]
which interpolates between the vector coherent states over the matrix domain associated to the harmonic oscillator annihilation operator \(\hat{a}\) and the algebra eigenstates of the generalized operator \(\hat{a}+\mathcal{B}_{+}\hat{\mathbb{J}}_{-}\), whose associated matrix eigenvalues are given by \(\bar{M}\).
### The cases with \(b\neq 0\)
In this section we are going to compute the vector algebra eigenstates for the sets of generators from the \(su(2)\) sector whose parameter \(b\) is different from zero, except for the generators that give rise to the canonical case, whose vector coherent states have already been calculated in section 4.1.1. In all the remaining non-canonical cases, the operator \(\hat{T}\) that acts on the basis states spanning the complete space of the \(j\) irreducible representation of the \(su(2)\) algebra is not unitary. In any case, this fact does not greatly affect the procedure for obtaining the desired states; the process is identical to the one used in the canonical case. Furthermore, despite the non-unitary character of the \(\hat{T}\) operators, there are some situations, depending on the choice of integration constants, in which the states can be written in the form of a unitary operator acting on the ground state; in other words, it is still possible to construct some vector coherent states. On the other hand, the non-unitary attribute of \(\hat{T}\) provides us with an adequate metric that can be used to construct some linear and quadratic \(su(2)\) pseudo-Hermitian Hamiltonians [19] in the context of this article.
According to the results of appendices A and B, the solutions of the eigenvalue equation
\[[\hat{a}+\vec{\beta}\cdot\vec{\hat{J}}\,]\mid\bar{\psi}\rangle^{j}_{[s]}=\tilde{\lambda}_{[s]}\mid\bar{\psi}\rangle^{j}_{[s]} \tag{4.93}\]
are given by
\[\mid\bar{\psi}\rangle^{j}_{[s]}=\mid\tilde{\lambda}_{[s]}-mb\rangle\otimes\hat {T}\mid j,m\rangle,\quad m=-j,\cdots,j. \tag{4.94}\]
Using the fact that \(\hat{D}(\tilde{\lambda}_{[s]}-mb)=\hat{D}(\tilde{\lambda}_{[s]})\ \hat{D}(-mb)\ \exp \left[\frac{1}{2}\left(\tilde{\lambda}_{[s]}b^{*}-\tilde{\lambda}^{*}_{[s]}b \right)m\right]\), we can see that this last equation can be written in the form
\[\mid\bar{\psi}\rangle^{j}_{[s]}=\hat{D}(\tilde{\lambda}_{[s]})\ \hat{T}\ \exp \left[\frac{1}{2}\left(\tilde{\lambda}_{[s]}b^{*}-\tilde{\lambda}^{*}_{[s]}b \right)\hat{J}_{3}\right]\mid-mb\rangle\otimes\mid j,m\rangle,\quad m=-j, \cdots,j. \tag{4.95}\]
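The factorization used just above is simply the Weyl composition law \(\hat{D}(\alpha)\hat{D}(\beta)=\hat{D}(\alpha+\beta)\,e^{\frac{1}{2}(\alpha\beta^{*}-\alpha^{*}\beta)}\) read backwards: setting \(\alpha=\tilde{\lambda}_{[s]}\) and \(\beta=-mb\) gives

\[\hat{D}(\tilde{\lambda}_{[s]})\,\hat{D}(-mb)=\hat{D}(\tilde{\lambda}_{[s]}-mb)\,\exp\left[-\frac{m}{2}\left(\tilde{\lambda}_{[s]}b^{*}-\tilde{\lambda}_{[s]}^{*}b\right)\right],\]

which rearranges to the identity quoted before (4.95).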
Thus, the general solution of the eigenvalue equation (4.93) is
\[\mid\bar{\psi}\rangle^{j}_{[s]}=\hat{D}(\tilde{\lambda}_{[s]})\ \hat{T}\ \exp \left[\frac{1}{2}\left(\tilde{\lambda}_{[s]}b^{*}-\tilde{\lambda}^{*}_{[s]}b \right)\hat{J}_{3}\right]\sum_{m=-j}^{j}\bar{\varphi}^{j}_{[s]m}(0)\mid-mb \rangle\otimes\mid j,m\rangle \tag{4.96}\]
and by consequent the \((2j+1)^{K}\) solutions of equation (4.1), depending on the \(\bar{\varphi}^{j}_{[s]m}\) parameter values are given by
\[\mid\Psi\rangle^{j}_{[K]}=\mathcal{N}^{-\frac{1}{2}}\ \hat{D}(\bar{M})\ \hat{T}\ \exp \left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{*}b\right)\hat{J}_{3}\right]\ \bar{U}\ \left(\begin{array}{c}\sum_{m=-j}^{j}\bar{ \varphi}^{j}_{[1]m}(0)\mid-mb\rangle\otimes\mid j,m\rangle\\ \sum_{m=-j}^{j}\bar{\varphi}^{j}_{[2]m}(0)\mid-mb\rangle\otimes\mid j,m\rangle \\ \vdots\\ \sum_{m=-j}^{j}\bar{\varphi}^{j}_{[K-1]m}(0)\mid-mb\rangle\otimes\mid j,m \rangle\\ \sum_{m=-j}^{j}\bar{\varphi}^{j}_{[K]m}(0)\mid-mb\rangle\otimes\mid j,m\rangle \end{array}\right). \tag{4.97}\]
By choosing the integration constant in the form
\[\bar{\varphi}^{j}_{[s]m}=\sum_{r=1}^{K}\bar{U}^{\dagger}_{[s][r]}\delta_{m_{[r ]}m} \tag{4.98}\]
we get the elementary set of solutions
\[\mid\Psi\rangle^{j}_{[K]}=\mathcal{N}^{-\frac{1}{2}}\ \hat{D}(\bar{M})\ \hat{T}\ \exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\left(\begin{array}{c}\mid-m_{1}b\rangle\otimes\mid j,m_{1}\rangle\\ \mid-m_{2}b\rangle\otimes\mid j,m_{2}\rangle\\ \vdots\\ \mid-m_{K-1}b\rangle\otimes\mid j,m_{K-1}\rangle\\ \mid-m_{K}b\rangle\otimes\mid j,m_{K}\rangle\end{array}\right). \tag{4.99}\]
#### 4.5.1 The case \([\mathbb{A},\mathbb{A}^{\dagger}]=\hat{I}+2x\hat{J}_{3}\)
- When \(x>0\) and with the choice of parameters \(\beta_{+}=\sqrt{x}\sinh\alpha e^{i\theta_{+}}\), \(\beta_{-}=\sqrt{x}\cosh\alpha e^{i\theta_{-}}\) and \(\beta_{3}=0\), we have \(b=e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\sqrt{2x\cosh(2\alpha)}\). Thus, from equations (B2) and (B3) we deduce
\[\hat{T}=\exp\left[-\frac{\pi}{4}\left(\sqrt{\coth(\alpha)}\,e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\tanh(\alpha)}\,e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]. \tag{4.100}\]
Inserting this last result into equation (4.99), we obtain the vector algebra eigenstates of the element \(\mathbb{A}=\hat{a}+\sqrt{x}\sinh\alpha\,e^{i\theta_{+}}\hat{J}_{-}+\sqrt{x}\cosh\alpha\,e^{i\theta_{-}}\hat{J}_{+}\), that is,
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\,\hat{D}(\bar{M})\exp\left[-\frac{\pi}{4}\left(\sqrt{\coth(\alpha)}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\tanh(\alpha)}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\begin{pmatrix}|-m_{1}b\rangle\otimes|\,j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |-m_{K-1}b\rangle\otimes|\,j,m_{K-1}\rangle\\ |-m_{K}b\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix}. \tag{4.101}\end{split}\]
From this set of vector states we can extract two classes of vector coherent states. Indeed, if we choose \(m_{1}=m_{2}=\cdots=m_{K}=-j\), suitably express the exponential operator \(\hat{T}\) as a product of three exponential operators depending on \(\hat{J}_{+}\), \(\hat{J}_{3}\) and \(\hat{J}_{-}\), respectively, and then perform the action of this ensemble of exponential operators on the state \(|\,j,-j\rangle\), we obtain the equivalent exponential operator action on this last state: \(\exp\left[-\sqrt{\coth(\alpha)}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}\right]|\,j,-j\rangle\). Finally, by replacing this last non-unitary exponential operator by its unitary equivalent, we obtain the normalized vector coherent states
\[|\,\Psi\rangle_{[K]}^{j}=\frac{1}{\sqrt{K}}\hat{D}(\bar{M})\exp\left[-\arctan\left(\sqrt{\coth(\alpha)}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\begin{pmatrix}|\,jb\rangle\otimes|\,j,-j\rangle\\ |\,jb\rangle\otimes|\,j,-j\rangle\\ \vdots\\ |\,jb\rangle\otimes|\,j,-j\rangle\end{pmatrix}. \tag{4.102}\]
In the same way, if we choose \(m_{1}=m_{2}=\cdots=m_{K}=j\), and perform the same process that we just described, we get a second set of vector coherent states, that is,
\[|\,\Psi\rangle_{[K]}^{j}=\frac{1}{\sqrt{K}}\hat{D}(\bar{M})\exp\left[-\arctan\left(\sqrt{\tanh(\alpha)}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\begin{pmatrix}|-jb\rangle\otimes|\,j,j\rangle\\ |-jb\rangle\otimes|\,j,j\rangle\\ \vdots\\ |-jb\rangle\otimes|\,j,j\rangle\end{pmatrix}. \tag{4.103}\]
- When \(x<0\) and with the choice of parameters \(\beta_{+}=\sqrt{-x}\cosh\alpha e^{i\theta_{+}}\), \(\beta_{-}=\sqrt{-x}\sinh\alpha e^{i\theta_{-}}\) and \(\beta_{3}=0\), we have \(b=e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\sqrt{2\left|\,x\,\right|\cosh(2\alpha)}\). The expressions for the corresponding algebra eigenstates in this case can be extracted from those of the previous one by interchanging \(\sinh\alpha\) with \(\cosh\alpha\) and vice versa. Doing that, we get
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\,\hat{D}(\bar{M})\exp\left[-\frac{\pi}{4}\left(\sqrt{\tanh(\alpha)}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\coth(\alpha)}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\begin{pmatrix}|-m_{1}b\rangle\otimes|\,j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |-m_{K-1}b\rangle\otimes|\,j,m_{K-1}\rangle\\ |-m_{K}b\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix}. \tag{4.104}\end{split}\]
From this last expression, as before, we can also extract two classes of vector coherent states. Indeed, if we choose \(m_{1}=m_{2}=\cdots=m_{K}=-j\), following the same procedure as above, we get
\[|\,\Psi\rangle_{[K]}^{j}=\frac{1}{\sqrt{K}}\hat{D}(\bar{M})\exp\left[-\arctan\left(\sqrt{\tanh(\alpha)}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\begin{pmatrix}|\,jb\rangle\otimes|\,j,-j\rangle\\ |\,jb\rangle\otimes|\,j,-j\rangle\\ \vdots\\ |\,jb\rangle\otimes|\,j,-j\rangle\end{pmatrix}. \tag{4.105}\]
On the other hand, if we choose \(m_{1}=m_{2}=\cdots=m_{K}=j\), we get
\[|\,\Psi\rangle_{[K]}^{j}=\frac{1}{\sqrt{K}}\hat{D}(\bar{M})\exp\left[-\arctan\left(\sqrt{\coth(\alpha)}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\begin{pmatrix}|-jb\rangle\otimes|\,j,j\rangle\\ |-jb\rangle\otimes|\,j,j\rangle\\ \vdots\\ |-jb\rangle\otimes|\,j,j\rangle\end{pmatrix}. \tag{4.106}\]
#### 4.5.2 The case \([\mathbb{A},\mathbb{A}^{\dagger}]=\hat{I}+\rho e^{iv}\hat{J}_{+}+\rho e^{-iv}\hat{J}_{-}\)
In the case where \(\beta_{+}=Re^{i\theta_{+}}\), \(\beta_{-}=Re^{i\theta_{-}}\) and \(\beta_{3}=R_{3}e^{i\theta_{3}}\), with \(\theta_{3}\neq\frac{1}{2}(\theta_{+}+\theta_{-})+k\pi\), \(k=0,1,\cdots\), and \(R_{3}\neq 2R\) or \(\theta_{3}\neq\frac{1}{2}(\theta_{+}+\theta_{-})+(k+\frac{1}{2})\pi\), we have \(b=2R\,e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\sqrt{1+\left[\frac{R_{3}}{2R}e^{i\left(\theta_{3}-\frac{1}{2}(\theta_{+}+\theta_{-})\right)}\right]^{2}}\neq 0\). Thus, from equation (B3) we can compute the parameters \(\bar{\theta}\) and \(\bar{\phi}\), which serve to construct the operator \(\hat{T}\); we get
\[\tan\left(\frac{\bar{\theta}}{2}\right)=\sqrt{\frac{b-\beta_{3}}{b+\beta_{3}} }=\sqrt{1+\gamma^{2}}-\gamma,\quad e^{i\bar{\phi}}=e^{\frac{i}{2}(\theta_{+}- \theta_{-})}, \tag{4.107}\]
where \(\gamma=\frac{R_{3}}{2R}e^{i\left(\theta_{3}-\frac{1}{2}(\theta_{+}+\theta_{-})\right)}\). From these last relations we can see that \(\bar{\phi}\) is a real parameter whereas \(\bar{\theta}\) is, in general, a complex one. Indeed, if we write \(\gamma=\rho e^{i\varphi}\), where \(\rho=\frac{R_{3}}{2R}\) and \(\varphi=\theta_{3}-\frac{1}{2}(\theta_{+}+\theta_{-})\neq k\pi\), \(k=0,1,\cdots\), and insert it into (4.107), after some manipulations we obtain
\[\begin{split}\tan\left(\frac{\bar{\theta}}{2}\right)=&\ \left[\sqrt{\frac{\sqrt{1+\rho^{4}+2\rho^{2}\cos(2\varphi)}+(1+\rho^{2}\cos(2\varphi))}{2}}-\rho\cos(\varphi)\right]\\ &+i\left[\sqrt{\frac{\sqrt{1+\rho^{4}+2\rho^{2}\cos(2\varphi)}-(1+\rho^{2}\cos(2\varphi))}{2}}-\rho\sin(\varphi)\right]. \end{split} \tag{4.108}\]
We note that if we write \(\tan\left(\frac{\bar{\theta}}{2}\right)=\mathcal{Z}=\mathcal{Z}_{Re}+i\mathcal{Z}_{Im}\), where \(\mathcal{Z}_{Re}\) and \(\mathcal{Z}_{Im}\) are the real and imaginary parts of the complex number \(\mathcal{Z}\), respectively, then formally we have
\[\frac{\bar{\theta}}{2}=\frac{i}{2}\ln\left[\frac{1-i\mathcal{Z}}{1+i \mathcal{Z}}\right]=-\frac{\delta}{2}+\frac{i}{2}\ln\left[\frac{\sqrt{\left(1 -\|\mathcal{Z}\|^{2}\right)^{2}+4\mathcal{Z}_{Re}^{2}}}{1-2\mathcal{Z}_{Im}+ \|\mathcal{Z}\|^{2}}\right], \tag{4.109}\]
where \(\delta=\arctan\left[\frac{2\mathcal{Z}_{Re}}{\|\mathcal{Z}\|^{2}-1}\right]\) when \(\|\mathcal{Z}\|\neq 1\), \(\delta=-\frac{\pi}{2}\) if \(\|\mathcal{Z}\|=1\) and \(\mathcal{Z}_{Re}>0\), and \(\delta=\frac{\pi}{2}\) if \(\|\mathcal{Z}\|=1\) and \(\mathcal{Z}_{Re}<0\); here \(\mathcal{Z}_{Re}\) and \(\mathcal{Z}_{Im}\) represent the real and imaginary parts of the right side of equation (4.108), respectively. Thus, we can show that the general structure of the operator \(\hat{T}\) in this case is given by the product of two exponential operators, one of them unitary and the other one Hermitian:
\[\begin{split}\hat{T}=&\ \exp\left[\frac{\delta}{2}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[-\frac{i}{2}\ln\left(\frac{\sqrt{\left(1-\|\mathcal{Z}\|^{2}\right)^{2}+4\mathcal{Z}_{Re}^{2}}}{1-2\mathcal{Z}_{Im}+\|\mathcal{Z}\|^{2}}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]. \tag{4.110}\end{split}\]
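The branch decomposition (4.109) is just the complex arctangent written out, since \(\arctan\mathcal{Z}=\frac{i}{2}\ln\left[\frac{1-i\mathcal{Z}}{1+i\mathcal{Z}}\right]\). A minimal numerical sketch of ours confirms this; the value of \(\mathcal{Z}\) below is an arbitrary complex test choice away from the branch points \(\pm i\).

```python
import cmath

# Check that theta_bar/2 = arctan(Z) when tan(theta_bar/2) = Z.
Z = 0.6 - 0.35j                                     # arbitrary complex test value
lhs = (1j / 2) * cmath.log((1 - 1j * Z) / (1 + 1j * Z))
assert abs(lhs - cmath.atan(Z)) < 1e-12             # matches the complex arctangent
assert abs(cmath.tan(lhs) - Z) < 1e-12              # and inverts the tangent
print("theta_bar/2 = arctan(Z) verified")
```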
By inserting this operator into the general expression (4.97), or into (4.99), we finally obtain the vector algebra eigenstates of the operator \(\mathbb{A}=\hat{a}+Re^{i\theta_{+}}\hat{J}_{-}+Re^{i\theta_{-}}\hat{J}_{+}+R_{3}e^{i\theta_{3}}\hat{J}_{3}\), whose parameters satisfy the conditions given in this subsection:
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\,\hat{D}(\bar{M})\exp\left[\frac{\delta}{2}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[-\frac{i}{2}\ln\left(\frac{\sqrt{(1-\|\mathcal{Z}\|^{2})^{2}+4\mathcal{Z}_{Re}^{2}}}{1-2\mathcal{Z}_{Im}+\|\mathcal{Z}\|^{2}}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\begin{pmatrix}|-m_{1}b\rangle\otimes|\,j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |-m_{K-1}b\rangle\otimes|\,j,m_{K-1}\rangle\\ |-m_{K}b\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix}. \tag{4.111}\end{split}\]
If in (4.111 ) we choose \(m_{1}=m_{2}=\cdots=m_{K}=-j\), and proceed as in the previous sections, we get the set of normalized vector coherent states:
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \frac{1}{\sqrt{K}}\hat{D}(\bar{M})\\ &\times\exp\left[-\arctan\left(\|\mathcal{Z}\|\right)\left(e^{-i\left(\frac{\theta_{+}-\theta_{-}}{2}-\arctan\left(\frac{\mathcal{Z}_{Im}}{\mathcal{Z}_{Re}}\right)\right)}\hat{J}_{+}-e^{i\left(\frac{\theta_{+}-\theta_{-}}{2}-\arctan\left(\frac{\mathcal{Z}_{Im}}{\mathcal{Z}_{Re}}\right)\right)}\hat{J}_{-}\right)\right]\\ &\times\begin{pmatrix}|\,jb\rangle\otimes|\,j,-j\rangle\\ |\,jb\rangle\otimes|\,j,-j\rangle\\ \vdots\\ |\,jb\rangle\otimes|\,j,-j\rangle\end{pmatrix}, \tag{4.113}\end{split}\]

where \(\arctan\left(\frac{\mathcal{Z}_{Im}}{\mathcal{Z}_{Re}}\right)\) is replaced by \(\frac{\pi}{2}\) when \(\mathcal{Z}_{Re}=0\).
On the other hand, if we choose \(m_{1}=m_{2}=\cdots=m_{K}=j\), and again we proceed as in the previous sections, we get a second set of normalized vector coherent states:
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \frac{1}{\sqrt{K}}\hat{D}(\bar{M})\\ &\times\exp\left[-\arctan\left(\|\mathcal{Z}\|\right)\left(e^{-i\left(\frac{\theta_{+}-\theta_{-}}{2}+\arctan\left(\frac{\mathcal{Z}_{Im}}{\mathcal{Z}_{Re}}\right)\right)}\hat{J}_{+}-e^{i\left(\frac{\theta_{+}-\theta_{-}}{2}+\arctan\left(\frac{\mathcal{Z}_{Im}}{\mathcal{Z}_{Re}}\right)\right)}\hat{J}_{-}\right)\right]\\ &\times\begin{pmatrix}|-jb\rangle\otimes|\,j,j\rangle\\ |-jb\rangle\otimes|\,j,j\rangle\\ \vdots\\ |-jb\rangle\otimes|\,j,j\rangle\end{pmatrix}. \tag{4.114}\end{split}\]
**Some special cases.** For example, in the special case where \(\varphi=\frac{\pi}{2}\) and \(\rho>1\), we have \(\mathcal{Z}=i(\sqrt{\rho^{2}-1}-\rho)\), which implies \(\delta=0\), and consequently \(\frac{\bar{\theta}}{2}=-\frac{i}{2}\ln\left(\sqrt{\frac{\rho+1}{\rho-1}}\right)=-\frac{i}{2}\,\ln\left(\sqrt{\frac{R_{3}+2R}{R_{3}-2R}}\right)\) and \(b=i\,e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\sqrt{R_{3}^{2}-4R^{2}}\). Thus, the vector eigenstates of \(\mathbb{A}=\hat{a}+Re^{i\theta_{+}}\hat{J}_{-}+Re^{i\theta_{-}}\hat{J}_{+}+iR_{3}e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\hat{J}_{3}\) with matrix eigenvalue \(\bar{M}\) are given by
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\,\hat{D}(\bar{M})\exp\left[\frac{i}{2}\ln\left(\sqrt{\frac{R_{3}+2R}{R_{3}-2R}}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\begin{pmatrix}|-m_{1}b\rangle\otimes|\,j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |-m_{K-1}b\rangle\otimes|\,j,m_{K-1}\rangle\\ |-m_{K}b\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix}. \tag{4.116}\end{split}\]
On the other hand, when \(\varphi=\frac{\pi}{2}\) and \(0<\rho<1\), we have \(\mathcal{Z}=\sqrt{1-\rho^{2}}-i\rho\), which implies \(\delta=-\frac{\pi}{2}\), and consequently \(\frac{\bar{\theta}}{2}=\frac{\pi}{4}-\frac{i}{2}\ln\left(\sqrt{\frac{1+\rho}{1-\rho}}\right)=\frac{\pi}{4}-\frac{i}{2}\ln\left(\sqrt{\frac{2R+R_{3}}{2R-R_{3}}}\right)\) and \(b=e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\sqrt{4R^{2}-R_{3}^{2}}\). Thus, the vector eigenstates of \(\mathbb{A}=\hat{a}+Re^{i\theta_{+}}\hat{J}_{-}+Re^{i\theta_{-}}\hat{J}_{+}+iR_{3}e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\hat{J}_{3}\) with matrix eigenvalue \(\bar{M}\) are given by
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\ \hat{D}(\bar{M})\exp\left[-\frac{\pi}{4}\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{i}{2}\ln\left(\sqrt{\frac{2R+R_{3}}{2R-R_{3}}}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\begin{pmatrix}|-m_{1}b\rangle\otimes|\,j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |-m_{K-1}b\rangle\otimes|\,j,m_{K-1}\rangle\\ |-m_{K}b\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix}. \tag{4.117}\end{split}\]
From these last two expressions we can also extract the corresponding sets of vector coherent states. Indeed, putting \(m_{1}=m_{2}=\cdots=m_{K}=\mp j\) we obtain the desired vector states:
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \frac{1}{\sqrt{K}}\hat{D}(\bar{M})\\ &\times\exp\left[\mp i\arctan\left(\frac{R_{3}-\sqrt{R_{3}^{2}-4R^{2}}}{2R}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\begin{pmatrix}|\,\pm jb\rangle\otimes|\,j,\mp j\rangle\\ |\,\pm jb\rangle\otimes|\,j,\mp j\rangle\\ \vdots\\ |\,\pm jb\rangle\otimes|\,j,\mp j\rangle\end{pmatrix}, \tag{4.119}\end{split}\]
when \(R_{3}>2R\), and
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \frac{1}{\sqrt{K}}\hat{D}(\bar{M})\\ &\times\exp\left[-\frac{\pi}{4}\left(e^{-i\left(\frac{\theta_{+}-\theta_{-}}{2}+\arctan\left(\frac{R_{3}}{\sqrt{4R^{2}-R_{3}^{2}}}\right)\right)}\hat{J}_{+}-e^{i\left(\frac{\theta_{+}-\theta_{-}}{2}+\arctan\left(\frac{R_{3}}{\sqrt{4R^{2}-R_{3}^{2}}}\right)\right)}\hat{J}_{-}\right)\right]\begin{pmatrix}|\,\pm jb\rangle\otimes|\,j,\mp j\rangle\\ |\,\pm jb\rangle\otimes|\,j,\mp j\rangle\\ \vdots\\ |\,\pm jb\rangle\otimes|\,j,\mp j\rangle\end{pmatrix}\end{split}\]
when \(0<R_{3}<2R\).
#### 4.5.3 The case \([\mathbb{A},\mathbb{A}^{\dagger}]=\hat{I}+2x\hat{J}_{3}+\rho e^{iv}\hat{J}_{+}+\rho e^{-iv}\hat{J}_{-},\quad x\neq 0\) and \(\rho>0\)
In this subsection we will study the situation where the commutator between \(\hat{\mathbb{A}}\) and \(\hat{\mathbb{A}}^{\dagger}\) is equal to the identity plus a linear combination of the \(su(2)\) algebra generators with non-zero coefficients. Here, three cases are possible, they are:
- When \(\beta_{-}=R_{-}e^{i\theta_{-}}\neq 0\), \(\beta_{+}=0\) and \(\beta_{3}=R_{3}e^{i\theta_{3}}\neq 0\), the parameters are \(b=\beta_{3}\), \(x=R_{-}^{2}\) and \(\rho e^{iv}=-R_{3}R_{-}e^{-i(\theta_{3}-\theta_{-})}\). Then, with the help of equation (A.19) and the general structure (4.97), we can construct the vector states verifying (4.1); they are given by
\[\mid\Psi\rangle_{[K]}^{j}=\mathcal{N}^{-\frac{1}{2}}\,\hat{D}(\bar{M})\,\exp\left[-\frac{\beta_{-}}{\beta_{3}}\hat{J}_{+}\right]\,\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\,\bar{U}\,\left(\begin{array}{c}\sum_{m=-j}^{j}\bar{\varphi}_{[1]m}^{j}(0)\mid-mb\rangle\otimes\mid j,m\rangle\\ \sum_{m=-j}^{j}\bar{\varphi}_{[2]m}^{j}(0)\mid-mb\rangle\otimes\mid j,m\rangle\\ \vdots\\ \sum_{m=-j}^{j}\bar{\varphi}_{[K-1]m}^{j}(0)\mid-mb\rangle\otimes\mid j,m\rangle\\ \sum_{m=-j}^{j}\bar{\varphi}_{[K]m}^{j}(0)\mid-mb\rangle\otimes\mid j,m\rangle\end{array}\right). \tag{4.122}\]
By choosing \(\bar{\varphi}_{[s]m}^{j}\), \(s=1,2,\cdots,K\), as in equation (4.98), then taking \(m_{r}=-j\), \(\forall r=1,2,\cdots,K\), and after that replacing the exponential operator depending on \(\hat{J}_{+}\) by its unitary equivalent, we get the set of normalized vector coherent states
\[\mid\Psi\rangle_{[K]}^{j}=\frac{1}{\sqrt{K}}\,\hat{D}(\bar{M})\exp\left[- \arctan\left(\frac{R_{-}}{R_{3}}\right)\left(e^{-i(\theta_{3}-\theta_{-})} \hat{J}_{+}-e^{i(\theta_{3}-\theta_{-})}\hat{J}_{-}\right)\right]\left( \begin{array}{c}\mid jb\rangle\otimes\mid j,-j\rangle\\ \mid jb\rangle\otimes\mid j,-j\rangle\\ \vdots\\ \mid jb\rangle\otimes\mid j,-j\rangle\\ \end{array}\right). \tag{4.123}\]
- On the other hand, when \(\beta_{-}=0\), \(\beta_{+}=R_{+}e^{i\theta_{+}}\neq 0\) and \(\beta_{3}=R_{3}e^{i\theta_{3}}\neq 0\), the parameters are \(b=\beta_{3}\), \(x=-R_{+}^{2}\) and \(\rho e^{iv}=R_{3}R_{+}e^{i(\theta_{3}-\theta_{+})}\). Then, again with the help of equation (A.19) and the general structure (4.97), we can construct the vector states verifying (4.1); they are given by
\[\mid\Psi\rangle_{[K]}^{j}=\mathcal{N}^{-\frac{1}{2}}\,\hat{D}(\bar{M})\,\exp\left[\frac{\beta_{+}}{\beta_{3}}\hat{J}_{-}\right]\,\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\,\bar{U}\,\left(\begin{array}{c}\sum_{m=-j}^{j}\bar{\varphi}_{[1]m}^{j}(0)\mid-mb\rangle\otimes\mid j,m\rangle\\ \sum_{m=-j}^{j}\bar{\varphi}_{[2]m}^{j}(0)\mid-mb\rangle\otimes\mid j,m\rangle\\ \vdots\\ \sum_{m=-j}^{j}\bar{\varphi}_{[K-1]m}^{j}(0)\mid-mb\rangle\otimes\mid j,m\rangle\\ \sum_{m=-j}^{j}\bar{\varphi}_{[K]m}^{j}(0)\mid-mb\rangle\otimes\mid j,m\rangle\end{array}\right). \tag{4.124}\]
By choosing again \(\bar{\varphi}_{[s]m}^{j}\), \(s=1,2,\cdots,K\), as in equation (4.98) and then taking \(m_{r}=j\), \(\forall r=1,2,\cdots,K\), and after that, replacing the exponential operator depending on \(J_{-}\) by its unitary equivalent, we get the set of normalized vector coherent states
\[\mid\Psi\rangle_{[K]}^{j}=\frac{1}{\sqrt{K}}\,\hat{D}(\bar{M})\exp\left[- \arctan\left(\frac{R_{+}}{R_{3}}\right)\left(e^{i(\theta_{3}-\theta_{+})}\hat{ J}_{+}-e^{-i(\theta_{3}-\theta_{+})}\hat{J}_{-}\right)\right]\left(\begin{array}{c} \mid-jb\rangle\otimes\mid j,j\rangle\\ \mid-jb\rangle\otimes\mid j,j\rangle\\ \vdots\\ \mid-jb\rangle\otimes\mid j,j\rangle\\ \end{array}\right). \tag{4.125}\]
- Finally, we have the case where \(\beta_{-}=R_{-}e^{i\theta_{-}}\neq 0\), \(\beta_{+}=R_{+}e^{i\theta_{+}}\neq 0\) and \(\beta_{3}=R_{3}e^{i\theta_{3}}\neq 0\), provided that \(R_{+}\neq R_{-}\), and \(R_{3}\neq 2\sqrt{R_{+}\,R_{-}}\) or \(\theta_{3}\neq\frac{1}{2}(\theta_{+}+\theta_{-})+\left(k+\frac{1}{2}\right)\pi\), \(k=0,1,\cdots\). Under these conditions
\[\rho e^{i\nu}=R_{3}e^{-i(\theta_{+}-\theta_{-})}\sqrt{R_{+}^{2}+R_{-}^{2}-2R_{+ }\,R_{-}\cos{(2\varphi)}}\exp\left[\frac{R_{+}+R_{-}}{R_{+}-R_{-}}\tan{\varphi} \right],\]
\(b=2\sqrt{R_{+}R_{-}}e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\sqrt{1+\gamma^{2}}\) and \(x=R_{-}^{2}-R_{+}^{2}\), where \(\gamma=\rho e^{i\varphi}\) with \(\varphi=\theta_{3}-\frac{1}{2}(\theta_{+}+\theta_{-})\) and now \(\rho=\frac{R_{3}}{2\sqrt{R_{+}R_{-}}}\). Then, as in the previous subsection, from equation (B3) we can compute the parameters \(\bar{\theta}\) and \(\bar{\phi}\), which serve to construct the operator \(\hat{T}\); we get the following expressions
\[\tan\left(\frac{\bar{\theta}}{2}\right)=\sqrt{\frac{b-\beta_{3}}{b+\beta_{3}}}=\sqrt{1+\gamma^{2}}-\gamma,\quad e^{i\bar{\phi}}=\sqrt{\frac{R_{+}}{R_{-}}}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}. \tag{4.126}\]
We note that the expression used to calculate the parameter \(\bar{\theta}\) has the same structure as the one in the previous subsection; to get its value we only have to use expressions (4.108) and (4.109) directly. On the other hand, with respect to the parameter \(\bar{\phi}\), the situation is now a little different because it represents a complex phase. Indeed, this last parameter can be written in the form \(\bar{\phi}=\bar{\phi}_{Re}+i\bar{\phi}_{Im}\), where \(e^{-\bar{\phi}_{Im}}=\sqrt{\frac{R_{+}}{R_{-}}}\) and \(e^{i\bar{\phi}_{Re}}=e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\).
Thus, using (B2) we can show that the general structure of the operator \(\hat{T}\) in this case is given by the product of two exponential operators, none of which is unitary or Hermitian, that is,
\[\begin{split}\hat{T}=&\ \exp\left[\frac{\delta}{2}\left(\sqrt{\frac{R_{-}}{R_{+}}}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\frac{R_{+}}{R_{-}}}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[-\frac{i}{2}\ln\left(\frac{\sqrt{(1-\|\mathcal{Z}\|^{2})^{2}+4\mathcal{Z}_{Re}^{2}}}{1-2\mathcal{Z}_{Im}+\|\mathcal{Z}\|^{2}}\right)\left(\sqrt{\frac{R_{-}}{R_{+}}}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\frac{R_{+}}{R_{-}}}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]. \tag{4.127}\end{split}\]
By inserting this operator into the general expression (4.97), or into (4.99), we finally obtain the vector algebra eigenstates of the operator \(\mathbb{A}=\hat{a}+R_{+}e^{i\theta_{+}}\hat{J}_{-}+R_{-}e^{i\theta_{-}}\hat{J}_{+}+R_{3}e^{i\theta_{3}}\hat{J}_{3}\), whose parameters satisfy the conditions given in this subsection:
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\,\hat{D}(\bar{M})\exp\left[\frac{\delta}{2}\left(\sqrt{\frac{R_{-}}{R_{+}}}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\frac{R_{+}}{R_{-}}}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[-\frac{i}{2}\ln\left(\frac{\sqrt{(1-\|\mathcal{Z}\|^{2})^{2}+4\mathcal{Z}_{Re}^{2}}}{1-2\mathcal{Z}_{Im}+\|\mathcal{Z}\|^{2}}\right)\left(\sqrt{\frac{R_{-}}{R_{+}}}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\frac{R_{+}}{R_{-}}}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\begin{pmatrix}|-m_{1}b\rangle\otimes|\,j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |-m_{K-1}b\rangle\otimes|\,j,m_{K-1}\rangle\\ |-m_{K}b\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix}. \tag{4.128}\end{split}\]
From (4.128 ) we can find the associated vector coherent states for this case. If we choose \(m_{1}=m_{2}=\cdots=m_{K}=\mp j\), and proceed as in the previous sections, we get the set of normalized vector coherent states:
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \frac{1}{\sqrt{K}}\hat{D}(\bar{M})\\ &\times\exp\left[-\arctan\left(\|\mathcal{Z}\|e^{\pm\bar{\phi}_{Im}}\right)\left(e^{-i\left(\frac{\theta_{+}-\theta_{-}}{2}\mp\arctan\left(\frac{\mathcal{Z}_{Im}}{\mathcal{Z}_{Re}}\right)\right)}\hat{J}_{+}-e^{i\left(\frac{\theta_{+}-\theta_{-}}{2}\mp\arctan\left(\frac{\mathcal{Z}_{Im}}{\mathcal{Z}_{Re}}\right)\right)}\hat{J}_{-}\right)\right]\\ &\times\begin{pmatrix}|\,\pm jb\rangle\otimes|\,j,\mp j\rangle\\ |\,\pm jb\rangle\otimes|\,j,\mp j\rangle\\ \vdots\\ |\,\pm jb\rangle\otimes|\,j,\mp j\rangle\end{pmatrix}, \tag{4.130}\end{split}\]

where \(\arctan\left(\frac{\mathcal{Z}_{Im}}{\mathcal{Z}_{Re}}\right)\) is replaced by \(\frac{\pi}{2}\) when \(\mathcal{Z}_{Re}=0\).
**Some examples.** Let us illustrate the results of this section by choosing the same range of variation of the \(\varphi\) and \(\rho\) parameters used in the previous subsection. Thus, when \(\varphi=\frac{\pi}{2}\) and \(\rho>1\), we have \(\mathcal{Z}=i(\sqrt{\rho^{2}-1}-\rho)\), which implies \(\delta=0\), and consequently \(\frac{\bar{\theta}}{2}=-\frac{i}{2}\ln\left(\sqrt{\frac{R_{3}+2\sqrt{R_{+}\,R_{-}}}{R_{3}-2\sqrt{R_{+}\,R_{-}}}}\right)\) and \(b=i\,e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\sqrt{R_{3}^{2}-4R_{+}R_{-}}\). Thus, the vector eigenstates of \(\mathbb{A}=\hat{a}+R_{+}e^{i\theta_{+}}\hat{J}_{-}+R_{-}e^{i\theta_{-}}\hat{J}_{+}+iR_{3}e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\hat{J}_{3}\) with matrix eigenvalue \(\bar{M}\) are given by
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\,\hat{D}(\bar{M})\exp\left[\frac{i}{2}\ln\left(\sqrt{\frac{R_{3}+2\sqrt{R_{+}\,R_{-}}}{R_{3}-2\sqrt{R_{+}\,R_{-}}}}\right)\left(\sqrt{\frac{R_{-}}{R_{+}}}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\frac{R_{+}}{R_{-}}}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\begin{pmatrix}|-m_{1}b\rangle\otimes|\,j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |-m_{K-1}b\rangle\otimes|\,j,m_{K-1}\rangle\\ |-m_{K}b\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix}. \tag{4.131}\end{split}\]
On the other hand, when \(\varphi=\frac{\pi}{2}\) and \(0<\rho<1\), we have \(\mathcal{Z}=\sqrt{1-\rho^{2}}-i\rho\), which implies \(\delta=-\frac{\pi}{2}\), and consequently \(\frac{\bar{\theta}}{2}=\frac{\pi}{4}-\frac{i}{2}\ln\left(\sqrt{\frac{1+\rho}{1-\rho}}\right)=\frac{\pi}{4}-\frac{i}{2}\ln\left(\sqrt{\frac{2\sqrt{R_{+}\,R_{-}}+R_{3}}{2\sqrt{R_{+}\,R_{-}}-R_{3}}}\right)\) and \(b=e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\sqrt{4R_{+}\,R_{-}-R_{3}^{2}}\). Thus, the vector eigenstates of \(\mathbb{A}=\hat{a}+R_{+}e^{i\theta_{+}}\hat{J}_{-}+R_{-}e^{i\theta_{-}}\hat{J}_{+}+iR_{3}e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\hat{J}_{3}\) with matrix eigenvalue \(\bar{M}\) are given by
\[\begin{split}|\,\Psi\rangle_{[K]}^{j}=&\ \mathcal{N}^{-\frac{1}{2}}\,\hat{D}(\bar{M})\exp\left[-\frac{\pi}{4}\left(\sqrt{\frac{R_{-}}{R_{+}}}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\frac{R_{+}}{R_{-}}}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{i}{2}\ln\left(\sqrt{\frac{2\sqrt{R_{+}\,R_{-}}+R_{3}}{2\sqrt{R_{+}\,R_{-}}-R_{3}}}\right)\left(\sqrt{\frac{R_{-}}{R_{+}}}e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}-\sqrt{\frac{R_{+}}{R_{-}}}e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\\ &\times\exp\left[\frac{1}{2}\left(\bar{M}b^{*}-\bar{M}^{\dagger}b\right)\hat{J}_{3}\right]\begin{pmatrix}|-m_{1}b\rangle\otimes|\,j,m_{1}\rangle\\ |-m_{2}b\rangle\otimes|\,j,m_{2}\rangle\\ \vdots\\ |-m_{K-1}b\rangle\otimes|\,j,m_{K-1}\rangle\\ |-m_{K}b\rangle\otimes|\,j,m_{K}\rangle\end{pmatrix}. \tag{4.132}\end{split}\]
From these last two expressions we can also extract the corresponding set of vector coherent states. Indeed, putting \(m_{1}=m_{2}=\cdots=m_{K}=\mp j\) we obtain the desired vector states:
\[|\,\Psi\rangle_{[K]}^{j} = \frac{1}{\sqrt{K}}\,\hat{D}(\tilde{M})\] \[\times \exp\left[\mp i\arctan\left(\frac{R_{3}-\sqrt{R_{3}^{2}-4R_{+}\,R_{-}}}{2R_{\pm}}\right)\left(e^{-\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{+}+e^{\frac{i}{2}(\theta_{+}-\theta_{-})}\hat{J}_{-}\right)\right]\] \[\times \begin{pmatrix}|\pm jb\rangle\otimes|\,j,\mp j\rangle\\ |\pm jb\rangle\otimes|\,j,\mp j\rangle\\ \vdots\\ |\pm jb\rangle\otimes|\,j,\mp j\rangle\\ |\pm jb\rangle\otimes|\,j,\mp j\rangle\end{pmatrix},\]
when \(\varphi=\frac{\pi}{2}\) and \(R_{3}>2\sqrt{R_{+}\,R_{-}}\) and
\[|\,\Psi\rangle_{[K]}^{j} = \frac{1}{\sqrt{K}}\hat{D}(\tilde{M})\] \[\times \exp\left[-\arctan\left(\frac{R_{\mp}}{R_{\pm}}\right)\left(e^{-i\left(\frac{\theta_{+}-\theta_{-}}{2}\pm\arctan\left(\frac{R_{3}}{\sqrt{4R_{+}\,R_{-}-R_{3}^{2}}}\right)\right)}\hat{J}_{+}-e^{i\left(\frac{\theta_{+}-\theta_{-}}{2}\pm\arctan\left(\frac{R_{3}}{\sqrt{4R_{+}\,R_{-}-R_{3}^{2}}}\right)\right)}\hat{J}_{-}\right)\right]\] \[\times \begin{pmatrix}|\,\pm jb\rangle\otimes|\,j,\mp j\rangle\\ |\,\pm jb\rangle\otimes|\,j,\mp j\rangle\\ \vdots\\ |\,\pm jb\rangle\otimes|\,j,\mp j\rangle\end{pmatrix}, \tag{4.136}\]
when \(\varphi=\frac{\pi}{2}\) and \(0<R_{3}<2\sqrt{R_{+}\,R_{-}}\).
## 5 Conclusions
In this article we have defined and computed the vector algebra eigenstates associated with the \(h(1)\oplus su(2)\) Lie algebra. We have shown that the set of these vector states includes the subset of vector coherent states over the matrix domain of the Heisenberg-Weyl algebra as well as those of the \(su(2)\) Lie algebra. Other subsets of vector coherent states have also been obtained, which appear as a consequence of combining these two algebras as a direct sum. We have shown that it is possible to classify the set of all these vector eigenstates by giving the elements of the algebra the role of a generalized annihilation operator and then using the commutator between it and its corresponding adjoint as a selection rule. Also, with the help of these two operators we have constructed Hermitian Hamiltonians to which we can associate those vector coherent states. For example, this allowed us to find generalized Hamiltonians which are isospectral with the standard harmonic oscillator Hamiltonian. Moreover, in the particular case where the normal eigenvalue matrix has been constructed from the matrix elements of a complex linear combination of the \(su(2)\) Lie algebra generators and the identity operator in a given \(su(2)\) irreducible representation space, we have shown that for a special choice of the parameters we can obtain the so-called quaternionic vector coherent states on the matrix domain [15] and the corresponding generalized oscillator algebra characteristic of the coherent state quantization of quaternions [16], as well as a generalized version of all of this. The techniques used in this article and the results obtained can be easily adapted to construct families of linear and quadratic \(su(2)\) and \(h(1)\oplus su(2)\) pseudo-Hermitian Hamiltonians, see [19] and references therein. Also, all kinds of intelligent and coherent states associated with the generalized Schrödinger-Robertson uncertainty relation [20] are included in our results. Indeed, by choosing two Hermitian operators, \(\hat{\mathcal{X}}\) and \(\hat{\mathcal{P}}\), from the set of elements of the \(h(1)\oplus su(2)\) Lie algebra, it can be shown that the minimum uncertainty states in the sense of the Schrödinger-Robertson relation verify the matrix eigenvalue equation \([\hat{\mathcal{X}}+i\lambda\hat{\mathcal{P}}]\,|\,\Phi\rangle=M\,|\,\Phi\rangle\), where \(\lambda\in\mathbb{C}.\) Then, families of vector coherent and squeezed states can be generated from it, depending on the values of the \(\lambda\) parameter, whose selection rules in terms of the commutator \([\hat{\mathcal{X}},\hat{\mathcal{P}}]\) coincide with those of the generalized creation and annihilation rules mentioned above.
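As a minimal single-mode illustration of this last point, restricted to the pure \(h(1)\) sector (the choice of quadrature normalization here is ours): choosing \(\hat{\mathcal{X}}=\frac{1}{\sqrt{2}}(\hat{a}+\hat{a}^{\dagger})\) and \(\hat{\mathcal{P}}=\frac{1}{i\sqrt{2}}(\hat{a}-\hat{a}^{\dagger})\), the minimum uncertainty condition takes the form

\[\left[\hat{\mathcal{X}}+i\lambda\hat{\mathcal{P}}\right]|\,\Phi\rangle=\left[\frac{1+\lambda}{\sqrt{2}}\,\hat{a}+\frac{1-\lambda}{\sqrt{2}}\,\hat{a}^{\dagger}\right]|\,\Phi\rangle=M\,|\,\Phi\rangle,\]

i.e., a generalized annihilation operator built purely from \(h(1)\) elements, with \(\alpha_{-}=\frac{1+\lambda}{\sqrt{2}}\), \(\alpha_{+}=\frac{1-\lambda}{\sqrt{2}}\) and vanishing \(su(2)\) coefficients; the value \(\lambda=1\) selects the canonical coherent states, while \(\lambda\neq 1\) leads to squeezed states.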
Finally, an important relation involving the generators of the \(su(2)\) algebra has been explicitly developed in the course of disentangling the exponential operators in the case when the parameter \(b=0\): namely, that the non-canonical commutator \([\hat{a}+(\vec{\beta}\cdot\vec{J}),\hat{a}^{\dagger}+(\vec{\beta}\cdot\vec{J})^{\dagger}]=\hat{I}+2x\hat{J}_{3}+\rho e^{i\nu}\hat{J}_{+}+\rho e^{-i\nu}\hat{J}_{-}\), where \(|x|>0\), is equivalent to the canonical one \([\hat{a}+\mathbb{B}_{\mp}\hat{\mathbb{J}}_{\pm},\hat{a}^{\dagger}+\mathbb{B}_{\mp}^{*}\hat{\mathbb{J}}_{\mp}]=\hat{I}\pm 2\|\mathbb{B}_{\mp}\|^{2}\hat{\mathbb{J}}_{3}\), for suitably defined \(su(2)\) Lie algebra transformed generators \(\hat{\mathbb{J}}_{+},\hat{\mathbb{J}}_{-}\) and \(\hat{\mathbb{J}}_{3}.\) In other words, we have shown that to each element of the complex Lie algebra of \(su(2)\) whose coefficients verify \(b=0\), we can associate its adjoint element and the element formed from the commutator between them, in such a way that the three generators reproduce the standard commutation relations of the \(su(2)\) Lie algebra.
## Acknowledgment
The author dedicates this article to his little purple doll Mary and his beloved children.
## Appendix A Algebra eigenstates of \(h(1)\oplus su(2)\) Lie algebra
The \(h(1)\oplus su(2)\) algebra eigenstates are defined to be the solutions of the eigenvalue equation
\[[\alpha_{-}\hat{a}+\alpha_{+}\hat{a}^{\dagger}+\alpha_{3}\hat{I}+ \beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{I}_{3}]\,|\,\Phi\rangle= \beta\,|\,\Phi\rangle,\] (A.1)
where \(\alpha_{\mp},\alpha_{3},\beta_{\mp}\) and \(\beta_{3}\) are given complex numbers and \(\beta\) is a complex number representing the associated eigenvalue.
Equation (A.1) has already been studied in the literature. We continue here the study of these states in order to prepare them suitably and to use them effectively in the construction, simplification and factorization of the \(h(1)\oplus su(2)\) algebra eigenstates with eigenvalues over the matrix domain. From (A.1) we can see that, without loss of generality, we can set \(\alpha_{-}=1\) and \(\alpha_{3}=0.\) Then, by applying the following transformation to the states
\[|\,\Phi\rangle=\exp\left[-\frac{1}{2}\alpha_{+}\left(\hat{a}^{ \dagger}\right)^{2}\right]|\,\Psi\rangle,\] (A.2)
(A.1) reduces to:
\[\left[\hat{a}+\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3 }\hat{I}_{3}\right]|\,\Psi\rangle=\beta\,|\,\Psi\rangle.\] (A.3)
Before continuing with the resolution of the eigenvalue problem, let us make some comments about the transformation we just performed. The transformation (A.2) corresponds to a standard squeezed transformation provided that \(\|\alpha_{+}\|<1.\) When we apply it to the fundamental state \(|\,0\rangle,\) it can be replaced by the unitary transformation \(\hat{S}(\xi)=\exp\left[-\frac{1}{2}\left(\xi\,\hat{a}^{\dagger\,2}-\xi^{*}\hat{a}^{2}\right)\right]\) acting on \(|\,0\rangle,\) where \(\xi=\frac{\alpha_{+}}{\|\alpha_{+}\|}\tanh^{-1}\left(\|\alpha_{+}\|\right).\) On the other hand, when it is applied to a coherent state, that is, \(e^{-\frac{1}{2}\alpha_{+}\,\hat{a}^{\dagger\,2}}e^{z_{1}\hat{a}^{\dagger}}\,|0\rangle,\) where \(z_{1}\) is an arbitrary complex number, we can first use the property \(\hat{S}^{\dagger}(\xi)\hat{a}^{\dagger}\hat{S}(\xi)=\left[\hat{a}^{\dagger}\cosh(\|\xi\|)-\frac{\xi^{*}}{\|\xi\|}\sinh(\|\xi\|)\hat{a}\right]\) to shift the squeezed unitary operator to the left, and then rewrite the resulting expression in terms of the unitary displacement operator associated with the standard harmonic oscillator to get the normalized equivalent expression \(\hat{S}(\xi)\hat{D}\left(z_{1}\cosh(\|\xi\|)\right)|\,0\rangle.\) Moreover, when, for example, the following situation occurs: \(e^{-\frac{1}{2}\alpha_{+}\,\hat{a}^{\dagger\,2}}e^{\beta\hat{a}^{\dagger}}(z\hat{a}^{\dagger})^{n}|\,0\rangle,\) where \(n\) is a non-negative integer, by using the well-known properties of the squeezed and displacement operators we can write this state in the form
\[\hat{S}(\xi)\hat{D}\left(\beta\cosh(\|\xi\|)\right)\left[z\left(\left(\hat{a}^{\dagger}+\beta^{*}\cosh(\|\xi\|)\right)\cosh(\|\xi\|)-\frac{\xi^{*}}{\|\xi\|}\sinh(\|\xi\|)\left(\hat{a}+\beta\cosh(\|\xi\|)\right)\right)\right]^{n}|\,0\rangle.\] (A.4)
When \(\|\alpha_{+}\|\geq 1,\) we can leave the transformation (A.2) as it is. In this way, once the solutions of the eigenvalue equation (A.3) are known, proceeding as we have indicated here, we will be able to obtain the complete set of solutions of equation (A.1).
Now, let us rewrite (A.3) in the form
\[\hat{a}\,|\,\Psi\rangle=\left[\beta-\beta_{-}\hat{J}_{+}-\beta_{+ }\hat{J}_{-}-\beta_{3}\hat{I}_{3}\right]|\,\Psi\rangle,\] (A.5)
Projecting both sides of (A.5) onto the state \(|\,\zeta\rangle\otimes\hat{I}^{j}\) and using (2.13), the realization (2.11) of the annihilation operator \(\hat{a}\), and the relations (2.5) for the \(su(2)\) algebra generators, we obtain, for fixed \(j,\) the following coupled linear system of differential equations:
\[\frac{d}{d\zeta}\psi^{j}_{m}(\zeta) = (\beta-m\beta_{3})\psi^{j}_{m}(\zeta)\] (A.6) \[- \beta_{-}\sqrt{(j-m)(j+m+1)}\psi^{j}_{m+1}(\zeta)\] \[- \beta_{+}\sqrt{(j+m)(j-m+1)}\psi^{j}_{m-1}(\zeta),\,\,\,m=-j,\cdots,j.\]
This system can be written in the form
\[\frac{d}{d\zeta}\begin{pmatrix}\psi_{-j}^{j}(\zeta)\\ \psi_{-j+1}^{j}(\zeta)\\ \vdots\\ \psi_{j-1}^{j}(\zeta)\\ \psi_{j}^{j}(\zeta)\end{pmatrix}=M\begin{pmatrix}\psi_{-j}^{j}(\zeta)\\ \psi_{-j+1}^{j}(\zeta)\\ \vdots\\ \psi_{j-1}^{j}(\zeta)\\ \psi_{j}^{j}(\zeta)\end{pmatrix},\] (A.7)
where \(M\) is the \((2j+1)\times(2j+1)\)-dimensional matrix:
\[M=\begin{pmatrix}\beta+j\beta_{3}&-\sqrt{2j}\beta_{+}&0&0&\cdots&0\\ -\sqrt{2j}\beta_{-}&\beta+(j-1)\beta_{3}&-\sqrt{(2j-1)2}\beta_{+}&0&\cdots&0\\ 0&-\sqrt{(2j-1)2}\beta_{-}&\beta+(j-2)\beta_{3}&-\sqrt{(2j-2)3}\beta_{+}&\cdots&0\\ \vdots&\vdots&\ddots&\ddots&\ddots&\vdots\\ 0&0&-\sqrt{3(2j-2)}\beta_{-}&\beta-(j-2)\beta_{3}&-\sqrt{2(2j-1)}\beta_{+}&0\\ 0&0&0&-\sqrt{2(2j-1)}\beta_{-}&\beta-(j-1)\beta_{3}&-\sqrt{2j}\beta_{+}\\ 0&0&0&0&-\sqrt{2j}\beta_{-}&\beta-j\beta_{3}\end{pmatrix}.\] (A.8)
At this time it is important to mention that the eigenvalues of \(M\) are given by
\[\lambda_{m}^{j}=\beta+mb,\quad\text{where}\quad b=\sqrt{4\beta_{+}\beta_{-}+ \beta_{3}^{2}},\quad m=-j,\cdots,j,\] (A.9)
which implies that when \(b\neq 0\), all the \(\lambda_{m}^{j}\)'s are different, and when \(b=0\), all the \(\lambda_{m}^{j}\)'s are equal to \(\beta\). Thus, in the first case the matrix in (A.8) can be diagonalized by a similarity transformation, while in the second case this is not always possible.
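As an illustration, for \(j=\frac{1}{2}\) the matrix (A.8) reduces to a \(2\times 2\) matrix whose characteristic polynomial reproduces (A.9) directly:

\[M=\begin{pmatrix}\beta+\frac{\beta_{3}}{2}&-\beta_{+}\\ -\beta_{-}&\beta-\frac{\beta_{3}}{2}\end{pmatrix},\qquad\det(M-\lambda I)=(\lambda-\beta)^{2}-\frac{1}{4}\left(\beta_{3}^{2}+4\beta_{+}\beta_{-}\right),\]

so that \(\lambda_{\pm\frac{1}{2}}^{\frac{1}{2}}=\beta\pm\frac{b}{2}\), in agreement with (A.9).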
### Case \(b\neq 0\)
In this subsection we construct the \(h(1)\oplus su(2)\) algebra eigenstates associated with the generators whose parameters verify \(b\neq 0\). We will see that in all the different cases the structure of these states takes the form of a product of two disentangled operators acting on the base states, the first depending solely on the \(h(1)\) algebra generators and the second depending exclusively on the \(su(2)\) algebra generators. The method that we use in the different cases to solve equation (A.1) combines operator algebra techniques with those of differential equations.
#### a.1.1 Case \(\beta_{+}=\beta_{-}=0\) and \(\beta_{3}\neq 0\)
Let us start with the computation of the algebra eigenstates of the simplest generator combining operators from the \(h(1)\) sector and \(su(2)\) sector, that is,
\[[\hat{a}+\beta_{3}\hat{J}_{3}]\mid\psi\rangle^{j}=\beta\mid\psi\rangle^{j}.\] (A.10)
It is clear that in this case \(b=\beta_{3}\neq 0\). Then, the set of normalized states verifying this equation is given by
\[\mid\psi\rangle_{m}^{j}=\mid\beta-m\beta_{3}\rangle\otimes\mid j,m\rangle, \quad m=-j,\cdots,j.\] (A.11)
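Indeed, using \(\hat{a}\mid z\rangle=z\mid z\rangle\) and \(\hat{J}_{3}\mid j,m\rangle=m\mid j,m\rangle\), one checks directly that

\[[\hat{a}+\beta_{3}\hat{J}_{3}]\mid\beta-m\beta_{3}\rangle\otimes\mid j,m\rangle=\left[(\beta-m\beta_{3})+m\beta_{3}\right]\mid\beta-m\beta_{3}\rangle\otimes\mid j,m\rangle=\beta\mid\beta-m\beta_{3}\rangle\otimes\mid j,m\rangle.\]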
Thus, the general solution of equation (A.10) reads
\[\mid\psi\rangle^{j}=\sum_{m=-j}^{j}\frac{\bar{\varphi}_{m}^{j}(0)}{\sqrt{\sum _{m=-j}^{j}\|\bar{\varphi}_{m}^{j}(0)\|^{2}}}\mid\beta-m\beta_{3}\rangle \otimes\mid j,m\rangle,\] (A.12)
where \(\bar{\varphi}_{m}^{j}(0),\;m=-j,\cdots,j\), are constants.
Using the fact that \(\hat{D}(\beta-m\beta_{3})=\hat{D}(\beta)\hat{D}(-m\beta_{3})\exp\left[\frac{m}{2}\left(\beta\beta_{3}^{*}-\beta^{*}\beta_{3}\right)\right]\), the states in equation (A.12) can be written in the form
\[|\,\psi\rangle^{j}=\hat{D}(\beta)\exp\left[\frac{1}{2}\left(\beta\beta_{3}^{*}-\beta^{*}\beta_{3}\right)\hat{J}_{3}\right]\sum_{m=-j}^{j}\frac{\tilde{\varphi}_{m}^{j}(0)}{\sqrt{\sum_{m=-j}^{j}\|\tilde{\varphi}_{m}^{j}(0)\|^{2}}}\,|-m\beta_{3}\rangle\otimes|\,j,m\rangle.\] (A.13)
#### a.1.2 Case \(\beta_{+}\neq 0,\beta_{-}=0\) and \(\beta_{3}\neq 0\)
When \(\beta_{+}\neq 0,\beta_{-}=0\) and \(\beta_{3}\neq 0\), equation (A.1) becomes
\[[\hat{a}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}]\,|\,\psi\rangle^{j}=\beta \,|\,\psi\rangle^{j}.\] (A.14)
Again, in this case \(b=\beta_{3}\neq 0\).
By performing in (A.14) the transformation \(|\,\psi\rangle^{j}=e^{\frac{\beta_{+}}{\beta_{3}}\hat{J}_{-}}\,|\,\tilde{\psi}\rangle^{j}\) and acting at the same time from the left on both sides of this equation with the operator \(e^{-\frac{\beta_{+}}{\beta_{3}}\hat{J}_{-}}\), and then using the fact that \(e^{-\frac{\beta_{+}}{\beta_{3}}\hat{J}_{-}}\,\beta_{3}\hat{J}_{3}\,e^{\frac{\beta_{+}}{\beta_{3}}\hat{J}_{-}}=-\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}\), we get
\[[\hat{a}+\beta_{3}\hat{J}_{3}]\,|\,\tilde{\psi}\rangle^{j}=\beta\,|\,\tilde{ \psi}\rangle^{j}.\] (A.15)
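The operator identity employed here closes after the first commutator. Using only \([\hat{J}_{3},\hat{J}_{-}]=-\hat{J}_{-}\) and writing \(X=\frac{\beta_{+}}{\beta_{3}}\hat{J}_{-}\), the Baker-Campbell-Hausdorff expansion gives

\[e^{-X}\,\beta_{3}\hat{J}_{3}\,e^{X}=\beta_{3}\hat{J}_{3}+\beta_{3}[\hat{J}_{3},X]+\frac{\beta_{3}}{2!}[[\hat{J}_{3},X],X]+\cdots=\beta_{3}\hat{J}_{3}-\beta_{+}\hat{J}_{-},\]

since \([\hat{J}_{3},X]=-\frac{\beta_{+}}{\beta_{3}}\hat{J}_{-}\) and all higher commutators vanish; the conjugation therefore removes the \(\beta_{+}\hat{J}_{-}\) term from (A.14), leaving (A.15).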
Hence, the set of normalized eigenstates verifying equation (A.15) is given by
\[|\,\tilde{\psi}\rangle^{j}_{m}=|\,\beta-m\beta_{3}\rangle\otimes|\,j,m\rangle, \quad m=-j,\cdots,j.\] (A.16)
Finally, using the results of the previous subsection and then returning to the original states by performing the inverse transformation, we find that the general solution of equation (A.14) is given by
\[|\,\psi\rangle^{j}=\frac{1}{\sqrt{\mathcal{N}^{j}}}\dot{D}(\beta)e^{\frac{ \beta_{+}}{\beta_{3}}\hat{J}_{-}}\exp\left[\frac{1}{2}\left(\beta\dot{\beta}_{ 3}^{*}-\beta^{*}\beta_{3}\right)\dot{J}_{3}\right]\sum_{m=-j}^{j}\tilde{\varphi }_{m}^{j}(0)\,|\,-m\beta_{3}\rangle\otimes|\,j,m\rangle,\] (A.17)
where \(\mathcal{N}^{j}\) is a generic normalization constant.
#### a.1.3 Case \(\beta_{+}=0,\beta_{-}\neq 0\) and \(\beta_{3}\neq 0\)
When \(\beta_{+}=0,\beta_{-}\neq 0\) and \(\beta_{3}\neq 0\), equation (A.1) becomes
\[[\hat{a}+\beta_{-}\hat{J}_{+}+\beta_{3}\hat{J}_{3}]\,|\,\psi\rangle^{j}=\beta \,|\,\psi\rangle^{j}.\] (A.18)
Again, in this case \(b=\beta_{3}\neq 0\). Proceeding exactly as in the previous subsection, but taking into account that now the transformation on the states is governed by the operator \(e^{-\frac{\beta_{-}}{\beta_{3}}\hat{J}_{+}}\), we find that the general solution of (A.18) is given by
\[|\,\psi\rangle^{j}=\frac{1}{\sqrt{\mathcal{N}^{j}}}\hat{D}(\beta)e^{\frac{-\beta_{-}}{\beta_{3}}\hat{J}_{+}}\exp\left[\frac{1}{2}\left(\beta\beta_{3}^{*}-\beta^{*}\beta_{3}\right)\hat{J}_{3}\right]\sum_{m=-j}^{j}\tilde{\varphi}_{m}^{j}(0)\,|\,-m\beta_{3}\rangle\otimes|\,j,m\rangle.\] (A.19)
#### a.1.4 Case \(\beta_{+}\neq 0,\beta_{-}\neq 0\) and \(\beta_{3}=0\)
When \(\beta_{+}\neq 0,\beta_{-}\neq 0\) and \(\beta_{3}=0\), equation (A.1) becomes
\[[\hat{a}+\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}]\,|\,\psi\rangle^{j}=\beta \,|\,\psi\rangle^{j}.\] (A.20)
Also, in this case \(b=2\sqrt{\beta_{+}\beta_{-}}\neq 0\). Then, by using equations (B2) and (B3), we can construct an operator \(\hat{T}\) with the following characteristics:
\[\hat{T}=\exp\left[-\frac{\pi}{4}\left(\sqrt{\frac{\beta_{-}}{\beta_{+}}}\hat{J} _{+}-\sqrt{\frac{\beta_{+}}{\beta_{-}}}\hat{J}_{-}\right)\right],\] (A.21)
such that
\[\hat{T}^{-1}[\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}]\hat{T}=b\hat{J}_{3}.\] (A.22)
We notice that, in general, \(\hat{T}\) is not unitary; it becomes unitary if and only if \(\|\beta_{+}\|=\|\beta_{-}\|\). Thus, using the transformation \(|\,\psi\rangle^{j}=\hat{T}\,|\,\bar{\psi}\rangle^{j}\) in equation (A.20), and then acting from the left with \(\hat{T}^{-1}\) on both sides of the corresponding transformed equation, we get
\[[\hat{a}+b\hat{J}_{3}]\ |\ \bar{\psi}\rangle^{j}=\beta\ |\ \bar{\psi}\rangle^{j},\] (A.23)
whose normalized solutions we already know from the previous sections, that is,
\[|\ \bar{\psi}\rangle^{j}_{m}=|\ \beta-mb\rangle\otimes|\ j,m\rangle,\quad m=-j, \cdots,j.\] (A.24)
Finally, proceeding as in the previous subsections, we reach the general solution of (A.20):
\[|\ \psi\rangle^{j}=\frac{1}{\sqrt{\mathcal{N}^{j}}}\hat{D}(\beta)\exp\left[- \frac{\pi}{4}\left(\sqrt{\frac{\beta_{-}}{\beta_{+}}}\hat{J}_{+}-\sqrt{\frac{ \beta_{+}}{\beta_{-}}}\hat{J}_{-}\right)\right]\exp\left[\frac{1}{2}\left( \beta b^{*}-\beta^{*}b\right)\hat{J}_{3}\right]\sum_{m=-j}^{j}\bar{\varphi}^{ j}_{m}(0)\ |\ -mb\rangle\otimes|\ j,m\rangle.\] (A.25)
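For \(j=\frac{1}{2}\), the similarity transformation (A.22) can be checked by hand. Writing \(s=\sqrt{\beta_{-}/\beta_{+}}\) and representing \(\hat{J}_{\pm}\), \(\hat{J}_{3}\) in the basis \(\{\mid\frac{1}{2},\frac{1}{2}\rangle,\mid\frac{1}{2},-\frac{1}{2}\rangle\}\), the exponent in (A.21) squares to \(-\frac{\pi^{2}}{16}\) times the identity, so the exponential closes:

\[\hat{T}=\frac{1}{\sqrt{2}}\begin{pmatrix}1&-s\\ s^{-1}&1\end{pmatrix},\qquad\hat{T}^{-1}\begin{pmatrix}0&\beta_{-}\\ \beta_{+}&0\end{pmatrix}\hat{T}=\begin{pmatrix}\sqrt{\beta_{+}\beta_{-}}&0\\ 0&-\sqrt{\beta_{+}\beta_{-}}\end{pmatrix}=2\sqrt{\beta_{+}\beta_{-}}\,\hat{J}_{3}=b\,\hat{J}_{3}.\]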
#### a.1.5 Case \(\beta_{+}\neq 0,\beta_{-}\neq 0\) and \(\beta_{3}\neq 0\)
When all beta parameters of the \(su(2)\) algebra sector, which in principle are complex numbers, are different from zero, we need to select a subset of parameters for which \(b\neq 0\). If we define
\[\beta_{+}=R_{+}\ e^{i\theta_{+}},\quad\beta_{-}=R_{-}\ e^{i\theta_{-}},\quad \text{and}\quad\beta_{3}=R_{3}\ e^{i\theta_{3}},\] (A.26)
where \(R_{\pm}\) and \(R_{3}\in\mathbb{R}_{+}\), and \(\theta_{\pm}\) and \(\theta_{3}\in[0,2\pi]\), the conditions for \(b\) being different from zero are
\[R_{3}\neq 2\sqrt{R_{+}\ R_{-}}\quad\text{or}\quad\theta_{3}\neq\frac{1}{2}( \theta_{+}+\theta_{-})+\left(k+\frac{1}{2}\right)\pi,\quad k=0,1.\] (A.27)
Then, under these conditions, using again (B2) and (B3), we can construct an exponential operator \(\hat{T}\) such that
\[\hat{T}^{-1}[\beta_{+}\hat{J}_{-}+\beta_{-}\hat{J}_{+}+\beta_{3}\hat{J}_{3}] \hat{T}=b\hat{J}_{3}.\] (A.28)
Hence, proceeding exactly as in the previous section, we can reduce (A.1) to
\[[\hat{a}+b\hat{J}_{3}]\ |\ \bar{\psi}\rangle^{j}=\beta\ |\ \bar{\psi}\rangle^{j},\] (A.29)
whose normalized solutions we already know from the previous sections, that is,
\[|\ \bar{\psi}\rangle^{j}_{m}=|\ \beta-mb\rangle\otimes|\ j,m\rangle,\quad m=-j, \cdots,j.\] (A.30)
and consequently the general solution of (A.1) is given by
\[|\ \psi\rangle^{j}=\frac{1}{\sqrt{\mathcal{N}^{j}}}\hat{D}(\beta)\ \hat{T}\ \exp\left[\frac{1}{2}\left(\beta b^{*}-\beta^{*}b\right)\hat{J}_{3}\right] \sum_{m=-j}^{j}\bar{\varphi}^{j}_{m}(0)\ |\ -mb\rangle\otimes|\ j,m\rangle.\] (A.31)
Here, in general, \(\hat{T}\) is not unitary; it becomes unitary if and only if \(R_{+}=R_{-}\), the angle \(\theta_{3}=\frac{1}{2}(\theta_{+}+\theta_{-})+\left(k+\frac{1}{2}\right)\pi\), \(k=0,1\), and \(R_{3}\neq 2\sqrt{R_{+}R_{-}}\). Remarkably, these same conditions make the matrix \(M\) in (A.8) normal.
#### a.1.6 Case M normal
Indeed, when \(M\) in (A.8) is a normal matrix, i.e., \(M^{\dagger}M=MM^{\dagger}\), where \(M^{\dagger}\) denotes the conjugate transpose of \(M\), the following conditions on the parameters forming the entries of \(M\) are imposed:
\[\|\beta_{+}\|=\|\beta_{-}\|,\quad\text{and}\quad\beta_{3}\;\beta_{+}^{*}-\beta_{ 3}^{*}\;\beta_{-}=0,\] (A.32)
or if we define
\[\beta_{\pm}=\|\beta_{\pm}\|\;e^{i\theta_{\pm}},\;\text{and}\;\beta_{3}=\|\beta_{3}\|\;e^{i\theta_{3}},\] (A.33)
and insert these in (A.32), we get the equivalent conditions:
\[\|\beta_{+}\|=\|\beta_{-}\|=\|\beta_{\pm}\|\quad\text{and}\quad e^{i(\theta_{ +}+\theta_{-})}=e^{2i\theta_{3}}.\] (A.34)
From this last equation we can show that \(b\) in (A.9) is given by \(b=e^{\frac{i}{2}(\theta_{+}+\theta_{-})}\sqrt{4\|\beta_{\pm}\|^{2}+\|\beta_{3}\|^{2}}\), which means that it is equal to zero if and only if \(\beta_{+}=\beta_{-}=\beta_{3}=0\), in which case \(M=\beta I\), i.e., a diagonal matrix. On the other hand, when \(\beta_{+}=\beta_{-}=0\) and \(\beta_{3}\neq 0\), the \(M\) matrix is still diagonal, and then the unitary matrix that makes it diagonal is the identity matrix. In the more general case, there exists a unitary matrix \(U\) such that \(U^{\dagger}MU=D\), where
\[D=\begin{pmatrix}\lambda_{-j}^{j}&0&0&\cdots&0\\ 0&\lambda_{-j+1}^{j}&0&\cdots&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&\cdots&0&\lambda_{j-1}^{j}&0\\ 0&\cdots&0&0&\lambda_{j}^{j}\end{pmatrix}\] (A.35)
is a diagonal matrix formed out of the eigenvalues of \(M\).
Operating with \(U^{\dagger}\) on both sides of equation (A.7), we get the uncoupled linear differential equation system:
\[\frac{d}{d\zeta}\begin{pmatrix}\tilde{\psi}_{-j}^{j}(\zeta)\\ \tilde{\psi}_{-j+1}^{j}(\zeta)\\ \vdots\\ \tilde{\psi}_{j-1}^{j}(\zeta)\\ \tilde{\psi}_{j}^{j}(\zeta)\end{pmatrix}=D\begin{pmatrix}\tilde{\psi}_{-j}^{j} (\zeta)\\ \tilde{\psi}_{-j+1}^{j}(\zeta)\\ \vdots\\ \tilde{\psi}_{j-1}^{j}(\zeta)\\ \tilde{\psi}_{j}^{j}(\zeta)\end{pmatrix},\] (A.36)
where the tilded vector satisfies
\[\begin{pmatrix}\psi_{-j}^{j}(\zeta)\\ \psi_{-j+1}^{j}(\zeta)\\ \vdots\\ \psi_{j-1}^{j}(\zeta)\\ \psi_{j}^{j}(\zeta)\end{pmatrix}=U\begin{pmatrix}\tilde{\psi}_{-j}^{j}(\zeta) \\ \tilde{\psi}_{-j+1}^{j}(\zeta)\\ \vdots\\ \tilde{\psi}_{j-1}^{j}(\zeta)\\ \tilde{\psi}_{j}^{j}(\zeta)\end{pmatrix}.\] (A.37)
The integration of equation (A.36) is direct; we obtain
\[\tilde{\psi}_{m}^{j}(\zeta)=e^{\lambda_{-m}^{j}\zeta}\tilde{\varphi}_{m}^{j}(0 ),\quad m=-j,\cdots,j,\] (A.38)
where \(\tilde{\varphi}_{m}^{j}(0),\;m=-j,\cdots,j\), are arbitrary integration constants. With the help of equation (A.37), the solution of the original system (A.7) can be reached, that is,
\[\psi_{m}^{j}(\zeta)=\sum_{\ell=-j}^{j}\;U_{m\ell}\;\tilde{\psi}_{\ell}^{j}(\zeta)=\sum_{\ell=-j}^{j}\;U_{m\ell}\;e^{\lambda_{-\ell}^{j}\zeta}\tilde{\varphi}_{\ell}^{j}(0).\] (A.39)
Inserting this result in equation (2.12) we arrive at
\[|\,\Psi(\zeta)\rangle^{j}=\sum_{m=-j}^{j}\bar{\varphi}_{m}^{j}(0)e^{\lambda^{j}_{ -m}\zeta}\otimes\sum_{\ell=-j}^{j}\ U_{m\ell}^{t}\,|\,j,\ell\rangle,\] (A.40)
where \(U^{t}\) stands for the transpose of \(U.\) We can then express these last states in terms of the energy eigenstates of the standard harmonic oscillator Hamiltonian, that is,
\[|\,\Psi\rangle^{j}=N^{-\frac{1}{2}}\sum_{m=-j}^{j}\bar{\varphi}_{m}^{j}(0)e^{ \lambda^{j}_{-m}a^{\dagger}}\,|\,0\rangle\otimes\sum_{\ell=-j}^{j}\ U_{m\ell}^{t} \,|\,j,\ell\rangle.\] (A.41)
where \(N\) is a normalization constant which must be fixed by imposing \({}^{j}\langle\Psi\mid\Psi\rangle^{j}=1.\) We notice that the general solution (A.41) is a superposition of \(2j+1\) independent solutions of the \(h(1)\oplus su(2)\) algebra eigenstate equation which are orthogonal to each other. Indeed, each solution composing the general state in (A.41), in a normalized version, is given by
\[|\,\psi\rangle_{m}^{j} = e^{-\frac{1}{2}\|\lambda^{j}_{-m}\|^{2}}e^{\lambda^{j}_{-m}a^{\dagger}}\,|\,0\rangle\otimes\sum_{\ell=-j}^{j}\ U_{m\ell}^{t}\,|\,j,\ell\rangle\] (A.42) \[= |\,\lambda^{j}_{-m}\rangle\otimes\sum_{\ell=-j}^{j}\ U_{m\ell}^{t}\,|\,j,\ell\rangle\quad m=-j,\cdots,j,\]
where \(|\,\lambda^{j}_{-m}\rangle,\;m=-j,\cdots,j,\) are the canonical coherent states of the harmonic oscillator system verifying \(\hat{a}\mid\lambda^{j}_{-m}\rangle=\lambda^{j}_{-m}\mid\lambda^{j}_{-m}\rangle,\;m=-j,\cdots,j;\) in accordance with the normality condition imposed on \(M,\) which implies the unitarity of \(U,\) these states verify
\[{}^{j}_{\ell}\langle\psi\mid\psi\rangle_{m}^{j}=\delta_{\ell,m}.\] (A.43)
Thus, the general state (A.41) can be written in the normalized form
\[|\,\Psi\rangle^{j}=N^{-\frac{1}{2}}\sum_{m=-j}^{j}\bar{\varphi}_{m}^{j}(0)\ |\ \psi\rangle_{m}^{j}=N^{-\frac{1}{2}}\sum_{m=-j}^{j}\bar{\varphi}_{m}^{j}(0)\,| \,\lambda^{j}_{-m}\rangle\otimes\sum_{\ell=-j}^{j}\ U_{m\ell}^{t}\,|\,j,\ell\rangle,\] (A.44)
where
\[N=\sum_{m=-j}^{j}\|\bar{\varphi}_{m}^{j}(0)\|^{2}.\] (A.45)
To compare the structure of these states with that of the vector algebra eigenstates with eigenvalues on the matrix domain established in this article, it is convenient to rewrite the general solution (A.44) in matrix form, that is,
\[|\,\Psi\rangle^{j} = N^{-1/2}\left(\bar{\varphi}_{-j}^{j}(0)e^{\lambda^{j}_{j}a^{\dagger}}\quad\bar{\varphi}_{-j+1}^{j}(0)e^{\lambda^{j}_{j-1}a^{\dagger}}\quad\cdots\quad\bar{\varphi}_{j-1}^{j}(0)e^{\lambda^{j}_{-j+1}a^{\dagger}}\quad\bar{\varphi}_{j}^{j}(0)e^{\lambda^{j}_{-j}a^{\dagger}}\right)U^{t}\begin{pmatrix}|\,0;j,-j\rangle\\ |\,0;j,-j+1\rangle\\ \vdots\\ |\,0;j,j-1\rangle\\ |\,0;j,j\rangle\end{pmatrix}.\] (A.46)
Equation (A.44) gives us the eigenstates of the operator \([\hat{a}+\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}]\) when \(M\) is normal. In fact, the process for computing the \(h(1)\oplus su(2)\) algebra eigenstates when \(b\neq 0\) without imposing this condition on \(M\) is similar; the only difference is that in this case the passing matrix \(U\) is not unitary, which implies that the solutions, while still linearly independent, are in general not orthogonal. Also, the non-unitary character of the passing matrix makes it more difficult to find a simple general expression for the normalization constants.
The passing matrix \(U\) can be built using the techniques of general linear algebra, linear differential equations or operator algebra. In appendix B we have followed this last method to explicitly construct this matrix for all fixed values of \(j\). In fact, there we have shown that the matrix elements of \(U\) are given by \(U_{\ell m}=\langle j,\ell\mid\hat{T}\mid j,m\rangle\), \(\ell,m=-j,\cdots,j\).
### Case b=0
When \(b=0\), all the eigenvalues of the \(M\) matrix are equal. To obtain the \(h(1)\oplus su(2)\) algebra eigenstates in this case, it is best to integrate the linear differential equation system (A.7) systematically, function by function. We distinguish here three cases, that is to say, when \(\beta_{-}=\beta_{3}=0\) and \(\beta_{+}\neq 0\); when \(\beta_{+}=\beta_{3}=0\) and \(\beta_{-}\neq 0\); or when \(\beta_{+}\neq 0,\beta_{-}\neq 0\) and \(\beta_{3}\neq 0\), but \(\frac{\beta_{3}}{2\beta_{+}}=-\frac{2\beta_{-}}{\beta_{3}}\).
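Note that the last condition is simply the vanishing of \(b\) written out explicitly:

\[\frac{\beta_{3}}{2\beta_{+}}=-\frac{2\beta_{-}}{\beta_{3}}\iff\beta_{3}^{2}=-4\beta_{+}\beta_{-}\iff b^{2}=4\beta_{+}\beta_{-}+\beta_{3}^{2}=0.\]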
#### a.2.1 Case \(\beta_{-}=\beta_{3}=0\) and \(\beta_{+}\neq 0\)
In the special case where \(b=0\), with \(\beta_{-}=\beta_{3}=0\) and \(\beta_{+}\neq 0\), the matrix \(M\) in (A.8) becomes upper triangular and the corresponding linear differential equation system (A.7) can be integrated systematically row by row, from bottom to top or from top to bottom. As, in this case, all the diagonal elements of \(M\) are equal to \(\beta\), we can try a solution of the type \(\psi_{m}^{j}(\zeta)=e^{\beta\zeta}\varphi_{m}^{j}(\zeta),\ m=-j,\cdots,j\). Inserting these functions in equation (A.7) and simplifying the expressions, we get a simpler system of linear differential equations. For example, starting from the bottom, we deduce
\[\frac{d}{d\zeta}\varphi_{j}^{j}(\zeta)=0,\quad\text{then}\quad\varphi_{j}^{j}( \zeta)=\bar{\varphi}_{j}^{j}(0),\] (A.47)
where \(\bar{\varphi}_{j}^{j}(0)\) is an integration constant. Continuing the process, we find
\[\frac{d}{d\zeta}\varphi_{j-1}^{j}(\zeta)=-\sqrt{2j}\,\beta_{+}\varphi_{j}^{j}(\zeta),\] (A.48)
then inserting in this equation the newly obtained value of \(\varphi_{j}^{j}(\zeta)\) and integrating again we get
\[\varphi_{j-1}^{j}(\zeta)=\bar{\varphi}_{j-1}^{j}(0)-\sqrt{2j}\,\zeta\beta_{+}\bar{\varphi}_{j}^{j}(0),\] (A.49)
where \(\bar{\varphi}_{j-1}^{j}(0)\) is a new integration constant. Continuing with the process, we can show that the \(k\)-th integration leads to
\[\varphi_{j-k}^{j}(\zeta)=\sum_{n=0}^{k}(-1)^{n}\sqrt{\frac{(2j-k+n)!(k!)}{(2j -k)!(k-n)!}}\frac{(\zeta\beta_{+})^{n}}{n!}\bar{\varphi}_{j-k+n}^{j}(0),\] (A.50)
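For instance, the \(k=2\) instance of (A.50) reads

\[\varphi_{j-2}^{j}(\zeta)=\bar{\varphi}_{j-2}^{j}(0)-\sqrt{2(2j-1)}\,(\zeta\beta_{+})\,\bar{\varphi}_{j-1}^{j}(0)+\sqrt{2\,(2j)(2j-1)}\,\frac{(\zeta\beta_{+})^{2}}{2!}\,\bar{\varphi}_{j}^{j}(0),\]

which is exactly what one obtains by integrating the system one more step beyond (A.49).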
where \(\bar{\varphi}_{j-k+n}^{j}(0)\) are the corresponding integration constants. Let us recall that these functions are the components of the state representing the general solution of the linear differential equation system (A.7), which then reads
\[\mid\Psi(\zeta)\rangle^{j}=\sum_{k=0}^{2j}\psi_{j-k}^{j}(\zeta)\otimes\mid j,j-k\rangle=\sum_{k=0}^{2j}e^{\beta\zeta}\ \varphi_{j-k}^{j}(\zeta)\otimes\mid j,j-k\rangle,\] (A.51)
or more explicitly
\[\mid\Psi(\zeta)\rangle^{j}=\sum_{k=0}^{2j}\ \sum_{n=0}^{k}(-1)^{n}\sqrt{\frac{(2j-k+n)!(k!)}{(2j-k)!(k-n)!}}\ e^{\beta\zeta}\ \frac{(\zeta\beta_{+})^{n}}{n!}\bar{\varphi}_{j-k+n}^{j}(0)\otimes\mid j,j-k\rangle.\] (A.52)
Interchanging the \(k,n\) order of summation and then defining a new index \(m\) in place of \(k\), \(m=j+n-k\), and again, interchanging the \(n,m\) order of summation we get the same state but now factorized according to the integration constants, that is,
\[|\,\Psi(\zeta)\rangle^{j}=\sum_{m=-j}^{j}\bar{\varphi}_{m}^{j}(0)\left[\sqrt{\frac{(j+m)!}{(j-m)!}}\sum_{n=0}^{j+m}(-1)^{n}\sqrt{\frac{(j+n-m)!}{(j+m-n)!}}\;e^{\beta\zeta}\;\frac{(\zeta\beta_{+})^{n}}{n!}\otimes\mid j,m-n\rangle\right].\]
#### a.2.2 Case \(\beta_{+}=\beta_{3}=0\) but \(\beta_{-}\neq 0\)
In the case \(\beta_{+}=\beta_{3}=0\) but \(\beta_{-}\neq 0\), the process of obtaining the solution of (A.7) is analogous to the one followed above. It is not necessary to repeat it here; it is better to adapt equation (A.55) to obtain the solution. Indeed, by changing in the latter \(m\) by \(-m\) on top of the summation symbol and in the coefficients, \(n\) by \(-n\) in the \(su(2)\) basis states, and \(\beta_{+}\) by \(\beta_{-}\), we get
\[|\,\psi[\beta,\beta_{-}]\rangle_{m}^{j}=\frac{\sum_{n=0}^{j-m}(-1)^{n}\sqrt{\frac{(j+n+m)!}{(j-m-n)!}}\,\frac{(\hat{a}^{\dagger}\beta_{-})^{n}}{n!}\,|\,\beta\rangle\otimes|\,j,m+n\rangle}{\left[\sum_{n=0}^{j-m}\frac{(j+n+m)!}{(j-m-n)!}\,\frac{\|\beta_{-}\|^{2n}}{n!}\sum_{k=0}^{n}{n\choose k}\frac{\|\beta\|^{2k}}{k!}\right]^{\frac{1}{2}}}=\hat{D}(\beta)\,|\,\tilde{\psi}[\beta,\beta_{-}]\rangle_{m}^{j},\quad m=-j,\cdots,j,\] (A.62)
where the normalized states \(|\,\tilde{\psi}[\beta,\beta_{-}]\rangle_{m}^{j}\) are given by
\[|\,\tilde{\psi}[\beta,\beta_{-}]\rangle_{m}^{j}=\frac{\exp\left[-(\hat{a}^{\dagger}+\beta^{*})\beta_{-}\hat{J}_{+}\right]\,|\,0\rangle\otimes|\,j,m\rangle}{\sqrt{\tilde{\mathcal{N}}_{m}^{j}[\beta,\beta_{-}]}},\quad m=-j,\cdots,j,\] (A.63)
where
\[\tilde{\mathcal{N}}_{m}^{j}[\beta,\beta_{-}]=\frac{(j-m)!}{(j+m)!}\left[\sum_ {n=0}^{j-m}\frac{(j+n+m)!}{(j-m-n)!}\,\frac{\|\beta_{-}\|^{2n}}{n!}\sum_{k=0}^ {n}{n\choose k}\frac{\|\beta\|^{2k}}{k!}\right].\] (A.64)
Hence, in this case, the general solution of (A.3) is given by
\[|\,\Psi[\beta,\beta_{-}]\rangle^{j}=\sum_{m=-j}^{j}\tilde{\varphi}_{m}^{j}(0 )\,|\,\psi[\beta,\beta_{-}]\rangle_{m}^{j}.\] (A.65)
#### a.2.3 Case \(\beta_{+}\neq 0,\beta_{-}\neq 0\) and \(\beta_{3}\neq 0\) but \(\frac{\beta_{3}}{2\beta_{+}}=-\frac{2\beta_{-}}{\beta_{3}}\)
When \(\beta_{+}\neq 0\), \(\beta_{-}\neq 0\) and \(\beta_{3}\neq 0\), and \(\frac{\beta_{3}}{2\beta_{+}}=-\frac{2\beta_{-}}{\beta_{3}}\), the parameter \(b=0\), i.e., the matrix \(M\) in (A.8) again has \(2j+1\) repeated eigenvalues equal to \(\beta\). Then, again, the differential equation system (A.7) must be integrated term by term. As in appendix B, we can first try a solution of the type \(\psi_{m}^{j}(\zeta)=\exp(\beta\zeta)\varphi_{m}^{j}(\zeta),\,\,m=-j,\cdots,j\), and then manipulate the equations to find an isolated ordinary differential equation of order \(2j+1\) for \(\varphi_{-j}^{j}(\zeta)\), that is, \(\prod_{m=-j}^{j}\left(\frac{d}{d\zeta}+j\beta_{3}\right)\varphi_{-j}^{j}(\zeta)=0\). The solution of the latter is given by \(\varphi_{-j}^{j}(\zeta)=\exp(-j\beta_{3}\zeta)\sum_{q=0}^{2j}A_{q}\zeta^{q}\), where \(A_{q},\quad q=0,\cdots,2j\), are integration constants. Finally, reinserting this last solution into (A.7), integrating iteratively function by function and factorizing the resulting expressions in terms of the integration constants \(A_{q}\), we get the general solution
\[|\,\Psi(\zeta)\rangle^{j} = \sum_{q=0}^{2j}A_{q}\sum_{k=0}^{q}(-1)^{k}{q\choose k}\frac{(2j-k)!}{(2j)!}\] (A.66) \[\times \exp(\beta\zeta)\,\,\zeta^{q-k}\left(\frac{1}{\beta_{+}}\right)^{k}\otimes\left\{\frac{d^{k}}{d\,\delta^{k}}\left[\sum_{r=0}^{2j}\sqrt{{2j\choose r}}\,\delta^{r}\,\,|\,j,-j+r\rangle\right]\right\}_{\delta=\frac{\beta_{3}}{2\beta_{+}}=-\frac{2\beta_{-}}{\beta_{3}}},\]
which after some manipulations and redefinition of the integration constants takes the form
\[|\,\Psi(\zeta)\rangle^{j} = \frac{1}{\sqrt{\tilde{\mathcal{N}}^{j}}}\sum_{m=-j}^{j}\tilde{ \varphi}_{m}^{j}(0)\sum_{n=0}^{j+m}(-1)^{n}\,\frac{(j+n-m)!}{(j+m-n)!}\] (A.67) \[\times e^{\zeta\beta}\,\,\frac{(\zeta\beta_{+})^{n}}{n!}\otimes\sum_{ \ell=0}^{j+n-m}\sqrt{{(j+m-n+\ell)!\over(j+n-m-\ell)!}}\,\frac{\delta^{\ell}}{ \ell!}\,\,|\,j,m-n+\ell\rangle,\]
where \(\delta=\frac{\beta_{3}}{2\beta_{+}}=-\frac{2\beta_{-}}{\beta_{3}}\) and \({\cal N}^{j}\) is a generic normalization constant. Returning to the Fock space representation we finally arrive at
\[|\,\Psi[\beta,\beta_{+},\beta_{-},\beta_{3}]\rangle^{j} = \frac{1}{\sqrt{{\cal N}^{j}}}\sum_{m=-j}^{j}\tilde{\varphi}_{m}^{ j}(0)\sum_{n=0}^{j+m}(-1)^{n}\,\frac{(j+n-m)!}{(j+m-n)!}\] (A.68) \[\times \frac{\left(\hat{a}^{\dagger}\beta_{+}\right)^{n}}{n!}\,|\,\, \beta\rangle\otimes\sum_{\ell=0}^{j+n-m}\sqrt{\frac{(j+m-n+\ell)!}{(j+n-m-\ell )!}\,\frac{\delta^{\ell}}{\ell!}}\,\,|\,j,m-n+\ell\rangle,\]
This last equation can be written in the form
\[|\,\Psi[\beta,\beta_{+},\beta_{-},\beta_{3}]\rangle^{j}=\frac{1}{\sqrt{{\cal N }^{j}}}\sum_{m=-j}^{j}\tilde{\varphi}_{m}^{j}(0)\hat{D}(\beta)\,|\,\tilde{\psi }[\beta,\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^{j}\] (A.69)
where \(|\,\tilde{\psi}[\beta,\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^{j}\), \(m=-j,\cdots,j\), is a set of \(2j+1\) normalized linearly independent states given by
\[|\,\tilde{\psi}[\beta,\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^ {j} = \frac{1}{\sqrt{\tilde{{\cal N}}^{j}_{m}[\beta,\beta_{+},\beta_{- },\beta_{3}]}}\sqrt{\frac{(j+m)!}{(j-m)!}}\sum_{n=0}^{j+m}(-1)^{n}\,\frac{(j+n- m)!}{(j+m-n)!}\] \[\times \frac{\left(\left(a^{\dagger}+\beta^{*}\right)\beta_{+}\right)^{ n}}{n!}\,|\,0\rangle\otimes\sum_{\ell=0}^{j+n-m}\sqrt{\frac{(j+m-n+\ell)!}{(j+n-m- \ell)!}\,\frac{\delta^{\ell}}{\ell!}}\,\,|\,j,m-n+\ell\rangle,\]
where \(m=-j,\cdots,j\), or
\[|\,\tilde{\psi}[\beta,\beta_{+},\beta_{-},\beta_{3}]\rangle_{m}^{j} = \frac{1}{\sqrt{\tilde{{\cal N}}^{j}_{m}[\beta,\beta_{+},\beta_{-},\beta_{3}]}}\sum_{n=0}^{j+m}\,\,\sum_{\ell=0}^{j+n-m}\,\,\frac{(\delta\hat{J}_{+})^{\ell}}{\ell!}\] (A.71) \[\times (-1)^{n}\,\frac{\left(\left(\hat{a}^{\dagger}+\beta^{*}\right)\beta_{+}\hat{J}_{-}\right)^{n}}{n!}\,|\,0\rangle\otimes|\,j,m\rangle,\quad m=-j,\cdots,j,\]
where
\[\tilde{{\cal N}}^{j}_{m}[\beta,\beta_{+},\beta_{-},\beta_{3}] = \frac{(j+m)!}{(j-m)!}\sum_{n=0}^{j+m}\sum_{\bar{n}=0}^{j+m}(-1)^{(n-\bar{n})}\frac{(j+n-m)!\,(j+\bar{n}-m)!}{(j+m-n)!\,(j+m-\bar{n})!}\frac{(\beta_{+}^{*})^{\bar{n}}(\beta_{+})^{n}}{\bar{n}!\,n!}\] (A.72) \[\times \sum_{\ell=0}^{j+n-m}\frac{(j+m-n+\ell)!}{(j+n-m-\ell)!}\frac{\|\delta\|^{2\ell}\,\delta^{*\,(\bar{n}-n)}}{(\ell+\bar{n}-n)!\,\ell!}\sum_{k=0}^{min(\bar{n},n)}\binom{\bar{n}}{k}\binom{n}{k}\,k!\,\beta^{(\bar{n}-k)}\beta^{*\,(n-k)},\]
where \(min\,(\bar{n},n)\) denotes the smaller of \(\bar{n}\) and \(n\).
## Appendix B su(2) algebra eigenstates, a useful operator
Let us recall that, in the case \(\beta_{-}\neq 0,\beta_{+}\neq 0\) and for arbitrary \(\beta_{3}\), such that
\[b=\sqrt{4\beta_{+}\beta_{-}+\beta_{3}^{2}}\neq 0,\] (B.1)
in the process of computing the \(su(2)\) algebra eigenstates, we have built an operator \(\hat{T}\) with the following characteristics:
\[\hat{T}=\exp\left(-\frac{\tilde{\theta}}{2}[e^{-i\tilde{\phi}}\hat{J}_{+}-e^{i\tilde{\phi}}\hat{J}_{-}]\right),\] (B2)
where
\[\frac{\tilde{\theta}}{2}=\arctan\left(\sqrt{\frac{b-\beta_{3}}{b+\beta_{3}}}\right),\quad\text{and}\quad e^{i\tilde{\phi}}=\sqrt{\frac{\beta_{+}}{\beta_{-}}},\] (B3)
so that, applied to a pure state \(\mid j,m\rangle\) of the irreducible representation \(j\), it gives as a result an eigenstate of the general operator \([\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}]\) associated with the eigenvalue \(mb\), that is,
\[[\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}]\big{(}\hat{T}\mid j,m\rangle\big{)}=mb\left(\hat{T}\mid j,m\rangle\right).\] (B4)
An explicit and disentangled form of it is given by
\[\hat{T}=\exp\left(-\frac{2\beta_{-}}{b+\beta_{3}}\hat{J}_{+}\right)\exp\left(\log\left(\frac{2b}{b+\beta_{3}}\right)\hat{J}_{3}\right)\exp\left(\frac{2\beta_{+}}{b+\beta_{3}}\hat{J}_{-}\right).\] (B5)
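As a consistency check in the defining representation \(j=\frac{1}{2}\), where \(\hat{J}_{\pm}\) and \(\hat{J}_{3}\) act in the basis \(\{\mid\frac{1}{2},\frac{1}{2}\rangle,\mid\frac{1}{2},-\frac{1}{2}\rangle\}\), the operator appearing in (B4) is the matrix

\[\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}=\begin{pmatrix}\frac{\beta_{3}}{2}&\beta_{-}\\ \beta_{+}&-\frac{\beta_{3}}{2}\end{pmatrix},\]

whose characteristic polynomial is \(\lambda^{2}-\frac{1}{4}(\beta_{3}^{2}+4\beta_{+}\beta_{-})\); its eigenvalues are therefore \(mb\) with \(m=\pm\frac{1}{2}\) and \(b=\sqrt{4\beta_{+}\beta_{-}+\beta_{3}^{2}}\), exactly as (B4) asserts, with the corresponding columns of \(\hat{T}\) providing the eigenvectors.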
Then, the action of \(\hat{T}\) on a pure state \(\mid j,\ell\rangle\) is given explicitly by
\[\hat{T}\mid j,\ell\rangle=\left(\frac{2b}{b+\beta_{3}}\right)^{\ell}\exp\left(-\frac{2\beta_{-}}{b+\beta_{3}}\hat{J}_{+}\right)\exp\left(\frac{\beta_{+}}{b}\hat{J}_{-}\right)\mid j,\ell\rangle\] \[=\left(\frac{2b}{b+\beta_{3}}\right)^{\ell}\sqrt{\frac{(j+\ell)!}{(j-\ell)!}}\] \[\times\left[\sum_{u=\ell+1}^{j}\left(\frac{-2\beta_{-}}{b+\beta_{3}}\right)^{(u-\ell)}\sqrt{\frac{(j+u)!}{(j-u)!}}\sum_{n=0}^{(j+\ell)}(-1)^{n}\left(\frac{(1-\beta_{3}/b)}{2}\right)^{n}\frac{(j-\ell+n)!}{n!(n+u-\ell)!(j+\ell-n)!}\right.\] \[+\left.\sum_{u=-j}^{\ell}\left(\frac{\beta_{+}}{b}\right)^{(\ell-u)}\sqrt{\frac{(j+u)!}{(j-u)!}}\sum_{n=0}^{(j+u)}(-1)^{n}\left(\frac{(1-\beta_{3}/b)}{2}\right)^{n}\frac{(j-u+n)!}{n!(n+\ell-u)!(j+u-n)!}\right]\mid j,u\rangle,\]
which after some algebraic manipulations can be written in the form
\[\hat{T}\mid j,\ell\rangle=\left(\frac{2b}{b+\beta_{3}}\right)^{\ell}\] \[\times\left[\sqrt{\frac{(j+\ell)!(j-\ell)!}{(2j)!}}\sum_{u=\ell+1 }^{j}\sqrt{\frac{(2j)!}{(j+u)!(j-u)!}}\left(\frac{-2\beta_{-}}{b+\beta_{3}} \right)^{u-\ell}P_{j+\ell}^{-\ell+u,-\ell-u}\left(\frac{\beta_{3}}{b}\right)\right.\] \[+\left.\sqrt{\frac{(2j)!}{(j+\ell)!(j-\ell)!}}\sum_{u=-j}^{\ell} \sqrt{\frac{(j+u)!(j-u)!}{(2j)!}}\left(\frac{\beta_{+}}{b}\right)^{\ell-u}P_ {j+u}^{-u+\ell,-u-\ell}\left(\frac{\beta_{3}}{b}\right)\right]\mid j,u\rangle,\]
where
\[P_{j+u}^{-u+\ell,-u-\ell}\left(\frac{\beta_{3}}{b}\right)=\frac{(j+\ell)!}{(j -u)!}\sum_{n=0}^{(j+u)}(-1)^{n}\left(\frac{(1-\beta_{3}/b)}{2}\right)^{n}\frac{ (j-u+n)!}{n!(n+\ell-u)!(j+u-n)!}.\] (B7)
are the Jacobi polynomials.
We are interested in the matrix elements of \(\hat{T}\), that is
\[T_{m\ell}^{j}=\langle j,m\mid\hat{T}\mid j,\ell\rangle,\] (B8)
which are equal to
\[T^{j}_{m\ell}[\beta_{+},\beta_{-},\beta_{3}]=\begin{cases}\left(\frac{2b}{b+\beta_{3}}\right)^{\ell}\left(\frac{-2\beta_{-}}{b+\beta_{3}}\right)^{m-\ell}\sqrt{\frac{(j+\ell)!(j-\ell)!}{(j+m)!(j-m)!}}\,P^{-\ell+m,-\ell-m}_{j+\ell}\left(\frac{\beta_{3}}{b}\right),&\ell+1\leq m\leq j,\\ \left(\frac{2b}{b+\beta_{3}}\right)^{\ell}\left(\frac{\beta_{+}}{b}\right)^{\ell-m}\sqrt{\frac{(j+m)!(j-m)!}{(j+\ell)!(j-\ell)!}}\,P^{-m+\ell,-m-\ell}_{j+m}\left(\frac{\beta_{3}}{b}\right),&-j\leq m\leq\ell.\end{cases}\] (B9)
### Unitary case
In the special case when \(\|\beta_{+}\|=\|\beta_{-}\|\) and \(\beta_{3}\beta_{-}^{*}-\beta_{3}^{*}\beta_{+}=0\), these matrix elements correspond to the matrix elements of a unitary matrix. Indeed, taking these conditions into account in equation (B3), we realize that the parameters \(\tilde{\theta}\) and \(\tilde{\phi}\) become real, and then \(\hat{T}\) becomes unitary.
Now we will show that the unitary matrix \(T\), whose matrix elements are given in (B9), makes the normal matrix \(M\) given in (A.8) diagonal. Indeed, if we express the \(su(2)\) algebra eigenstate \(|\psi\rangle^{j}\) in terms of the \(2j+1\) basis states spanning the \(j\) irreducible representation space in the form \(|\psi\rangle^{j}=\sum_{m=-j}^{j}C^{j}_{m}\,|\,j,m\rangle\) and insert it in the equation
\[[\beta_{-}\hat{J}_{+}+\beta_{+}\hat{J}_{-}+\beta_{3}\hat{J}_{3}]\,|\,\psi \rangle^{j}=\Gamma\,|\,\psi\rangle^{j},\] (B10)
we can see that the algebraic linear equation system for the coefficients \(C^{j}_{m},-j\leq m\leq j\), can be written in the form
\[(\beta I-M)\begin{pmatrix}C^{j}_{-j}\\ C^{j}_{-j+1}\\ \vdots\\ C^{j}_{j-1}\\ C^{j}_{j}\end{pmatrix}=\Gamma\begin{pmatrix}C^{j}_{-j}\\ C^{j}_{-j+1}\\ \vdots\\ C^{j}_{j-1}\\ C^{j}_{j}\end{pmatrix},\] (B11)
where \(M\) is given by equation (A.8). It is clear from this last equation that the unitary matrix which diagonalizes \(\mathcal{M}=\beta I-M\) also diagonalizes \(M\).
As \(\hat{T}\,|\,j,u\rangle\), for each fixed \(u\) with \(u=-j,\cdots,j\), is a normalized \(su(2)\) algebra eigenstate verifying (B10) with eigenvalue \(\Gamma^{j}_{u}=ub\), we can write
\[|\,\psi\rangle^{j}_{u}=\sum_{\ell=-j}^{j}C^{j}_{\ell}\,|\,j,\ell\rangle=\hat{ T}|j,u\rangle,\] (B12)
Projecting both sides of equation (B12) on a generic pure state \(|\,j,\ell\rangle\) we obtain a connection between the \(C^{j}_{\ell}\) coefficients and the matrix elements of \(\hat{T}\)
\[C^{j}_{\ell}=T_{\ell u},\quad\ell=-j,\cdots,j.\] (B13)
On the other hand, from (B11) we can see that the explicit form of the coefficient equation system is given by
\[\sum_{\ell=-j}^{j}\mathcal{M}_{m\ell}C^{j}_{\ell}=\Gamma^{j}C^{j}_{m},\quad m =-j,\cdots,j,\] (B14)
which is verified by (B13) for a given \(u\), i.e.,
\[\sum_{\ell=-j}^{j}\mathcal{M}_{m\ell}T_{\ell u}=\Gamma^{j}_{u}T_{mu},\quad m =-j,\cdots,j,\quad\Gamma^{j}_{u}=ub.\] (B15)
Finally operating from the left with the unitary matrix \(T^{\dagger}\) on both sides of equation (B15) we obtain
\[\sum_{m=-j}^{j}\sum_{\ell=-j}^{j}T_{\rho m}^{\dagger}\mathcal{M}_{m\ell}T_{\ell u}=\Gamma^{j}_{u}\sum_{m=-j}^{j}T^{\dagger}{}_{\rho m}T_{mu}=\Gamma^{j}_{u}\delta_{\rho u},\] (B16)
for all chosen \(u\); we then conclude that \(T\) diagonalizes \(\mathcal{M}\), and consequently also diagonalizes \(M\).
In the case when \(M\) is not normal but diagonalizable, the same argument is valid; the only thing to change in the above reasoning is to replace \(T^{\dagger}\) by \(T^{-1}\).
|
2302.11109
|
A deformation of Asaeda-Przytycki-Sikora homology
|
We define a 1-parameter family of homology invariants for links in thickened
oriented surfaces. It recovers the homology invariant of
Asaeda-Przytycki-Sikora (arxiv:0409414) and the invariant defined by Winkeler
(arxiv:2106.03834). The new invariant can be regarded as a deformation of
Asaeda-Przytycki-Sikora homology; it is not a Lee-type deformation as the
deformation is only non-trivial when the surface is not simply connected. Our
construction is motivated by computations in singular instanton Floer homology.
We also prove a detection property for the new invariant, which is a stronger
result than the main theorem of arxiv:2208.13963.
|
Zhenkun Li, Yi Xie, Boyu Zhang
|
2023-02-22T03:08:28Z
|
http://arxiv.org/abs/2302.11109v1
|
# A deformation of Asaeda-Przytycki-Sikora homology
###### Abstract.
We define a \(1\)-parameter family of homology invariants for links in thickened oriented surfaces. It recovers the homology invariant of Asaeda-Przytycki-Sikora [1] and the invariant defined by Winkeler [21]. The new invariant can be regarded as a deformation of Asaeda-Przytycki-Sikora homology; it is not a Lee-type deformation as the deformation is only non-trivial when the surface is not simply connected. Our construction is motivated by computations in singular instanton Floer homology. We also prove a detection property for the new invariant, which is a stronger result than the main theorem of [1].
## 1. Introduction
Khovanov homology [14] is a link invariant that assigns a bi-graded homology group to every oriented link in \(\mathbb{R}^{3}\). Asaeda-Przytycki-Sikora [1] introduced a generalization of Khovanov homology for links in \((-1,1)\)-bundles over surfaces, where the bundles are required to be oriented as \(3\)-manifolds. Such \((-1,1)\)-bundles are called _thickened surfaces_. When the surface is an annulus, Asaeda-Przytycki-Sikora homology is also called _annular Khovanov homology_. Khovanov homology and Asaeda-Przytycki-Sikora homology have been essential tools for the study of knots and links for decades. More recently, Winkeler [21] introduced another variation of Khovanov homology for links in thickened multi-punctured disks, which is different from the invariant of Asaeda-Przytycki-Sikora.
Suppose \(\Sigma\) is an oriented surface. In this paper, we define a one-parameter family of homology invariants for oriented links in \((-1,1)\times\Sigma\). As bi-graded modules, the new invariant recovers both Asaeda-Przytycki-Sikora homology and the invariant of Winkeler, and it can be interpreted as a one-parameter deformation of Asaeda-Przytycki-Sikora homology. The deformation is not a Lee-type deformation as it is only non-trivial when the surface has a non-trivial fundamental group. The construction is motivated by computations from singular instanton Floer homology. We also use instanton Floer theory to prove a detection result for the deformed Asaeda-Przytycki-Sikora homology, which gives a stronger rank estimate than the main theorem of [1].
The paper is organized as follows. Section 2 introduces some notation and conventions. Sections 3 and 4 define the differential map and prove that \(d^{2}=0\). Section 5 defines the homology invariant and proves the invariance under Reidemeister moves. Section 6 explains the motivation from instanton Floer homology and proves the aforementioned detection result in Theorem 6.1.
## 2. Notation
Throughout this paper, we use \(R\) to denote a fixed commutative ring with unit. We use \(\Sigma\) to denote an oriented surface, possibly with boundary and possibly non-compact.
For every embedded closed \(1\)-manifold \(c\subset\Sigma\), we assign an \(R\)-module \(V(c)\) to \(c\) as follows:
1. If \(\gamma\) is a contractible simple closed curve on \(\Sigma\), define \(V(\gamma)\) to be the free \(R\)-module generated by \(\mathbf{v}(\gamma)_{+}\) and \(\mathbf{v}(\gamma)_{-}\), where \(\mathbf{v}(\gamma)_{+}\) and \(\mathbf{v}(\gamma)_{-}\) are formal generators associated with \(\gamma\).
2. If \(\gamma\) is a non-contractible simple closed curve, let \(\mathfrak{o}\), \(\mathfrak{o}^{\prime}\) be the two orientations of \(\gamma\). Define \(V(\gamma)\) to be the free module generated by \(\mathbf{v}(\gamma)_{\mathfrak{o}}\) and \(\mathbf{v}(\gamma)_{\mathfrak{o}^{\prime}}\), where \(\mathbf{v}(\gamma)_{\mathfrak{o}}\) and \(\mathbf{v}(\gamma)_{\mathfrak{o}^{\prime}}\) are formal generators.
3. In general, suppose the connected components of \(c\) are \(\gamma_{1},\dots,\gamma_{k}\), define \(V(c)\) to be \(\otimes_{i=1}^{k}V(\gamma_{i})\).
When the choice of \(\Sigma\) needs to be emphasized, we will write \(V(c)\) as \(V^{\Sigma}(c)\), and write \(\mathbf{v}(\gamma)_{\mathfrak{o}}\), \(\mathbf{v}(\gamma)_{\pm}\) respectively as \(\mathbf{v}^{\Sigma}(\gamma)_{\mathfrak{o}}\), \(\mathbf{v}^{\Sigma}(\gamma)_{\pm}\).
If \(\mathfrak{o}\) is an orientation of a curve \(\gamma\), we use \(\gamma_{\mathfrak{o}}\) to denote the corresponding oriented curve.
## 3. Band surgery homomorphisms
Suppose \(c\) is an embedded closed \(1\)-manifold on \(\Sigma\), suppose \(b\) is an embedded disk on \(\Sigma\) such that the interior of \(b\) is disjoint from \(c\) and the boundary of \(b\) intersects \(c\) at two arcs (see Figure 1). The surgery of \(c\) along \(b\) yields another embedded closed \(1\)-manifold on \(\Sigma\), which we denote by \(c_{b}\). We will call the disk \(b\) a _band_ that is _attached_ to \(c\).
For later reference, we record the following two elementary lemmas.
**Lemma 3.1**.: _The change from \(c\) to \(c_{b}\) has three possibilities:_
1. _two circle components of_ \(c\) _are merged to one circle,_
2. _one circle component of_ \(c\) _is split to two circles,_
3. _one circle component of_ \(c\) _is modified by the surgery to another circle._
Proof.: Since \(\partial b\cap c\) contains two arcs, at most two components of \(c\) are affected by the surgery. If the two arcs of \(\partial b\cap c\) lie on two different components of \(c\), then the surgery merges these two components into one circle. If the two arcs of \(\partial b\cap c\) lie on one component of \(c\), then the boundary orientation of \(b\) defines an orientation on both components of \(\partial b\cap c\), so we have two oriented arcs embedded in one component \(\gamma\) of \(c\). If these two arcs induce the same orientation on \(\gamma\), then the surgery splits one component of \(c\) into two circles. If these two arcs induce opposite orientations on \(\gamma\), then the surgery changes this component to another circle.

Figure 1. Band surgery
Recall that if \(\mathfrak{o}\) is an orientation of a curve \(\gamma\), we use \(\gamma_{\mathfrak{o}}\) to denote the corresponding oriented curve.
**Lemma 3.2**.: _Suppose \(\gamma\) is a simple closed curve on a connected surface \(\Sigma\), and assume \(\Sigma\) is not diffeomorphic to \(S^{2}\). Suppose \(\mathfrak{o}\), \(\mathfrak{o}^{\prime}\) are the two orientations of \(\gamma\). Then \(\gamma_{\mathfrak{o}}\) and \(\gamma_{\mathfrak{o}^{\prime}}\) are not isotopic on \(\Sigma\)._
Proof.: If \(\gamma\) is non-separating, there exists an oriented simple closed curve \(\beta\) such that the algebraic intersection number of \(\beta\) and \(\gamma\) is non-zero. Since isotopies preserve the sign of algebraic intersection numbers, the desired result follows.
If \(\gamma\) is separating and \(\partial\Sigma\neq\emptyset\), then every orientation of \(\gamma\) defines an ordering of the two components of \(\Sigma\backslash\gamma\), which defines an ordered partition of the components of \(\partial\Sigma\). Since every isotopy of \(\gamma\) on \(\Sigma\) can be extended to an isotopy of \(\Sigma\) fixing the boundary, the desired result is proved.
If \(\gamma\) is separating and \(\Sigma\) is closed, then every orientation of \(\gamma\) defines an ordering of the two components of \(\Sigma\backslash\gamma\). Suppose \(\Sigma_{1}\) and \(\Sigma_{2}\) are the two components of \(\Sigma\backslash\gamma\) ordered by an orientation \(\mathfrak{o}\) of \(\gamma\). Since \(\Sigma\) is not a sphere, the images of \(H_{1}(\Sigma_{1};\mathbb{Z})\) and \(H_{1}(\Sigma_{2};\mathbb{Z})\) are distinct in \(H_{1}(\Sigma;\mathbb{Z})\). The images of \(H_{1}(\Sigma_{1};\mathbb{Z})\) and \(H_{1}(\Sigma_{2};\mathbb{Z})\) are invariant under isotopies of \(\gamma_{\mathfrak{o}}\), so the desired result is proved.
Take an arbitrary element \(\lambda\in R\); we define a homomorphism
\[T_{\lambda}(b):V(c)\to V(c_{b})\]
associated with the band surgery along \(b\). When the choice of \(\Sigma\) needs to be emphasized, we will write \(T_{\lambda}(b)\) as \(T_{\lambda}^{\Sigma}(b)\).
We first assume that the intersection of \(\partial b\) with every component of \(c\) is non-empty. The general case will be discussed later. By Lemma 3.1, if the intersection of \(\partial b\) with every component of \(c\) is non-empty, then there are three cases:
**Case 1**: \(c\) has two components \(\gamma_{1}\) and \(\gamma_{2}\) and they are merged into one circle \(\gamma=c_{b}\) after the surgery. In this case, we define \(T_{\lambda}(b):V(\gamma_{1})\otimes V(\gamma_{2})\to V(\gamma)\) as follows:
1. If both \(\gamma_{1}\) and \(\gamma_{2}\) are contractible circles, then \(\gamma\) is also contractible, and we define \(T_{\lambda}(b)\) by \[\mathbf{v}(\gamma_{1})_{+}\otimes\mathbf{v}(\gamma_{2})_{+} \mapsto\mathbf{v}(\gamma)_{+}, \mathbf{v}(\gamma_{1})_{+}\otimes\mathbf{v}(\gamma_{2})_{-} \mapsto\mathbf{v}(\gamma)_{-},\] \[\mathbf{v}(\gamma_{1})_{-}\otimes\mathbf{v}(\gamma_{2})_{+} \mapsto\mathbf{v}(\gamma)_{-}, \mathbf{v}(\gamma_{1})_{-}\otimes\mathbf{v}(\gamma_{2})_{-} \mapsto 0.\]
2. If \(\gamma_{1}\) is contractible and \(\gamma_{2}\) is non-contractible, then \(\gamma_{2}\) is isotopic to \(\gamma\). The existence of non-contractible curves on \(\Sigma\) implies that \(\Sigma\) is not diffeomorphic to \(S^{2}\). By Lemma 3.2, the orientations of \(\gamma_{2}\) are canonically identified with the orientations
of \(\gamma\) via an isotopy. This identification defines a canonical isomorphism from \(V(\gamma_{2})\) to \(V(\gamma)\), which we denote by \(\iota\). In this case, the homomorphism \(T_{\lambda}(b)\) is defined by \[\mathbf{v}(\gamma_{1})_{+}\otimes x\mapsto\iota(x),\quad\mathbf{v}(\gamma_{1})_ {-}\otimes x\mapsto 0\] for all \(x\in V(\gamma_{2})\).
3. If \(\gamma_{1}\) is non-contractible and \(\gamma_{2}\) is contractible, define \(T_{\lambda}(b)\) by requiring the map to be symmetric with respect to \(\gamma_{1}\) and \(\gamma_{2}\), reducing to case (2) above.
4. If \(\gamma_{1}\) and \(\gamma_{2}\) are both non-contractible and \(\gamma\) is contractible, then \(\gamma_{1}\) and \(\gamma_{2}\) must be isotopic. By Lemma 3.2, the orientations of \(\gamma_{1}\) and \(\gamma_{2}\) are canonically identified by the isotopy. Let \(\mathfrak{o}\), \(\mathfrak{o}^{\prime}\) be the two orientations of \(\gamma_{1}\), and use the same notation to denote the corresponding orientations of \(\gamma_{2}\). The map \(T_{\lambda}(b)\) is then defined by \[\mathbf{v}(\gamma_{1})_{\mathfrak{o}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}} \mapsto 0, \mathbf{v}(\gamma_{1})_{\mathfrak{o}^{\prime}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}^{\prime}}\mapsto 0,\] \[\mathbf{v}(\gamma_{1})_{\mathfrak{o}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}^{\prime}} \mapsto\mathbf{v}(\gamma)_{-}, \mathbf{v}(\gamma_{1})_{\mathfrak{o}^{\prime}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}}\mapsto\mathbf{v}(\gamma)_{-}.\]
5. If all of \(\gamma_{1}\), \(\gamma_{2}\), and \(\gamma\) are non-contractible, let \(N\) be the regular neighborhood of \(b\cup\gamma_{1}\cup\gamma_{2}\). Then \(N\) is a sphere with three disks removed, and the three boundary components of \(N\) are parallel to \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma\). Since \(N\subset\Sigma\) is oriented, the boundary orientation of \(N\) defines an orientation on each of \(\gamma_{1},\gamma_{2},\gamma\), and we denote these orientations by \(\mathfrak{o}_{1},\mathfrak{o}_{2},\mathfrak{o}\) respectively. Denote their opposite orientations by \(\mathfrak{o}_{1}^{\prime},\mathfrak{o}_{2}^{\prime},\mathfrak{o}^{\prime}\). Then \(T_{\lambda}(b)\) is defined by \[\mathbf{v}(\gamma_{1})_{\mathfrak{o}_{1}^{\prime}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}_{2}^{\prime}} \mapsto\lambda\cdot\mathbf{v}(\gamma)_{\mathfrak{o}}, \mathbf{v}(\gamma_{1})_{\mathfrak{o}_{1}^{\prime}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}_{2}} \mapsto 0,\] \[\mathbf{v}(\gamma_{1})_{\mathfrak{o}_{1}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}_{2}^{\prime}} \mapsto 0, \mathbf{v}(\gamma_{1})_{\mathfrak{o}_{1}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}_{2}} \mapsto 0.\]
**Case 2**: \(c\) contains one component \(\gamma\) and \(c_{b}\) has two components \(\gamma_{1}\) and \(\gamma_{2}\). In this case, we define \(T_{\lambda}(b):V(\gamma)\to V(\gamma_{1})\otimes V(\gamma_{2})\) as follows:
1. If \(\gamma_{1}\) and \(\gamma_{2}\) are both contractible circles, then \(\gamma\) is also contractible, and we define \(T_{\lambda}(b)\) by \[\mathbf{v}(\gamma)_{+} \mapsto\mathbf{v}(\gamma_{1})_{+}\otimes\mathbf{v}(\gamma_{2})_ {-}+\mathbf{v}(\gamma_{1})_{-}\otimes\mathbf{v}(\gamma_{2})_{+},\] \[\mathbf{v}(\gamma)_{-} \mapsto\mathbf{v}(\gamma_{1})_{-}\otimes\mathbf{v}(\gamma_{2})_ {-}.\]
2. If one of \(\{\gamma_{1},\gamma_{2}\}\) is contractible and the other is non-contractible, assume without loss of generality that \(\gamma_{1}\) is contractible and \(\gamma_{2}\) is non-contractible. Then \(\gamma\) is isotopic to \(\gamma_{2}\), and the orientations of \(\gamma\) and \(\gamma_{2}\) are canonically identified. Let \(\mathfrak{o}\), \(\mathfrak{o}^{\prime}\) be the two orientations of \(\gamma\), and use the same notation to denote the corresponding orientations of \(\gamma_{2}\). Define the map \(T_{\lambda}(b)\) by \[\mathbf{v}(\gamma)_{\mathfrak{o}}\mapsto\mathbf{v}(\gamma_{1})_{-}\otimes \mathbf{v}(\gamma_{2})_{\mathfrak{o}},\quad\mathbf{v}(\gamma)_{\mathfrak{o}^{ \prime}}\mapsto\mathbf{v}(\gamma_{1})_{-}\otimes\mathbf{v}(\gamma_{2})_{ \mathfrak{o}^{\prime}}.\]
3. If both \(\gamma_{1}\) and \(\gamma_{2}\) are non-contractible and \(\gamma\) is contractible, then \(\gamma_{1}\), \(\gamma_{2}\) are isotopic to each other, and the orientations of \(\gamma_{1}\) and \(\gamma_{2}\) are canonically identified. Let \(\mathfrak{o},\mathfrak{o}^{\prime}\) be the orientations of \(\gamma_{1}\) and use the same notation for the orientations of \(\gamma_{2}\). Define the map \(T_{\lambda}(b)\) by \[\mathbf{v}(\gamma)_{+}\mapsto\mathbf{v}(\gamma_{1})_{\mathfrak{o}}\otimes \mathbf{v}(\gamma_{2})_{\mathfrak{o}^{\prime}}+\mathbf{v}(\gamma_{1})_{ \mathfrak{o}^{\prime}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}},\quad \mathbf{v}(\gamma)_{-}\mapsto 0.\]
4. If all of \(\gamma,\gamma_{1},\gamma_{2}\) are non-contractible, let \(N\) be the regular neighborhood of \(b\cup\gamma\). Then \(N\) is a sphere with three disks removed, and the three boundary components of \(N\) are parallel to \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma\). The boundary orientation of \(N\) defines an orientation on each of \(\gamma_{1},\gamma_{2},\gamma\), and we denote them by \(\mathfrak{o}_{1},\mathfrak{o}_{2},\mathfrak{o}\) respectively. Denote their opposite orientations by \(\mathfrak{o}^{\prime}_{1},\mathfrak{o}^{\prime}_{2},\mathfrak{o}^{\prime}\). Define the map \(T_{\lambda}(b)\) by \[\mathbf{v}(\gamma)_{\mathfrak{o}^{\prime}}\mapsto\lambda\cdot\mathbf{v}( \gamma_{1})_{\mathfrak{o}_{1}}\otimes\mathbf{v}(\gamma_{2})_{\mathfrak{o}_{2}},\quad\mathbf{v}(\gamma)_{\mathfrak{o}}\mapsto 0.\]
**Case 3**: both \(c\) and \(c_{b}\) have exactly one component. In this case, define \(T_{\lambda}(b)\) to be zero.
In general, suppose \(c=c^{(1)}\sqcup c^{(2)}\), where \(\partial b\) is disjoint from \(c^{(2)}\) and intersects every component of \(c^{(1)}\). We define the band surgery homomorphism \(T_{\lambda}(b):V_{\lambda}(c)\to V_{\lambda}(c_{b})\) to be
\[T_{\lambda}(b)=T_{\lambda}(b)|_{V(c^{(1)})}\otimes\operatorname{id}|_{V(c^{(2 )})}. \tag{3.1}\]
_Remark 3.3_.: In the above definition, the coefficient \(\lambda\) only appeared in Cases 1(5) and 2(4).
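For readers who prefer to experiment with these maps, the contractible cases can be encoded directly. The following Python sketch (our own illustration, not part of the construction; the names and basis ordering are ad hoc) implements the merge and split maps of Cases 1(1) and 2(1) — the multiplication and comultiplication of the Frobenius algebra \(R[X]/(X^{2})\) with \(\mathbf{v}(\gamma)_{+}=1\) and \(\mathbf{v}(\gamma)_{-}=X\) — and checks, in the configuration where one band merges \(\gamma_{1},\gamma_{2}\) and a disjoint band splits \(\gamma_{2}\), that the two orders of surgery give the same map. This is an instance of the commutativity proved in the next section.

```python
import numpy as np

# V(gamma) = span{v_plus, v_minus}; basis order (v+, v-).
# merge: V(g1) (x) V(g2) -> V(g); column order (++, +-, -+, --)
merge = np.array([[1, 0, 0, 0],    # v+ v+ -> v+
                  [0, 1, 1, 0]])   # v+ v-, v- v+ -> v-; v- v- -> 0
# split: V(g) -> V(g1) (x) V(g2)
split = np.array([[0, 0],
                  [1, 0],          # v+ -> v+ v- + v- v+
                  [1, 0],
                  [0, 1]])         # v- -> v- v-

I2 = np.eye(2)
# merge gamma_1 with gamma_2 first, then split the resulting circle
lhs = split @ merge
# split gamma_2 first, then merge gamma_1 with the first new circle
rhs = np.kron(merge, I2) @ np.kron(I2, split)
print(np.allclose(lhs, rhs))  # True: the two band surgeries commute
```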
## 4. Commutativity of band surgery homomorphisms
The main result of this section is the following proposition.
**Proposition 4.1**.: _Suppose \(c\) is an embedded closed \(1\)-manifold on \(\Sigma\), and suppose \(b_{1}\) and \(b_{2}\) are two disjoint bands attached to \(c\). Then for all \(\lambda\in R\),_
\[T_{\lambda}(b_{1})\circ T_{\lambda}(b_{2})=T_{\lambda}(b_{2})\circ T_{\lambda }(b_{1}). \tag{4.1}\]
### The genus-zero case
We first establish (4.1) when \(\Sigma\) is a sphere or a finitely punctured sphere. Our argument here is inspired by the work of Winkeler [20].
**Lemma 4.2**.: _Equation (4.1) holds if \(\Sigma\) is a sphere or a finitely punctured sphere._
Proof.: If \(\Sigma\) is a sphere or a disk, then every curve is contractible, and Case (3) in Lemma 3.1 is not possible. In this case, our definition of \(T_{\lambda}(b)\) does not depend on \(\lambda\) and it coincides with the definition of the merge and split maps in standard Khovanov theory. Therefore Equation (4.1) holds.
When \(\Sigma\) has \(n\geq 2\) boundary components, we view \(\Sigma\) as a disk \(B\) with \(n-1\) interior disks \(B_{1},\dots,B_{n-1}\) removed. Assume the orientation of \(\Sigma\) is defined so that the boundary orientation on \(\partial B\) is given by the counter-clockwise orientation, and the boundary orientation on \(\partial B_{i}\) is the clockwise orientation.
Recall that when the surface \(\Sigma\) needs to be emphasized, we write \(V(c)\), \(\mathbf{v}(\gamma)_{\mathfrak{o}}\), \(\mathbf{v}(\gamma)_{\pm}\), \(T_{\lambda}(b)\) respectively as \(V^{\Sigma}(c)\), \(\mathbf{v}^{\Sigma}(\gamma)_{\mathfrak{o}}\), \(\mathbf{v}^{\Sigma}(\gamma)_{\pm}\), \(T_{\lambda}^{\Sigma}(b)\).
For each embedded closed \(1\)-manifold \(c\subset\Sigma\), define an isomorphism \(\Phi:V^{B}(c)\to V^{\Sigma}(c)\) as follows. For each component \(\gamma\) of \(c\), if \(\gamma\) is contractible in \(\Sigma\), define
\[\Phi(\mathbf{v}^{B}(\gamma)_{\pm})=\mathbf{v}^{\Sigma}(\gamma)_{\pm}.\]
If \(\gamma\) is non-contractible in \(\Sigma\), let \(\mathfrak{o}\) denote the counter-clockwise orientation of \(\gamma\), let \(\mathfrak{o}^{\prime}\) denote the clockwise orientation of \(\gamma\), and define
\[\Phi(\mathbf{v}^{B}(\gamma)_{+})=\mathbf{v}^{\Sigma}(\gamma)_{\mathfrak{o}}, \quad\Phi(\mathbf{v}^{B}(\gamma)_{-})=\mathbf{v}^{\Sigma}(\gamma)_{\mathfrak{o}^ {\prime}}.\]
Since \(T^{B}_{\lambda}(b)\) does not depend on \(\lambda\), we denote it by \(T^{B}(b)\). Then
\[\Phi\circ T^{B}(b)\circ\Phi^{-1}\]
is a homomorphism from \(V^{\Sigma}(c)\) to \(V^{\Sigma}(c_{b})\).
For each \(i\in\{1,\ldots,n-1\}\), define a grading on \(V^{\Sigma}(c)\) as follows. If a circle \(\gamma\) is a contractible curve on \(\Sigma\), define the degree of \(\mathbf{v}^{\Sigma}(\gamma)_{\pm}\) to be zero. If \(\gamma\) is non-contractible, for each orientation \(\mathfrak{o}\) of \(\gamma\), define the degree of \(\mathbf{v}^{\Sigma}(\gamma)_{\mathfrak{o}}\) to be the rotation number of \(\gamma_{\mathfrak{o}}\) around \(B_{i}\). Here, our convention on the rotation number is defined so that counterclockwise orientations always have non-negative rotation numbers. Define the grading of the tensor product of a set of generators to be the sum of the grading of each generator.
By checking all the cases in the definition of \(T_{\lambda}(b)\), it is straightforward to verify that the map \(T^{\Sigma}_{\lambda}(b)\) preserves all the \(n-1\) gradings defined above. Moreover, for each \(i\in\{1,\ldots,n-1\}\), the map \(\Phi\circ T^{B}(b)\circ\Phi^{-1}\) does not increase the \(i^{th}\) grading. The component of \(\Phi\circ T^{B}(b)\circ\Phi^{-1}\) that preserves all the \(n-1\) gradings is equal to the map \(T^{\Sigma}_{1}(b)\), namely the map \(T^{\Sigma}_{\lambda}(b)\) with \(\lambda=1\). Since \(T^{B}(b_{1})\circ T^{B}(b_{2})=T^{B}(b_{2})\circ T^{B}(b_{1})\) on \(B\), we conclude that (4.1) holds for \(T^{\Sigma}_{1}\).
To show that (4.1) holds for general \(\lambda\), define \(T^{\Sigma}_{\delta}=T^{\Sigma}_{1}-T^{\Sigma}_{0}\). Then
\[T^{\Sigma}_{\lambda}=T^{\Sigma}_{0}+\lambda\cdot T^{\Sigma}_{\delta}.\]
We define another grading on \(V^{\Sigma}(-)\) as follows. If a circle \(\gamma\) is a contractible curve on \(\Sigma\), define the degree of \(\mathbf{v}^{\Sigma}(\gamma)_{\pm}\) to be zero. If \(\gamma\) is non-contractible, for each orientation \(\mathfrak{o}\) of \(\gamma\), define the degree of \(\mathbf{v}^{\Sigma}(\gamma)_{\mathfrak{o}}\) to be \(1\) if \(\mathfrak{o}\) is the counter-clockwise orientation, and define the degree of \(\mathbf{v}^{\Sigma}(\gamma)_{\mathfrak{o}}\) to be \(-1\) if \(\mathfrak{o}\) is the clockwise orientation. Define the grading of the tensor product of a set of generators to be the sum of the grading of each generator.
By checking all the cases in the definition of \(T^{\Sigma}_{\lambda}\), it is straightforward to verify that under the above grading, the map \(T^{\Sigma}_{0}\) is homogeneous with degree \(0\), and \(T^{\Sigma}_{\delta}\) is homogeneous with degree \(-1\). Since (4.1) holds for \(\lambda=1\), we have
\[T^{\Sigma}_{0}(b_{1})\circ T^{\Sigma}_{0}(b_{2}) =T^{\Sigma}_{0}(b_{2})\circ T^{\Sigma}_{0}(b_{1})\] \[T^{\Sigma}_{\delta}(b_{1})\circ T^{\Sigma}_{0}(b_{2})+T^{\Sigma }_{0}(b_{1})\circ T^{\Sigma}_{\delta}(b_{2}) =T^{\Sigma}_{\delta}(b_{2})\circ T^{\Sigma}_{0}(b_{1})+T^{\Sigma}_{ 0}(b_{2})\circ T^{\Sigma}_{\delta}(b_{1})\] \[T^{\Sigma}_{\delta}(b_{1})\circ T^{\Sigma}_{\delta}(b_{2}) =T^{\Sigma}_{\delta}(b_{2})\circ T^{\Sigma}_{\delta}(b_{1})\]
Expanding \(T^{\Sigma}_{\lambda}(b_{j})=T^{\Sigma}_{0}(b_{j})+\lambda\cdot T^{\Sigma}_{\delta}(b_{j})\) and applying the three identities above to the coefficient of each power of \(\lambda\) shows that \(T^{\Sigma}_{\lambda}(b_{1})\circ T^{\Sigma}_{\lambda}(b_{2})\) is symmetric in \(b_{1}\) and \(b_{2}\). Therefore (4.1) holds for all \(\lambda\in R\).
Lemma 4.2 can be used to verify (4.1) on surfaces with positive genera because of the following lemma.
**Lemma 4.3**.: _Suppose \(\Sigma\) is an oriented surface, and \(\Sigma^{\prime}\subset\Sigma\) is an embedded surface whose orientation is induced by \(\Sigma\). Suppose the embedding of \(\Sigma^{\prime}\) in \(\Sigma\) is \(\pi_{1}\)-injective. Suppose \(c\) is an embedded closed \(1\)-manifold in \(\Sigma^{\prime}\) and \(b_{1},b_{2}\) are two disjoint bands in \(\Sigma^{\prime}\) attached to \(c\). Then_
\[T^{\Sigma^{\prime}}_{\lambda}(b_{1})\circ T^{\Sigma^{\prime}}_{\lambda}(b_{2}) =T^{\Sigma^{\prime}}_{\lambda}(b_{2})\circ T^{\Sigma^{\prime}}_{\lambda}(b_{1})\]
_on \(V^{\Sigma^{\prime}}(c)\) if and only if_
\[T^{\Sigma}_{\lambda}(b_{1})\circ T^{\Sigma}_{\lambda}(b_{2})=T^{\Sigma}_{\lambda}(b_{2})\circ T^{\Sigma}_{\lambda}(b_{1})\]
_on \(V^{\Sigma}(c)\)._
Proof.: Since the embedding of \(\Sigma^{\prime}\) in \(\Sigma\) is \(\pi_{1}\)-injective, there is a canonical isomorphism from \(V^{\Sigma^{\prime}}(c)\) to \(V^{\Sigma}(c)\) for every embedded \(1\)-manifold \(c\subset\Sigma^{\prime}\) which takes the generators of \(V^{\Sigma^{\prime}}(c)\) to the corresponding generators of \(V^{\Sigma}(c)\), and this isomorphism intertwines \(T_{\lambda}^{\Sigma^{\prime}}\) with \(T_{\lambda}^{\Sigma}\), so the lemma is proved.
### The genus-one case
Now we prove Proposition 4.1 when \(\Sigma\) is a torus or a finitely punctured torus. Let \(\Sigma_{0}\) be a torus and suppose \(\Sigma=\Sigma_{0}\backslash\{p_{1},\dots,p_{n}\}\) with \(n\geqslant 0\). Let \(c,b_{1},b_{2}\) be as in Proposition 4.1. By the definition of \(T_{\lambda}\), we may assume without loss of generality that every component of \(c\) intersects \(\partial(b_{1}\cup b_{2})\) non-trivially.
**Lemma 4.4**.: _Assume every simple closed curve \(\gamma_{0}\subset\Sigma_{0}\) that is disjoint from \(c\cup b_{1}\cup b_{2}\) is contractible in \(\Sigma_{0}\). Then up to orientation-preserving diffeomorphisms of \(\Sigma_{0}\), there are only 8 possible configurations of \(c,b_{1},b_{2}\) as subsets of \(\Sigma_{0}\), which are shown in Figure 2._
In each case of Figure 2, the torus \(\Sigma_{0}\) is the quotient space obtained by gluing the two boundary components of the annulus. The blue curves denote the \(1\)-manifold \(c\), and the bands \(b_{1}\) and \(b_{2}\) are defined to be the thickening of the red arcs.
Proof.: We discuss the following cases:
If \(c\) contains two circles \(\gamma_{1}\), \(\gamma_{2}\), and both of them are contractible, let \(D_{1},D_{2}\subset\Sigma\) denote the disks bounded by \(\gamma_{1}\), \(\gamma_{2}\). Then \(D_{1}\cup D_{2}\cup b_{1}\cup b_{2}\) is a disk or an annulus, and hence there exists a circle \(\gamma_{0}\) in the complement of \(c\cup b_{1}\cup b_{2}\) that is non-contractible, contradicting the assumptions.
If \(c\) contains two circles \(\gamma_{1}\), \(\gamma_{2}\), such that both \(\gamma_{1}\) and \(\gamma_{2}\) are non-contractible, then \(\gamma_{1}\) and \(\gamma_{2}\) must be parallel to each other. The complement \(\Sigma_{0}\backslash(\gamma_{1}\cup\gamma_{2})\) contains two components. If every simple closed curve in \(\Sigma_{0}\backslash(c\cup b_{1}\cup b_{2})\) is contractible in \(\Sigma_{0}\), then the interior of \(b_{1}\) and \(b_{2}\) must be contained in different components of \(\Sigma_{0}\backslash(\gamma_{1}\cup\gamma_{2})\), and \(\partial b_{i}\) must intersect both components of \(c\) for each \(i\). Therefore, up to orientation-preserving diffeomorphisms of \(\Sigma_{0}\), the configuration is given by Case (1) of Figure 2.
If \(c\) contains two circles \(\gamma_{1}\), \(\gamma_{2}\), where \(\gamma_{1}\) is contractible and \(\gamma_{2}\) is not contractible, let \(D_{1}\) be the disk bounded by \(\gamma_{1}\). If either \(b_{1}\) or \(b_{2}\) is contained in \(D_{1}\), then \(D_{1}\cup c\cup b_{1}\cup b_{2}\) deformation retracts onto \(\gamma_{2}\), so there exists a non-contractible simple closed curve in \(\Sigma_{0}\) that is disjoint from \(D_{1}\cup c\cup b_{1}\cup b_{2}\), which contradicts the assumptions. Therefore, both \(b_{1}\) and \(b_{2}\) must be on the outside of \(D_{1}\), so \(b_{1}\cup D_{1}\cup b_{2}\) deformation retracts onto an arc with both end points on \(\gamma_{2}\). The assumptions then imply that \(c\cup b_{1}\cup b_{2}\) is given by Case (2) of Figure 2 up to orientation-preserving diffeomorphisms of \(\Sigma_{0}\).
If \(c\) consists of one simple closed curve \(\gamma\) that is contractible in \(\Sigma_{0}\), let \(D\) be the disk bounded by \(\gamma\), then \(b_{1}\) and \(b_{2}\) must be the thickening of two disjoint arcs \(r_{1}\) and \(r_{2}\) in \(\Sigma_{0}\backslash D\). For \(i=1,2\), let \(\overline{r_{i}}\) be the circle obtained by the union of \(r_{i}\) with an arc in \(D\). Since \(r_{1}\) and \(r_{2}\) are disjoint arcs, we may choose the arcs in \(D\) so that \(\overline{r_{1}}\) and \(\overline{r_{2}}\) are either disjoint or intersect transversely at one point. The assumptions then imply that \(\overline{r_{1}}\) and \(\overline{r_{2}}\) must intersect transversely at one point. Hence the configuration is given by Case (3) of Figure 2 up to orientation-preserving diffeomorphisms of \(\Sigma_{0}\).
If \(c\) consists of one non-contractible simple closed curve, then the possible configurations are given by Cases (4)-(8) of Figure 2.
**Lemma 4.5**.: _Equation (4.1) holds if \(\Sigma\) is a torus or a finitely punctured torus._
Proof.: If there exists a non-contractible simple closed curve \(\gamma_{0}\subset\Sigma_{0}\) that is disjoint from \(c\cup b_{1}\cup b_{2}\), we may cut open \(\Sigma_{0}\) along \(\gamma_{0}\), and the desired result follows from Lemma 4.2 and Lemma 4.3. Therefore, by Lemma 4.4, we only need to consider the 8 cases given by Figure 2.
In cases (2), (4), (5), (6), (7), (8), both sides of (4.1) are zero because Case (3) of Lemma 3.1 appears on both sides of the equations.
For Cases (1) and (3), the complement \(\Sigma\backslash(c\cup b_{1}\cup b_{2})\) has two connected components. Therefore, by Lemma 4.3 again, we only need to consider the cases when there is at most one puncture on each component.
Recall that \(n\) denotes the number of punctures on \(\Sigma_{0}\). For Case (1) with \(n=0\) or \(2\), and for Case (3), there is an orientation-preserving diffeomorphism of \(\Sigma_{0}\) that preserves \(c\) and \(\Sigma\), is orientation-preserving on \(c\), and switches \(b_{1}\) and \(b_{2}\). Therefore (4.1) holds.
For Case (1) with \(n=1\), it is straightforward to verify that both sides of (4.1) are zero.
### Proof of Proposition 4.1
Now we prove Proposition 4.1 for the general case.
Without loss of generality, we may assume that every component of \(c\) intersects \(b_{1}\) and \(b_{2}\) non-trivially, and that \(c\cup b_{1}\cup b_{2}\) is connected.
In this case, \(c\cup b_{1}\cup b_{2}\) is homotopy equivalent to the wedge sum of three circles. Therefore its Euler characteristic is \(-2\).
Figure 2. All possible configurations
Let \(N\) be a closed regular neighborhood of \(c\cup b_{1}\cup b_{2}\) in \(\Sigma\). Let \(\Sigma^{\prime}\) be obtained from \(N\) as follows: for each component \(\gamma\) of \(\partial N\), if \(\gamma\) is contractible in \(\Sigma\) but not contractible in \(N\), then \(\gamma\) bounds a disk \(D_{\gamma}\) in \(\Sigma\) such that \(D_{\gamma}\cap N=\gamma\). Define \(\Sigma^{\prime}\) to be the union of \(N\) and all disks \(D_{\gamma}\) as above. Then the embedding of \(\Sigma^{\prime}\) in \(\Sigma\) is \(\pi_{1}\)-injective. Since \(\chi(N)=-2\), the genus of \(\Sigma^{\prime}\) is \(0\) or \(1\). Therefore by the previous results, (4.1) holds on \(\Sigma^{\prime}\). Hence by Lemma 4.3, the desired equation also holds on \(\Sigma\).
## 5. Khovanov homology
Suppose \(L\subset(-1,1)\times\Sigma\) is a link. For each \(\lambda\), we define a homology invariant for \(L\) using the maps \(T_{\lambda}\).
Suppose a link \(L\) is given by a diagram \(D\) on \(\Sigma\) with \(k\) crossings, and fix an ordering of the crossings. For \(v=(v_{1},v_{2},\dots,v_{k})\in\{0,1\}^{k}\), resolving the crossings of \(D\) by \(0\)-smoothings and \(1\)-smoothings (see Figure 3) according to \(v\) turns \(D\) into an embedded closed \(1\)-manifold in \(\Sigma\). Denote the resolved diagram by \(D_{v}\).
Whenever \(u\) is obtained from \(v\) by changing one coordinate from \(0\) to \(1\), there is a band \(b\) near the corresponding crossing such that \(D_{u}\) is obtained from \(D_{v}\) by a band surgery along \(b\). Define \(d^{\lambda}_{vu}:V(D_{v})\to V(D_{u})\) to be \(T_{\lambda}(b)\). Let \(e_{i}\) be the \(i\)-th standard basis vector of \(\mathbb{Z}^{k}\). Define
\[\operatorname{CKh}_{\Sigma,\lambda}(L)=\bigoplus_{v\in\{0,1\}^{k}}V(D_{v}),\]
and define an endomorphism of \(\operatorname{CKh}_{\Sigma,\lambda}(L)\) by
\[\mathcal{D}_{\Sigma,\lambda}=\sum_{i}\sum_{u-v=e_{i}}(-1)^{\sum_{i<j\leqslant k}v_{j}}d^{\lambda}_{vu}.\]
By (4.1), we have \(\mathcal{D}_{\Sigma,\lambda}^{2}=0\).
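As a sanity check of the sign convention, the following sketch (an illustrative example of ours, not taken from the text) builds the cube for a hypothetical two-crossing diagram of Hopf-link type in which all resolved circles are contractible: \(D_{00}\) and \(D_{11}\) each consist of two circles, \(D_{10}\) and \(D_{01}\) of one, so the two edge maps out of \(D_{00}\) are merges and the two into \(D_{11}\) are splits. With the sign \((-1)^{\sum_{i<j\leqslant k}v_{j}}\), the square anticommutes and \(\mathcal{D}^{2}_{\Sigma,\lambda}=0\).

```python
import numpy as np

# Frobenius-algebra merge/split on contractible circles, basis (v+, v-)
merge = np.array([[1, 0, 0, 0],
                  [0, 1, 1, 0]])
split = np.array([[0, 0],
                  [1, 0],
                  [1, 0],
                  [0, 1]])

# signs (-1)^{sum_{i<j<=k} v_j} for k = 2 crossings:
# 00->10 (i=1, v=(0,0)): +1    00->01 (i=2, v=(0,0)): +1
# 10->11 (i=2, v=(1,0)): +1    01->11 (i=1, v=(0,1)): -1
D0 = np.vstack([+merge, +merge])     # V(D_00) -> V(D_10) (+) V(D_01)
D1 = np.hstack([+split, -split])     # V(D_10) (+) V(D_01) -> V(D_11)
print(np.allclose(D1 @ D0, 0))       # True: D^2 = 0 on this cube
```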
We define a quantum grading and a homological grading on \(\operatorname{CKh}_{\Sigma,\lambda}(L)\) as follows. For each circle \(\gamma\), if \(\gamma\) is non-contractible, define the quantum grading on \(V(\gamma)\) to be zero. If \(\gamma\) is contractible, define the quantum grading of \(\mathbf{v}(\gamma)_{+}\) to be \(1\) and the quantum grading of \(\mathbf{v}(\gamma)_{-}\) to be \(-1\). This grading then extends to a grading on \(\operatorname{CKh}_{\Sigma,\lambda}(L)\). Define the homological grading of \(V(D_{v})\subset\operatorname{CKh}_{\Sigma,\lambda}(L)\) to be the sum of the coordinates of \(v\).
There is also a grading on \(\operatorname{CKh}_{\lambda,\Sigma}(L)\) over \(H_{1}(\Sigma;\mathbb{Z})\) defined as follows. For each circle \(\gamma\), if \(\gamma\) is contractible, define the grading on \(V(\gamma)\) to be zero. If \(\gamma\) is non-contractible, for each orientation \(\mathfrak{o}\) of \(\gamma\), define the grading of \(\mathbf{v}(\gamma)_{\mathfrak{o}}\) to be the fundamental class of \(\gamma_{\mathfrak{o}}\).
Figure 3. Two types of smoothings
Following the standard convention, we use curly brackets \(\{l\}\) to denote the shifting in quantum gradings by \(l\) (namely, adding the quantum grading to each homogeneous element by \(l\)); we use the square brackets \([l]\) to denote the shifting in homology gradings by \(l\).
**Theorem 5.1**.: _The homology of_
\[\Big{(}\operatorname{CKh}_{\Sigma,\lambda}(L)[-n_{-}]\{n_{+}-2n_{-}\}, \mathcal{D}_{\Sigma,\lambda}\Big{)}\]
_as a \(\mathbb{Z}\oplus\mathbb{Z}\oplus H_{1}(\Sigma;\mathbb{Z})\) graded module is independent of the diagram or the ordering of the crossings, where \(n_{+}\) and \(n_{-}\) denote the number of positive and negative crossings of the diagram._
Proof.: The proof is identical to the proof of the invariance of the standard Khovanov homology under Reidemeister moves in [1]. Besides (3.1) and (4.1), the only properties of the band homomorphisms \(T_{\lambda}(b)\) needed in the proof are the following:
1. If \(\gamma\) is a contractible circle, then \(V(\gamma)\) is rank \(2\) with two generators \(\mathbf{v}(\gamma)_{\pm}\).
2. Suppose the band surgery along \(b\) merges two circles \(\gamma_{1}\) and \(\gamma_{2}\) to \(\gamma\), where \(\gamma_{1}\) is contractible. Then \(\gamma_{2}\) and \(\gamma\) are isotopic, and this isotopy defines a canonical isomorphism \(\iota:V(\gamma_{2})\to V(\gamma)\). Then \(T_{\lambda}(b)(\mathbf{v}(\gamma_{1})_{+}\otimes x)=\iota(x)\) for all \(x\in V(\gamma_{2})\).
3. Suppose the band surgery along \(b\) splits one circle \(\gamma\) to circles \(\gamma_{1}\) and \(\gamma_{2}\), where \(\gamma_{1}\) is contractible. Then \(\gamma_{2}\) and \(\gamma\) are isotopic, and this isotopy defines a canonical isomorphism \(\iota:V(\gamma)\to V(\gamma_{2})\). Then the composition map \[V(\gamma)\xrightarrow{T_{\lambda}(b)}V(\gamma_{1})\otimes V(\gamma_{2}) \xrightarrow{/\mathbf{v}(\gamma_{1})_{+}=0}\operatorname{span}\{\mathbf{v}( \gamma_{1})_{-}\}\otimes V(\gamma_{2})\] is given by the tensor product with \(\mathbf{v}(\gamma_{1})_{-}\), where the second map above is a quotient map.
The only remark worth making is that there is a typo in the definition of the "transpose" map in Section 3.5.5 of [1]. The map \(\Upsilon\) on the top layer should map the _quotient image_ of the pair \((\beta_{1},\gamma_{1})\) to the _quotient image_ of the pair \((\beta_{2},\gamma_{2})\)_such that_\(\gamma_{1}+\tau_{1}\beta_{1}=\gamma_{2}+\tau_{2}\beta_{2}\). The italicized phrases and the last equation in the previous sentence were missing in [1].
**Definition 5.2**.: We define the homology of
\[\Big{(}\operatorname{CKh}_{\Sigma,\lambda}(L)[-n_{-}]\{n_{+}-2n_{-}\}, \mathcal{D}_{\Sigma,\lambda}\Big{)}\]
as a \(\mathbb{Z}\oplus\mathbb{Z}\oplus H_{1}(\Sigma;\mathbb{Z})\) module to be the Khovanov invariant of \(L\subset(-1,1)\times\Sigma\), and denote it by \(\Sigma\mathrm{Kh}_{\lambda}(L;R)\).
_Remark 5.3_.: When \(\lambda=0\), the differential map \(\mathcal{D}_{\Sigma,\lambda}\) is identical to the differential map of Asaeda-Przytycki-Sikora homology defined in [1]. When \(R=\mathbb{Z}\), \(\lambda=1\), and \(\Sigma\) is a punctured disk, the homology \(\Sigma\mathrm{Kh}_{\lambda}\) recovers the invariant defined by Winkeler [20].
## 6. Motivation from instanton homology
This section explains the motivation of the definition of \(T_{\lambda}(b)\) from instanton homology. We will also prove the following detection result:
**Theorem 6.1**.: _Suppose \(\Sigma\) is a surface with genus zero, and \(L\subset(-1,1)\times\Sigma\) is a link. Then \(\operatorname{rank}_{\mathbb{Z}/2}\Sigma\mathrm{Kh}_{1}(L;\mathbb{Z}/2)\geqslant 2\), and the equality holds if and only if \(L\) is isotopic to an embedded knot in \(\Sigma\)._
_Remark 6.2_.: By a spectral sequence of Winkeler [25, Theorem 1.3], we have
\[\operatorname{rank}_{\mathbb{Z}/2}\Sigma\mathrm{Kh}_{0}(L;\mathbb{Z}/2) \geqslant\operatorname{rank}_{\mathbb{Z}/2}\Sigma\mathrm{Kh}_{1}(L;\mathbb{Z}/2).\]
Therefore, Theorem 6.1 is an improvement of [10, Theorem 1.2].
Suppose \(R\) is a _closed_ oriented surface, and let \(L\) be a link in \((-1,1)\times R\). Let \(p\) be a point on \(R\) that is disjoint from the projection of \(L\) to \(R\). In [10], the authors studied the instanton homology group
\[\Sigma\mathrm{HI}_{R,p}(L)=\mathrm{I}(S^{1}\times R,L,S^{1}\times\{p\}|\{t_{ *}\}\times R),\]
where \(S^{1}\) is viewed as the quotient space of \([-1,1]\) with \(-1\) identified with \(1\), and \(t_{*}\in S^{1}\) is a fixed base point.
Suppose \(c\) is an embedded \(1\)-manifold in \(R\), and \(b\) is a band attached to \(c\) that is disjoint from \(p\). Then the band surgery along \(b\) defines a link cobordism from \(c\) to \(c_{b}\) as links in \((-1,1)\times R\). Therefore, it induces a cobordism map for Floer homology groups (up to sign)
\[\Sigma\mathrm{HI}(b):\Sigma\mathrm{HI}_{R,p}(c)\to\Sigma\mathrm{HI}_{R,p}(c_{b}).\]
As discussed in [10, Proposition 6.12], the maps \(\Sigma\mathrm{HI}(b)\) are components of the second page of a variant of Kronheimer-Mrowka's spectral sequence. In [10, Proposition 6.11], the cobordism maps \(\Sigma\mathrm{HI}(b)\) were computed for multiple special cases, and they all have the same structure as \(T_{\lambda}(b)\) up to multiplications by integer powers of \(i\) and suitable changes of variables. This computation motivated the definition of the homology invariant \(\Sigma\mathrm{Kh}_{\lambda}\). It is natural to conjecture that the second page of Kronheimer-Mrowka's spectral sequence is isomorphic to the homology \(\Sigma\mathrm{Kh}_{\lambda}\) with coefficient ring \(\mathbb{C}\) for some \(\lambda\neq 0\).
In order to establish Theorem 6.1, we prove the following technical results that sharpen some of the computations in [10]. Let \(\lambda_{1},\dots,\lambda_{4}\) be the constants from [10, Section 6]. By [10, Lemma 6.9], one can always rescale the generators of the instanton homology groups so that \(\lambda_{1}=\pm 1\), \(\lambda_{3}=\pm 1\).
**Lemma 6.3**.: _Assume the generator \(w_{0}\) defined in [10, Section 5.2.1] is chosen so that \(\lambda_{1}=\pm 1\), \(\lambda_{3}=\pm 1\). Then \(\lambda_{2}=\pm\lambda_{4}\)._
Proof.: Consider the two bands in Figure 4 and apply the TQFT property of \(\Sigma\mathrm{HI}(b)\).
**Lemma 6.4**.: _The coefficients \(\lambda_{2}\) and \(\lambda_{4}\) are both non-zero._
Proof.: We keep using the notation from [1, Section 6]. According to the proof of [1, Lemma 6.4], we see that it suffices to prove that
\[\dim\operatorname{SHI}([-1,1]\times\Sigma,\{0\}\times\Sigma,K)\leq 4,\]
where \(K\) is the knot shown in Figure 5.
Let \(L\) be the link in the thickened annulus as shown in Figure 6. Pick a meridional disk in the thickened annulus which intersects \(L\) at two points. We decompose the thickened annulus along this disk and obtain a product sutured thickened disk with a tangle \(T\) in it. The sutured instanton Floer homology of this sutured manifold with tangle \(T\) is isomorphic to \(\operatorname{AHI}(L,2)\) according to [1, Theorem 2.14], where \(\operatorname{AHI}(L,2)\) denotes
Figure 4. Two bands
Figure 6. The annular link \(L\)
the component of the annular instanton Floer homology with Alexander grading \(2\). The tangle \(T\) has two product vertical components. We remove the tubular neighborhoods of the two vertical components and add a meridian suture to the boundary of each neighborhood to obtain a sutured manifold \(M^{\prime}\) with a knot \(K^{\prime}\) in it. Moreover, this process does not change the sutured instanton Floer homology according to [13, Lemma 7.10] and its proof. Therefore we have
\[\operatorname{SHI}(M^{\prime},\gamma_{M^{\prime}},K^{\prime})\cong \operatorname{AHI}(L,2).\]
Notice that in the definition of sutured instanton Floer homology, the pairs \((M^{\prime},K^{\prime})\) and \((M,K)\) can be given the same closure; therefore their sutured instanton homologies are isomorphic. As a result, we have
\[\operatorname{SHI}([-1,1]\times\Sigma,\{0\}\times\Sigma,K)\cong \operatorname{AHI}(L,2) \tag{6.1}\]
A straightforward calculation shows that
\[\operatorname{AKh}(L,2;\mathbb{C})\cong\mathbb{C}^{4}\]
where \(\operatorname{AKh}(L,2;\mathbb{C})\) denotes the component of the annular Khovanov homology of \(L\) with Alexander grading \(2\) and with coefficient ring \(\mathbb{C}\). According to [11, Theorem 5.16], we have
\[\dim\operatorname{AHI}(L,2;\mathbb{C})\leq\dim\operatorname{AKh}(L,2;\mathbb{C })=4.\]
Therefore Equation (6.1) implies
\[\dim\operatorname{SHI}([-1,1]\times\Sigma,\{0\}\times\Sigma,K)\leq 4.\]
So we obtain \(\lambda_{2}\neq 0\). Since \(\lambda_{4}=\pm\lambda_{2}\), we also have \(\lambda_{4}\neq 0\).
Proof of Theorem 6.1.: Recall that when \(\Sigma\) is a compact surface with genus zero, there is a grading on \(V(-)\) such that \(T_{0}(b)\) is homogeneous with degree zero and \(T_{\delta}(b)=T_{1}(b)-T_{0}(b)\) is homogeneous with degree \(-1\). Since \(\lambda_{2}\neq 0\), we can rescale the map \(\Theta_{w_{0},\sigma}\) in [10] by a factor of \(\lambda_{2}^{k}\) at degree \(k\). By the discussions in [10, Section 6], there is a spectral sequence of chain complexes with \(\mathbb{C}\) coefficients that converges to \(\operatorname{I}([-1,1]\times\Sigma,\{0\}\times\partial\Sigma,L)\), whose second page \((E_{2},d_{2})\) is isomorphic to the chain complex \((\operatorname{CKh}_{\Sigma,1}(L),\mathcal{D}_{\Sigma,1})\) up to multiplications by integer powers of \(i\) on the components of the differential map. In other words, there exists a chain complex \((C,d)\) defined with \(\mathbb{Z}[i]\) coefficients, such that when reducing to \(\mathbb{C}\) coefficients, it is isomorphic to \((E_{2},d_{2})\); when reducing to \(\mathbb{Z}[i]/(i-1)\cong\mathbb{Z}/2\) coefficients, it is isomorphic to the chain complex \((\operatorname{CKh}_{\Sigma,1}(L),\mathcal{D}_{\Sigma,1})\). By the universal coefficient theorem, we have
\[\operatorname{rank}_{\mathbb{Z}/2}\Sigma\operatorname{Kh}_{1}(L;\mathbb{Z}/2) \geq\operatorname{rank}_{\mathbb{Z}[i]}H(C,d)=\dim_{\mathbb{C}}H (E_{2},d_{2})\geq\dim_{\mathbb{C}}\operatorname{I}([-1,1]\times\Sigma,\{0\} \times\partial\Sigma,L),\]
and the desired result follows from [10, Theorem 1.3].
|
2308.04239
|
Enhanced coherent light-matter interaction and room-temperature quantum
yield of plasmonic resonances engineered by a chiral exceptional point
|
Strong dissipation of plasmonic resonances is detrimental to quantum
manipulation. To enhance the quantum coherence, we propose to tailor the local
density of states (LDOS) of plasmonic resonances by integrating with a photonic
cavity operating at a chiral exceptional point (CEP), where the phase of light
field can offer a new degree of freedom to flexibly manipulate the quantum
states. A quantized few-mode theory is employed to reveal that the LDOS of the
proposed hybrid cavity can evolve into sub-Lorentzian lineshape, with
order-of-magnitude linewidth narrowing and additionally a maximum of eightfold
enhancement compared to the usual plasmonic-photonic cavity without CEP. This
results in the enhanced coherent light-matter interaction accompanied by the
reduced dissipation of polaritonic states. Furthermore, a scattering theory
based on eigenmode decomposition is present to elucidate two mechanisms
responsible for the significant improvement of quantum yield at CEP, the
reduction of plasmonic absorption by the Fano interference and the enhancement
of cavity radiation through the superscattering. Importantly, we find the
latter allows achieving a near-unity quantum yield at room temperature; in
return, high quantum yield is beneficial to experimentally verify the enhanced
LDOS at CEP by measuring the fluorescence lifetime of a quantum emitter.
Therefore, our work demonstrates that the plasmonic resonances in
CEP-engineered environment can serve as a promising platform for exploring the
quantum states control by virtue of the non-Hermiticity of open optical
resonators and building the high-performance quantum devices for sensing,
spectroscopy, quantum information processing and quantum computing.
|
Yuwei Lu, Haoxiang Jiang, Renming Liu
|
2023-08-08T13:10:04Z
|
http://arxiv.org/abs/2308.04239v1
|
Enhanced coherent light-matter interaction and room-temperature quantum yield of plasmonic resonances engineered by a chiral exceptional point
###### Abstract
Strong dissipation of plasmonic resonances is detrimental to quantum manipulation. To enhance the quantum coherence, we propose to tailor the local density of states (LDOS) of plasmonic resonances by integrating with a photonic cavity operating at a chiral exceptional point (CEP), where the phase of light field can offer a new degree of freedom to flexibly manipulate the quantum states. A quantized few-mode theory is employed to reveal that the LDOS of the proposed hybrid cavity can evolve into sub-Lorentzian lineshape, with order-of-magnitude linewidth narrowing and additionally a maximum of eightfold enhancement compared to the usual plasmonic-photonic cavity without CEP. This results in the enhanced coherent light-matter interaction accompanied by the reduced dissipation of polaritonic states. Furthermore, a scattering theory based on eigenmode decomposition is present to elucidate two mechanisms responsible for the significant improvement of quantum yield at CEP, the reduction of plasmonic absorption by the Fano interference and the enhancement of cavity radiation through the superscattering. Importantly, we find the latter allows achieving a near-unity quantum yield at room temperature; in return, high quantum yield is beneficial to experimentally verify the enhanced LDOS at CEP by measuring the fluorescence lifetime of a quantum emitter. Therefore, our work demonstrates that the plasmonic resonances in CEP-engineered environment can serve as a promising platform for exploring the quantum states control by virtue of the non-Hermiticity of open optical resonators and building the high-performance quantum devices for sensing, spectroscopy, quantum information processing and quantum computing.
## I Introduction
Plasmonic nanocavities have achieved great success in realizing the room-temperature strong coupling between the localized surface plasmon resonance (LSPR) and a few or even a single quantum emitter (QE) at the nanoscale [1; 2; 3; 4]. This leads to the formation of hybrid plasmon-QE states, usually termed plexcitons, that enables a rich variety of advanced quantum technologies, such as ultrafast single-photon sources [5; 6; 7], single-qubit control [8; 9; 10], and ultrasensitive quantum sensing [11; 12]. Nevertheless, to move toward practical applications, plexcitonic systems still call for further engineering to prolong the coherence time, which is severely limited by the large Ohmic loss in metals. To overcome this challenge, photonic building blocks such as photonic crystal and whispering-gallery-mode (WGM) cavities have been employed to reduce the plasmonic dissipation while simultaneously producing larger cooperativity of light-matter interaction [13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. This is attributed to the constructive interference of scattering paths in the hybrid cavity with red-detuned plasmon-photon coupling, which provides the unique benefit of a higher Purcell factor than the separate entities [14; 15; 18; 23]. Such plasmonic-photonic hybrids also emerge as one of the successful approaches to manipulate quantum states. In this context, on resonance the plasmonic absorption can be suppressed by exploiting the principle of bound states in the continuum [24] and as a result, bound polaritonic states with anomalously small decay can form [25]. Therefore, previous studies have shown the ability of photonic elements to tailor the electromagnetic environment of the LSPR and hence modify the plasmon-QE interaction.
Leakage of optical resonators, often considered detrimental to quantum devices, is inevitable but at the same time offers new avenues for light manipulation exploiting the non-Hermitian degeneracies, known as exceptional points (EPs) [26; 27; 28]. With the flourishing of non-Hermitian quantum mechanics, the dimensionality reduction of state space at EPs arises as a new strategy to steer quantum states and spurs the exploration of quantum-optics applications, including tunable photon statistics [29; 30], enhanced sensitivity of quantum sensing [31; 32; 33], and emission control of quantum light [34; 35; 36], to name a few. Nevertheless, isolated EPs are very sensitive to external perturbations; this feature hinders the access to EPs in quantum systems. For this reason, extending isolated EPs to higher dimension has emerged as an alternative for the observation of related
phenomena [37; 38; 39; 40]. Particularly, a chiral EP (CEP), which is a two-dimensional collection of second-order EPs in parameter space, can be implemented in a WGM cavity by introducing the unidirectional coupling between the degenerate counterclockwise (CCW) and clockwise (CW) modes [39; 41; 42; 43]. This structure exhibits the fascinating feature that the CEP hosted in the WGM cavity is immune to the inevitable experimental uncertainties and fabrication imperfections, and thus can offer a robust platform for exploring the non-Hermitian physics.
Inspired by these pioneering works, we propose to bring the light-matter interaction into the strong coupling regime by engineering the local density of states (LDOS) of a plasmonic nanocavity through the integration with a WGM photonic cavity featuring a CEP, which we call the hybrid CEP cavity hereafter, while the usual plasmonic-photonic cavity without CEP is called the hybrid cavity. A full quantum model is first built to reveal that the LDOS of the plasmon-photon hybrid mode is related to the phase of the light field at the CEP, and can be tailored for further enhancement while the linewidth is simultaneously and significantly reduced. It also reveals the mechanisms of the order-of-magnitude enhancement of quantum yield at the CEP, by either reducing the plasmonic absorption or enhancing the photonic radiation. Combined with the electromagnetic simulations of a simple design of hybrid CEP cavity, we predict that high quantum yield at room temperature can be achieved in realistic structures with proper design.
The paper is organized as follows. In Sec. II, we propose a prototypical model of hybrid plasmonic-photonic cavity with a CEP hosted in the WGM cavity, and formulate its LDOS based on the few-mode quantization description, which allows the LDOS to be expressed analytically through the frequencies, decay rates and coupling rates of the uncoupled plasmonic and photonic modes. Sec. III investigates the modification of LDOS at the CEP, where the phase of the light field offers a new degree of freedom to flexibly tune the magnitude and linewidth of LDOS. Subsequently, in Secs. IV and V we demonstrate the advantages of the hybrid CEP cavity in enhancing the coherent light-matter interaction and quantum yield, respectively, as two examples of quantum-optics applications. The formalism of LDOS is applied to a realistic structure in Sec. VI, where we analyze the enhanced light-matter interaction and quantum yield by varying the parameters of the structure geometry, from which we summarize the design rules for the hybrid CEP cavity to achieve high quantum yield at room temperature. Finally, we conclude in Sec. VII.
## II Model and theory
The hybrid CEP cavity under investigation is schematically sketched in Fig. 1(a), which consists of a usual plasmonic-photonic cavity and a waveguide evanescently coupled to a pair of degenerate WGM modes with opposite propagating direction, i.e., the CW and CCW modes. A CEP is constructed through the unidirectional coupling from the CCW mode to the CW mode generated by the mirror at the right end of the waveguide. The mirror is assumed to have unity reflectivity. An effective phase \(\phi_{s}=\beta L\) is induced for the light field of the CCW mode propagating from the waveguide-cavity junction to the mirror, with \(\beta\) and \(L\) being the propagation constant of the waveguide and the distance between the cavity-waveguide junction and the mirror, respectively. When interacting with a QE, the hybrid CEP cavity in general manifests non-Lorentzian LDOS and can be described as a structured electromagnetic environment that is constituted of continuous bosonic modes in the quantized representation [44]. On the other hand, the non-Lorentzian LDOS stems from the coupling of plasmonic and photonic resonances, and thus can in principle be reconstructed from the parameters of the individual resonances and the coupling rate between them. With this prospect, we develop a quantized LDOS
Figure 1: (a) Schematic diagram of a plasmonic-photonic cavity operating at a chiral exceptional point (CEP) based on whispering-gallery-mode (WGM) cavity, which supports the degenerate clockwise (CW) and counterclockwise (CCW) modes with unidirectional coupling provided by the mirror. (b) Conceptual sketch of the mapping of the local density of states of plasmonic-photonic cavity constituted of continuous electromagnetic modes into a quantized few-mode model, where the composed system can be decomposed into a non-Markovian core and a Markovian bath. The gray dashed line indicates the structured environment of plasmonic-photonic cavity.
theory for hybrid CEP cavity in this section.
We consider a dipolar plasmonic antenna where the nonradiative higher-order modes are well separated from the dipolar mode, so that the plasmonic antenna can be treated as a single-mode resonator with resonance frequency \(\omega_{a}\). This approximation is valid for many plasmonic antennas, such as the elongated rods [4] and dimer structures [23; 45]. The dipolar plasmonic mode is coupled to both CCW and CW modes with resonance frequency \(\omega_{c}\). Accordingly, we can obtain a quantized few-mode model of hybrid CEP cavity where the continuous bosonic environment is decomposed into a non-Markovian core and a Markovian bath [46; 47; 48; 49; 25], as illustrated in Fig. 1(b), which yields the same reduced quantum dynamics as the original system. Based on this equivalent model, an extended cascaded quantum master equation of a two-level QE interacting with hybrid CEP cavity is derived in Appendix A, and employed to describe the quantum dynamics of the composed cavity quantum electrodynamics (QED) system
\[\dot{\rho}=-i[H,\rho]+\mathcal{D}[\rho] \tag{1}\]
with Lindblad operator
\[\mathcal{D}[\rho]= \gamma\mathcal{L}\left[\sigma_{-}\right]\rho+\kappa_{a}\mathcal{L }[a]\rho+\kappa\left(\mathcal{L}\left[c_{ccw}\right]\rho+\mathcal{L}\left[c_{ cw}\right]\rho\right) \tag{2}\] \[+\kappa_{c}\left(e^{i\phi}\left[c_{ccw}\rho,c_{cw}^{\dagger} \right]+e^{-i\phi}\left[c_{cw},\rho c_{ccw}^{\dagger}\right]\right)\]
where the total Hamiltonian reads \(H=H_{0}+H_{I}\), with the free Hamiltonian \(H_{0}=\omega_{a}a^{\dagger}a+\omega_{c}c_{ccw}^{\dagger}c_{ccw}+\omega_{c}c_{ cw}^{\dagger}c_{cw}+\omega_{0}\sigma_{+}\sigma_{-}\) and the interaction Hamiltonian \(H_{I}=g_{1}(c_{ccw}^{\dagger}a+a^{\dagger}c_{ccw})+g_{a}(a^{\dagger}\sigma_{-} +\sigma_{+}a)+g_{c}(c_{ccw}^{\dagger}\sigma_{-}+\sigma_{+}c_{ccw})+g_{c}(c_{cw}^ {\dagger}\sigma_{-}+\sigma_{+}c_{cw})\). \(a\) and \(c_{ccw}\) (or \(c_{cw}\)) are the bosonic annihilation operators for the dipolar plasmonic mode and the CCW (or CW) mode, respectively, while \(\sigma_{-}\) is the lowering operator of the QE with transition frequency \(\omega_{0}\). \(g_{1}\), \(g_{a}\) and \(g_{c}\) denote the coupling rates for plasmon-photon, plasmon-QE and photon-QE interactions, respectively. The coupling rates are assumed to be real since the mode volume of the cavity is much larger than that of the plasmonic antenna and hence the latter can be considered as a dipole scatterer [14; 15]. \(\mathcal{L}[O]\rho=2O\rho O^{\dagger}-O^{\dagger}O\rho-\rho O^{\dagger}O\) is the Liouvillian superoperator for the dissipation of operator \(O\). The first line of Eq. (2) introduces the dissipation for the individual modes, where the QE decay rate is \(\gamma=\gamma_{0}+\gamma_{nr}\), with \(\gamma_{0}\) being the free-space emission rate of the QE and \(\gamma_{nr}\) accounting for the nonradiative decay rate of the QE, which is typically \(10-20\)meV for QEs at room temperature [50; 51]. \(\kappa_{a}=\kappa_{o}+\kappa_{r}\) stands for the total decay rate of the dipolar plasmonic mode, with \(\kappa_{r}\) and \(\kappa_{o}\) being the radiative and nonradiative (Ohmic) decay rates, respectively. \(\kappa=\kappa_{c}+\kappa_{i}\) denotes the total decay rate of the WGM modes, where \(\kappa_{c}\) stems from the evanescent coupling of the cavity to the guided mode of the waveguide, which can be tuned by adjusting the cavity-waveguide separation. \(\kappa_{i}\) is the intrinsic decay of the CCW and CW modes resulting from the material absorption and the coupling of the cavity to the modes other than the guided modes of the waveguide. We first omit the intrinsic decay \(\kappa_{i}\) in consideration of the high-\(Q\) feature of WGM modes, but reinstate it in Sec. VI when analyzing the quantum yield of realistic structures. The second line of Eq. (2) describes the cascaded interaction in which the CW mode is driven by the output field from the CCW mode through the mirror reflection, where \(\phi=2\phi_{s}\) is the roundtrip phase factor.
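To make Eqs. (1) and (2) concrete, the following QuTiP sketch (our illustration; all numerical values are assumptions chosen to mimic the parameter regime used later, and the rate conventions are fixed so that the single-excitation dynamics reproduces the matrix \(\mathbf{M}_{0}\) of Eq. (4) below — note that the \(\mathcal{L}[O]=2\mathcal{D}[O]\) convention of Eq. (2) would double these rates) assembles the Liouvillian, including the cascaded CCW\(\to\)CW term, and propagates the spontaneous emission of an initially excited QE.

```python
import numpy as np
import qutip as qt

# illustrative parameters (eV, hbar = 1): Q_a = 18, Q_c = 2e3, phi = 3*pi/4
wa, wc, w0 = 3.0, 2.0, 2.0
ka, kc, gam = wa/18, wc/2e3, 3e-6
g1, ga, gc = -0.020, 0.010, 0.0
phi = 3*np.pi/4

N = 2  # Fock cutoff per bosonic mode (sufficient for a single excitation)
sm  = qt.tensor(qt.destroy(2), qt.qeye(N), qt.qeye(N), qt.qeye(N))  # QE
a   = qt.tensor(qt.qeye(2), qt.destroy(N), qt.qeye(N), qt.qeye(N))  # plasmon
ccw = qt.tensor(qt.qeye(2), qt.qeye(N), qt.destroy(N), qt.qeye(N))
cw  = qt.tensor(qt.qeye(2), qt.qeye(N), qt.qeye(N), qt.destroy(N))

H = (w0*sm.dag()*sm + wa*a.dag()*a + wc*(ccw.dag()*ccw + cw.dag()*cw)
     + g1*(ccw.dag()*a + a.dag()*ccw) + ga*(a.dag()*sm + sm.dag()*a)
     + gc*((ccw.dag() + cw.dag())*sm + sm.dag()*(ccw + cw)))

# local dissipators; rates chosen so that <sigma_->, <a>, <c> decay with
# gamma/2, kappa_a/2, kappa/2 as in Eq. (4) (kappa_i = 0, kappa = kappa_c)
L = (qt.liouvillian(H)
     + gam*qt.lindblad_dissipator(sm)
     + ka*qt.lindblad_dissipator(a)
     + kc*(qt.lindblad_dissipator(ccw) + qt.lindblad_dissipator(cw)))

# cascaded term of Eq. (2): kappa_c*(e^{i phi}[c_ccw rho, c_cw^dag] + ...)
L += kc*np.exp(1j*phi)*(qt.spost(cw.dag())*qt.spre(ccw) - qt.spre(cw.dag()*ccw))
L += kc*np.exp(-1j*phi)*(qt.spre(cw)*qt.spost(ccw.dag()) - qt.spost(ccw.dag()*cw))

psi0 = qt.tensor(qt.basis(2, 1), qt.basis(N, 0), qt.basis(N, 0), qt.basis(N, 0))
ts = np.linspace(0.0, 2.0e5, 500)   # time in units of 1/eV
res = qt.mesolve(L, psi0*psi0.dag(), ts, e_ops=[sm.dag()*sm])
print(res.expect[0][:5])            # QE excited-state population decay
```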
For the cases of spontaneous emission and weak drive studied in this work, we restrict the state space of the quantum system to the single-excitation manifold, where the equations of motion can be derived from Eqs. (1) and (2)
\[\frac{d}{dt}\vec{p}_{0}=-i\mathbf{M}_{0}\vec{p}_{0} \tag{3}\]
with \(\vec{p}_{0}=\left[\left\langle\sigma_{-}\right\rangle,\left\langle a\right\rangle,\left\langle c_{ccw}\right\rangle,\left\langle c_{cw}\right\rangle\right]^{T}\) and the matrix \(\mathbf{M}_{0}\) being
\[\mathbf{M}_{0}=\left[\begin{array}{cccc}\omega_{0}-i\frac{\gamma}{2}&g_{a} &g_{c}&g_{c}\\ g_{a}&\omega_{a}-i\frac{\kappa_{a}}{2}&g_{1}&g_{1}\\ g_{c}&g_{1}&\omega_{c}-i\frac{\kappa}{2}&0\\ g_{c}&g_{1}&-i\kappa_{c}e^{i\phi}&\omega_{c}-i\frac{\kappa}{2}\end{array}\right] \tag{4}\]
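In the single-excitation manifold, Eq. (3) is a linear ODE, so the complex eigenvalues of \(\mathbf{M}_{0}\) directly give the polariton frequencies (real parts) and effective linewidths (\(-2\times\) imaginary parts). A short numpy sketch with assumed parameter values:

```python
import numpy as np

w0, wa, wc = 2.0, 3.0, 2.0            # eV (illustrative)
gam, ka, kc = 3e-6, 3.0/18, 2.0/2e3   # gamma, kappa_a, kappa (kappa_i = 0)
g1, ga, gc = -0.020, 0.010, 0.0
phi = 3*np.pi/4

M0 = np.array([
    [w0 - 1j*gam/2, ga,            gc,                     gc],
    [ga,            wa - 1j*ka/2,  g1,                     g1],
    [gc,            g1,            wc - 1j*kc/2,           0 ],
    [gc,            g1,            -1j*kc*np.exp(1j*phi),  wc - 1j*kc/2],
])

for E in sorted(np.linalg.eigvals(M0), key=lambda z: z.real):
    print(f"omega = {E.real:.4f} eV, linewidth = {-2e3*E.imag:.4f} meV")
```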
We then change the basis of WGM cavity into the representation of standing wave modes [52]
\[c_{1}=\frac{1}{\sqrt{2}}\left(c_{ccw}+c_{cw}\right),\quad c_{2}=\frac{1}{ \sqrt{2}}\left(c_{ccw}-c_{cw}\right) \tag{5}\]
Substituting Eq. (5) into Eq. (3), we obtain \(d\vec{p}/dt=-i\mathbf{M}_{p}\vec{p}\), with \(\vec{p}=\left[\left\langle\sigma_{-}\right\rangle,\left\langle a\right\rangle,\left\langle c_{1}\right\rangle,\left\langle c_{2}\right\rangle\right]^{T}\). The matrix \(\mathbf{M}_{p}\) is given by
\[\mathbf{M}_{p}=\left[\begin{array}{cccc}\omega_{0}-i\frac{\gamma}{2}&g_{a}& \sqrt{2}g_{c}&0\\ g_{a}&\omega_{a}-i\frac{\kappa_{a}}{2}&\sqrt{2}g_{1}&0\\ \sqrt{2}g_{c}&\sqrt{2}g_{1}&\omega_{c}-i\frac{\kappa_{+}}{2}&-i\frac{\kappa_{c}}{2}e^{i\phi}\\ 0&0&i\frac{\kappa_{c}}{2}e^{i\phi}&\omega_{c}-i\frac{\kappa_{-}}{2}\end{array}\right] \tag{6}\]
where \(\kappa_{\pm}=\kappa_{i}+\kappa_{c}(1\pm e^{i\phi})\). We can see that without the mirror (i.e., \(\kappa_{c}=0\)), the standing wave mode \(c_{2}\) becomes uncoupled and Eq. (6) returns to the single-mode treatment of WGM cavity alternatively used in the literature [52; 53; 54]. In the presence of mirror, \(c_{2}\) is still decoupled from the QE and plasmon, which can simplify the subsequent derivation of LDOS.
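The block structure of Eq. (6) is easy to verify numerically; the sketch below (assumed values) applies the basis change of Eq. (5) to the photonic \(2\times 2\) block of \(\mathbf{M}_{0}\) and recovers the diagonal entries \(\omega_{c}-i\kappa_{\pm}/2\) together with the off-diagonal couplings \(\mp i(\kappa_{c}/2)e^{i\phi}\).

```python
import numpy as np

wc, kc, ki, phi = 2.0, 1e-3, 0.0, 3*np.pi/4
k = ki + kc
# (ccw, cw) block of M_0, Eq. (4)
Mcc = np.array([[wc - 1j*k/2, 0.0],
                [-1j*kc*np.exp(1j*phi), wc - 1j*k/2]])
# Eq. (5): c1 = (ccw + cw)/sqrt(2), c2 = (ccw - cw)/sqrt(2)
U = np.array([[1, 1], [1, -1]])/np.sqrt(2)
Mp = U @ Mcc @ U.conj().T
kp = ki + kc*(1 + np.exp(1j*phi))
km = ki + kc*(1 - np.exp(1j*phi))
target = np.array([[wc - 1j*kp/2, -1j*kc/2*np.exp(1j*phi)],
                   [+1j*kc/2*np.exp(1j*phi), wc - 1j*km/2]])
print(np.allclose(Mp, target))  # True
```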
## III Local density of states of hybrid CEP cavity
The normalized LDOS, i.e., the Purcell factor, is linked to the spectral density \(J(\omega)\) through the relation \(P(\omega)=J(\omega)/J_{0}(\omega)\), with \(J_{0}(\omega)=\omega^{3}\mu^{2}/6\pi^{2}\hbar\epsilon_{0}c^{3}\) being the spectral density of a QE with dipole moment \(\mu\)[55] in free space, where \(c\) is the speed of light and \(\epsilon_{0}\) is the permittivity of vacuum. The spectral density of the hybrid CEP cavity is given by [46; 47]
\[J(\omega)=\mathrm{Re}\left[i\chi_{sys}\left(\omega\right)\right]=\mathrm{Re} \int_{-\infty}^{+\infty}d\tau e^{i\omega\tau}\left\langle\Lambda^{\dagger}(0) \Lambda(\tau)\right\rangle \tag{7}\]
with
\[\Lambda(t)=g_{a}a(t)+\sqrt{2}g_{c}c_{1}(t) \tag{8}\]
where \(\chi_{sys}\left(\omega\right)\) defines the polarizability of hybrid CEP cavity. We can see that the spectral density can be separated into three parts, i.e., \(J(\omega)=J_{a}(\omega)+J_{c}(\omega)+J_{ac}(\omega)\), with \(J_{a}(\omega)=g_{a}^{2}\operatorname{Re}\left\{\mathcal{F}\left[\left\langle a ^{\dagger}(0)a(\tau)\right\rangle\right]\right\}\) and \(J_{c}(\omega)=2g_{c}^{2}\operatorname{Re}\left\{\mathcal{F}\left[\left\langle c _{1}^{\dagger}(0)c_{1}(\tau)\right\rangle\right]\right\}\) being the modified plasmon and cavity response, respectively. \(J_{ac}(\omega)=\sqrt{2}g_{a}g_{c}\operatorname{Re}\left\{\mathcal{F}\left[ \left\langle a^{\dagger}(0)c_{1}(\tau)\right\rangle\right]\right\}\) contains the interference between the plasmon and cavity, where \(\mathcal{F}\left[\cdot\right]\) represents the Fourier transform. The two-time correlation functions \(\left\langle a^{\dagger}(0)a(\tau)\right\rangle\), \(\left\langle c_{1}^{\dagger}(0)c_{1}(\tau)\right\rangle\) and \(\left\langle a^{\dagger}(0)c_{1}(\tau)\right\rangle\) can be calculated using the quantum regression theorem [55]. Taking \(\left\langle a^{\dagger}(0)a(\tau)\right\rangle\) as an example, its dynamics follows the equation
\[\frac{d}{d\tau}\left\langle a^{\dagger}(0)\vec{c}(\tau)\right\rangle=-i\mathbf{ M}_{s}\left\langle a^{\dagger}(0)\vec{c}(\tau)\right\rangle \tag{9}\]
where \(\vec{c}(\tau)=\left[\left\langle a(\tau)\right\rangle,\left\langle c_{1}(\tau )\right\rangle,\left\langle c_{2}(\tau)\right\rangle\right]^{T}\). The matrix \(\mathbf{M}_{s}\) takes the form
\[\mathbf{M}_{s}=\left[\begin{array}{ccc}\omega_{a}-i\frac{\kappa_{a}}{2}& \sqrt{2}g_{1}&0\\ \sqrt{2}g_{1}&\omega_{c}-i\frac{\kappa_{+}}{2}&-i\frac{\kappa_{c}}{2}e^{i\phi} \\ 0&i\frac{\kappa_{c}}{2}e^{i\phi}&\omega_{c}-i\frac{\kappa_{-}}{2}\end{array}\right] \tag{10}\]
With initial condition \(\left\langle a^{\dagger}(0)a(0)\right\rangle=1\), \(\left\langle a^{\dagger}(0)c_{1}(0)\right\rangle=0\) and \(\left\langle a^{\dagger}(0)c_{2}(0)\right\rangle=0\), the equation can be solved through the Fourier transform. Other correlation functions can be obtained in a similar fashion (see Appendix B for details). Note that for the hybrid cavity, \(g_{a}\) is in general larger than \(g_{c}\) by over two orders of magnitude since the QE is often placed outside instead of embedded in the WGM cavity [16, 23, 56, 14]; therefore, the contributions of \(J_{c}(\omega)\) and \(J_{ac}(\omega)\) to the spectral density can be omitted, i.e., \(J(\omega)\approx J_{a}(\omega)\). Furthermore, the decoupling of the QE from the WGM cavity can also help clarify the role of the CEP in engineering the plasmonic resonance. Based on these reasons, in the following discussion we take \(g_{c}=0\) unless otherwise noted. In this circumstance, the analytical expression of the spectral density is given by
\[J(\omega)=-g_{a}^{2}\operatorname{Im}\left[\frac{\chi_{a}(\omega)}{1-g_{1}^{2} \chi_{a}(\omega)\chi_{EP}(\omega)}\right] \tag{11}\]
where the polarizabilities of the uncoupled plasmonic antenna and the CEP cavity are characterized by \(\chi_{a}(\omega)=\left[\omega-\omega_{a}+i\kappa_{a}/2\right]^{-1}\) and \(\chi_{EP}(\omega)=2\chi_{c}(\omega)-i\kappa_{c}e^{i\phi}\chi_{c}^{2}(\omega)\), respectively, with \(\chi_{c}(\omega)=\left[\omega-\omega_{c}+i\kappa_{c}/2\right]^{-1}\) being the polarizability of the bare WGM modes. Both \(-\operatorname{Im}\left[\chi_{a}(\omega)\right]\) and \(-\operatorname{Im}\left[\chi_{c}(\omega)\right]\) present the conventional Lorentzian lineshape, while the squared Lorentzian term \(\propto\chi_{c}^{2}(\omega)\) in \(\chi_{EP}(\omega)\) is a hallmark of a second-order EP, where the relative phase \(\phi\) provides an extra degree of freedom to tailor the LDOS.
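Equation (11) is simple enough to evaluate directly. The sketch below (our illustration with the parameters of Fig. 2) computes the spectral density for several phases and, as a consistency check, compares it with the equivalent resolvent expression \(J(\omega)=g_{a}^{2}\,\mathrm{Re}\{i[(\omega-\mathbf{M}_{s})^{-1}]_{11}\}\) that follows from Eqs. (9) and (10):

```python
import numpy as np

wa, wc = 3.0, 2.0
ka, kc = wa/18, wc/2e3
g1, ga = -0.020, 0.010

def J_analytic(w, phi):                       # Eq. (11)
    chi_a = 1.0/(w - wa + 1j*ka/2)
    chi_c = 1.0/(w - wc + 1j*kc/2)
    chi_EP = 2*chi_c - 1j*kc*np.exp(1j*phi)*chi_c**2
    return -ga**2*np.imag(chi_a/(1 - g1**2*chi_a*chi_EP))

def J_resolvent(w, phi):                      # from M_s, Eq. (10)
    kp = kc*(1 + np.exp(1j*phi))
    km = kc*(1 - np.exp(1j*phi))
    Ms = np.array([[wa - 1j*ka/2, np.sqrt(2)*g1, 0],
                   [np.sqrt(2)*g1, wc - 1j*kp/2, -1j*kc/2*np.exp(1j*phi)],
                   [0, 1j*kc/2*np.exp(1j*phi), wc - 1j*km/2]])
    return np.array([ga**2*np.real(1j*np.linalg.inv(wi*np.eye(3) - Ms)[0, 0])
                     for wi in w])

w = np.linspace(wc - 10*kc, wc + 10*kc, 4001)
for phi in (0.0, 3*np.pi/4, np.pi):
    Ja = J_analytic(w, phi)
    assert np.allclose(Ja, J_resolvent(w, phi))
    print(f"phi = {phi:5.3f}: peak at (w - wc)/kc = {(w[np.argmax(Ja)] - wc)/kc:+.2f}")
```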
We first consider the case of red-detuned plasmon-photon interaction, with coupling rates \(g_{1}=-20\mathrm{meV}\) and \(g_{a}=10\mathrm{meV}\). This value of \(g_{1}\) is similar to the theoretical values previously reported in the literature [48, 25], while \(g_{a}=10\mathrm{meV}\) is far below the achievable plasmon-QE coupling rate in experiments, which can exceed \(40\mathrm{meV}\) for molecule QEs [1, 4, 16] and is over \(100\mathrm{meV}\) for semiconductor quantum dots [57, 3]. Therefore, the aforementioned parameters are attainable in realistic structures. In the inset of Fig. 2, we plot the Purcell factor of the hybrid CEP cavity versus frequency, where it demonstrates distinct lineshapes as \(\phi\) varies. Fig. 2 shows this dependence of the Purcell factor on \(\phi\) more explicitly. We can see that the frequency, magnitude, and linewidth of the Purcell factor vary greatly with \(\phi\). For example, two peaks appear in the Purcell factor for \(\phi\sim 0\), with a dip around the cavity resonance resulting from the destructive interference between the CCW and CW modes at the CEP. In contrast, the Purcell factor with \(\phi=\pi\) presents a single peak close to the resonance frequency of the hybrid cavity (i.e., without CEP), but with a linewidth narrower than the corresponding Lorentzian function with the same maximum, demonstrating a sub-Lorentzian lineshape. In particular, the Purcell factor reaches its maximum at \(\phi\sim 3\pi/4\), accompanied by an eightfold enhancement and order-of-magnitude linewidth narrowing compared to that of the hybrid cavity. This is explained by the complex constructive interference of cavity modes at the CEP. It should be emphasized that in the hybrid CEP cavity, the lineshape of the Purcell factor is no longer Lorentzian, in stark contrast to the hybrid cavity
Figure 2: Purcell factor of hybrid CEP cavity for various \(\phi\) (solid lines). The inset shows the logarithmic plot of the Purcell factor versus frequency and \(\phi\). Purcell factor of the hybrid cavity without CEP is shown for comparison (dashed black line). The parameters are \(\kappa_{i}=\gamma_{nr}=0\), \(g_{1}=-20\mathrm{meV}\), \(g_{a}=10\mathrm{meV}\), and the plasmon-photon detuning \(\Delta_{ac}=\omega_{a}-\omega_{c}=1\mathrm{eV}\). The quality factors of dipolar plasmonic antenna and WGM modes are \(Q_{a}=18\) and \(Q_{c}=2\times 10^{3}\), respectively.
where the Purcell factor can still be well approximated by a Lorentzian function in the case of far red-detuned plasmon-photon coupling, which is a special case of Fano resonance with a large Fano parameter \(q\)[14; 58; 18].
Distinguished from the low quality factor (\(Q\)) of the plasmonic antenna, which is \(Q_{a}=\omega_{a}/\kappa_{a}=10\sim 20\) for common structures [16; 2; 4; 1], the WGM cavity covers a wide range of \(Q\) varying from several hundred to a million [59; 60; 53; 61], according to the different materials and structure geometries. Therefore, Figs. 3(a) and (b) investigate the Purcell enhancement \(F_{p}/F_{p}^{0}\) and the corresponding linewidth narrowing \(\Gamma_{EP}/\Gamma_{EP}^{0}\) of the hybrid CEP cavity versus the quality factor \(Q_{c}=\omega_{c}/\kappa\) of the WGM modes and \(g_{1}\), respectively, where \(F_{p}=\max[J(\omega)/J_{0}(\omega)]\) is the maximal Purcell factor and \(\Gamma_{EP}\) is the linewidth of the hybrid CEP cavity, while \(F_{p}^{0}\) and \(\Gamma_{EP}^{0}\) denote the corresponding quantities of the hybrid cavity. The results exhibit two remarkable features. One is that the Purcell enhancement can always be achieved in the chosen parameter ranges, as Fig. 3(a) shows, and a maximum of eightfold enhancement can be obtained for arbitrary \(Q_{c}\) with an optimal \(g_{1}\), which is denoted as \(g_{1}^{\rm opt}\) and indicated by the white dashed line in Figs. 3(a) and (b). However, a smaller \(Q_{c}\) requires stronger plasmon-photon interaction to reach an eightfold Purcell enhancement. On the other hand, Fig. 3(b) shows that \(g_{1}^{\rm opt}\) corresponds exactly to the greatest linewidth narrowing, where we find \(\Gamma_{EP}/\Gamma_{EP}^{0}\sim 0.1\) and is almost unchanged as \(Q_{c}\) varies. From the analytical expression of the spectral density [Eq. (11)], we find \(g_{1}^{\rm opt}\approx-\sqrt{3\Delta_{ac}\kappa_{c}}/2\) and the maximal Purcell enhancement (also the greatest linewidth narrowing) is achieved at \(\omega\approx\omega_{c}-3\kappa_{c}/2\), where \(\Delta_{ac}=\omega_{a}-\omega_{c}\) is the plasmon-photon detuning. In Fig. 3(c), we compare the analytically predicted \(g_{1}^{\rm opt}\) (solid line) with the numerical results (hollow circles), where good agreement can be seen. It shows that a WGM cavity with a moderate quality factor of \(Q_{c}=10^{3}\sim 10^{4}\) is suitable to demonstrate the enhanced Purcell effect at the CEP since it yields \(\left|g_{1}^{\rm opt}\right|=10\sim 30\)meV, which is attainable in common plasmonic-photonic cavities.
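The stated optimum is easy to cross-check numerically; a brute-force scan over \(g_{1}\) (a sketch, assuming \(\phi=3\pi/4\) and the other parameters as above; the reference without CEP uses \(\chi_{EP}\to 2\chi_{c}\)) should land close to the analytical value \(g_{1}^{\rm opt}=-\sqrt{3\Delta_{ac}\kappa_{c}}/2\):

```python
import numpy as np

wa, wc, ga = 3.0, 2.0, 0.010
ka, Dac, phi = wa/18, wa - wc, 3*np.pi/4

for Qc in (2e3, 1e4):
    kc = wc/Qc
    w = np.linspace(wc - 20*kc, wc + 20*kc, 8001)
    chi_a = 1.0/(w - wa + 1j*ka/2)
    chi_c = 1.0/(w - wc + 1j*kc/2)
    chi_EP = 2*chi_c - 1j*kc*np.exp(1j*phi)*chi_c**2
    g1s = -np.linspace(1e-3, 60e-3, 600)
    ratio = []
    for g1 in g1s:
        Fp  = np.max(-np.imag(chi_a/(1 - g1**2*chi_a*chi_EP)))   # with CEP
        Fp0 = np.max(-np.imag(chi_a/(1 - g1**2*chi_a*2*chi_c)))  # without CEP
        ratio.append(Fp/Fp0)
    g1_scan = g1s[int(np.argmax(ratio))]
    print(f"Qc = {Qc:.0e}: scan {g1_scan*1e3:6.1f} meV "
          f"vs formula {-np.sqrt(3*Dac*kc)/2*1e3:6.1f} meV")
```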
## IV Enhanced coherent light-matter interaction at CEP
The Purcell enhancement and linewidth narrowing at the CEP are expected to enhance the coherent light-matter interaction. This can be revealed from the emission spectrum of the QE, which is defined as \(S(\omega)=\lim_{t\rightarrow\infty}\text{Re}\left[\int_{0}^{\infty}d\tau e^{i \omega\tau}\left\langle\sigma_{+}(t)\sigma_{-}(t+\tau)\right\rangle\right]\)[55], where the two-time correlation function \(\left\langle\sigma_{+}(t)\sigma_{-}(t+\tau)\right\rangle\) can be calculated using the quantum regression theorem in a fashion similar to Eqs. (7)-(10). The emission spectrum of the QE is expressed as (see Appendix C for detailed derivation)
\[S(\omega)=\frac{1}{\pi}\frac{\gamma+\Gamma(\omega)}{\left[\omega-\omega_{0}- \Delta(\omega)\right]^{2}+\left[\frac{\gamma+\Gamma(\omega)}{2}\right]^{2}} \tag{12}\]
where the local coupling strength \(\Gamma(\omega)\) and the photon-induced Lamb shift \(\Delta(\omega)\) are related to the cavity polarizability and given by \(\Gamma(\omega)=-2g_{a}^{2}\text{Im}[\chi_{sys}(\omega)]\) and \(\Delta(\omega)=g_{a}^{2}\text{Re}[\chi_{sys}(\omega)]\), respectively. The temporal dynamics of the QE can be retrieved from the Fourier transform of the emission spectrum.
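A minimal numerical sketch of Eq. (12) (our illustration with the Fig. 4 parameters; here we take \(\chi_{sys}(\omega)=\chi_{a}(\omega)/[1-g_{1}^{2}\chi_{a}(\omega)\chi_{EP}(\omega)]\) as read off from Eq. (11), with \(g_{c}=0\), and the QE tuning is an assumption for illustration) locates the Rabi peaks of the QE emission spectrum:

```python
import numpy as np

wa, wc = 3.0, 2.0
ka, kc = wa/18, wc/1e3            # Q_a = 18, Q_c = 1e3
g1, ga, gam = -0.024, 0.010, 3e-6
phi = 3*np.pi/4
w0 = wc - 2*g1**2/(wa - wc)       # QE tuned near the hybrid photonic mode

def S(w):                          # Eq. (12)
    chi_a = 1.0/(w - wa + 1j*ka/2)
    chi_c = 1.0/(w - wc + 1j*kc/2)
    chi_EP = 2*chi_c - 1j*kc*np.exp(1j*phi)*chi_c**2
    chi_sys = chi_a/(1 - g1**2*chi_a*chi_EP)
    Gam = -2*ga**2*np.imag(chi_sys)   # local coupling strength Gamma(w)
    Dlt = ga**2*np.real(chi_sys)      # photon-induced Lamb shift Delta(w)
    return (gam + Gam)/np.pi/((w - w0 - Dlt)**2 + ((gam + Gam)/2)**2)

w = np.linspace(wc - 10*kc, wc + 10*kc, 8001)
Sw = S(w)
is_peak = np.concatenate(([False], (Sw[1:-1] > Sw[:-2]) & (Sw[1:-1] > Sw[2:]), [False]))
print("Rabi peaks (eV):", w[is_peak])  # well-separated peaks signal strong coupling
```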
Fig. 4(a) plots the emission spectrum of the QE corresponding to the maximal Purcell enhancement in the hybrid CEP cavity (i.e., \(\phi=3\pi/4\)), where the strong-coupling anticrossing can be clearly seen by varying the transition frequency of the QE. Without CEP, in contrast, no Rabi splitting can be seen in the emission spectrum of the QE, as Fig. 4(c) shows. By comparing with the eigenenergies (dashed lines), we can see that the peak location of the emission spectrum exactly corresponds to the transition frequency of the QE, with broadened linewidth around the cavity resonance due to the Purcell effect. This reflects the fact that the
Figure 3: Purcell enhancement \(F_{p}/F_{p}^{0}\) (a) and linewidth narrowing \(\Gamma_{EP}/\Gamma_{EP}^{0}\) (b) of hybrid CEP cavity versus \(Q_{c}\) and \(g_{1}\). The white dashed line traces the locations of maximal Purcell enhancement. (c) Comparison of the optimal plasmon-photon coupling \(g_{1}^{\rm opt}\) for the maximal Purcell enhancement obtained from the numerical results (circles) and predicted by the analytical expression \(g_{1}^{\rm opt}=-\sqrt{3\Delta_{ac}\kappa_{c}}/2\) (solid line), where \(\Delta_{ac}=\omega_{a}-\omega_{c}\) is the plasmon-photon detuning. Parameters not mentioned are the same as Fig. 2.
QE-cavity interaction is in the weak-coupling regime. Moreover, the strong light-matter interaction occurring in the hybrid CEP cavity is also evidenced by the well-separated Rabi splitting in the emission spectrum of the QE [Fig. 4(b)] and the prominent Rabi oscillation in the temporal domain [Fig. 4(d)].
It is worth noting that the Rabi oscillation decaying at a rate \(\Gamma_{p}\) smaller than the emission rate \(\gamma_{\rm eff}\) of the QE in the weak-coupling regime, as seen in Fig. 4(d), is counterintuitive. In the standard cavity QED system (i.e., the Jaynes-Cummings model) [55], we have \(\Gamma_{p}=\kappa_{c}/2\) and \(\gamma_{\rm eff}=4g^{2}/\kappa_{c}\) when \(\gamma_{0}\ll\kappa_{c}\), where \(g\) is the coupling rate of the QE-cavity interaction. On the other hand, the critical coupling rate at the onset of strong coupling is \(g_{0}=\kappa_{c}/4\) [63], which yields \(\Gamma_{p}>\gamma_{\rm eff}\) (at \(g=g_{0}\), \(\gamma_{\rm eff}=\kappa_{c}/4<\kappa_{c}/2=\Gamma_{p}\)), i.e., the Rabi oscillation should decay faster than a QE weakly coupled to the cavity. Therefore, the results of Fig. 4(d) imply that the composed system enters the strong-coupling regime mainly owing to the linewidth narrowing at CEP. This feature is more evident for a composed system whose QE-cavity interaction is already in the strong-coupling regime without CEP; see Fig. 4(e) for an example. In such a case, we find that the locations of the two Rabi peaks in the emission spectrum of the QE remain approximately unchanged in the presence of CEP, and so does the period of the Rabi oscillation, as shown in Fig. 4(f). This indicates that the effective coupling rate between the QE and the cavity is similar to the case without CEP. However, the linewidth of the Rabi peaks is significantly narrower and, as a consequence, the Rabi oscillation manifests a slower decay. The results demonstrate the enhanced quantum coherence by CEP in both the weak- and strong-coupling regimes.
## V Enhanced quantum yield at CEP
In the above section, we discussed the enhanced coherent light-matter interaction in the hybrid CEP cavity with off-resonant plasmon-photon coupling. In the following, we demonstrate the possibility of enhancing the quantum yield by CEP in the case of resonant plasmon-photon
Figure 4: CEP-enhanced quantum coherence. (a) and (c) Emission spectrum of the QE versus the QE-cavity detuning for the hybrid CEP cavity with \(\phi=3\pi/4\) and for the hybrid cavity without CEP, respectively. The white dashed lines trace the eigenenergies of \(\mathbf{M}_{0}\) [Eq. (4)]. (b) and (d) Emission spectrum of the QE with equal splitting and the corresponding temporal dynamics, respectively. The emission spectrum of the QE in the hybrid CEP cavity in (b) is shifted to the peak location of that without CEP for the sake of comparison, and the corresponding frequency is given on the upper \(x\)-axis. The circles in (d) plot the results obtained by Fourier transforming the emission spectrum of the QE, while the solid lines show the results obtained by numerically solving the extended cascaded quantum master equation [Eqs. (1) and (2)] using QuTiP [62]. The parameters are \(\gamma_{0}=3\mu\)eV, \(g_{1}=-24\)meV and \(Q_{c}=10^{3}\). (e) Comparison of the emission spectrum of the QE in the strong-coupling regime. The pink solid line indicates the emission spectrum of the QE in the hybrid CEP cavity with \(\phi=\phi^{\rm opt}\) corresponding to the maximal Purcell factor, while the blue dashed line shows the emission spectrum of the QE in the hybrid cavity without CEP. The inset shows the Purcell factor of the hybrid CEP cavity as a function of frequency and \(\phi\), where the horizontal dashed line indicates \(\phi^{\rm opt}\). The parameters are \(g_{1}=-20\)meV, \(g_{a}=40\)meV and \(Q_{c}=10^{4}\). (f) The corresponding temporal dynamics of the QE. The results are obtained by Fourier transforming the emission spectrum. Parameters not mentioned are the same as Fig. 2.
coupling, where a low quantum yield is expected due to the severe nonradiative loss of the plasmonic resonance. The equations for evaluating the quantum yield are given by
\[\frac{d}{dt}\vec{p_{0}}=-i\mathbf{M_{0}}\vec{p_{0}}-i\mathbf{\Omega} \tag{13}\]
where the frequency in the diagonal elements of \(\mathbf{M_{0}}\) is now replaced by the frequency detuning \(\Delta_{L}=\omega_{X}-\omega_{L}\) between the system constituents and the driving field, with \(X=0,a,ccw,cw\) and \(\omega_{L}\) being the frequency of the driving field. The composed system is initially in the ground state and \(\mathbf{\Omega}=\left[p_{in},0,0,0\right]^{T}\) accounts for a weak coherent drive of the QE with amplitude \(p_{in}\), which is introduced by implementing a driving Hamiltonian \(H_{d}=p_{in}\left(e^{-i\omega_{L}t}\sigma_{+}+\sigma_{-}e^{i\omega_{L}t}\right)\) in Eq. (1). The steady-state solutions of Eq. (13) are used to calculate the quantum yield, which is defined as \(\eta=\Phi_{r}/\left(\Phi_{r}+\Phi_{d}\right)\), with the radiation power \(\Phi_{r}=\langle\left(\sqrt{\kappa_{r}}a^{\dagger}+\sqrt{\gamma_{0}}\sigma_{+}\right)\left(\sqrt{\kappa_{r}}a+\sqrt{\gamma_{0}}\sigma_{-}\right)\rangle+\kappa_{c}\left(\langle c_{ccw}^{\dagger}c_{ccw}\rangle+\langle c_{cw}^{\dagger}c_{cw}\rangle\right)\) and the absorption power of the plasmonic modes \(\Phi_{d}=\kappa_{o}\langle a^{\dagger}a\rangle+\gamma_{m}\langle\sigma_{+}\sigma_{-}\rangle\), where \(\gamma_{m}\) accounts for the dissipation of the QE to higher-order plasmonic modes, i.e., the QE decay rate becomes \(\gamma=\gamma_{0}+\gamma_{nr}+\gamma_{m}\) in Eq. (2). The quantum yield, radiation and absorption power of the hybrid cavity are denoted as \(\eta_{0}\), \(\Phi_{r}^{0}\) and \(\Phi_{d}^{0}\), respectively. It is worth mentioning that the first term in \(\Phi_{r}\) stands for the radiation from both the plasmonic antenna and the QE, considering that the difference between the emission of these two constituents to a detector can be neglected due to their subwavelength dimensions.
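A minimal numerical sketch of this procedure is given below: the steady state of Eq. (13) follows from a single linear solve, and, under the weak coherent drive assumed in the text, the quadratic expectation values in \(\Phi_{r}\) and \(\Phi_{d}\) are approximated by products of the steady-state amplitudes (a coherent-state factorization, which is our assumption here); the ordering of the amplitude vector is also an assumption.

```python
import numpy as np

# Steady state of Eq. (13): 0 = -i M0 p - i Omega  =>  p_ss = -inv(M0) @ Omega.
# Assumptions: amplitude ordering [sigma, a, c_ccw, c_cw] and coherent-state
# factorization of the quadratic expectation values (valid for a weak drive).
def quantum_yield(M0, p_in, kappa_r, kappa_o, kappa_c, gamma_0, gamma_m):
    Omega = np.array([p_in, 0.0, 0.0, 0.0], dtype=complex)
    sig, a, ccw, cw = -np.linalg.solve(M0, Omega)
    Phi_r = (abs(np.sqrt(kappa_r) * a + np.sqrt(gamma_0) * sig)**2
             + kappa_c * (abs(ccw)**2 + abs(cw)**2))       # radiation power
    Phi_d = kappa_o * abs(a)**2 + gamma_m * abs(sig)**2    # plasmonic absorption
    return Phi_r / (Phi_r + Phi_d)
```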
To better illustrate the enhancement of the quantum yield at CEP, we first adopt the parameters of a hybrid cavity reported in Ref. [15] at low temperature. The results are presented in Fig. 5, where the main panel shows the quantum yield \(\eta\) with CEP versus \(Q_{c}\) and \(\phi\); meanwhile, the results without CEP are provided in the bottom panel for comparison. We can see that the maximal quantum yield without CEP is \(\eta_{\max}^{0}=0.636\), achieved at \(Q_{c}\sim 2\times 10^{4}\); while in a hybrid CEP cavity, an enhanced quantum yield (i.e., \(\eta>\eta_{\max}^{0}\)) can be found with \(Q_{c}\) varying from \(2\times 10^{3}\) to \(10^{6}\), as indicated by the red dotted line. In addition, a high quantum yield with \(\eta>0.8\) can be achieved in a wide range of parameters.
To shed light on the physical mechanism of the enhanced quantum yield at CEP, we plot \(\eta\) as a function of the frequency detuning \(\Delta_{L}\) in the upper panels of Fig. 6, where two situations are analyzed. The parameters of the first are similar to those of the hybrid cavity achieving \(\eta_{\max}^{0}\), with \(Q_{c}\sim 1.5\times 10^{4}\) and \(\phi=3\pi/2\), as the green star in Fig. 5 indicates. The maximal quantum yield \(\eta_{\max}\) of the hybrid CEP cavity is enhanced by \(50\%\) and can reach \(0.92\), as we see in the upper panel of Fig. 6(a). This enhancement stems from the CEP-reduced absorption of the dipolar plasmonic mode, see the absorption power \(\Phi_{d}\) shown in the lower panel of Fig. 6(a). It shows that the minimum of the absorption power is reduced by one order of magnitude in the presence of CEP, and the valley of the absorption power corresponds exactly to \(\eta_{\max}\). This feature indicates that the enhanced quantum yield originates from the Fano interference, as found by the authors of Ref. [15], while in our setup this mechanism is strengthened by CEP. The white dashed dotted line in Fig. 5 surrounds the region of \(\eta_{d}=\min\left[\Phi_{d}^{0}\right]/\min\left[\Phi_{d}\right]>10\), indicating the parameter area of significantly reduced plasmonic absorption. We can see that for a hybrid CEP cavity with high \(Q_{c}\), a high quantum yield is still hard to achieve by means of reducing the plasmonic absorption. In such a case, the improvement of the quantum yield is hindered by the low radiation power of the cavity.
We then focus on the region of high quantum yield extending to high \(Q_{c}\) at the upper right corner of Fig. 5, with \(\phi\) around zero (or \(2\pi\)). The quantum yield \(\eta\) versus \(\Delta_{L}\) for \(\phi=0\) and \(Q_{c}=10^{5}\) (indicated by the yellow triangle in Fig. 5) is analyzed in the upper panel of Fig. 6(b). It shows that a high quantum yield can be achieved in a narrow range of frequency around \(\Delta_{L}=0\), where \(\eta\) can reach \(0.98\) while the maximal quantum yield without CEP (\(\eta_{\max}^{0}\)) is below \(0.5\). We notice that in the hybrid CEP cavity, the frequency corresponding to the maximal quantum yield \(\eta_{\max}\) deviates from that of the hybrid cavity; actually, the quantum yield with \(\phi=0\) is slightly
Figure 5: Quantum yield \(\eta\) of the hybrid CEP cavity versus \(Q_{c}\) and \(\phi\). The red dotted line surrounds the region of enhanced quantum yield at CEP, i.e., \(\eta>\eta_{0}\). The black dashed line shows the parameter area of cavity radiation enhancement \(\eta_{r}=\max\left[\Phi_{r}\right]/\max\left[\Phi_{r}^{0}\right]>10\), while the white dashed dotted line indicates the parameter area of plasmon absorption reduction \(\eta_{d}=\min\left[\Phi_{d}^{0}\right]/\min\left[\Phi_{d}\right]>10\) for the dipolar plasmonic mode. The green star and yellow triangle indicate the parameters of Figs. 6(a) and (b), respectively. The bottom panel shows the quantum yield \(\eta_{0}\) of the hybrid cavity without CEP. Other parameters are the same as Ref. [15]: \(g_{c}=0.144\)meV, \(g_{a}=7.2\)meV, \(g_{1}=-2.9\)meV, \(\kappa_{r}=2.45\)meV, \(\kappa_{o}=200\)meV, \(\gamma_{0}=3\mu\)eV, \(\gamma_{m}=83\mu\)eV and \(\gamma_{nr}=0\).
reduced at the frequency of \(\eta_{\max}^{0}\). For \(\phi=\pi\), which corresponds to the region of Fano-reduced plasmonic absorption, only a weak enhancement of the quantum yield is observed at the frequencies of \(\eta_{\max}\) and \(\eta_{\max}^{0}\). Therefore, the results clearly indicate that the underlying mechanism of \(\eta_{\max}\) achieved at CEP is essentially different from the Fano-reduced plasmonic absorption of the hybrid cavity. By inspecting the output power of the cavity radiation \(\Phi_{r}\) [the lower panel of Fig. 6(b)], we find a sharp peak appearing exactly at the frequency of \(\eta_{\max}\), with an enhancement of about three orders of magnitude compared to that of the hybrid cavity. It reveals that the striking improvement of the quantum yield with \(\phi=0\) comes from the enhanced cavity radiation at CEP, which can be interpreted as a phenomenon analogous to superscattering, where the partial radiation of the eigenmodes adds up to produce an extremely sharp peak in \(\Phi_{r}\) without interference [64]. To the best of our knowledge, this mechanism of enhanced quantum yield, i.e., the superscattering at CEP, has not been reported in plasmonic-photonic cavities. The black dashed line in Fig. 5 indicates the region of \(\eta_{r}=\max\left[\Phi_{r}\right]/\max\left[\Phi_{r}^{0}\right]>10\), where we see that the prominent superscattering enables a high quantum yield with high \(Q_{c}\) around \(\phi=0\) (or \(2\pi\)).
To gain more insight into how the cavity radiation is enhanced by the superscattering at CEP, we derive a formalism of eigenmode decomposition for the scattering spectrum of the hybrid CEP cavity [64]. In the absence of the QE, the dynamics of the cavity modes is described by \(d\langle\vec{c}(t)\rangle/dt=-i\mathbf{M_{c}}\langle\vec{c}(t)\rangle-i\mathbf{\Omega_{p}}\), where the frequency in the diagonal elements of \(\mathbf{M_{c}}\) is again replaced by the frequency detuning \(\Delta_{L}\), and \(\mathbf{\Omega_{p}}=\left[q_{in},0,0\right]^{T}\) accounts for the driving field of the dipolar plasmonic mode. This equation can be rewritten as
\[i\frac{d}{dt}\langle\vec{c}(t)\rangle=V^{-1}BV\langle\vec{c}(t)\rangle+ \mathbf{\Omega_{p}} \tag{14}\]
where \(B\) and \(V\) are the diagonal matrix formed from the eigenvalues of \(\mathbf{M_{c}}\) and the corresponding left eigenvectors in matrix form, respectively. The above equation can be solved through the Fourier transform. The solution is
\[\vec{c}\left(\Delta_{L}\right)=V^{-1}\left(\Delta_{L}I-B\right)^{-1}V\mathbf{ \Omega_{p}} \tag{15}\]
The scattering of hybrid CEP cavity is expressed as \(\sigma(\Delta_{L})=s^{\dagger}(\Delta_{L})s(\Delta_{L})\), with \(s(\Delta_{L})=K\vec{c}(\Delta_{L})\), where \(K\) defines the coupling \(\Gamma\) between different scattering channels, which are independent in our setup
\[\Gamma=K^{\dagger}K=\left[\begin{array}{ccc}\kappa_{a}&0&0\\ 0&\kappa_{c}&0\\ 0&0&\kappa_{c}\end{array}\right] \tag{16}\]
After lengthy calculations (see Appendix D for details), we can obtain the expression of \(\sigma(\Delta_{L})\) at \(\Delta_{L}=0\)
\[\begin{split}\sigma_{0}\equiv\sigma(0)=\frac{p}{\gamma_{a} \gamma_{b}\gamma_{c}}\left\{h_{aa}\left|C_{a}\right|^{2}+h_{bb}\left|C_{b} \right|^{2}+h_{cc}\left|C_{c}\right|^{2}\right.\\ \left.+2\operatorname{Re}\left[h_{ab}C_{a}^{*}C_{b}+h_{ac}C_{a}^{*}C_{c }+h_{bc}C_{b}^{*}C_{c}\right]\right\}\end{split} \tag{17}\]
where \(p=\operatorname{Det}\left[V\right]^{-2}\). \(C_{i}\) is defined as the radiation pattern of the eigenmodes, with \(i=a,b,c\), while \(h_{ij}\) and \(h_{ii}\) encode the interference and the weighted coefficients of the eigenmodes, respectively. For the specific expressions of \(C_{i}\), \(h_{ii}\) and \(h_{ij}\), we refer to Appendix D. \(\gamma_{i}\) is the imaginary part of the eigenvalues of \(\mathbf{M_{c}}\) in decreasing order, i.e., \(\gamma_{a}>\gamma_{b}>\gamma_{c}\); therefore, eigenmode \(a\) is superradiant. We denote the first term as \(\sigma_{\sup}\) and the remaining terms as \(\sigma_{\mathrm{so}}\), thus \(\sigma_{0}=\sigma_{\sup}+\sigma_{\mathrm{so}}\). Therefore, if the condition \(\gamma_{a}\gg\gamma_{b},\gamma_{c}\) is satisfied, which holds true for the hybrid CEP cavity, the superscattering will occur when \(\sigma_{\mathrm{so}}>\sigma_{\sup}>0\), giving rise to a sharp peak in the scattering spectrum.
Fig. 7(a) compares \(\sigma(\Delta_{L})\) for \(\phi=0\) and \(3\pi/2\), with other parameters the same as in Fig. 6(b). We can see that a sharp peak appears around \(\Delta_{L}=0\) in both cases, but the intensity for \(\phi=0\) is much higher than that for \(\phi=3\pi/2\). On the contrary, no peak can be found around \(\Delta_{L}=0\) without CEP. In Figs. 7(b) and (c), we plot the corresponding eigenmode decomposition of \(\sigma(\Delta_{L})\) around \(\Delta_{L}=0\). We find \(\sigma_{\mathrm{so}}\gg\sigma_{\sup}\) for \(\phi=0\); furthermore, the scattering peak is mainly contributed by \(\sigma_{\mathrm{so}}\) and demonstrates a subnatural linewidth. These features signify the occurrence of superscattering at CEP. For \(\phi=3\pi/2\), Fig. 7(c) shows a scattering peak appearing at the local maximum of \(\sigma_{\mathrm{so}}\); however, in this case \(\sigma_{\mathrm{so}}\) is negative around \(\Delta_{L}=0\), i.e., \(\sigma_{\sup}>0>\sigma_{\mathrm{so}}\). Therefore, it belongs to the intermediate mechanism between
Figure 6: Reduced plasmonic absorption and enhanced cavity radiation for high quantum yield of hybrid CEP cavity. (a) Quantum yield (upper panel) and plasmon absorption of dipolar mode (lower panel) in hybrid CEP cavity for \(Q_{c}\sim 1.5\times 10^{4}\) (see the green star in Fig. 5). (b) Quantum yield (upper panel) and cavity radiation (lower panel) of hybrid CEP cavity for \(Q_{c}=10^{5}\) (see the yellow triangle in Fig. 5). The inset shows a close-up of quantum yield around \(\Delta_{L}=0\). In (a) and (b), the results of hybrid cavity without CEP are also shown for comparison (black dashed lines). Parameters not mentioned are the same as Fig. 5.
electromagnetically induced transparency and superscattering.
In view of the advantage of the hybrid CEP cavity in enhancing the quantum yield at low temperature, we now investigate its performance at room temperature. In this case, the intrinsic quantum yield of the QE [i.e., \(\gamma_{0}/(\gamma_{0}+\gamma_{m}+\gamma_{nr})\)] approaches zero, while the quantum yield of the hybrid cavity is dramatically reduced by the nonradiative decay \(\gamma_{nr}\) of the QE. We evaluate that the quantum yield of the hybrid cavity decreases from 0.58 to less than 0.014 for \(Q_{c}=10^{4}\) and \(\gamma_{nr}=15\)meV [50, 51]. With the same \(Q_{c}\) and \(\gamma_{nr}\), Fig. 7(d) displays the quantum yield of the hybrid CEP cavity as a function of \(g_{a}\) and \(g_{1}\), showing that \(\eta>0.9\) can be achieved with \(g_{a},|g_{1}|>15\)meV. It also shows that strong plasmon-photon and plasmon-QE interactions are beneficial for improving the quantum yield: \(\eta\) rapidly grows from \(\sim 0.4\) to \(\sim 0.9\) as the coupling rates \(g_{a}\) and \(|g_{1}|\) increase from several meV to 15meV. With \(g_{a},|g_{1}|\sim 30\)meV, which is attainable in realistic structures [25, 48], a near-unity quantum yield can be achieved at room temperature, demonstrating an over hundredfold enhancement of the quantum yield. As \(Q_{c}\) reduces to \(10^{3}\), Fig. 7(e) shows that the quantum yield significantly decreases in a wide range of parameters, but the hybrid CEP cavity with stronger plasmon-photon and plasmon-QE interactions manifests higher robustness. For example, the quantum yield drops from 0.88 to 0.45 for \(g_{a},|g_{1}|=15\)meV, while a high quantum yield \(\eta_{\text{max}}\approx 0.9\) can still be maintained with \(g_{a},|g_{1}|=30\)meV.
The results presented in Figs. 7(d) and (e) indicate that a high \(Q_{c}\) helps to improve the quantum yield of the hybrid CEP cavity. Fig. 7(f) investigates the quantum yield versus \(\Delta_{L}\) for various \(Q_{c}\), where \(\eta_{\text{max}}\) is found to reach 0.992 at room temperature with \(Q_{c}=10^{5}\). It also indicates that this near-unity quantum yield is achieved by the superscattering at CEP, while the mechanism of Fano interference fails to produce a high quantum yield at room temperature. Therefore, the results demonstrate the unique advantage of the hybrid CEP cavity in realizing a high quantum yield and its great potential in practical applications, such as building room-temperature single-photon sources [65, 66].
## VI A physical realization
In the above discussion, we omit the intrinsic decay of the WGM modes. We will address this issue in this
Figure 7: Superscattering and near-unity quantum yield at room temperature. (a) Scattering spectrum \(\sigma(\Delta_{L})\) of the hybrid CEP cavity for plasmon driving. The inset shows a close-up of \(\sigma(\Delta_{L})\) around \(\Delta_{L}=0\). (b) and (c) Decomposition of the cavity scattering \(\sigma(\Delta_{L})\) into the superscattering (\(\sigma_{\text{sup}}\)) and other (\(\sigma_{\text{so}}\)) terms for the cases of \(\phi=0\) and \(3\pi/2\), respectively. Parameters of these two cases are indicated by the yellow triangle and green star in Fig. 5, respectively. The results of the hybrid cavity without CEP are also shown for comparison (dashed lines). (d) and (e) Room-temperature quantum yield \(\eta\) of the hybrid CEP cavity versus \(g_{a}\) and \(g_{1}\) for \(Q_{c}=10^{4}\) and \(10^{3}\), respectively. \(\eta_{0}\) denotes the quantum yield of the hybrid cavity without CEP. (f) Comparison of the room-temperature quantum yield versus the frequency detuning for various \(Q_{c}\). The quantum yield of the hybrid cavity without CEP is also shown for comparison (black dashed line). The parameters are \(\phi=0\), \(g_{1}=-15\)meV, \(g_{a}=20\)meV and \(\gamma_{nr}=15\)meV. Other parameters are the same as Fig. 6.
section by evaluating the quantum yield of the QE in a realistic structure of the hybrid CEP cavity. To do this, we need to extract the system parameters from the simulation data by applying the analytical LDOS. Therefore, we should first validate our LDOS theory.
Fig. 8(a) depicts the geometry of the hybrid CEP cavity under study, which is based on a SiN microdisk with a gold dimer placed on the top surface. The refractive index of the microdisk is \(n=2\), with an imaginary component of \(4\times 10^{-6}\) to include the material absorption [14]. The permittivity \(\epsilon(\omega)\) of gold is characterized by the Drude model \(\epsilon(\omega)=1-\omega_{g}^{2}/\left[\omega(\omega+i\gamma_{g})\right]\), where \(\omega_{g}=1.26\times 10^{16}\) rad/s and \(\gamma_{g}=1.41\times 10^{14}\) rad/s are the plasma frequency and the collision rate, respectively [23]. The gold dimer on the SiN slab supports a single localized plasmon resonance with the peak appearing at \(\omega_{a}=2.254\)eV and a linewidth (total decay rate) of \(\kappa_{a}=255\)meV, of which the radiative decay rate is evaluated as \(\kappa_{r}=13\)meV. A point-dipole QE is located at the gap center of the gold dimer.
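As a quick check of the quoted Drude parameters, the sketch below evaluates \(\epsilon(\omega)\) at the plasmon resonance of 2.254 eV; a strongly negative real part with a small imaginary part is expected for gold in this range.

```python
import numpy as np

# Drude permittivity of gold, eps(w) = 1 - w_g^2 / [w (w + i gamma_g)],
# evaluated at the plasmon resonance with the parameters quoted in the text.
w_g, gamma_g = 1.26e16, 1.41e14      # plasma frequency and collision rate [rad/s]
hbar_eV = 6.582119569e-16            # hbar [eV s]
w = 2.254 / hbar_eV                  # 2.254 eV -> angular frequency [rad/s]
eps = 1.0 - w_g**2 / (w * (w + 1j * gamma_g))
print(eps)  # strongly negative real part, small imaginary part, as expected
```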
According to the LDOS theory developed in Sec. II, the Purcell enhancement felt by the QE can be modified by adjusting the cavity-waveguide separation \(s\) and the distance \(L\) between the cavity-waveguide junction and the mirror, as the former determines \(\kappa_{c}\) while the latter controls \(\phi\). To verify our LDOS theory, we consider a pair of degenerate WGM modes with frequency \(\omega_{c}=1.462\)eV and intrinsic linewidth \(\kappa_{i}=0.387\)meV in the absence of the waveguide. The frequency of the cavity resonance is unaffected by the coupling to the waveguide with \(s=100\)nm, but the linewidth is broadened to \(\kappa\), from which we obtain \(\kappa_{c}=0.867\)meV by subtracting the intrinsic linewidth from \(\kappa\). With the parameters of the uncoupled constituents at hand, we can employ the analytical expression of the complete spectral density [i.e., including \(J_{c}(\omega)\) and \(J_{ac}(\omega)\), derived in Appendix B] to determine the plasmon-photon coupling rate \(g_{1}\) by fitting the Purcell factor obtained from electromagnetic simulations. We evaluate \(g_{1}=-11\)meV and find good agreement between the analytical and simulation results, see the blue dashed
Figure 8: Enhanced coherent light-matter interaction and room-temperature quantum yield of a realistic hybrid CEP cavity. (a) Schematic diagram of a hybrid CEP cavity based on a WGM microdisk. A gold dimer consisting of two nanorods is placed on top of the microdisk, with a QE located in the gap center and aligned with the polarization of the dipolar plasmonic mode. The radius of the nanorods is \(40\)nm. Other geometry parameters are indicated in the figure. The Purcell factor can be tailored by adjusting the edge-to-edge separation \(s\) between the cavity and the waveguide and the distance \(L\) between the cavity-waveguide junction and the mirror made of Ag [67]. The width of the waveguide is \(400\)nm. The orange color illustrates the field profiles for \(\phi=\pi\) and \(\omega_{c}=1.462\)eV. (b) Purcell factor of the physical realization shown in (a) for various \(s\) obtained from electromagnetic simulations. The dashed lines with the same color plot the corresponding analytical results given by the quantized few-mode model (Appendix B). (c) Temporal dynamics of the initially excited QE with a dipole moment \(\mu=48\) Debye for the various Purcell factors shown in (b). The inset presents the logarithmic plot of the QE population. (d) Room-temperature quantum yield \(\eta\) of the hybrid CEP cavity as a function of \(Q_{c}\) and \(\kappa_{c}/\kappa_{i}\). \(\eta_{0}\) denotes the quantum yield of the hybrid cavity without CEP. The red star indicates \(Q_{c}\) and \(\kappa_{c}/\kappa_{i}\) of the WGM modes with frequency \(\omega_{c}=2.246\)eV. Other parameters are \(g_{1}=-11\)meV, \(g_{a}=23.6\)meV, \(g_{c}=0.171\)meV and \(\gamma_{nr}=15\)meV. (e) Room-temperature quantum yield \(\eta\) versus \(g_{1}\) for various \(Q_{c}\). The frequencies and decay rates of the uncoupled gold dimer and WGM cavity are provided in the text. (f) Room-temperature quantum yield \(\eta\) for various \(g_{1}\). The parameters are \(\kappa=\omega_{c}/10^{4}\) and \(\kappa_{c}=10^{2}\kappa_{i}\). Other parameters are the same as (d).
and solid lines in Fig. 8(b). Then we introduce the mirror at the right end of the waveguide to create a CEP with \(\phi=\pi\), which corresponds to the maximal Purcell enhancement of this setup. We find that the analytically predicted Purcell factor accords well with the simulation results, see the pink solid and dashed lines in Fig. 8(b). In addition, for the large cavity-waveguide separation \(s=160\)nm, the analytical model also gives a correct prediction of the Purcell factor. Therefore, the results of Fig. 8(b) validate the applicability of our LDOS theory for the hybrid CEP cavity.
Fig. 8(b) shows that the Purcell factor is increased by \(65\%\) after introducing a CEP and doubles as the cavity-waveguide separation increases to \(160\)nm. In Fig. 8(c), we plot the corresponding temporal dynamics of the initially excited QE, which shows a stronger and faster Rabi oscillation with slower decay compared to that without CEP. This indicates a stronger coherent energy exchange between the cavity and the QE as a result of the enhanced Purcell factor with slightly narrower linewidth at CEP.
The intrinsic decay of the cavity modes is inevitable in realistic structures, which hinders the improvement of the quantum yield. Fig. 8(d) displays the quantum yield of the hybrid CEP cavity versus \(Q_{c}\) and \(\kappa_{c}/\kappa_{i}\) with \(\phi=0\), where the QE is resonantly coupled to the WGM modes with frequency \(\omega_{c}=2.246\)eV. The corresponding intrinsic and waveguide-induced decay rates for \(s=40\)nm are indicated by the red star in Fig. 8(d), which yields \(\eta\sim 0.36\). Fig. 8(d) shows that the enhancement of the quantum yield requires an increase of \(Q_{c}\) while simultaneously reducing the intrinsic decay of the WGM modes. This can be achieved by using a WGM cavity with a large radius [39] and made of a high-refractive-index material [48]. With fixed \(\kappa_{c}/\kappa_{i}=100\), Fig. 8(e) shows that the quantum yield for \(|g_{1}|>5\)meV can be significantly enhanced as \(Q_{c}\) varies from \(10^{3}\) to \(10^{4}\), and reaches 0.9 as \(|g_{1}|\) increases to \(20\)meV. However, a high \(Q_{c}\) may also have a negative impact on the quantum yield. When \(|g_{1}|>25\)meV, the increase of \(Q_{c}\) from \(10^{4}\) to \(10^{5}\) leads to a reduction of the quantum yield. This is because the radiation through the cavity is less efficient for a high \(Q_{c}\); a substantial portion of energy is then absorbed by the plasmon for a large \(g_{1}\), since the intrinsic quantum yield of the QE is extremely low. In addition, Fig. 8(f) shows that a large \(g_{1}\) narrows the linewidth of the radiation spectrum and may lead to a reduced quantum yield outside the cavity resonance, which is detrimental to realizing a broadband enhancement of the quantum yield. Therefore, the results show that \(g_{1}\) also plays an important role in determining the quantum yield, though not always in a positive manner.
Finally, we briefly summarize and discuss the findings from this simple design of the hybrid CEP cavity. To achieve a high quantum yield at room temperature, a hybrid CEP cavity with moderate \(|g_{1}|\sim 20\)meV, relatively high \(Q_{c}\sim 10^{5}\) and low \(\kappa_{i}\) is desirable. We note that it is not difficult to find a realistic structure satisfying one or two of these requirements, but it is challenging to meet all of them in a WGM cavity. This is because a high \(Q_{c}\) in general means weak energy leakage, resulting in a small overlap between the plasmonic and photonic modes when the plasmonic antenna is placed outside the cavity. This leads to a tradeoff between \(Q_{c}\) and \(g_{1}\) in the hybrid CEP cavity. To overcome this obstacle, the plasmonic antenna can be embedded in the cavity, with a design analogous to the structure studied in Ref. [48]; however, in such a case the QE location cannot be feasibly controlled in experiments. A possible solution is to introduce a thin air slot at the antinode of the selected WGM modes [68], where the plasmonic antenna and the QE can be placed with precise control. The presence of the air slot in the WGM microdisk will not degrade \(Q_{c}\), since it has a physical volume as small as the plasmonic antenna and can thus be treated as a perturbation of the WGM modes. Such a hybrid CEP cavity design can fulfill the requirements for achieving a high quantum yield at room temperature, and can also be integrated with other on-chip optoelectronic devices.
## VII Conclusion and outlook
In this work, we propose to engineer the plasmonic resonance by virtue of a CEP hosted in a WGM cavity and demonstrate the great tunability of the LDOS provided by CEP. An LDOS theory is established to reveal the Purcell enhancement accompanied by order-of-magnitude linewidth narrowing at CEP, which results in the enhanced coherent light-matter interaction and the reduced dissipation of polaritonic states. Importantly, we identify a new mechanism of enhancing the quantum yield through the superscattering at CEP, which holds great promise for realizing a near-unity quantum yield at room temperature. A physical implementation is used to validate our LDOS theory and to analyze the possible factors that hinder the experimental demonstration of the predicted high quantum yield. One direction of future study is the optimization of the structure design with moderate plasmon-photon interaction, high quality factor and low intrinsic decay. We believe that our work can provide insights into harnessing the non-Hermiticity of open quantum systems for quantum state control, which may benefit diverse quantum-optics applications.
###### Acknowledgements.
This work is supported by the National Natural Science Foundation of China (Grant Nos. 62205061, 12274192, 11874438) and the Key Project of Natural Science Foundation of Henan (Grant No. 232300421141).
## Appendix A Derivation of the extended cascaded quantum master equation
This section is aimed at deriving the quantum master equation (QME) for the hybrid CEP cavity based on the
theoretical framework of waveguide QED [69; 30]. It is called the extended cascaded QME [Eqs. (1) and (2)] since the CW mode is driven by the output field of the CCW mode, similar to a cascaded quantum system [70]. We consider an equivalent model obtained by removing the mirror, as depicted in Fig. 9, where the CW mode is flipped into a mirrored CCW mode through the mirror symmetry. Accordingly, the bosonic annihilation operators of the original and the mirrored CCW modes are denoted as \(c_{L}\) and \(c_{R}\), respectively. We can see that in this equivalent model, only the right-propagating guided mode of the waveguide is involved in the interaction between \(c_{L}\) and \(c_{R}\). The extended cascaded QME can then be derived by tracing out the waveguide modes. The system Hamiltonian including the waveguide modes is written as
\[H_{S}=H+H_{w}+H_{sw} \tag{10}\]
where \(H=H_{0}+H_{I}\) is given in Eq. (1). \(H_{w}\) is the free Hamiltonian of waveguide
\[H_{w}=\int d\omega\omega b_{R}^{\dagger}b_{R} \tag{11}\]
and \(H_{sw}\) describes the Hamiltonian of cavity-waveguide interaction
\[H_{sw}=i\sum_{j=L,R}\int d\omega\sqrt{\frac{\kappa_{c}}{2\pi}}b_{R}^{\dagger} e^{-ikx_{j}}c_{j}+H.c. \tag{12}\]
where \(b_{R}\) is the bosonic annihilation operator of the right-propagating waveguide mode with frequency \(\omega\) and wave vector \(k=\omega_{c}/v\), with \(v\) being the group velocity. \(x_{L}\) and \(x_{R}\) are the positions of the CCW mode and the mirrored CW mode, respectively. Without loss of generality, we set \(x_{L}=0\) and \(x_{R}=2L\) in accordance with the definition in the main text. Applying the transformation \(\widetilde{H}=UHU^{\dagger}-idU/dtU^{\dagger}\) with \(U=\exp\left[i\left(\omega_{c}\sum_{j=L,R}c_{j}^{\dagger}c_{j}+\int d\omega \omega b_{R}^{\dagger}b_{R}\right)t\right]\), we have
\[\widetilde{H}_{sw}(t)=i\sum_{j=L,R}\int d\omega\sqrt{\frac{\kappa_{c}}{2\pi}}b _{R}^{\dagger}e^{i(\omega-\omega_{c})t}e^{-i\omega x_{j}/v}c_{j}+H.c. \tag{13}\]
The equation of motion of \(b_{R}\) can be obtained from the Heisenberg equation
\[\frac{d}{dt}b_{R}(t)=\sum_{j=L,R}\sqrt{\frac{\kappa_{c}}{2\pi}}c_{j}e^{i(\omega -\omega_{c})t}e^{-i\omega x_{j}/v} \tag{14}\]
The above equation can be formally integrated to obtain
\[b_{R}(t)=\sum_{j=L,R}\int_{0}^{t}d\tau\sqrt{\frac{\kappa_{c}}{2\pi}}c_{j}(\tau)e^{i(\omega-\omega_{c})\tau}e^{-i\omega x_{j}/v} \tag{15}\]
where we have taken \(b_{R}(0)=0\) since the waveguide is initially in the vacuum state. On the other hand, the equation of motion of an arbitrary operator \(O\) is given by
\[\frac{d}{dt}O(t)=\sum_{j=L,R}\int d\omega\sqrt{\frac{\kappa_{c}}{2\pi}}\left\{ b_{R}^{\dagger}(t)e^{i(\omega-\omega_{c})t}e^{-i\omega x_{j}/v}\left[O(t),c_{j}(t) \right]-\left[O(t),c_{j}^{\dagger}(t)\right]b_{R}(t)e^{-i(\omega-\omega_{c})t }e^{i\omega x_{j}/v}\right\} \tag{16}\]
Substituting \(b_{R}(t)\) into the above equation, we have
\[\frac{d}{dt}O(t)=\frac{\kappa_{c}}{2\pi}\sum_{j,l=L,R}\int_{0}^{t}d\tau\int d\omega\left\{e^{i(\omega-\omega_{c})(t-\tau)}e^{-i\omega x_{jl}/v}c_{l}^{\dagger}(\tau)\left[O(t),c_{j}(t)\right]-\left[O(t),c_{j}^{\dagger}(t)\right]c_{l}(\tau)e^{-i(\omega-\omega_{c})(t-\tau)}e^{i\omega x_{jl}/v}\right\} \tag{17}\]
where \(x_{jl}=x_{j}-x_{l}\), and we apply the Markov approximation by assuming that the time delay \(x_{jl}/v\) between the CCW mode and the mirrored CW mode can be neglected. Therefore,
\[\begin{split}\frac{\kappa_{c}}{2\pi}\sum_{l=L,R}\int_{0}^{t}d\tau\int d\omega e^{i(\omega-\omega_{c})(t-\tau)}e^{-i\omega x_{jl}/v}c_{l}^{\dagger}(\tau)&=\kappa_{c}\sum_{l=L,R}\int_{0}^{t}d\tau\delta\left(t-\frac{x_{jl}}{v}-\tau\right)e^{-ikx_{jl}}c_{l}^{\dagger}(\tau)\\ &\approx\frac{\kappa_{c}}{2}c_{j}^{\dagger}(t)+\kappa_{c}\sum_{l\neq j}\Theta\left(t-\frac{x_{jl}}{v}\right)e^{-ikx_{jl}}c_{l}^{\dagger}(t)\end{split} \tag{18}\]
Figure 9: Schematic diagram of the equivalent model of the hybrid CEP cavity without the mirror. The original CW mode is flipped into a mirrored CCW mode and, as a result, the QE becomes circularly polarized due to the mirror symmetry. \(x_{L}\) and \(x_{R}\) denote the positions of the cavity modes. Other notations are the same as in the main text.
where \(x_{jl}>0\) and \(\Theta(t)\) is the step function. Substituting this result into the equation of motion above and taking averages, we have
\[\begin{split}\frac{d}{dt}\langle O(t)\rangle=\frac{\kappa_{c}}{2}& \sum_{j=L,R}\left\{\left\langle c_{j}^{\dagger}(t)\left[O(t),c_{j}(t) \right]\right\rangle-\left\langle\left[O(t),c_{j}^{\dagger}(t)\right]c_{j}(t) \right\rangle\right\}\\ &+\kappa_{c}\sum_{j,l=L,R,j\neq l}\left\{e^{-ikx_{jl}}\left\langle c _{l}^{\dagger}(t)\left[O(t),c_{j}(t)\right]\right\rangle-e^{ikx_{jl}}\left\langle \left[O(t),c_{j}^{\dagger}(t)\right]c_{l}(t)\right\rangle\right\}\end{split} \tag{103}\]
Since \(\langle O(t)\rangle=\text{Tr}[O(t)\rho(0)]=\text{Tr}[O\rho(t)]\), we can simplify the averages of operators in the above equation by using the cyclic property of trace
\[\left\langle c_{j}^{\dagger}(t)\left[O(t),c_{j}(t)\right]\right\rangle=\text{Tr}\left[c_{j}^{\dagger}Oc_{j}\rho(t)-c_{j}^{\dagger}c_{j}O\rho(t)\right]=\text{Tr}\left[Oc_{j}\rho(t)c_{j}^{\dagger}-O\rho(t)c_{j}^{\dagger}c_{j}\right]=\text{Tr}\left\{O\left[c_{j},\rho(t)c_{j}^{\dagger}\right]\right\} \tag{104}\]
\[\left\langle\left[O(t),c_{j}^{\dagger}(t)\right]c_{j}(t)\right\rangle=\text{ Tr}\left[Oc_{j}^{\dagger}c_{j}\rho(t)-c_{j}^{\dagger}Oc_{j}\rho(t)\right]= \text{Tr}\left[Oc_{j}^{\dagger}c_{j}\rho(t)-Oc_{j}\rho(t)c_{j}^{\dagger} \right]=\text{Tr}\left\{O\left[c_{j}^{\dagger},c_{j}\rho(t)\right]\right\} \tag{105}\]
Therefore, we can obtain a QME in the following form
\[\begin{split}\frac{d}{dt}\rho(t)=-i[H,\rho(t)]+\frac{\kappa_{c}}{2}\sum_{j=ccw,cw}\left\{\left[c_{j},\rho(t)c_{j}^{\dagger}\right]-\left[c_{j}^{\dagger},c_{j}\rho(t)\right]\right\}\\ +\kappa_{c}\sum_{j,l=ccw,cw,j\neq l}\left\{e^{-ikx_{jl}}\left[c_{j},\rho(t)c_{l}^{\dagger}\right]-e^{ikx_{jl}}\left[c_{j}^{\dagger},c_{l}\rho(t)\right]\right\}\end{split} \tag{106}\]
The second term on the right-hand side can be expanded and rewritten using the Liouvillian superoperator. By replacing \(c_{L}\) and \(c_{R}\) with \(c_{ccw}\) and \(c_{cw}\), respectively, and noting \(kx_{jl}=\phi\) in the third term on the right-hand side, we arrive at the extended cascaded QME of Eqs. (1) and (2), apart from the additional individual decay terms.
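For readers who prefer a numerical route, the sketch below assembles the extended cascaded QME in QuTiP (the package already used for Fig. 4). It keeps only the unidirectional term in which the CW mode is driven by the CCW output, and all parameter values are placeholders rather than the ones used in the figures.

```python
import numpy as np
import qutip as qt

# Sketch of the extended cascaded QME: a two-level QE plus three bosonic modes
# (a, c_ccw, c_cw), each truncated to n_max photons. Placeholder parameters.
n_max = 3
sm  = qt.tensor(qt.destroy(2), qt.qeye(n_max), qt.qeye(n_max), qt.qeye(n_max))
a   = qt.tensor(qt.qeye(2), qt.destroy(n_max), qt.qeye(n_max), qt.qeye(n_max))
ccw = qt.tensor(qt.qeye(2), qt.qeye(n_max), qt.destroy(n_max), qt.qeye(n_max))
cw  = qt.tensor(qt.qeye(2), qt.qeye(n_max), qt.qeye(n_max), qt.destroy(n_max))

w0 = wa = wc = 2.0                                  # resonant constituents [eV]
ga, gc, g1 = 0.02, 5e-4, -0.02                      # coupling rates [eV]
kappa_a, kappa_c, gamma, phi = 0.2, 1e-3, 3e-6, np.pi

H = (w0 * sm.dag()*sm + wa * a.dag()*a + wc * (ccw.dag()*ccw + cw.dag()*cw)
     + ga * (a.dag()*sm + sm.dag()*a)
     + gc * (ccw.dag()*sm + sm.dag()*ccw) + gc * (cw.dag()*sm + sm.dag()*cw)
     + g1 * (a.dag()*(ccw + cw) + (ccw.dag() + cw.dag())*a))

# Local (Lindblad) dissipation of every constituent ...
L = qt.liouvillian(H, [np.sqrt(kappa_a)*a, np.sqrt(kappa_c)*ccw,
                       np.sqrt(kappa_c)*cw, np.sqrt(gamma)*sm])
# ... plus the cascaded term kappa_c { e^{-i phi} [c_cw, rho c_ccw^dag]
#                                     - e^{i phi} [c_cw^dag, c_ccw rho] }:
L += kappa_c * (np.exp(-1j*phi) * (qt.spre(cw) * qt.spost(ccw.dag())
                                   - qt.spost(ccw.dag() * cw))
                - np.exp(1j*phi) * (qt.spre(cw.dag() * ccw)
                                    - qt.spre(ccw) * qt.spost(cw.dag())))
rho_ss = qt.steadystate(L)   # meaningful once a weak drive is added to H
```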
## Appendix B Derivation of the complete spectral density of hybrid CEP cavity
In the main text, we provide the modified-plasmon component \(J_{a}(\omega)\) of the spectral density; here we derive the analytical expressions of the other components. For the modified cavity response \(J_{c}(\omega)\), the dynamics of the corresponding correlation functions is as follows:
\[\frac{d}{d\tau}\left[\begin{array}{c}\left\langle c_{1}^{\dagger}(0)a(\tau) \right\rangle\\ \left\langle c_{1}^{\dagger}(0)c_{1}(\tau)\right\rangle\\ \left\langle c_{1}^{\dagger}(0)c_{2}(\tau)\right\rangle\end{array}\right]=-i \left[\begin{array}{ccc}\omega_{a}-i\frac{\kappa_{a}}{2}&\sqrt{2}g_{1}&0\\ \sqrt{2}g_{1}&\omega_{c}-i\frac{\kappa_{c}\left(1+e^{i\phi}\right)}{2}&i\frac {\kappa_{c}}{2}e^{i\phi}\\ 0&-i\frac{\kappa_{c}}{2}e^{i\phi}&\omega_{c}-i\frac{\kappa_{c}\left(1-e^{i \phi}\right)}{2}\end{array}\right]\left[\begin{array}{c}\left\langle c_{1}^ {\dagger}(0)a(\tau)\right\rangle\\ \left\langle c_{1}^{\dagger}(0)c_{1}(\tau)\right\rangle\\ \left\langle c_{1}^{\dagger}(0)c_{2}(\tau)\right\rangle\end{array}\right] \tag{107}\]
where the characteristic matrix takes the same form as \(\mathbf{M_{s}}\) of Eq. (9). With the initial conditions \(\left\langle c_{1}^{\dagger}(0)a(0)\right\rangle=0\), \(\left\langle c_{1}^{\dagger}(0)c_{1}(0)\right\rangle=1\) and \(\left\langle c_{1}^{\dagger}(0)c_{2}(0)\right\rangle=0\), we obtain the following equation by taking the Laplace transform
\[s\left[\begin{array}{c}\left\langle c_{1}^{\dagger}a(s)\right\rangle\\ \left\langle c_{1}^{\dagger}c_{1}(s)\right\rangle\\ \left\langle c_{1}^{\dagger}c_{2}(s)\right\rangle\end{array}\right]=-i\left[ \begin{array}{ccc}\omega_{a}-i\frac{\kappa_{a}}{2}&\sqrt{2}g_{1}&0\\ \sqrt{2}g_{1}&\omega_{c}-i\frac{\kappa_{c}\left(1+e^{i\phi}\right)}{2}&i\frac {\kappa_{c}}{2}e^{i\phi}\\ 0&-i\frac{\kappa_{c}}{2}e^{i\phi}&\omega_{c}-i\frac{\kappa_{c}\left(1-e^{i\phi} \right)}{2}\end{array}\right]\left[\begin{array}{c}\left\langle c_{1}^{ \dagger}a(s)\right\rangle\\ \left\langle c_{1}^{\dagger}c_{1}(s)\right\rangle\\ \left\langle c_{1}^{\dagger}c_{2}(s)\right\rangle\end{array}\right]+\left[ \begin{array}{c}0\\ 1\\ 0\end{array}\right] \tag{108}\]
The solution of \(\left\langle c_{1}^{\dagger}c_{1}(\tau)\right\rangle\) is given by
\[\left\langle c_{1}^{\dagger}c_{1}(\omega)\right\rangle=-\frac{i\chi_{c_{1}}( \omega)}{-1+2g_{1}^{2}\chi_{a}(\omega)\chi_{c_{1}}(\omega)+\left(\frac{\kappa_ {c}}{2}\right)^{2}e^{i2\phi}\chi_{c_{1}}(\omega)\chi_{c_{2}}(\omega)} \tag{109}\]
where we have transformed from the \(s\) domain to the frequency domain by replacing \(s=-i\omega\), and introduced the polarizabilities for standing wave modes \(c_{1}\) and \(c_{2}\)
\[\chi_{c_{1}}(\omega)=\frac{1}{\omega-\omega_{c}+i\frac{\kappa_{c}\left(1+e^{i \phi}\right)}{2}} \tag{110}\]
\[\chi_{c_{2}}(\omega)=\frac{1}{\omega-\omega_{c}+i\frac{\kappa_{c}(1-e^{i\phi})}{2}} \tag{10}\]
We can rearrange the terms in the denominator of the above equation and obtain
\[\left\langle c_{1}^{\dagger}c_{1}(\omega)\right\rangle=\frac{i\chi_{c}(\omega )\left[1-i\frac{\kappa_{c}}{2}e^{i\phi}\chi_{c}(\omega)\right]}{1-g_{1}^{2} \chi_{a}(\omega)\chi_{EP}(\omega)} \tag{11}\]
Therefore, the modified cavity response is given by
\[J_{c}(\omega)=-g_{c}^{2}\operatorname{Im}\left[\chi_{c}(\omega)\frac{1-i\frac{ \kappa_{c}}{2}e^{i\phi}\chi_{c}(\omega)}{1-g_{1}^{2}\chi_{a}(\omega)\chi_{EP}( \omega)}\right] \tag{12}\]
On the other hand, we can also obtain the solution of \(\left\langle c_{1}^{\dagger}a(\omega)\right\rangle=\left[\left\langle a^{\dagger}c_{1}(\omega)\right\rangle\right]^{\dagger}\) from the same set of equations, which is
\[\left\langle c_{1}^{\dagger}a(\omega)\right\rangle=\frac{i\sqrt{2}g_{1}\chi_ {a}(\omega)\chi_{c_{1}}(\omega)}{1-2g_{1}^{2}\chi_{a}(\omega)\chi_{c_{1}}( \omega)-\left(\frac{\kappa_{c}}{2}\right)^{2}e^{i2\phi}\chi_{c_{1}}(\omega) \chi_{c_{2}}(\omega)} \tag{13}\]
After simplification, we obtain
\[\left\langle c_{1}^{\dagger}a(\omega)\right\rangle=\frac{i\sqrt{2}g_{1}\chi_ {a}(\omega)\chi_{c}(\omega)\left[1-i\frac{\kappa_{c}}{2}e^{i\phi}\chi_{c}( \omega)\right]}{1-g_{1}^{2}\chi_{a}(\omega)\chi_{EP}(\omega)} \tag{14}\]
which yields the crossing interference term of spectral density
\[J_{ac}(\omega)=-2g_{a}g_{c}\operatorname{Im}\left[g_{1}\chi_{a}(\omega)\chi_{ c}(\omega)\frac{1-i\frac{\kappa_{c}}{2}e^{i\phi}\chi_{c}(\omega)}{1-g_{1}^{2} \chi_{a}(\omega)\chi_{EP}(\omega)}\right] \tag{15}\]
We can see that \(J_{c}(\omega)\) and \(J_{ac}(\omega)\) exhibit a more complex dependence on \(\phi\) compared to \(J_{a}(\omega)\). In the case of far red-detuned plasmon-photon coupling, it is possible to derive a simpler analytical expression of the spectral density by adiabatically eliminating the equation of the plasmonic mode. The technical details of this approximation can be found in Ref. [18].
The complete spectral density is given by
\[J(\omega) =J_{a}(\omega)+J_{c}(\omega)+J_{ac}(\omega)\] \[=-\operatorname{Im}\left\{\frac{g_{a}^{2}\chi_{a}(\omega)+g_{c} \chi_{c}(\omega)\left[1-i\frac{\kappa_{c}}{2}e^{i\phi}\chi_{c}(\omega)\right] \left[g_{c}+2g_{a}g_{1}\chi_{a}(\omega)\right]}{1-g_{1}^{2}\chi_{a}(\omega) \chi_{EP}(\omega)}\right\} \tag{16}\]
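The closed-form correlation function \(\left\langle c_{1}^{\dagger}c_{1}(\omega)\right\rangle\) derived above can be cross-checked numerically against a direct inversion of the Laplace-transformed equations, as in the sketch below; the bare plasmon polarizability is assumed to be \(\chi_{a}(\omega)=1/(\omega-\omega_{a}+i\kappa_{a}/2)\), in analogy with \(\chi_{c_{1}}\) and \(\chi_{c_{2}}\), and all parameter values are placeholders.

```python
import numpy as np

# Cross-check: s<v> = -i M <v> + e2 with s = -i w gives <v>(w) = i (w I - M)^{-1} e2,
# whose second component must match the closed-form <c1^dag c1(w)> above.
wa = wc = 0.0
g1, ka, kc, phi, w = -0.02, 0.2, 1e-3, 3*np.pi/4, 0.003
e = np.exp(1j * phi)

M = np.array([[wa - 1j*ka/2,   np.sqrt(2)*g1,        0.0],
              [np.sqrt(2)*g1,  wc - 1j*kc*(1+e)/2,   1j*kc*e/2],
              [0.0,           -1j*kc*e/2,            wc - 1j*kc*(1-e)/2]])
direct = (1j * np.linalg.solve(w*np.eye(3) - M, np.array([0.0, 1.0, 0.0])))[1]

chi_a  = 1.0 / (w - wa + 1j*ka/2)   # assumed bare plasmon polarizability
chi_c1 = 1.0 / (w - wc + 1j*kc*(1+e)/2)
chi_c2 = 1.0 / (w - wc + 1j*kc*(1-e)/2)
closed = -1j*chi_c1 / (-1 + 2*g1**2*chi_a*chi_c1
                       + (kc/2)**2*np.exp(2j*phi)*chi_c1*chi_c2)
assert np.isclose(direct, closed)
```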
## Appendix C Derivation of the emission spectrum of QE
The emission spectrum of the QE studied here is also called the spontaneous emission spectrum or the polarization spectrum [71], which reflects the local dynamics of a QE. We express the emission spectrum of the QE in terms of the polarizability of the hybrid CEP cavity for the sake of physical transparency. The correlation \(\left\langle\sigma_{+}(t)\sigma_{-}(t+\tau)\right\rangle\) can be calculated from Eqs. (1) and (2) using the quantum regression theorem [55], which yields the following equation of motion
\[i\frac{d}{d\tau}\left[\begin{array}{c}\left\langle\sigma_{+}(0)\sigma_{-}( \tau)\right\rangle\\ \left\langle\sigma_{+}(0)a(\tau)\right\rangle\\ \left\langle\sigma_{+}(0)c_{1}(\tau)\right\rangle\\ \left\langle\sigma_{+}(0)c_{2}(\tau)\right\rangle\end{array}\right]=\mathbf{M} _{p}\left[\begin{array}{c}\left\langle\sigma_{+}(0)\sigma_{-}(\tau)\right\rangle \\ \left\langle\sigma_{+}(0)a(\tau)\right\rangle\\ \left\langle\sigma_{+}(0)c_{1}(\tau)\right\rangle\\ \left\langle\sigma_{+}(0)c_{2}(\tau)\right\rangle\end{array}\right] \tag{17}\]
where the matrix \(\mathbf{M}_{p}\) is given in Eq. (6). The initial condition is
\[\left[\begin{array}{c}\left\langle\sigma_{+}(0)\sigma_{-}(0)\right\rangle\\ \left\langle\sigma_{+}(0)a(0)\right\rangle\\ \left\langle\sigma_{+}(0)c_{1}(0)\right\rangle\\ \left\langle\sigma_{+}(0)c_{2}(0)\right\rangle\end{array}\right]=\left[ \begin{array}{c}1\\ 0\\ 0\\ 0\end{array}\right] \tag{18}\]
Laplace transforming the above equation into the \(s\) domain, we have
\[\left[s+i\left(\omega_{0}-i\frac{\gamma}{2}\right)\right]\left\langle\sigma_{+} \sigma_{-}(s)\right\rangle=1-ig_{a}\left\langle\sigma_{+}a(s)\right\rangle-i \sqrt{2}g_{c}\left\langle\sigma_{+}c_{1}(s)\right\rangle \tag{12}\]
The remaining correlation functions can be solved by the following equation
\[s\left[\begin{array}{c}\left\langle\sigma_{+}a(s)\right\rangle\\ \left\langle\sigma_{+}c_{1}(s)\right\rangle\\ \left\langle\sigma_{+}c_{2}(s)\right\rangle\end{array}\right]=-i\left[\begin{array} []{ccc}\omega_{a}-i\frac{\kappa_{e}}{2}&\sqrt{2}g_{1}&0\\ \sqrt{2}g_{1}&\omega_{c}-i\frac{\kappa_{e}(1+e^{i\phi})}{2}&i\frac{\kappa_{e}} {2}e^{i\phi}\\ 0&-i\frac{\kappa_{e}}{2}e^{i\phi}&\omega_{c}-i\frac{\kappa_{e}(1-e^{i\phi})}{2 }\end{array}\right]\left[\begin{array}{c}\left\langle\sigma_{+}a(s)\right \rangle\\ \left\langle\sigma_{+}c_{1}(s)\right\rangle\\ \left\langle\sigma_{+}c_{2}(s)\right\rangle\end{array}\right]-i\left[\begin{array} []{c}g_{a}\\ \sqrt{2}g_{c}\\ 0\end{array}\right]\left\langle\sigma_{+}\sigma_{-}(s)\right\rangle \tag{13}\]
We can see from the above equation that only the correlation functions \(\left\langle\sigma_{+}a(s)\right\rangle\) and \(\left\langle\sigma_{+}c_{1}(s)\right\rangle\) are needed, which are given by
\[\left\langle\sigma_{+}a(s)\right\rangle=-i\left[g_{a}\left\langle a^{\dagger} a(s)\right\rangle+\sqrt{2}g_{c}\left\langle a^{\dagger}c_{1}(s)\right\rangle \right]\left\langle\sigma_{+}\sigma_{-}(s)\right\rangle \tag{14}\]
\[\left\langle\sigma_{+}c_{1}(s)\right\rangle=-i\left[\sqrt{2}g_{c}\left\langle c _{1}^{\dagger}c_{1}(s)\right\rangle+g_{a}\left\langle c_{1}^{\dagger}a(s) \right\rangle\right]\left\langle\sigma_{+}\sigma_{-}(s)\right\rangle \tag{15}\]
Substituting the above equations back into the transformed equation for \(\left\langle\sigma_{+}\sigma_{-}(s)\right\rangle\), we obtain
\[\left[s+i\left(\omega_{0}-i\frac{\gamma}{2}\right)+\left\langle\Lambda^{ \dagger}\Lambda(s)\right\rangle\right]\left\langle\sigma_{+}\sigma_{-}(s) \right\rangle=1 \tag{16}\]
where \(\Lambda\) is given in Eq. (8). Transforming into the frequency domain by replacing \(s=-i\omega\), we have
\[\left\langle\sigma_{+}\sigma_{-}(\omega)\right\rangle=\frac{i}{\omega-\omega_ {0}+i\frac{\gamma}{2}-\chi_{sys}(\omega)} \tag{17}\]
where \(\chi_{sys}(\omega)\) is defined as the cavity polarizability in Eq. (7). The above equation can be rewritten as
\[\left\langle\sigma_{+}\sigma_{-}(\omega)\right\rangle=\frac{i}{\omega-\omega_ {0}-\Delta(\omega)+i\frac{\gamma+\Gamma(\omega)}{2}} \tag{18}\]
with the photon-induced Lamb shift \(\Delta(\omega)=\operatorname{Re}\left[\chi_{\text{sys}}\left(\omega\right)\right]\) and the local coupling strength \(\Gamma(\omega)=-2\operatorname{Im}\left[\chi_{sys}(\omega)\right]\). Therefore, the emission spectrum of the QE is expressed as
\[S(\omega)=\frac{1}{\pi}\frac{\gamma+\Gamma(\omega)}{\left[\omega-\omega_{0}- \Delta(\omega)\right]^{2}+\left[\frac{\gamma+\Gamma(\omega)}{2}\right]^{2}} \tag{19}\]
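Eliminating the correlators from the two Laplace-domain equations above yields the resolvent form \(\chi_{sys}(\omega)=\vec{g}^{\,T}(\omega I-\mathbf{M_{s}})^{-1}\vec{g}\) with \(\vec{g}=[g_{a},\sqrt{2}g_{c},0]^{T}\), which makes \(S(\omega)\) straightforward to evaluate numerically. The sketch below follows the appendix convention in which \(\chi_{sys}\) absorbs the coupling rates; the matrix and all numerical values are placeholders.

```python
import numpy as np

def emission_spectrum(w_grid, w0, gamma, ga, gc, Ms):
    """S(w) via chi_sys(w) = g^T (w I - Ms)^{-1} g, with g = [g_a, sqrt(2) g_c, 0]^T."""
    g = np.array([ga, np.sqrt(2) * gc, 0.0], dtype=complex)
    S = np.empty(len(w_grid))
    for n, w in enumerate(w_grid):
        chi_sys = g @ np.linalg.solve(w * np.eye(3) - Ms, g)
        Gamma = -2.0 * np.imag(chi_sys)          # local coupling strength
        Delta = np.real(chi_sys)                 # photon-induced Lamb shift
        S[n] = (gamma + Gamma) / (np.pi * ((w - w0 - Delta)**2
                                           + (0.5 * (gamma + Gamma))**2))
    return S

# The 3x3 cavity-sector matrix M_s, assembled as in Appendix B (placeholders):
wa = wc = 2.0
g1, ka, kc, phi = -0.02, 0.2, 1e-3, 3*np.pi/4
e = np.exp(1j * phi)
Ms = np.array([[wa - 1j*ka/2,   np.sqrt(2)*g1,        0.0],
               [np.sqrt(2)*g1,  wc - 1j*kc*(1+e)/2,   1j*kc*e/2],
               [0.0,           -1j*kc*e/2,            wc - 1j*kc*(1-e)/2]])
S = emission_spectrum(np.linspace(1.95, 2.05, 4001), w0=2.0,
                      gamma=3e-6, ga=5e-3, gc=5e-4, Ms=Ms)
```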
## Appendix D Eigenmode decomposition of the scattering spectrum of hybrid CEP cavity
In this section, we present a formalism in which the cavity scattering is described by the radiation of individual superradiant and subradiant eigenresonances and their interference, which allows us to identify the contributions of the eigenmodes to the scattering spectrum of the hybrid CEP cavity and to explain the exotic radiation enhancement of the plasmonic-photonic cavity operating at CEP with \(\phi=0\). We start by implementing a driving Hamiltonian for the plasmon-driven case in Eq. (1), which reads
\[H_{p}=q_{in}\left(e^{-i\omega_{L}t}a^{\dagger}+ae^{i\omega_{L}t}\right) \tag{20}\]
where \(\omega_{L}\) is the frequency of laser field and \(q_{in}\) is the driving strength. Applying the unitary transformation \(U=\exp\left[-i\omega_{L}\left(c_{ccw}^{\dagger}c_{ccw}+c_{cw}^{\dagger}c_{cw}+ a^{\dagger}a+\sigma_{+}\sigma_{-}\right)t\right]\), we can obtain the following equations of motion in the basis of standing wave modes
\[\dot{a}=-i\left(\Delta_{L}-i\frac{\kappa_{a}}{2}\right)a-i\sqrt{2}g_{1}c_{1}-iq _{in} \tag{21}\]
\[\dot{c}_{1}=-i\left[\Delta_{L}-i\frac{\kappa_{c}\left(1+e^{i\phi}\right)}{2} \right]c_{1}-i\sqrt{2}g_{1}a-\frac{\kappa_{c}}{2}e^{i\phi}c_{2} \tag{22}\]
\[\dot{c}_{2}=-i\left[\Delta_{L}-i\frac{\kappa_{c}\left(1-e^{i\phi}\right)}{2} \right]c_{2}+\frac{\kappa_{c}}{2}e^{i\phi}c_{1} \tag{104}\]
where we assume the resonant plasmon-photon coupling \(\omega_{c}=\omega_{a}\), thus \(\Delta_{L}=\omega-\omega_{c}=\omega-\omega_{a}\). By defining \(\vec{c}=\left[a,c_{1},c_{2}\right]^{T}\), we can rewrite the above equations as
\[i\frac{d\vec{c}}{dt}=V^{-1}BV\vec{c}+s_{p} \tag{105}\]
where \(B\) is the diagonal matrix formed from the eigenvalues of the characteristic matrix of the above equations of motion
\[B=\left[\begin{array}{ccc}\omega_{a}+i\gamma_{a}&0&0\\ 0&\omega_{b}+i\gamma_{b}&0\\ 0&0&\omega_{c}+i\gamma_{c}\end{array}\right] \tag{106}\]
and \(V\) is the matrix whose rows are the corresponding left eigenvectors
\[V=\left[\begin{array}{ccc}v_{a,1}&v_{a,2}&v_{a,3}\\ v_{b,1}&v_{b,2}&v_{b,3}\\ v_{c,1}&v_{c,2}&v_{c,3}\end{array}\right] \tag{107}\]
and \(s_{p}=\left[q_{in},0,0\right]^{T}\) is the vector of input fields. The above equation can be formally solved by using the Fourier transform
\[\vec{c}\left(\Delta_{L}\right)=V^{-1}\left(\Delta_{L}I-B\right)^{-1}Vs_{p} \tag{108}\]
Therefore, we have
\[s\left(\Delta_{L}\right)=K\vec{c}\left(\Delta_{L}\right)=KV^{-1}\left(\Delta_ {L}I-B\right)^{-1}Vs_{p} \tag{109}\]
where \(K\) defines a matrix describing the coupling between different radiation channels
\[\Gamma=K^{\dagger}K=\left[\begin{array}{ccc}\kappa_{a}&0&0\\ 0&\kappa_{c}&0\\ 0&0&\kappa_{c}\end{array}\right] \tag{110}\]
The scattering spectrum is given by
\[\sigma\left(\Delta_{L}\right)=s^{\dagger}\left(\Delta_{L}\right)s\left(\Delta _{L}\right)=\left[KV^{-1}\left(\Delta_{L}I-B\right)^{-1}Vs_{p}\right]^{ \dagger}\left[KV^{-1}\left(\Delta_{L}I-B\right)^{-1}Vs_{p}\right] \tag{111}\]
We introduce a matrix \(G\) of the imaginary parts of the eigenenergies
\[G=\left[\begin{array}{ccc}\gamma_{a}&0&0\\ 0&\gamma_{b}&0\\ 0&0&\gamma_{c}\end{array}\right] \tag{112}\]
Then the scattering spectrum can be written as
\[\begin{split}\sigma\left(\Delta_{L}\right)&=\left[KV^{-1}G^{-1}G\left(\Delta_{L}I-B\right)^{-1}Vs_{p}\right]^{\dagger}\left[KV^{-1}G^{-1}G\left(\Delta_{L}I-B\right)^{-1}Vs_{p}\right]\\ &=\left[G\left(\Delta_{L}I-B\right)^{-1}Vs_{p}\right]^{\dagger}\left[V^{-1}G^{-1}\right]^{\dagger}\Gamma V^{-1}G^{-1}G\left(\Delta_{L}I-B\right)^{-1}Vs_{p}\end{split} \tag{D13}\]
where \(G(\Delta_{L}I-B)^{-1}\) yields the Lorentzian lineshape
\[G\left(\Delta_{L}I-B\right)^{-1}=\left[\begin{array}{ccc}\frac{\gamma_{a}}{ \Delta_{L}-\omega_{a}-i\gamma_{a}}&0&0\\ 0&\frac{\gamma_{b}}{\Delta_{L}-\omega_{b}-i\gamma_{b}}&0\\ 0&0&\frac{\gamma_{c}}{\Delta_{L}-\omega_{c}-i\gamma_{c}}\end{array}\right] \tag{114}\]
\(Vs_{p}\) gives the complex radiation patterns
\[Vs_{p}=\left[\begin{array}{c}v_{a,1}\\ v_{b,1}\\ v_{c,1}\end{array}\right]=\left[\begin{array}{c}C_{a}\\ C_{b}\\ C_{c}\end{array}\right] \tag{115}\]
The remaining part of Eq. (D.13) can be simplified as
\[\left[V^{-1}G^{-1}\right]^{\dagger}\Gamma V^{-1}G^{-1}=p\left[\begin{array}{ccc}\frac{\left|V_{1,1}^{-1}\right|^{2}\kappa_{a}+\left(\left|V_{2,1}^{-1}\right|^{2}+\left|V_{3,1}^{-1}\right|^{2}\right)\kappa_{c}}{\gamma_{a}^{2}}&\frac{V_{1,1}^{-1*}V_{1,2}^{-1}\kappa_{a}+\left(V_{2,1}^{-1*}V_{2,2}^{-1}+V_{3,1}^{-1*}V_{3,2}^{-1}\right)\kappa_{c}}{\gamma_{a}\gamma_{b}}&\frac{V_{1,1}^{-1*}V_{1,3}^{-1}\kappa_{a}+\left(V_{2,1}^{-1*}V_{2,3}^{-1}+V_{3,1}^{-1*}V_{3,3}^{-1}\right)\kappa_{c}}{\gamma_{a}\gamma_{c}}\\ \frac{V_{1,2}^{-1*}V_{1,1}^{-1}\kappa_{a}+\left(V_{2,2}^{-1*}V_{2,1}^{-1}+V_{3,2}^{-1*}V_{3,1}^{-1}\right)\kappa_{c}}{\gamma_{a}\gamma_{b}}&\frac{\left|V_{1,2}^{-1}\right|^{2}\kappa_{a}+\left(\left|V_{2,2}^{-1}\right|^{2}+\left|V_{3,2}^{-1}\right|^{2}\right)\kappa_{c}}{\gamma_{b}^{2}}&\frac{V_{1,2}^{-1*}V_{1,3}^{-1}\kappa_{a}+\left(V_{2,2}^{-1*}V_{2,3}^{-1}+V_{3,2}^{-1*}V_{3,3}^{-1}\right)\kappa_{c}}{\gamma_{b}\gamma_{c}}\\ \frac{V_{1,3}^{-1*}V_{1,1}^{-1}\kappa_{a}+\left(V_{2,3}^{-1*}V_{2,1}^{-1}+V_{3,3}^{-1*}V_{3,1}^{-1}\right)\kappa_{c}}{\gamma_{a}\gamma_{c}}&\frac{V_{1,3}^{-1*}V_{1,2}^{-1}\kappa_{a}+\left(V_{2,3}^{-1*}V_{2,2}^{-1}+V_{3,3}^{-1*}V_{3,2}^{-1}\right)\kappa_{c}}{\gamma_{b}\gamma_{c}}&\frac{\left|V_{1,3}^{-1}\right|^{2}\kappa_{a}+\left(\left|V_{2,3}^{-1}\right|^{2}+\left|V_{3,3}^{-1}\right|^{2}\right)\kappa_{c}}{\gamma_{c}^{2}}\end{array}\right] \tag{D16}\]
where \(p=\mathrm{Det}[V]^{-2}\) and \(V_{i,j}^{-1}\) indexes the elements of matrix \(V^{-1}\)
\[V^{-1}=\frac{1}{\mathrm{Det}[V]}\left[\begin{array}{cccc}v_{b,2}v_{c,3}-v_{b,3}v_{c,2}&v_{a,3}v_{c,2}-v_{a,2}v_{c,3}&v_{a,2}v_{b,3}-v_{a,3}v_{b,2}\\ v_{b,3}v_{c,1}-v_{b,1}v_{c,3}&v_{a,1}v_{c,3}-v_{a,3}v_{c,1}&v_{a,3}v_{b,1}-v_{a,1}v_{b,3}\\ v_{b,1}v_{c,2}-v_{b,2}v_{c,1}&v_{a,2}v_{c,1}-v_{a,1}v_{c,2}&v_{a,1}v_{b,2}-v_{a,2}v_{b,1}\end{array}\right] \tag{17}\]
Therefore, Eq. (D.16) can be expressed in a compact form
\[\left[V^{-1}G^{-1}\right]^{\dagger}\Gamma V^{-1}G^{-1}=\frac{p}{\gamma_{a} \gamma_{b}\gamma_{c}}\left[\begin{array}{ccc}h_{aa}&h_{ab}&h_{ac}\\ h_{ab}^{*}&h_{bb}&h_{bc}\\ h_{ac}^{*}&h_{bc}^{*}&h_{cc}\end{array}\right] \tag{18}\]
with
\[\begin{split}h_{ab}&=\gamma_{c}\left[V_{1,1}^{-1*}V_{1,2}^{-1}\kappa_{a}+\left(V_{2,1}^{-1*}V_{2,2}^{-1}+V_{3,1}^{-1*}V_{3,2}^{-1}\right)\kappa_{c}\right]\\ h_{ac}&=\gamma_{b}\left[V_{1,1}^{-1*}V_{1,3}^{-1}\kappa_{a}+\left(V_{2,1}^{-1*}V_{2,3}^{-1}+V_{3,1}^{-1*}V_{3,3}^{-1}\right)\kappa_{c}\right]\\ h_{bc}&=\gamma_{a}\left[V_{1,2}^{-1*}V_{1,3}^{-1}\kappa_{a}+\left(V_{2,2}^{-1*}V_{2,3}^{-1}+V_{3,2}^{-1*}V_{3,3}^{-1}\right)\kappa_{c}\right]\\ h_{aa}&=\frac{\gamma_{b}\gamma_{c}}{\gamma_{a}}\left[\left|V_{1,1}^{-1}\right|^{2}\kappa_{a}+\left(\left|V_{2,1}^{-1}\right|^{2}+\left|V_{3,1}^{-1}\right|^{2}\right)\kappa_{c}\right]\\ h_{bb}&=\frac{\gamma_{a}\gamma_{c}}{\gamma_{b}}\left[\left|V_{1,2}^{-1}\right|^{2}\kappa_{a}+\left(\left|V_{2,2}^{-1}\right|^{2}+\left|V_{3,2}^{-1}\right|^{2}\right)\kappa_{c}\right]\\ h_{cc}&=\frac{\gamma_{a}\gamma_{b}}{\gamma_{c}}\left[\left|V_{1,3}^{-1}\right|^{2}\kappa_{a}+\left(\left|V_{2,3}^{-1}\right|^{2}+\left|V_{3,3}^{-1}\right|^{2}\right)\kappa_{c}\right]\end{split} \tag{19}\]
Accordingly, the scattering spectrum can be further simplified as
\[\begin{split}\sigma\left(\Delta_{L}\right)&=\frac{p}{\gamma_{a}\gamma_{b}\gamma_{c}}\left[\begin{array}{ccc}C_{a}^{*}&C_{b}^{*}&C_{c}^{*}\end{array}\right]\left[\begin{array}{ccc}\frac{\gamma_{a}}{\Delta_{L}-\omega_{a}+i\gamma_{a}}&0&0\\ 0&\frac{\gamma_{b}}{\Delta_{L}-\omega_{b}+i\gamma_{b}}&0\\ 0&0&\frac{\gamma_{c}}{\Delta_{L}-\omega_{c}+i\gamma_{c}}\end{array}\right]\left[\begin{array}{ccc}h_{aa}&h_{ab}&h_{ac}\\ h_{ab}^{*}&h_{bb}&h_{bc}\\ h_{ac}^{*}&h_{bc}^{*}&h_{cc}\end{array}\right]\\ &\quad\times\left[\begin{array}{ccc}\frac{\gamma_{a}}{\Delta_{L}-\omega_{a}-i\gamma_{a}}&0&0\\ 0&\frac{\gamma_{b}}{\Delta_{L}-\omega_{b}-i\gamma_{b}}&0\\ 0&0&\frac{\gamma_{c}}{\Delta_{L}-\omega_{c}-i\gamma_{c}}\end{array}\right]\left[\begin{array}{c}C_{a}\\ C_{b}\\ C_{c}\end{array}\right]\end{split}\]
Finally, we arrive at
\[\sigma\left(\Delta_{L}\right)=p_{\gamma} \left\{\frac{h_{aa}\left|C_{a}\right|^{2}\gamma_{a}^{2}}{\left( \Delta_{L}-\omega_{a}\right)^{2}+\gamma_{a}^{2}}+\frac{h_{bb}\left|C_{b}\right| ^{2}\gamma_{b}^{2}}{\left(\Delta_{L}-\omega_{b}\right)^{2}+\gamma_{b}^{2}}+ \frac{h_{cc}\left|C_{c}\right|^{2}\gamma_{c}^{2}}{\left(\Delta_{L}-\omega_{c} \right)^{2}+\gamma_{c}^{2}}+2\operatorname{Re}\left[\frac{h_{ab}C_{a}^{*}C_{b} \gamma_{a}\gamma_{b}}{\left(\Delta_{L}-\omega_{a}+i\gamma_{a}\right)\left( \Delta_{L}-\omega_{b}-i\gamma_{b}\right)}\right]\right.\] \[\left.+2\operatorname{Re}\left[\frac{h_{ac}C_{a}^{*}C_{c}\gamma_{ a}\gamma_{c}}{\left(\Delta_{L}-\omega_{a}+i\gamma_{a}\right)\left(\Delta_{L}- \omega_{c}-i\gamma_{c}\right)}\right]+2\operatorname{Re}\left[\frac{h_{bc}C_{b }^{*}C_{c}\gamma_{b}\gamma_{c}}{\left(\Delta_{L}-\omega_{b}+i\gamma_{b}\right) \left(\Delta_{L}-\omega_{c}-i\gamma_{c}\right)}\right]\right\} \tag{101}\]
where \(p_{\gamma}=p/(\gamma_{a}\gamma_{b}\gamma_{c})\). At \(\Delta_{L}=\omega_{a}=\omega_{b}=\omega_{c}\), the scattering intensity is given by
\[\sigma(0)=p_{\gamma}\left\{h_{aa}\left|C_{a}\right|^{2}+h_{bb}\left|C_{b} \right|^{2}+h_{cc}\left|C_{c}\right|^{2}+2\operatorname{Re}\left[h_{ab}C_{a}^ {*}C_{b}+h_{ac}C_{a}^{*}C_{c}+h_{bc}C_{b}^{*}C_{c}\right]\right\} \tag{102}\]
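The decomposition can be verified numerically, as in the sketch below: the eigenmode terms summed over \(m,n\) must reproduce the direct evaluation \(\sigma=|K\vec{c}(\Delta_{L})|^{2}\). The matrix \(\mathbf{M_{c}}\) is assembled from the equations of motion above, and all parameter values are generic placeholders (an exactly degenerate EP would make the eigenvector matrix singular).

```python
import numpy as np

# Numerical check of the eigenmode decomposition of the cavity scattering.
g1, ka, kc, phi, q_in = -0.02, 0.2, 1e-3, 0.0, 1.0
e = np.exp(1j * phi)
Mc = np.array([[-1j*ka/2,       np.sqrt(2)*g1,     0.0],
               [np.sqrt(2)*g1, -1j*kc*(1+e)/2,    -1j*kc*e/2],
               [0.0,            1j*kc*e/2,        -1j*kc*(1-e)/2]])
sp = np.array([q_in, 0.0, 0.0], dtype=complex)
K = np.diag(np.sqrt([ka, kc, kc]))             # so that Gamma = K^dag K

lam, W = np.linalg.eig(Mc)                     # Mc = W B W^{-1}; V = W^{-1}
V = np.linalg.inv(W)
C = V @ sp                                     # radiation patterns C_i
T = W.conj().T @ K.T @ K @ W                   # eigenmode weights and interferences

DL = 0.0                                       # evaluate at Delta_L = 0
terms = np.array([[np.conj(C[m]) * T[m, n] * C[n]
                   / ((DL - np.conj(lam[m])) * (DL - lam[n]))
                   for n in range(3)] for m in range(3)])
sup = np.argmax(-lam.imag)                     # superradiant mode: largest decay rate
sigma_sup = terms[sup, sup].real
sigma_so = terms.sum().real - sigma_sup        # subradiant and interference terms

c = W @ np.diag(1.0 / (DL - lam)) @ V @ sp     # direct solution c(Delta_L)
assert np.isclose(sigma_sup + sigma_so, np.linalg.norm(K @ c)**2)
```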
|
2301.08108
|
Channel Reuse for Backhaul in UAV Mobile Networks with User QoS
Guarantee
|
In mobile networks, unmanned aerial vehicles (UAVs) acting as flying base
stations (FlyBSs) can effectively improve performance. Nevertheless, such
potential improvement requires an efficient positioning of the FlyBS. In this
paper, we study the problem of sum downlink capacity maximization in
FlyBS-assisted networks with mobile users and with a consideration of wireless
backhaul with channel reuse while a minimum required capacity to every user is
guaranteed. The problem is formulated under constraints on the FlyBS's flying
speed, propulsion power consumption, and transmission power for both of flying
and ground base stations. None of the existing solutions maximizing the sum
capacity can be applied due to the combination of these practical constraints.
This paper pioneers in an inclusion of all these constraints together with
backhaul to derive the optimal 3D positions of the FlyBS and to optimize the
transmission power allocation for the channels at both backhaul and access
links as the users move over time. The proposed solution is geometrical based,
and it shows via simulations a significant increase in the sum capacity (up by
19%-47%) compared with baseline schemes where one or more of the aspects of
backhaul communication, transmission power allocation, and FlyBS's positioning
are not taken into account.
|
Mohammadsaleh Nikooroo, Zdenek Becvar, Omid Esrafilian, David Gesbert
|
2023-01-19T14:57:52Z
|
http://arxiv.org/abs/2301.08108v1
|
# Channel Reuse for Backhaul in UAV Mobile Networks with User QoS Guarantee
###### Abstract
In mobile networks, unmanned aerial vehicles (UAVs) acting as flying base stations (FlyBSs) can effectively improve performance. Nevertheless, such potential improvement requires an efficient positioning of the FlyBSs. In this paper, we study the problem of sum downlink capacity maximization in FlyBS-assisted networks with mobile users and with a consideration of wireless backhaul with channel reuse while a minimum required capacity to every user is guaranteed. The problem is formulated under constraints on the FlyBS's flying speed, propulsion power consumption, and transmission power for both of flying and ground base stations. None of the existing solutions maximizing the sum capacity can be applied due to the combination of these practical constraints. This paper pioneers in an inclusion of all these constraints together with backhaul to derive the optimal 3D positions of the FlyBS and to optimize the transmission power allocation for the channels at both backhaul and access links as the users move over time. The proposed solution is geometrical based, and it shows via simulations a significant increase in the sum capacity (up by 19%-47%) compared with baseline schemes where one or more of the aspects of backhaul communication, transmission power allocation, and FlyBS's positioning are not taken into account.
Flying base station, UAV, Backhaul, Relaying, Transmission power, Sum capacity, Mobile networks, 6G.
## I Introduction
Unmanned aerial vehicles (UAVs) have received extensive attention in wireless communications in recent years. Due to their high flexibility and adaptability to the environment, UAVs can be regarded as flying base stations (FlyBSs) that potentially bring a significant enhancement to the performance of mobile networks [1]. Such potential enhancements, however, are subject to an effective management of several aspects, including propulsion power consumption, transmission power consumption/allocation, and the FlyBS's positioning. As another crucial aspect, a backhaul connection of the FlyBSs to the ground base station (GBS) or access point (AP) must be ensured in order to integrate FlyBSs into mobile networks.
Many recent works investigate the performance of FlyBS-assisted networks with an inclusion of backhaul. In [2], the authors address the FlyBS's positioning and bandwidth allocation to optimize the total profit gained from the users in a network. Furthermore, the authors in [3] investigate an optimization of the FlyBS's position, user association, and resource allocation to maximize the utility in software-defined cellular networks. In [4], the authors maximize energy efficiency in a relaying network with static BSs via an optimization of the transmission power allocation to the BSs. Then, the problem of joint 2D trajectory design and resource allocation is investigated in [5] to minimize the network latency in a space-air-ground network with millimeter wave (mmWave) backhaul. In [6], the authors study a joint placement, resource allocation, and user association of FlyBSs to maximize the network's utility. Then, in [7], the authors maximize the minimum rate of the delay-tolerant users via a joint resource allocation and FlyBS's positioning. In [8], the authors consider a scenario where a set of relaying FlyBSs establish a communication between multiple sources and multiple destinations; the goal is to maximize the minimum average rate among the relays via transmission power allocation and FlyBSs' positioning. Furthermore, in [9], the operation cost in a mobile edge computing network is minimized via FlyBS's positioning and resource allocation. The solutions provided in [2]-[9] do not assume constraints on the user's instantaneous capacity and, hence, they cannot be applied in scenarios with delay-sensitive users where a minimum capacity is demanded by the users.
Several works also consider the individual user's quality of service in terms of instantaneous capacity. In [10] and [11], the problem of FlyBS positioning and resource allocation is investigated to minimize the transmission power of the FlyBS. Then, the minimum capacity of the users is maximized via the FlyBS's positioning and the transmission power allocation in [12]. Furthermore, the problem of transmission power allocation is investigated in [13] for FlyBS networks to maximize the energy efficiency, i.e., the ratio of the sum capacity to the total transmission power consumption. The authors in [14] minimize the number of FlyBSs in a network while ensuring coverage to all ground users. Then, in [15], the problem of resource allocation and circular-trajectory design for fixed-wing FlyBSs is investigated to minimize the power consumption of the FlyBS. Furthermore, the minimum capacity of the users is maximized in [16] via resource allocation and positioning. Also, the minimum downlink throughput is maximized in [17] by optimizing the FlyBSs' positioning, bandwidth, and power allocation.
In our prior work [18], the problem of FlyBS's positioning and user association is investigated in mobile networks
assisted by relaying FlyBSs. Also, in [19] and [20], a positioning of the FlyBS and transmission power allocation is proposed at the access link to maximize the sum capacity and to minimize the FlyBS's total power consumption, respectively, where a minimum required capacity for the users is guaranteed.
To the best of our knowledge, there is no work targeting the sum capacity maximization in a practical scenario with _moving users_ and with a _minimum capacity guaranteed to the individual users_ where a _backhaul link_ is also provided. All related works either target scenarios where no minimum capacity is guaranteed to the users and/or a backhaul connection (together with the related backhaul constraints [10, 21]) is missing. It is also noted that existing solutions maximizing the minimum capacity among users cannot be applied in many scenarios where users require different instantaneous capacities. To this end, we target the case with both backhaul and the user's required capacity, and we propose an analytical solution based on an alternating optimization of the FlyBS's positioning and the transmission power allocation at the backhaul and at the access links. Due to the non-convex nature of the problem, a heuristic solution is proposed with respect to the feasibility region determined via the constraints in the problem, i.e., 1) the user's required capacity at all times, 2) the FlyBS's maximum speed, 3) the maximum propulsion power consumption of the FlyBS, and 4) the flow conservation constraint relating the backhaul and access links.
## II System model and problem formulation
In this section, we first define the system model. Then, we formulate the constrained problem of sum capacity maximization with inclusion of backhaul communication.
In our system model, the FlyBS serves \(N\) mobile users in an area as shown in Fig. 1. The FlyBS connects to the GBS located at \(\mathbf{l_{G}}=\left[X_{G},Y_{G},H_{G}\right]\) via backhaul. Let \(\mathbf{l_{F}}[k]=\left[X[k],Y[k],H[k]\right]^{T}\) and \(\mathbf{u_{n}}[k]=\left[x_{n}[k],y_{n}[k],z_{n}[k]\right]^{T}\) denote the locations of the FlyBS and of the user \(n\) at the time step \(k\), respectively. Also, let \(d_{n,F}[k]\) and \(d_{n,G}[k]\) denote the Euclidean distances of the user \(u_{n}\) to the FlyBS and to the GBS at the time step \(k\), respectively.
Suppose the whole available radio band is divided into a set of \(S\) channels \(\mathbf{J}=\{J_{1},\ldots,J_{S}\}\), where channel \(J_{s}\) has a bandwidth of \(B_{s}\)\((1\leq s\leq S)\). At the FlyBSs, we adopt orthogonal downlink channel allocation to all users. Furthermore, all the \(S\) channels are reused at the backhaul link to alleviate the scarcity of radio resources. Let \(g_{n}\in[1,S]\) be the index of the channel allocated to user \(n\). Note that, we do not target an optimization of channel allocation in this paper, and we leave that for future work. Nevertheless, our model works with any channel allocation.
Let \(p_{n,F}^{R}\) be the received power at the user \(n\) from the FlyBS. Furthermore, \(p_{F,G,s}^{R}\) denotes the received power at the FlyBS from the GBS over the channel \(s\). Then, the channel capacity of the user \(n\) is:
\[C_{n}[k]=B_{g_{n}}\log_{2}\Bigg{(}1+\frac{p_{n,F}^{R}[k]}{\sigma_{n}^{2}+p_{n,G}^{R}[k]}\Bigg{)}, \tag{1}\]
where \(p_{n,G}^{R}\) is the interference power received at user \(n\) from the GBS and \(\sigma_{n}^{2}\) is the noise power. Similarly, the link capacity between the GBS and the FlyBS is:
\[C_{G,F}[k]=\sum_{s=1}^{S}B_{s}\log_{2}\Bigg{(}1+\frac{p_{F,G,s}^{R}[k]}{ \sigma_{F,s}^{2}}\Bigg{)}, \tag{2}\]
where \(\sigma_{F,s}^{2}\) is the noise power over the channel \(s\).
Let \(p_{F}^{T}=[p_{F,1}^{T},...,p_{F,N}^{T}]\) denote the FlyBS's transmission power vector to all the users. Also, for the GBS-to-FlyBS communication, let \(p_{G}^{T}=[p_{G,1}^{T},...,p_{G,S}^{T}]\) be the GBS's transmission power vector over the \(S\) channels. According to Friis' transmission equation, we have
\[p_{n,F}^{R}=Q_{n,F}p_{F,n}^{T}d_{n,F}^{-\alpha_{n,F}},n\in[1,N], \tag{3}\]
where the coefficient \(Q_{n,F}\) depends on the communication frequency and the antenna gains, and \(\alpha_{n,F}\) is the pathloss exponent of the channel between the FlyBS and the user \(n\). Similar relations can be derived between the GBS's transmission power and the received powers at the user \(n\) and at the FlyBS as
\[p_{n,G}^{R}=Q_{n,G}p_{G,g_{n}}^{T}d_{n,G}^{-\alpha_{n,G}},\quad n\in[1,N], \tag{4}\] \[p_{F,G,s}^{R}=Q_{F,G}p_{G,s}^{T}d_{F,G}^{-\alpha_{F,G}},\quad s\in[1,S].\]
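As a minimal numerical sketch of the link model (1)-(4), the Python functions below compute the received powers and the resulting capacities; any values a caller would pass (gains \(Q\), pathloss exponents \(\alpha\), distances, powers) are illustrative assumptions, not values from this paper.

```python
import numpy as np

def received_power(p_tx, Q, d, alpha):
    # Friis-type model of (3)-(4): p_rx = Q * p_tx * d^{-alpha}
    return Q * p_tx * d ** (-alpha)

def user_capacity(B_gn, p_nF_rx, p_nG_rx, sigma2_n):
    # Access-link capacity of user n, Eq. (1); the GBS power received on
    # the reused channel acts as interference.
    return B_gn * np.log2(1.0 + p_nF_rx / (sigma2_n + p_nG_rx))

def backhaul_capacity(B_s, p_FG_rx, sigma2_F):
    # GBS-to-FlyBS capacity summed over the S reused channels, Eq. (2).
    return np.sum(B_s * np.log2(1.0 + p_FG_rx / sigma2_F))
```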
For the propulsion power consumption, we refer to the model provided in [22] for rotary-wing UAVs, where the propulsion power is expressed as:
\[P_{pr}[k]=L_{0}\big{(}1+\frac{3V_{F}^{2}[k]}{U_{\text{tip}}^{2}}\big{)}+\frac{\eta_{0}\rho s_{r}AV_{F}^{3}[k]}{2}+L_{i}\Big{(}\sqrt{1+\frac{V_{F}^{4}[k]}{4v_{0,h}^{4}}}-\frac{V_{F}^{2}[k]}{2v_{0,h}^{2}}\Big{)}^{\frac{1}{2}}, \tag{5}\]
where \(V_{F}[k]\) is the FlyBS's speed at the time step \(k\), \(L_{0}\) and \(L_{i}\) are the blade profile and induced powers in hovering status, respectively, \(U_{\text{tip}}\) is the tip speed of the rotor blade, \(v_{0,h}\) is the mean rotor induced velocity during hovering, \(\eta_{0}\) is the fuselage drag ratio, \(\rho\) is the air density, \(s_{r}\) is the rotor solidity, and \(A\) is the rotor disc area.
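The propulsion model (5) can be evaluated directly; a sketch follows. The constants mirror the structure of the rotary-wing model in [22], but the specific numbers below are illustrative assumptions. The speed threshold \(V_{F,th}\) used later in Section III can be obtained by numerically solving \(P_{pr}(V)=P_{pr,th}\) on this curve.

```python
import numpy as np

# Illustrative constants (assumed, not from the paper):
L0, Li = 79.86, 88.63      # blade profile / induced power in hover [W]
U_tip, v0h = 120.0, 4.03   # rotor tip speed, mean induced velocity [m/s]
eta0, rho = 0.6, 1.225     # fuselage drag ratio, air density [kg/m^3]
s_r, A = 0.05, 0.503       # rotor solidity, rotor disc area [m^2]

def propulsion_power(V):
    """Propulsion power P_pr(V) of Eq. (5) for speed V [m/s]."""
    blade = L0 * (1.0 + 3.0 * V ** 2 / U_tip ** 2)
    parasite = 0.5 * eta0 * rho * s_r * A * V ** 3
    induced = Li * np.sqrt(
        np.sqrt(1.0 + V ** 4 / (4 * v0h ** 4)) - V ** 2 / (2 * v0h ** 2))
    return blade + parasite + induced
```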
Our goal is to find the optimized position of the FlyBS and to determine the transmission power allocation over each channel both at the backhaul and at the access link
Fig. 1: System model with mobile users placed within the coverage area of the FlyBS. The channels at the access link are reused for GBS-to-FlyBS communication
to maximize the sum capacity at every time step \(k\) under practical constraints as follows:
\[\max_{\mathbf{p}_{\mathbf{G}}^{T}[k],\mathbf{p}_{\mathbf{F}}^{T}[k],\mathbf{l}_{\mathbf{F}}[k]}\ \sum_{n=1}^{N}C_{n}[k], \forall k,\] (6) s.t. \[\ C_{n}[k]\geq C_{n,min}[k],n\in[1,N], \tag{6a}\] \[H_{min}[k]\leq H[k]\leq H_{max}[k],\] (6b) \[\big{|}\big{|}\mathbf{l}[k]-\mathbf{l}[k-1]\big{|}\big{|}\leq V_{F,max}\delta_{k},\] (6c) \[P_{pr}[k]\leq P_{pr,th}[k],\] (6d) \[\sum\nolimits_{n=1}^{N}C_{n}[k]\leq C_{G,F}[k],\] (6e) \[\sum\nolimits_{s=1}^{S}p_{G,s}^{T}\leq p_{G,max}^{T},\quad p_{G,s}^{T}\geq 0,\] (6f) \[\sum\nolimits_{n=1}^{N}p_{F,n}^{T}\leq p_{F,max}^{T},\quad p_{F,n}^{T}\geq 0 \tag{6g}\]
where \(\delta_{k}\) is the duration between the time steps \(k-1\) and \(k\), and \(||.||\) is the \(\mathcal{L}_{2}\) norm. The constraint (6a) ensures that every user always receives their minimum required capacity \(C_{n,min}[k]\). The constraint (6b) restricts the FlyBS's altitude within \([H_{min},H_{max}]\) where \(H_{min}\) and \(H_{max}\) are the minimum and maximum allowed flying altitude, respectively, and are set according to the environment and also the flying regulations. The constraint (6c) ensures the FlyBS's speed would not exceed the maximum supported speed \(V_{F,max}\), and (6d) assures that the FlyBS's movement would not incur the propulsion power larger than a threshold \(P_{pr,th}\). In practice, the value of \(P_{pr,th}\) can be set arbitrarily at every time step and according to available remaining energy in the FlyBS's battery to prolong the FlyBS's operation. Furthermore, (6f) and (6g) limit the total transmission powers of the GBS and the FlyBS to the maximum values of \(p_{G,max}^{T}\) and \(p_{F,max}^{T}\), respectively.
In the next section, we elaborate on our proposed solution to the formulated problem in (6).
## III FlyBS positioning and transmission power allocation on access and backhaul links
In this section, we present our proposed solution to (6). We provide a high level overview of the optimization of the transmission power allocation on both access and backhaul links as well as the FlyBS's positioning. Then, we describe in details individual steps of the optimization in following subsections.
### _Overview of the proposed solution_
Solving (6) is challenging in general, since the objective (i.e., the sum capacity) is convex with respect to \(\mathbf{p}_{\mathbf{G}}^{T}[k]\), concave with respect to \(\mathbf{p}_{\mathbf{F}}^{T}[k]\), and neither convex nor concave with respect to \(\mathbf{l}_{\mathbf{F}}\). In addition, the constraints (6a), (6d), and (6e) are also neither convex nor concave with respect to \(\mathbf{l}_{\mathbf{F}}\). Therefore, we propose a solution based on an alternating optimization of the power allocation and the FlyBS's positioning. In particular, the optimization in (6) is done by iterating the following three steps: 1) optimize \(p_{F}^{T}\) at a given position of the FlyBS and for a fixed power allocation \(p_{G}^{T}\), 2) optimize \(p_{G}^{T}\) at the same position of the FlyBS as in _step 1_ and for the updated \(p_{F}^{T}\) from _step 1_, 3) optimize the FlyBS's position \(\mathbf{l}_{\mathbf{F}}\) for the updated power allocation derived from _steps 1_ and _2_. Furthermore, to tackle the non-convexity of the objective, we propose an approximation of the objective that indicates in which direction the FlyBS's movement increases the sum capacity. The step-wise solving of (6) makes it possible to deal with the non-convexity of the constraints: each step is solved with respect to the related constraints in (6). In the following subsections, we elaborate our proposed solution.
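As a high-level illustration, the alternating optimization can be sketched as below; the three callables are hypothetical placeholders for the subproblem solvers developed in the following subsections, and the stopping rule matches the movement threshold \(\epsilon\) described at the end of this section.

```python
import numpy as np

def solve_time_step(l_F, p_F, p_G, step_pF, step_pG, step_pos,
                    eps=1e-3, max_iters=20):
    """Alternating optimization of one time step k. step_pF, step_pG and
    step_pos solve the subproblems (7), (12) and (27), respectively."""
    for _ in range(max_iters):
        l_prev = l_F
        p_F = step_pF(l_F, p_G)    # 1) access-link power allocation
        p_G = step_pG(l_F, p_F)    # 2) backhaul power allocation
        l_F = step_pos(p_F, p_G)   # 3) FlyBS positioning
        if np.linalg.norm(l_F - l_prev) < eps:  # movement below threshold
            break
    return l_F, p_F, p_G
```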
### _Transmission power allocation for access link_
At a fixed position of the FlyBS and for a given setting of transmission power at the backhaul link (\(\mathbf{p}_{\mathcal{G}}^{T}\)), the problem of transmission power optimization at the access link to maximize the sum capacity is formulated as follows:
\[\max_{\mathbf{p}_{\mathbf{F}}^{T}[k]}\;\sum_{n=1}^{N}C_{n}[k],\quad\forall k,\] (7) s.t. (6a), (6e), (6g). By the means of (1) and (3), the constraint (6a) is rewritten as
\[p_{F,n}^{T}\geq\big{(}2^{\frac{C_{n,min}[k]}{B_{g_{n}}}}-1\big{)}\big{(}\sigma_{n}^{2}+p_{n,G}^{R}[k]\big{)}Q_{n,F}^{-1}d_{n,F}^{\alpha_{n,F}},\quad\forall n, \tag{8}\]
which is linear with respect to \(p_{F,n}^{T}\). Furthermore, since the left-hand side of (6e) is concave with respect to \(\mathbf{p}_{\mathbf{F}}^{T}\), its first-order expansion around the power allocation from the previous iteration yields an upper bound \(C_{\text{tot}}^{\text{ub}}\) on the sum capacity that is linear with respect to \(p_{F,n}^{T}\).
Using this bound, we replace (6e) by the linear (with respect to \(p_{F,n}^{T}\)) inequality \(C_{\text{tot}}^{\text{ub}}\leq C_{G,F}[k]\). Then, the problem in (7) is solved using CVX.
### _Transmission power allocation for backhaul link_
Once the power allocation \(\mathbf{p}_{\mathbf{F}}^{T}[k]\) over the access link channels is optimized, we optimize the power allocation over the backhaul. To this end, we derive the subproblem of the transmission power optimization at the backhaul link from (6) as
\[\max_{\mathbf{p}_{\mathbf{G}}^{T}[k]}\;\sum_{n=1}^{N}C_{n}[k],\quad\forall k,\] (12) s.t. (6a), (6e), (6f).
From (1) and (4), we observe that the objective as well as (6f) are convex with respect to \(\mathbf{p}_{\mathbf{G}}^{T}\). Furthermore, the constraint (6a) is rewritten similarly to (8) as
\[p_{G,g_{n}}^{T}\leq(\frac{p_{n,F}^{R}[k]}{2^{\frac{C_{n,min}[k]}{B_{g_{n}}}}-1 }-\sigma_{n}^{2})Q_{n,G}^{-1}d_{n,G}{}^{\alpha_{n,G}}, \tag{13}\]
which is linear with respect to \(p_{G,g_{n}}^{T}\). Next, we rewrite (6e) by the means of (1), (3), and (4) as:
\[\sum\nolimits_{n=1}^{N}B_{g_{n}}\log_{2}\left(1+\frac{Q_{n,F}p_{F,n}^{T}}{d_{ n,F}^{\alpha_{n,F}}(\sigma_{n}^{2}+\frac{Q_{n,G}p_{G,g_{n}}^{T}}{d_{n,G}^{ \alpha_{n,G}}})}\right)-\]
\[\sum_{s=1}^{S}B_{s}\log_{2}\left(1+\frac{Q_{F,G}p_{G,s}^{T}[k]}{\sigma_{F,s}^{ 2}d_{F,G}^{\alpha_{F,G}}}\right)\leq 0, \tag{14}\]
which is convex with respect to \(p_{G,g_{n}}^{T}\) for \(p_{G,g_{n}}^{T}\geq 0\). Hence, the problem in (12) is a concave programming problem. Similar to convex optimization, efficient solutions have been developed in the literature for this class of problems in the case that the constraints define a convex compact set (as in (12)).
We develop a solution based on an iterative construction of level sets for the objective function and derivation of local solutions with respect to the level sets using linear programming (LP), see [23].
### _FlyBS positioning_
After optimizing the power allocation over the access and backhaul channels, we propose a solution to the FlyBS's positioning. To this end, we formulate the problem as:
\[\max_{\mathbf{l}_{\mathbf{F}}[k]} \sum_{n=1}^{N}C_{n}[k], \forall k,\] (15) s.t. \[\eqref{eq:eq:eq:p_F,n},\eqref{eq:eq:p_F,n},\eqref{eq:eq:p_F,n}, \eqref{eq:eq:p_F,n},\eqref{eq:eq:p_F,n}.\]
The objective and the constraints (6c) and (6e) are not convex with respect to \(\mathbf{l}_{\mathbf{F}}[k]\). Before dealing with the mentioned non-convexity, let us first discuss the constraints (6a), (6c), and (6d).
### _Interpretation of constraints_
The constraint (6a) is rewritten as
\[d_{n,F}\leq(\frac{Q_{n,F}p_{F,n}^{T}}{(2^{\frac{C_{n,min}[k]}{B_{g_{n}}}}-1)( \sigma_{n}^{2}+p_{n,G}^{R}[k])})^{\frac{1}{\alpha_{n,F}}},\quad\forall n \tag{16}\]
which defines the FlyBS's next possible position as the border or interior of a sphere with a center at \(\mathbf{u}_{\mathbf{n}}[k]\) and with a radius equal to the right-hand side of (16).
According to Fig. 3, the constraint (6d) is equivalent to \(V_{F}\) being upper bounded by a threshold \(V_{F,th}\), i.e., \(V_{F}\leq V_{F,th}\). By combining this inequality with (6c), we get
\[||\mathbf{l}[k]-\mathbf{l}[k-1]||\leq(\text{min}\{V_{F,max},V_{F,th}\})\delta_{k}, \tag{17}\]
The inequality (17) defines the FlyBS's next possible position as the border or interior of a sphere centered at \(\mathbf{l}[k-1]\) (i.e., the FlyBS's position at the previous time step) with a radius of \((\text{min}\{V_{F,max},V_{F,th}\})\delta_{k}\).
Next, to deal with the non-convexity in (6e), let us first derive an upper bound for the left-hand side in (6e). To this end, we use the fact the FlyBS's next position is bounded due to the limit on the FlyBS's speed as well as altitude. More specifically, from (17) we find a lower bound to the FlyBS's distance from user \(n\) (\(n\in[1,N]\)) at time step \(k\) in terms of the FlyBS's position at time step \(k-1\) as
\[d_{n,F}[k]\geq d_{n,F,min}[k]=\text{max}\{H_{min}[k], \tag{18}\] \[||\mathbf{l}_{\mathbf{F}}[k-1]-\mathbf{u}_{\mathbf{n}}[k]||+(\text{min}\{V_{F,max },,V_{F,th}\})\delta_{k}\},\]
Then, by using (18), we get the following upper bound for the left-hand side in (6e):
\[\sum_{n=1}^{N}C_{n}[k]\leq \tag{19}\] \[\sum_{n=1}^{N}B_{g_{n}}\log_{2}\Bigg{(}1+\frac{Q_{n,F}p_{F,n}^{T} }{d_{n,F,min}{}^{\alpha_{n,F}}(\sigma_{n}^{2}+p_{n,G}^{R}[k])}\Bigg{)},\]
Thus, we replace (6e) with the following constraint:
\[\sum_{n=1}^{N}B_{g_{n}}\log_{2}\Bigg{(}1+\frac{Q_{n,F}p_{F,n}^{T}}{d_{n,F,min}{ }^{\alpha_{n,F}}(\sigma_{n}^{2}+p_{n,G}^{R}[k])}\Bigg{)}\leq\]
\[C_{G,F}[k]=\sum_{s=1}^{S}B_{s}\log_{2}\Bigg{(}1+\frac{Q_{F,G}p_{G,s}^{T}[k]}{ \sigma_{F,s}^{2}d_{F,G}^{\alpha_{F,G}}}\Bigg{)}. \tag{20}\]
Once (20) is fulfilled, the constraint (6e) is automatically fulfilled as well. Note that the left-hand side in (20) is a constant with respect to \(\mathbf{l}_{\mathbf{F}}[k]\).
Fig. 2: Two-dimensional depiction of the feasibility region (hatched in yellow) with respect to the constraints in (22).
Furthermore, the right-hand side in (20) is strictly decreasing with respect to \(d_{F,G}[k]\). Hence, we use the bisection method to find the upper bound \(d_{F,G,max}[k]\) such that the inequality
\[d_{F,G}[k]\leq d_{F,G,max}[k], \tag{21}\]
is equivalent to (20). The derived upper bound in (21) defines the next allowed position of the FlyBS as a sphere centered at the GBS's transmitter and with a radius of \(d_{F,G,max}[k]\).
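A minimal bisection sketch for computing \(d_{F,G,max}[k]\) follows, assuming the search interval is chosen so that (20) holds at the lower endpoint and fails at the upper one; the parameter names are ours.

```python
import numpy as np

def d_FG_max(lhs20, B_s, Q_FG, p_G, sigma2_F, alpha_FG,
             d_lo=1.0, d_hi=1e5, tol=1e-3):
    """Largest GBS-FlyBS distance for which (20) still holds; the
    right-hand side of (20) is strictly decreasing in the distance d."""
    def backhaul_capacity(d):
        return np.sum(B_s * np.log2(1.0 + Q_FG * p_G
                                    / (sigma2_F * d ** alpha_FG)))
    while d_hi - d_lo > tol:
        mid = 0.5 * (d_lo + d_hi)
        if backhaul_capacity(mid) >= lhs20:
            d_lo = mid   # (20) still satisfied; the bound lies farther out
        else:
            d_hi = mid
    return d_lo
```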
With the above-provided analysis of the constraints in (15), we now target the following substitute optimization problem
\[\max_{\mathbf{l}_{\mathbf{F}}[k]}\;\sum_{n=1}^{N}C_{n}[k],\quad\forall k,\] (22) s.t. (6b), (16), (17), (21).
Note that, in later discussions, we refer to the combination of constraints in (22) as the _feasibility region_ at the time step \(k\) and we denote it as \(\mathcal{R}_{f}\), i.e., \(\mathcal{R}_{f}=\{\mathbf{l}_{\mathbf{F}}\,|\,\text{(6b), (16), (17), (21)}\}\). Fig. 2 shows a 2D instance of \(\mathcal{R}_{f}\).
### _Radial approximation of sum capacity_
Now, to tackle the non-convexity of the objective, we propose a radial-basis approximation of the sum capacity. Such an approach helps to express the sum capacity as a union of level surfaces determining the direction of the FlyBS's movement towards the optimum position. In the following, we explain the steps towards the derivation of the radial approximation. First, using (3), the \(\log(.)\) term in (1) is rewritten as
\[\log_{2}\left(1+\frac{p_{n,F}^{R}[k]}{\sigma_{n}^{2}+p_{n,G}^{R}[k]}\right)= \log_{2}\left(1+\frac{Q_{n,F}p_{F,n}^{T}d_{n,F}^{-\alpha_{n,F}}}{(\sigma_{n}^ {2}+p_{n,G}^{R}[k])}\right) \tag{23}\]
Next, the linear approximation \(\log_{2}(1+X)\approx\frac{X}{\ln(2)}\) is applied to the right-hand side in (23) to derive a linear expression with respect to \(d_{n,F}^{-\alpha_{n,F}}\). Then, we further derive a linear approximation of \(d_{n,F}^{-\alpha_{n,F}}\) with respect to \(d_{n,F}^{2}\). In particular, we use the Taylor approximation \((a+X)^{k}\approx(a+\delta a\xi)^{k}+k(a+\delta a\xi)^{k-1}(X-\delta a\xi)\) where \(\delta=\lfloor\frac{X}{a\xi}\rfloor\), and the parameter \(\xi\) determines the accuracy of the approximation (a smaller \(\xi\) leads to a smaller error). Using the mentioned approximation for \(n\in[1,N]\), we get a sum of quadratic terms of the form \(d_{n,F}^{2}=(X-x_{n})^{2}+(Y-y_{n})^{2}+(H-z_{n})^{2}\). Since a sum of quadratic terms is also a quadratic expression, the sum capacity \(\sum_{n=1}^{N}C_{n}\) is rewritten as
\[\sum_{n=1}^{N}C_{n}[k]\approx W(\mathbf{p^{T}},k)-\zeta(\mathbf{p^{T}},k)\big{|}|\bm {l}_{\mathbf{F}}[k]-\mathbf{l}_{\mathbf{F},\mathbf{o}}[k]|\big{|}^{2}, \tag{24}\]
where the substitutions \(W(\mathbf{p^{T}},k)\) and \(\zeta(\mathbf{p^{T}},k)\) are constants with respect to \(\mathbf{l}_{\mathbf{F}}[k]\). The details of the derivation of (24) are not shown in full to avoid distraction from the main discussion in this section. Nevertheless, interested readers can refer to [19], where more steps of a similar derivation are presented (Appendix A in [19] in particular).
In order to make the approximation in (24) efficient, we derive the approximation at a position "close" to the actual optimal position, since the objective (sum capacity) is a continuous function. Let \(\mathbf{l}_{\mathbf{F},\mathbf{o}}\) denote such a position. We propose to choose \(\mathbf{l}_{\mathbf{F},\mathbf{o}}\) by solving an optimization problem derived from (6) as explained in the following Remark 1.
**Remark 1**: By using the inequality \(\log_{2}(1+\frac{1}{ax})\geq\frac{1}{\ln(2)}(\frac{-x}{ax_{0}^{2}}+\frac{1}{ax _{0}}+\ln(1+\frac{1}{ax_{0}}))\) for arbitrary \(a\) and \(x\) at any point \(x_{0}\), a lower bound \(C_{\text{tot}}^{\text{lb}}\) for the sum capacity is obtained as
\[\sum_{n=1}^{N}C_{n}[k]=\sum_{n=1}^{N}B_{g_{n}}\log_{2}\left(1+\frac{Q_{n,F}p_{F,n}^{T}}{d_{n,F}^{\alpha_{n,F}}(\sigma_{n}^{2}+p_{n,G}^{R}[k])}\right)\geq C_{\text{tot}}^{\text{lb}}=\]

\[\sum_{n=1}^{N}\frac{B_{g_{n}}}{\ln(2)}\Bigg{(}\frac{-Q_{n,F}p_{F,n}^{T}d_{n,F}^{\alpha_{n,F}}}{(\sigma_{n}^{2}+p_{n,G}^{R}[k])d_{n,F}^{2\alpha_{n,F}}[k-1]}+\frac{Q_{n,F}p_{F,n}^{T}}{(\sigma_{n}^{2}+p_{n,G}^{R}[k])d_{n,F}^{\alpha_{n,F}}[k-1]}+\ln\Big{(}1+\frac{Q_{n,F}p_{F,n}^{T}}{(\sigma_{n}^{2}+p_{n,G}^{R}[k])d_{n,F}^{\alpha_{n,F}}[k-1]}\Big{)}\Bigg{)} \tag{25}\]
The right-hand side in (25) is a concave function with respect to \(\mathbf{l}_{\mathbf{F}}[k]\). Hence,
\[\mathbf{l}_{\mathbf{F},\mathbf{c}}=\operatorname*{argmax}_{\mathbf{l}_{\mathbf{F}}[k]}\;C_{\text{tot}}^{\text{lb}},\quad\forall k,\] (26) s.t. (6b), (16), (17), (21).
The convex problem in (26) is solved using CVX. Next, we set \(\mathbf{l}_{\mathbf{F},\mathbf{o}}[k]=\mathbf{l}_{\mathbf{F},\mathbf{c}}[k]\) as the reference point for the approximation in (24), as it is a point "close" to the optimal position.
### _Solution to FlyBS positioning_
Now, we elaborate the solution to the FlyBS's positioning. According to (24), the sum capacity increases with a decrease in the distance between \(\mathbf{l}_{\mathbf{F},\mathbf{o}}\) and \(\mathbf{l}_{\mathbf{F}}\). Thus, the maximum value of the sum capacity is achieved at the closest point to \(\mathbf{l}_{\mathbf{F},\mathbf{o}}\) that fulfills all the constraints in (22). According to the discussion in the previous subsection, each of the constraints (16), (17), and (21) in (22) limits the FlyBS's position to the border and interior of a sphere and, hence, is convex. Combined with (6b), the feasibility region \(\mathcal{R}_{f}\) for the FlyBS's position is convex.
Then, the problem of FlyBS's positioning is transformed to
\[\min_{\Lambda\in\mathcal{R}_{f}}\;||\Lambda-\mathbf{l}_{\mathbf{F},\mathbf{o}}[k]||^{2}, \quad\forall k. \tag{27}\]
Fig. 3: Propulsion power model vs. speed for rotary-wing FlyBS.
The objective and the domain in (27) are convex and, hence, the problem is solved using CVX.
Once the FlyBS's position \(\boldsymbol{l_{F}}\) is updated to the solution derived from (27), the power allocation \(\boldsymbol{p^{T}}\) is again optimized at the updated position of the FlyBS. Consequently, the updated \(\boldsymbol{p^{T}}\) changes the spheres corresponding to (16), (17), (21) in (22). Thus, an updated solution to (27) is derived. This optimization of \(\boldsymbol{p^{T}}\) and \(\boldsymbol{l_{F}}\) is repeated until the FlyBS's movement at some iteration falls below a given threshold \(\epsilon\) or until the maximum number of iterations is reached.
## IV Simulations and results
This section provides the details of the adopted simulation scenario, followed by the results and a discussion showing the superiority of the proposed solution over the state-of-the-art.
### _Simulation scenario and models_
We assume a 500 m \(\times\) 500 m square area with 100 to 600 users initially distributed randomly. The GBS is located at a distance of 1500 m from the center of the area. We adopt the user mobility model from [24], where half of the users move at a speed of 1 m/s according to a random-walk model, and the other half are randomly divided into six clusters of crowds. A simulation duration of 1200 seconds is assumed.
A total bandwidth of 100 MHz is divided equally among the users at the access link. The background interference and the noise spectral density are set to -90 dBm and -174 dBm/Hz, respectively. Pathloss exponents of \(\alpha_{n,F}=2.3\), \(\alpha_{n,G}=2.8\), and \(\alpha_{F,G}=2.1\) are assumed for the FlyBS-user, GBS-user, and GBS-FlyBS channels, respectively [18]. An altitude range of [100, 300] m and a maximum transmission power limit of \(p_{F,max}^{T}=30\) dBm are considered for the FlyBS. Also, an altitude of 30 m and a maximum transmission power of \(p_{G,max}^{T}=36\) dBm (5 W) are assumed for the GBS. The results are averaged over 100 simulation drops.
We benchmark our proposed solution for backhaul-aware sum capacity maximization against the following state-of-the-art schemes: \(i)\) maximization of the sum capacity, referred to as _MSC_, via the FlyBS's positioning and transmission power allocation at the access link, published in [19]; \(ii)\) minimum capacity maximization, referred to as _mCM_, via an optimization of the FlyBS's positioning and transmission power allocation to the users at the access link, published in [12]; \(iii)\) maximization of energy efficiency, referred to as _EEM_, via transmission power allocation at the access link, as introduced in [13]. Note that the original solution in [13] does not provide a positioning of the FlyBS; thus, the benchmark scheme EEM is an enhanced version of the solution in [13], with the FlyBS's positioning solved using K-means.
### _Simulation results_
In this subsection, we present the simulation results and we discuss the performance of different schemes.
Fig. 4 shows the sum capacity versus the number of users (\(N\)) for different schemes. A minimum required capacity of \(C_{min}=1\) Mbps is assumed for all users. According to Fig. 4, the sum capacity decreases as more users are served by the FlyBS. This is due to two main reasons: 1) the bandwidth allocated to each user becomes smaller, and 2) the FlyBS's total transmission power is divided among more users. Nevertheless, the proposed solution outperforms the other schemes in the achieved sum capacity. More specifically, the sum capacity is increased by up to 21%, 28%, and 47% with respect to MSC, mCM, and EEM, respectively.
Next, Figs. 5 and 6 show the impact of the minimum user's capacity \(C_{min}\) on the sum capacity for \(N=\) 300 and \(N=\) 600, respectively. The maximum depicted \(C_{min}\) represents the largest \(C_{min}\) for which a feasible solution exists. Note that the value of \(C_{min}\) in _mCM_ is not set manually and beforehand, as it is directly derived by the scheme itself (which maximizes the minimum capacity); hence, the sum capacity of _mCM_ is constant in Figs. 5 and 6. However, for the proposed solution, MSC, and EEM, increasing \(C_{min}\) reduces the sum capacity. This is because increasing \(C_{min}\) leads to a tighter feasibility region according to (16) and, hence, it limits the FlyBS's movement to maximize the sum capacity. The proposed solution increases the sum capacity with respect to _MSC_, _mCM_, and _EEM_ by 24%, 25%, and 49%, respectively, for \(N=\) 300, and by 19%, 33%, and 49%, respectively, for \(N=\) 600.
Next, we also demonstrate the fast convergence of our proposed iterative algorithm in Figs. 7 and 8 by showing the evolution of the sum capacity over the iterations of the alternating optimization of transmission power allocation and FlyBS's positioning. Note that the benchmark schemes mCM and EEM are not iterative; hence, their sum capacity is constant, and they are shown in Figs. 7 and 8 only to indicate their performance. The proposed solution converges very fast, in only a few iterations. This confirms that the iterative nature of the proposed solution does not limit its
feasibility and practical application. Note that, although the mCM scheme outperforms our proposal in the first iteration in Fig. 8, only the converged results should be subject to comparison, as the performance at early iterations can be greatly impacted by the initialization of the FlyBS's position and power allocation.
## V Conclusions
In this paper, we have provided an analytical approach to maximize the sum capacity via a positioning of the FlyBS, an allocation of the transmission power over the backhaul channels, and an allocation of the transmission power to the users at the access channels. The problem is constrained by the minimum required instantaneous capacity of each user and by practical real-world limitations of the FlyBSs. We have shown that the proposed solution enhances the sum capacity by tens of percent compared to state-of-the-art works. In future work, a scenario with multiple FlyBSs should be studied along with related aspects, such as the management of interference among FlyBSs and the association of users to FlyBSs.
|
2302.06475
|
Introduction of Machine Learning for Astronomy (Hands-on Workshop)
|
This article is based on the tutorial we gave at the hands-on workshop of the
ICRANet-ISFAHAN Astronomy Meeting. We first introduce the basic theory of
machine learning and sort out the whole process of training a neural network.
We then demonstrate this process with an example of inferring redshifts from
SDSS spectra. To emphasize that machine learning for astronomy is easy to get
started, we demonstrate that the most basic CNN network can be used to obtain
high accuracy; we also show that with simple modifications, the network can be
converted for classification problems and also to process gravitational wave
data.
|
Yu Wang, Rahim Moradi, Mohammad H. Zhoolideh Haghighi, Fatemeh Rastegarnia
|
2023-02-13T15:51:10Z
|
http://arxiv.org/abs/2302.06475v1
|
# Introduction of Machine Learning for Astronomy (Hands-on Workshop)
###### Abstract
This article is based on the tutorial we gave at the hands-on workshop of the ICRANet-ISFAHAN Astronomy Meeting1. We first introduce the basic theory of machine learning and sort out the whole process of training a neural network. We then demonstrate this process with an example of inferring redshifts from SDSS spectra. To emphasize that machine learning for astronomy is easy to get started with, we demonstrate that the most basic CNN network can be used to obtain high accuracy; we also show that, with simple modifications, the network can be converted for classification problems and for processing gravitational wave data.
Footnote 1: [https://indico.icranet.org/event/2/page/12-data-science-in-relativistic-astrophysics-hands-on-workshop](https://indico.icranet.org/event/2/page/12-data-science-in-relativistic-astrophysics-hands-on-workshop)
## 1 Introduction
Machine learning is no longer a new concept, as it is now involved in many aspects of life. Every time you pick up your phone, unlock it using face recognition, or have your voice translated into text when chatting, machine learning is contributing. In terms of the professional field, the first time that machine learning made a major contribution was in 2016, when Google's AlphaGo defeated the world champion Lee Sedol at the game of Go, later also defeating Ke Jie (Silver et al., 2016, 2017, 2018; Schrittwieser et al., 2020). AlphaGo played some moves that confused human beings, and these are now commonly used in human-human play. If you often play Go, you will find that many human players play with a machine style. In other words, humans accept the changes brought by machines. Training AlphaGo used convolutional neural networks (Fukushima and Miyake, 1982; LeCun et al., 1989); today our examples use a similar network. Last year, Google's DeepMind team announced that protein folding prediction could also be successfully solved in this way, a problem that had challenged biology for almost 50 years (Jumper et al., 2021). They also immediately used the trained machine to predict the protein structures of the COVID-19 virus to help in the development of drugs (Arora and Bist, 2020; Jumper et al., 2021).
Returning to astronomy, its data record spatial and temporal information, which is very close to image recognition and speech recognition. The pictures of galaxies seen by telescopes, the spectra of detected radiation, and the structure of the universe generated by simulations can all be categorized as image analysis. The evolution of celestial bodies, the light curves of explosions, and simulations of changes at each time step can be treated with techniques similar to voice recognition. Broadly speaking, the only axes of the four-dimensional universe we live in are time and space, so astronomical data are likewise organized along time and space. What machine learning has to do is to receive these data and then give an answer.
Abstracted to mathematics, machine learning can be seen as a map from data to answer. For deep learning (see e.g. LeCun et al., 2015; Goodfellow et al., 2016; Dong et al., 2021, and references therein), this map consists of many layers of neurons. Deep learning is a subset of machine learning, which has developed rapidly in the last decade. Machine learning is in turn a subset of artificial intelligence (AI), which has been discussed for many years. The AI that surpasses human beings in science fiction movies is defined as strong AI (Strauss, 2018; Butz, 2021); the development of strong AI has only reached the theoretical stage. According to our estimation, computers can currently simulate millions of neurons, while the human brain has hundreds of millions of neurons. Given the speed of computer development, when computers reach the complexity of the human brain in about 20 years, strong AI will become truly practical.
This article is based on our hands-on tutorial at the workshop. In section 2 we introduce the principle and implementation of neural networks, in section 3 we illustrate it with an example of inferring redshifts from SDSS spectra, and in section 4 we demonstrate that the application can be extended to classification and to other sources with simple changes to the neural network. This article does not cover the code explanation, running, and troubleshooting done at the workshop.
## 2 Basics
Figure 1 shows an example of a neural network. The two leftmost grey dots represent the input, which is passed into the middle neural network, composed of three layers, each containing four neurons. If we know the correct answer, we can define a loss function to compare the difference between the machine's answer and the real answer. For example, if we input LIGO data, we can infer whether it contains a gravitational wave signal and estimate the parameters of the binary system.
Our input can generally be represented as a vector or matrix, and each of our neurons (each blue
Figure 1: Neural network composed of various neurons. Features and labels are the input data, represented by grey dots; the hidden layers, constructed from neurons and activation functions, are represented by blue dots.
dot in figure 1) performs a matrix operation, as shown in this equation,
\[\vec{y}=A(\omega\cdot\vec{x}+b) \tag{1}\]
where \(\vec{x}\) is the input, which is multiplied by the weight matrix \(\omega\) and shifted by an offset \(b\); the nonlinear activation function \(A\) then acts on the result. When we train the machine, we are actually fitting this \(\omega\). There are thousands of parameters in each \(\omega\), and the whole network has hundreds of neurons, so the total number of parameters reaches the order of a million. Fitting such a large number of parameters requires a large amount of input data: the more data input, the more accurate the fit, and the better the prediction ability of the machine. Usually we say the quality of the data defines the upper limit of the accuracy, and the network's algorithm tries to push the accuracy toward that limit.
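As a concrete illustration of equation (1), the NumPy snippet below implements a single layer of four neurons acting on a two-element input, matching the first hidden layer of figure 1; the ReLU activation and random weights are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=2)       # input vector (the two grey dots)
w = rng.normal(size=(4, 2))  # weight matrix omega, fitted during training
b = np.zeros(4)              # offsets

def layer(x, w, b, A=lambda z: np.maximum(z, 0.0)):
    """One layer of neurons: y = A(w.x + b), Eq. (1)."""
    return A(w @ x + b)

y = layer(x, w, b)           # outputs of the four neurons
```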
Considering that we have a network and a lot of data, how can we train the machine? Here we take supervised learning as an example, which is the most common training method at this stage. Supervised learning requires that the provided data come with the correct answers. It is like teaching a child to distinguish a dog from a cat: you show him a cat picture, and if he answers "dog", you correct him, "you are wrong, it should be a cat". After many rounds of such training, the child masters the method of distinguishing cats from dogs. Expressed mathematically, the goal is to find a set of parameters \(\omega\) that makes the neural network output the same answer as the real answer, that is, makes the loss function close to 0. Figure 2 represents the loss corresponding to a given \(\omega\). The initial \(\omega\) corresponds to a very high loss in the red region. For each input of data, or each training step, we compute the gradient of the loss, and \(\omega\) moves one small step against the gradient direction, with the
Figure 2: Demonstration of the gradient descent process, the loss decreases following the curve line from the red region to the blue valley via the change of \(\omega\) at each training step, eventually, the \(\omega\) approaches \(\omega^{*}\). This figure is reproduced from (Amini et al., 2018).
step size controlled by \(\eta\). After many training steps, \(\omega\) eventually reaches a region of very small loss, the blue valley in the figure.
We summarize the procedure, as shown in figure 3. First we design a neural network and initialize the parameters. Then we perform forward propagation to make a prediction. By comparing with the ground truth, we obtain the loss. The error is then backpropagated through the entire network to refine the parameters. We repeat this training process by feeding in a lot of data until convergence is reached.
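In a framework such as PyTorch, this whole procedure reduces to a few lines; the sketch below assumes a model `net` and a data loader as placeholders, with mean squared error as the loss (as used in the next section).

```python
import torch

def train(net, loader, epochs=50, lr=1e-3):
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for epoch in range(epochs):
        for features, labels in loader:
            optimizer.zero_grad()
            prediction = net(features)  # forward propagation
            loss = loss_fn(prediction, labels)
            loss.backward()             # backpropagate the error
            optimizer.step()            # refine the parameters
```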
## 3 Example
Our example follows Rastegarnia et al. (2022), that is, inferring the spectroscopic redshifts of quasars by deep learning. The relevant code and data can be found at [https://github.com/YWangScience/Isfahan-workshop-2021/](https://github.com/YWangScience/Isfahan-workshop-2021/) and [https://www.kaggle.com/datasets/ywangscience/sdss-iii-iv](https://www.kaggle.com/datasets/ywangscience/sdss-iii-iv), respectively.
The quasar spectra are retrieved from the Sloan Digital Sky Survey (SDSS) Data Release 16, quasars only (DR16Q) (Hutchinson et al., 2016; Lyke et al., 2020), which is one of the best samples for training neural networks, with more than \(700,000\) quasars, including \(326,535\) quasars that are visually inspected. The spectra are standardized by fitting and then extrapolating to \(4618\) data points (pixels) uniformly distributed in \(\log\lambda\), where \(\lambda\) is the wavelength in the range of \(360\ \mathrm{nm}-1032.5\ \mathrm{nm}\). The spectra are then normalized using the Zero-Mean Normalization method (Jayalakshmi and Santhakumaran, 2011) as features, and the corresponding redshifts are exported as labels. We take \(90\%\) of the samples for training and \(10\%\) for testing. Figure 4 shows two spectra with the emission/absorption lines marked.
In the field of machine learning, the most commonly used network architecture is the convolutional neural network (CNN), and many complex networks are built on top of CNNs. For our simple 1D data, a very deep network is rarely needed, and our experiments demonstrate that a CNN with about 10 layers is sufficient to obtain accurate redshifts. We also tested that increasing the complexity of the network by a factor of 10 or more, or using a state-of-the-art network, yields an accuracy improvement of only \(\sim 1\%\), so in this tutorial article we use the most basic CNN.
When designing CNN networks, we need to start from the actual physical problem. For the problem of deriving the redshift from the spectrum, we know that the combination of the following patterns can determine the redshift:
1. The global shift of spectrum;
2. The shift of the emission and absorption lines at different redshifts;
3. Some specific signals may appear at given redshifts.
Hence, in the convolutional part, we construct a large filter of 500 pixels, covering \(>10\%\) of the data, to capture the global shift of the spectrum, followed in series by a middle-size filter of 200 pixels and a small filter of 10 pixels to capture the smaller shifts and the specific signals. The network architecture is shown in figure 5.
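A PyTorch sketch of this architecture is given below. The three kernel sizes (500, 200, 10) follow the text; the channel widths, strides, and fully connected sizes are our assumptions for illustration, not necessarily the exact values of Rastegarnia et al. (2022).

```python
import torch.nn as nn

class RedshiftCNN(nn.Module):
    """1D CNN of figure 5: input spectrum of 4618 pixels, output redshift."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=500, stride=4), nn.ReLU(),   # global shift
            nn.Conv1d(8, 16, kernel_size=200, stride=2), nn.ReLU(),  # line shifts
            nn.Conv1d(16, 32, kernel_size=10), nn.ReLU(),            # specific signals
            nn.AdaptiveAvgPool1d(16),
        )
        self.fc = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 16, 64), nn.ReLU(), nn.Linear(64, 1),
        )

    def forward(self, x):  # x: (batch, 1, 4618)
        return self.fc(self.conv(x)).squeeze(-1)
```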
The code for the whole process is written according to figure 3, corresponding to:
1. He uniform (He et al., 2015) is adopted to initialize the network weights;
2. forward propagation follows our net defined in figure 5;
3. we adopt the mean squared error as the loss \[MSE(x)=E((x-x^{*})^{2})\] (2) where \(x\) is the prediction and \(x^{*}\) represents the ground truth.
4. the Adam optimizer is selected to perform the parameter updates during backpropagation;
5. we consider convergence reached when further training cannot reduce the loss.
After dozens of epochs of training, this simple network obtains an accuracy of \(>97\%\) for the redshift of the testing samples.
## 4 Extend the network by simple changes
Here, by changing a few lines of the network, we train it to classify SDSS objects; for simplicity of the tutorial, we involve only two classes, quasar and galaxy.
This classification example uses the same input as the redshift example, a series of one-dimensional data. The outputs differ: the redshift example finally gives a redshift value, while here we output two values, corresponding to the probability of each class. In order to standardize the output, a LogSoftmax function
\[p(x_{i})=\log\left(\frac{\exp(x_{i})}{\sum_{j}\exp(x_{j})}\right), \tag{3}\]
is applied at the last layer, so that the corresponding probabilities \(\exp(p(x_{i}))\) are all between 0 and 1 and sum to one. For instance,
an output with probabilities \((0.2,0.8)\) indicates that the source belongs to the second class. The loss function is also different: the resulting \(q(x_{i})\) (predicted classification) is used together with the labels \(p(x_{i})\) (ground-truth classification) to compute the cross entropy as the loss
\[H(p,q)=-\sum_{i}p(x_{i})\log q(x_{i}) \tag{4}\]
With the above two simple changes, and training the net following the same procedure as in our first redshift example, this simple net classifies quasars and galaxies with more than 80% accuracy.
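In code, the two changes amount to replacing the network head and the loss function; the layer sizes below are illustrative, following the sketch of the redshift network above.

```python
import torch.nn as nn

head = nn.Sequential(
    nn.Linear(64, 2),        # two outputs: quasar and galaxy
    nn.LogSoftmax(dim=-1),   # Eq. (3): log probabilities
)
loss_fn = nn.NLLLoss()       # cross entropy of Eq. (4) on log probabilities
```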
With some simple modifications, this net can also be used for gravitational wave detection. Similar to an SDSS spectrum, a gravitational wave signal forms a one-dimensional data structure: the time versus the strain, or the frequency versus the amplitude if changing from the time domain to the frequency domain; see figure 6. The output will be a value indicating the existence of a gravitational wave. So we hardly need to modify the network structure, only to adjust the length of some parameters according to the length of the input data. Our test using the data provided by the Kaggle competition ([https://www.kaggle.com/c/g2net-gravitational-wave-detection/](https://www.kaggle.com/c/g2net-gravitational-wave-detection/)) shows that such a simple net obtains an accuracy of more than \(80\%\), only \(\sim 5\%\) less accurate than the top network. We also noticed in this competition that the top three networks did not use very complex architectures but put much thought into pre-processing the data.
## 5 Conclusion
Analyzing and processing data is the backbone of astronomy, and it is arguably where machine learning has advanced the most. As a relatively new concept for astronomy, machine learning may be difficult for astronomers to start with. It occurred to us to illustrate through this article that, thanks to the easy-to-use programming frame
Figure 3: Normally, a complete training process includes initiation, forward propagation, loss computation, backpropagation and the final convergence. The left side shows some examples of each step.
Figure 4: The spectra of quasars. There are several emission/absorption lines present in the spectra, including Ly\(\alpha\) (121.6 nm), CIV (154.9 nm), CIII (190.9 nm), MgII (279.6 nm), H\(\beta\) (486.2 nm) and H\(\alpha\) (656.3 nm), as well as a CIV line with a broad absorption feature. The spectral number, the visually inspected redshift, and the signal-to-noise ratio are labelled on each panel. This figure is reproduced from Rastegarnia et al. (2022).
Figure 5: Structure of one dimensional CNN. The quasar spectrum is input as a one-dimensional array, which goes through the convolutional layer of kernel size = 500, 200, 10 respectively to search for the global and local pattern. The fully connected layers output the redshift.
Figure 6: Example of the gravitational wave data, the blue curve shows the original data, the orange curve is the window for Fourier transformation, and the green curve is the filtered signal that only contains \(20-500\) Hz data.
works developed by the industry, machine learning in astronomy does not require very difficult programming theories or capabilities. In fact, the main development of machine learning in the industry is in the processing of images and voices, which are recorded in a form no different from astronomical data, both being temporal and spatial data formats. These similarities allow us to apply many of the neural networks matured in the industry to astronomical data processing. In conclusion, machine learning as a tool is easy to use, but astronomers and astrophysicists need brilliant ideas to use this tool flexibly.
|
2310.08891
|
EHI: End-to-end Learning of Hierarchical Index for Efficient Dense
Retrieval
|
Dense embedding-based retrieval is now the industry standard for semantic
search and ranking problems, like obtaining relevant web documents for a given
query. Such techniques use a two-stage process: (a) contrastive learning to
train a dual encoder to embed both the query and documents and (b) approximate
nearest neighbor search (ANNS) for finding similar documents for a given query.
These two stages are disjoint; the learned embeddings might be ill-suited for
the ANNS method and vice-versa, leading to suboptimal performance. In this
work, we propose End-to-end Hierarchical Indexing -- EHI -- that jointly learns
both the embeddings and the ANNS structure to optimize retrieval performance.
EHI uses a standard dual encoder model for embedding queries and documents
while learning an inverted file index (IVF) style tree structure for efficient
ANNS. To ensure stable and efficient learning of discrete tree-based ANNS
structure, EHI introduces the notion of dense path embedding that captures the
position of a query/document in the tree. We demonstrate the effectiveness of
EHI on several benchmarks, including de-facto industry standard MS MARCO (Dev
set and TREC DL19) datasets. For example, with the same compute budget, EHI
outperforms state-of-the-art (SOTA) by 0.6% (MRR@10) on MS MARCO dev set and
by 4.2% (nDCG@10) on TREC DL19 benchmarks.
|
Ramnath Kumar, Anshul Mittal, Nilesh Gupta, Aditya Kusupati, Inderjit Dhillon, Prateek Jain
|
2023-10-13T06:53:02Z
|
http://arxiv.org/abs/2310.08891v1
|
# EHI: End-to-end Learning of Hierarchical Index for Efficient Dense Retrieval
###### Abstract
Dense embedding-based retrieval is now the industry standard for semantic search and ranking problems, like obtaining relevant web documents for a given query. Such techniques use a two-stage process: (a) contrastive learning to train a dual encoder to embed both the query and documents and (b) approximate nearest neighbor search (ANNS) for finding similar documents for a given query. These two stages are disjoint; the learned embeddings might be ill-suited for the ANNS method and vice-versa, leading to suboptimal performance. In this work, we propose End-to-end Hierarchical Indexing - EHI - that jointly learns both the embeddings and the ANNS structure to optimize retrieval performance. \(\mathrm{EHI}\) uses a standard dual encoder model for embedding queries and documents while learning an inverted file index (IVF) style tree structure for efficient ANNS. To ensure stable and efficient learning of the discrete tree-based ANNS structure, \(\mathrm{EHI}\) introduces the notion of dense path embedding that captures the position of a query/document in the tree. We demonstrate the effectiveness of \(\mathrm{EHI}\) on several benchmarks, including the de-facto industry standard MS MARCO (Dev set and TREC DL19) datasets. For example, with the same compute budget, \(\mathrm{EHI}\) outperforms the state-of-the-art (SOTA) by \(0.6\%\) (MRR@10) on the MS MARCO dev set and by \(4.2\%\) (nDCG@10) on the TREC DL19 benchmark.
## 1 Introduction
Semantic search (Johnson et al., 2019) aims to retrieve relevant or _semantically similar_ documents/items for a given query. In the past few years, semantic search has been applied to numerous real-world applications like web search, product search, and news search (Nayak, 2019; Dahiya et al., 2021). The problem in its simplest form can be abstracted as: for a given query \(q\), retrieve the relevant document(s) \(d(q)\) from a static set of documents \(\{d_{1},d_{2},\ldots,d_{N}\}\) s.t. \(d(q)=\arg\max_{1\leq j\leq N}\texttt{SIM}(\mathbf{q},\mathbf{d}_{j})\). Here \(\texttt{SIM}\) is a similarity function that has high fidelity to the training data \(\mathcal{B}=\{(q_{i},d_{j},y_{ij})\}\). Tuple \((q_{i},d_{j},y_{ij})\) indicates if document \(d_{j}\) is relevant (\(y_{ij}=1\)) or irrelevant (\(y_{ij}=0\)) for a given query \(q_{i}\in\mathcal{Q}\).
Dense embedding-based retrieval (Johnson et al., 2019; Jayaram Subramanya et al., 2019; Guo et al., 2020) is the state-of-the-art (SOTA) approach for semantic search and typically follows a two-stage process. In the first stage, it embeds the documents and the query using a deep network like BERT (Devlin et al., 2018). That is, it defines similarity \(\texttt{SIM}(q,d):=\langle\mathcal{E}_{\theta}(q),\mathcal{E}_{\theta}(d)\rangle\) as the inner product between embeddings \(\mathcal{E}_{\theta}(q)\) and \(\mathcal{E}_{\theta}(d)\) of the query \(q\) and the document \(d\), respectively. \(\mathcal{E}_{\theta}(\cdot)\) is a dense embedding function learned using contrastive losses (Ni et al., 2021; Menon et al., 2022).
In the second stage, approximate nearest neighbor search (ANNS) retrieves relevant documents for a given query. That is, all the documents are indexed offline and are then retrieved online for the input query. ANNS in itself has been extensively studied for decades with techniques like ScaNN (Guo et al., 2020), IVF (Sivic and Zisserman, 2003), HNSW (Malkov and Yashunin, 2020), DiskANN (Jayaram Subramanya et al., 2019) and many others being used heavily in practice.
The starting hypothesis of this paper is that the two-stage dense retrieval approach - disjoint training of the encoder and ANNS - is sub-optimal due to the following reasons:
_Misalignment of representations:_ When the encoder and ANNS are trained separately, there is no explicit optimization objective that ensures that the representations learned by the encoder are aligned with the requirements of the ANNS technique. For example, the documents might be clustered in six clusters to optimize encoder loss. However, due to computational constraints, ANNS might allow only five branches/clusters, thus splitting or merging clusters unnaturally and inaccurately.
_Ignoring query distribution:_ Generic ANNS techniques optimize for overall retrieval efficiency without considering the query distribution. As a result, the indexing structure might not be optimal for a particular train/test query distribution (Jaiswal et al., 2022). See Appendix B for more details.
Motivated by the aforementioned issues, we propose EHI - End-to-end learning of **H**ierarchical **I**ndex - that jointly trains both the encoder and the search data structure; see Figure 1. To the best of our knowledge, EHI is the _first_ end-to-end learning method for dense retrieval. Recent methods like DSI (Tay et al., 2022) and NCI (Wang et al., 2022) do not follow a dense embedding approach and directly generate document IDs, but they also require a separate hierarchical clustering/tokenization phase on embeddings from a pre-trained encoder; see Section 2 for a more detailed comparison.
EHI parameterizes the hierarchical tree-based indexer with classifiers in its nodes. One key idea in EHI is to map the path taken by a query or a document in the tree to a _compressed_, continuous, and dense path embedding. A standard path embedding in a tree is exponentially sized in the tree height, but EHI's path embeddings are linear in the branching factor and tree height. EHI further uses these embeddings with a contrastive loss function over (query, doc) tuples, along with two other loss terms promoting diversity in indexing.
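As an illustration of this idea (a sketch of the concept, not EHI's exact construction), the snippet below builds a dense path embedding by concatenating the per-level branch distributions produced by the node classifiers, giving a vector of size \(B\times H\) for branching factor \(B\) and height \(H\), instead of the \(B^{H}\)-sized one-hot encoding of full paths; the routing probabilities here are random stand-ins.

```python
import numpy as np

def path_embedding(branch_probs):
    """branch_probs: list of H arrays, each a length-B distribution over
    the children chosen at one level of the tree."""
    return np.concatenate(branch_probs)

rng = np.random.default_rng(0)
H, B = 3, 4
logits = [rng.normal(size=B) for _ in range(H)]
probs = [np.exp(l) / np.exp(l).sum() for l in logits]  # softmax per level
emb = path_embedding(probs)  # shape (B * H,) = (12,), dense and continuous
```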
We conduct an extensive empirical evaluation of our method against SOTA techniques on standard benchmarks. For example, on the FIQA dataset (Maia et al., 2018) - a question-answering dataset - we observe that our method is \(5.5\%\) more accurate than standard dense retrieval with a ScaNN ANNS index (Guo et al., 2020) when restricted to visit/search only 20% of documents in the corpus. Furthermore, for FIQA, EHI shows an improvement of \(5.61\%\) over the dense retrieval baselines with exact search, thus demonstrating better embedding learning as well. We attribute these improved embeddings to the fact that EHI enables integrated hard negative mining, as it can retrieve irrelevant or negative documents from the indexed leaf nodes of a query. Here, the indexer parameters are always fresh, unlike in techniques akin to ANCE (Xiong et al., 2020).
Furthermore, our experiments on the popular MS MARCO benchmark (Bajaj et al., 2016) demonstrate that EHI shows improvements of \(0.6\%\) in terms of nDCG@10 compared to dense-retrieval with ScaNN baselines when only 10% of documents are searched. Similarly, EHI provides \(4.2\%\) higher nDCG@10 than state-of-the-art (SOTA) baselines on the MS MARCO TREC DL19 (Craswell et al., 2020) benchmarks for the same compute budget. EHI also achieves SOTA exact search performance on both MRR@10 and nDCG@10 metrics with up to \(80\%\) reduction in latency, indicating the effectiveness of the joint learning objective. Similarly, we outperform SOTA architectures such as NCI on NQ320k by \(0.5\%\) and \(\sim 2\%\) on Recall@10 and Recall@100 metrics with a model one-tenth the size! (see Section 4.2).
**To summarize, the paper makes the following key contributions:**
* Proposed EHI, the _first_ end-to-end learning method for dense retrieval that jointly learns both the encoder and the search indexer for various downstream tasks (see Section 3). EHI represents a paradigm shift in dense retrieval, where both encoder and ANNS can be integrated and trained accurately, efficiently, and stably in a single pipeline.
* Extensive empirical evaluation of EHI on the industry-standard MS MARCO benchmark, comparing it to SOTA approaches like ColBERT, SGPT, cpt-text, ANCE, DyNNIBAL, etc. (see Appendix D). EHI's focus is mainly on improving retrieval accuracy for a fixed computation/search budget, and it is agnostic to encoder architecture, similarity computation, hard negative mining, etc.
## 2 Related Works
Dense retrieval (Mitra et al., 2018) underlies a myriad of web-scale applications like search (Nayak, 2019) and recommendations (Eksombatchai et al., 2018; Jain et al., 2019), and is powered by (a) learned representations (Devlin et al., 2018; Kolesnikov et al., 2020; Radford et al., 2021), (b) ANNS (Johnson et al., 2019; Sivic and Zisserman, 2003; Guo et al., 2020), and (c) LLMs in retrieval (Tay et al., 2022; Wang et al., 2022; Guu et al., 2020).
**Representation learning.** Powerful representations are typically learned through supervised and un/self-supervised learning paradigms that use proxy tasks like masked language modeling (Devlin et al., 2018) and autoregressive training (Radford et al., 2018). Recent advances in contrastive learning (Gutmann and Hyvarinen, 2010) helped power strong dual encoder-based dense retrievers (Ni et al., 2021; Izacard et al., 2021; Nayak, 2019). They consist of query and document encoders, often shared, which are trained with contrastive learning using limited positively relevant query and document pairs (Menon et al., 2022; Xiong et al., 2020). While most modern-day systems use these learned representations as is for large-scale ANNS, there is no need for them to be aligned with the distance metrics or topology of the data structures. Recent works have tried to address these concerns by warm-starting the learning with a clustering structure (Gupta et al., 2022) but fall short of learning jointly optimized representations alongside the search structure.
**Approximate nearest neighbor search (ANNS).** The goal of ANNS is to retrieve _almost_ nearest neighbors without paying exorbitant costs of retrieving true neighbors (Clarkson, 1994; Indyk and Motwani, 1998; Weber et al., 1998). The "approximate" nature comes from pruning-based search data structures (Sivic and Zisserman, 2003; Malkov and Yashunin, 2020; Beygelzimer et al., 2006) as well as from the quantization based cheaper distance computation (Jegou et al., 2010; Ge et al., 2013). This paper focuses on ANNS data structures and notes that compression is often complementary. Search data structures reduce the number of data points visited during the search. This is often achieved through hashing (Datar et al., 2004; Salakhutdinov and Hinton, 2009; Kusupati et al., 2021), trees (Friedman et al., 1977; Sivic and Zisserman, 2003; Bernhardsson, 2018; Guo et al., 2020) and graphs (Malkov and Yashunin, 2020; Jayaram Subramanya et al., 2019). ANNS data structures also carefully handle the systems considerations involved in a deployment like load-balancing, disk I/O, main memory overhead, etc., and often tree-based data structures tend to prove highly performant owing to their simplicity and flexibility (Guo et al., 2020). For a more comprehensive review of ANNS structures, please refer to Cai (2021); Li et al. (2020); Wang et al. (2021).
**Encoder-decoder for Semantic Search.** Recently, there have been efforts towards modeling retrieval as a sequence-to-sequence problem. In particular, the Differentiable Search Index (DSI) (Tay et al., 2022) and the more recent Neural Corpus Indexer (NCI) (Wang et al., 2022) propose encoding the query and then finding relevant documents by running a learned decoder. However, both these techniques, at their core, use a _separately_ computed hierarchical k-means-based clustering of document embeddings for semantically assigning the document-id. That is, they also index the documents using an ad-hoc clustering method which might not be aligned with the end objective of improving retrieval accuracy. In contrast, EHI jointly learns both the representation and a k-ary tree-based search data structure end-to-end. This advantage is reflected on the MS MARCO dataset, where EHI is up to 7.12% more accurate (in terms of nDCG@10) compared to DSI. Recently, retrieval has also been used to augment LLMs (Guu et al., 2020; Izacard and Grave, 2020; 2022). We would like to stress that the goal of LLMs is language modeling, while retrieval's goal is precise document retrieval. However, retrieval techniques like EHI can be applied to improve retrieval subcomponents in such LLMs.

Figure 1: EHI is an end-to-end hierarchical indexer which comprises an encoder and a hierarchical tree as the indexer, where the entire pipeline is learnable and differentiable. Here, variables \(V_{98}\), \(V_{123}\), and \(V_{576}\) are dense representations (embeddings) of the text and \(P_{98}\), \(P_{123}\), and \(P_{576}\) are path embeddings of the respective samples. To efficiently train EHI without any warm starting, we use a combination of objectives - \(L_{\text{siamese}}\), \(L_{\text{indexing}}\), \(L_{\text{intra-leaf}}\) (see Section 3 for details).
## 3 End-to-end Hierarchical Indexing (EHI)
**Problem definition and Notation.** Consider a problem with a corpus of \(N\) documents \(\mathcal{D}=\{d_{1},...,d_{N}\}\), a set of \(Q\) training queries \(\mathcal{Q}=\{q_{1},...,q_{Q}\}\), and training data \((q_{i},d_{k},y_{ik})\), where \(y_{ik}\in\{-1,1\}\) is the label for a given training (query, document) tuple and \(y_{ik}=1\) denotes that \(d_{k}\) is relevant to \(q_{i}\). Given these inputs, the goal is to learn a _retriever_ that maps a given query to a set of relevant documents while minimizing the computation cost. While wall-clock time is the primary cost metric, comparing different methods against it is challenging due to very different setups (language, architecture, parallelism, etc.). Instead, we rely on recall vs. % searched curves, widely considered a reasonable proxy for wall-clock time modulo other setup/environment changes (Guo et al., 2020).
### Overview of EHI
At a high level, EHI has three key components: Encoder \(\mathcal{E}_{\theta}\), Indexer \(\mathcal{I}_{\phi}\) and Retriever. Parameters \(\theta\) of the query/document encoder and \(\phi\) of the indexer are the trainable parameters of EHI. Unlike most existing techniques, which train the encoder and indexer in a two-step disjoint process, we train both the encoder and indexer parameters jointly with an appropriate loss function; see Section 3.5. Learning the indexer - generally a discontinuous function - is a combinatorial problem that also requires multiple rounds of indexing the entire corpus. However, by modeling the indexer using a hierarchical tree and its "internal representation" as compressed path embedding, we demonstrate that the training and retrieval with encoder\(+\)indexer can be executed efficiently and effectively.
In the following sections, we provide details of the encoder and indexer components. In Section 3.4, we detail how encoder\(+\)indexer can be used to retrieve specific documents for a given query, which is used both for inference and hard-negative mining during training. Section 3.5 provides an overview of the training procedure. Finally, Section 3.6 summarizes how documents are ranked after retrieval.
### Encoder \(\mathcal{E}_{\theta}\): Dense embedding of query/documents
Our method is agnostic to the architecture used for the dual encoder, but for simplicity we use a standard dual encoder (Ni et al., 2021) to map input queries and documents to a common vector space. That is, the encoder \(\mathcal{E}_{\theta}\), parameterized by \(\theta\), maps a query (\(q\in\mathcal{Q}\)) and a document (\(d\in\mathcal{D}\)) to a common vector space: \(\mathcal{E}_{\theta}(q)\in\mathbb{R}^{m}\) and \(\mathcal{E}_{\theta}(d)\in\mathbb{R}^{m}\), where \(m\) is the embedding size of the model (768 here). While such an encoder can also be multi-modal as well as multi-vector, for simplicity we mainly focus on standard textual data with a single embedding per query/document. We use the standard _BERT architecture_ for the encoder \(\mathcal{E}_{\theta}\) and initialize the parameters \(\theta\) using a pre-trained Sentence-BERT distilbert model (Reimers and Gurevych, 2019). Our base model has 6 layers, 768 dimensions, and 12 heads, with 66 million parameters. We then fine-tune the final layer of the model for the target downstream dataset.
### Indexer \(\mathcal{I}_{\phi}\): Indexing of query/document in the hierarchical data structure
EHI's indexer (\(\mathcal{I}_{\phi}\)) is a tree with height \(H\) and branching factor \(B\). Each tree node contains a _classifier_ that provides a distribution over its children. So, given a query/document, we can find out the leaf nodes that the query/document indexes into, as well as the _probabilistic_ path taken in the tree.
The final leaf nodes reached by the query are essential for retrieval. But, we also propose to use the path taken by a query/document in the tree as an _embedding_ of the query/document - which can be used in training through the loss function. However, the path a query/document takes is an object in an exponentially large (in height \(H\)) vector space, owing to \(B^{H}\) leaf nodes, making it computationally intractable even for a small \(H\) and \(B\).
Instead, below, we provide a significantly more compressed _path embedding_ - denoted by \(\mathcal{T}(\cdot;\phi)\) and parameterized by \(\phi\) - embeds any given query or document in a relatively low-dimensional (\(B\cdot H\)) vector space. For simplicity, we denote the query and the document path embedding as \(\mathcal{T}_{\phi}(q)=\mathcal{T}(\mathcal{E}_{\theta}(q);\phi)\) and \(\mathcal{T}_{\phi}(d)=\mathcal{T}(\mathcal{E}_{\theta}(d);\phi)\), respectively.
We construct the path embedding of a query/document as:

\[\mathcal{T}_{\phi}(q)=\mathcal{T}(\mathcal{E}_{\theta}(q);\phi)=[\mathbf{p}^{H};\mathbf{p}^{H-1};\ldots;\mathbf{p}^{1}],\]
where \(\mathbf{p}^{h}\in[0,1]^{B}\) denotes the probability distribution of children nodes for a parent at height \(h\). For a given leaf \(l\), say the path from the root node is defined as \(\mathbf{l}=[i_{l}^{1},i_{l}^{2},\ldots,i_{l}^{H}]\) where \(i_{l}^{h}\in[1\ldots B]\) for \(h\in[H]\). The probability at a given height in a path is approximated using a height-specific simple feed-forward neural network parameterized by \(\mathbf{W}_{h+1}\in\mathbb{R}^{(B\cdot h+m)\times B}\) and \(\mathbf{U}_{h+1}\in\mathbb{R}^{(B\cdot h+m)\times(B\cdot h+m)}\) (\(m\) is the embedding size). That is,
\[\mathbf{p}^{h+1}=\texttt{Softmax}(\mathbf{W}_{h+1}^{\top}\mathcal{F}([\mathbf{o}(i_{l}^{h});\mathbf{o}(i_{l}^{h-1});\ldots;\mathbf{o}(i_{l}^{1});\mathcal{E}_{\theta}(q)];\mathbf{U}_{h+1}))\cdot\mathbf{p}^{h}[i_{l}^{h}] \tag{1}\]
where one-hot-vector \(\mathbf{o}(i)\) is the \(i\)-th canonical basis vector and \(\mathcal{F}\) is a non-linear transformation given by \(\mathcal{F}(\mathbf{x};\mathbf{U}_{h})=\mathbf{x}+\texttt{ReLU}(\mathbf{U}_{h}^ {\top}\mathbf{x})\).
In summary, the path embedding for height 1 represents a probability distribution over the leaves. During training, we compute path embedding for higher heights for only the most probable path, ensuring that the summation of leaf node logits remains a probability distribution. Also, the indexer and path embedding function \(\mathcal{T}(\cdot;\phi)\) has the following collection of trainable parameters: \(\phi=\{\mathbf{W}_{H},\ldots,\mathbf{W}_{1},\mathbf{U}_{H},\ldots,\mathbf{U}_ {1}\}\), which we learn by optimizing a loss function based on the path embeddings; see Section 3.5.
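To make the recursion in equation 1 concrete, below is a minimal, greedy PyTorch sketch of the path-embedding computation; the random \(\mathbf{W}_h\), \(\mathbf{U}_h\) stand in for the learned parameters \(\phi\), and following only the argmax child is a simplification of the most-probable-path computation described above.

```python
# Greedy PyTorch sketch of the path embedding in equation 1. Random W_h, U_h
# stand in for the learned parameters phi; B, H, m are toy values.
import torch
import torch.nn.functional as F

B, H, m = 8, 2, 16   # branching factor, height, embedding size

W = [torch.randn(B * h + m, B) for h in range(H)]
U = [torch.randn(B * h + m, B * h + m) for h in range(H)]

def path_embedding(e):                        # e = E_theta(q), shape (m,)
    x, prob, parts = e, torch.tensor(1.0), []
    for h in range(H):
        z = x + F.relu(U[h].T @ x)            # F(x; U_h) = x + ReLU(U_h^T x)
        p = F.softmax(W[h].T @ z, dim=0) * prob  # children distribution, rescaled
        parts.append(p)
        i = int(p.argmax())                   # follow only the most probable child
        prob = p[i]
        x = torch.cat([F.one_hot(torch.tensor(i), B).float(), x])
    return torch.cat(parts[::-1])             # [p^H; ...; p^1], length B*H

print(path_embedding(torch.randn(m)).shape)   # torch.Size([16])
```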
### Retriever: Indexing items for retrieving
Indexing and retrieval form the backbone of any search structure. \(\mathrm{EHI}\) efficiently encodes the index path of queries and documents in a \((B\cdot H)\)-dimensional embedding space. During retrieval for a query \(q\), \(\mathrm{EHI}\) explores the tree structure to find the "most relevant" leaves and retrieves documents associated with those leaves. Retrieval requires the encoder and indexer parameters \((\theta,\phi)\) along with a leaf-to-document hashmap \(\mathcal{M}\).
The relevance of a leaf \(l\) for a query \(q\) is measured by the probability of the query reaching the leaf at height \(H\), \(\mathcal{P}(q,l,H)\). Recall from the previous section that the path to a leaf \(l\) is defined as \(\mathbf{l}=[i_{l}^{1},i_{l}^{2},\ldots,i_{l}^{H}]\) where \(i_{l}^{h}\in[1\ldots B]\) for \(h\in[H]\). The probability of a given query \(q\in\mathcal{Q}\) reaching an arbitrary leaf \(l\in\texttt{Leaves}\) can be computed as \(\mathcal{P}(q,l,H)=\mathbf{p}^{H}[i_{l}^{H}]\) using equation 1. But we only need to compute the most probable leaves for every query during inference, which we obtain using the standard beam-search procedure summarized below:
1. For each parent node \(p\in P\) at height \(h-1\), compute the probability of reaching each of its children: \(\hat{S}=\cup_{p\in P}\{\mathcal{P}(q,c,h):c\in\texttt{child}(p)\}\).
2. Keep top \(\beta\) children based on score \(\hat{S}\) and designate them as the parents for the next height.
3. Repeat steps 1 and 2 until the leaf nodes are reached.
Once we select \(\beta\) leaves, \(\mathrm{EHI}\) retrieves the documents associated with each leaf, which are stored in the hashmap \(\mathcal{M}\). To compute this hashmap, \(\mathrm{EHI}\) indexes each document \(d\in\mathcal{D}\) (similar to a query) with \(\beta=1\). Here, \(\beta=1\) is a design choice considering memory and space requirements and is kept as a tuneable parameter. Algorithm 2 in the appendix depicts the approach used by our Indexer for better understanding.
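A compact sketch of this beam-search retrieval is given below; `child_probs` is a hypothetical wrapper around the node classifiers of equation 1, and `leaf_to_docs` plays the role of the hashmap \(\mathcal{M}\).

```python
# Sketch of beam-search retrieval (Section 3.4) over the learned tree.
import heapq

def retrieve(q, child_probs, leaf_to_docs, H, beta):
    beam = [(1.0, ())]                               # (path probability, path)
    for _ in range(H):
        candidates = [(prob * p, path + (child,))
                      for prob, path in beam
                      for child, p in enumerate(child_probs(q, path))]
        beam = heapq.nlargest(beta, candidates)      # keep top-beta nodes per level
    docs = set()                                     # union over the beta leaves
    for _, leaf in beam:
        docs.update(leaf_to_docs.get(leaf, ()))
    return docs

# Toy usage: a fixed binary tree of height 2 with two documents per leaf.
probs = lambda q, path: [0.3, 0.7]
index = {(a, b): {f"d{a}{b}0", f"d{a}{b}1"} for a in (0, 1) for b in (0, 1)}
print(retrieve("query", probs, index, H=2, beta=2))
```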
### Training EHI

EHI trains the encoder and indexer parameters \((\theta,\phi)\) jointly over triplets \((q,d_{+},d_{-})\) of a query, a relevant document, and an irrelevant document, using the standard margin-based triplet loss

\[L(\mathbf{u},\mathbf{v}_{+},\mathbf{v}_{-})=\left[\gamma-\texttt{SIM}(\mathbf{u},\mathbf{v}_{+})+\texttt{SIM}(\mathbf{u},\mathbf{v}_{-})\right]_{+},\]
where we penalize if the similarity between query \(q\) and an _irrelevant_ document \(d_{-}\) (\(y(q,d_{-})\neq 1\)) is within the \(\gamma\) margin of the corresponding similarity between \(q\) and a relevant document \(d_{+}\) (\(y(q,d_{+})=1\)). We now define the following three loss terms:
1. **Semantic Similarity**: the first term is a standard dual-encoder contrastive loss between a relevant document \(d_{+}\) - i.e., \(y(q,d_{+})=+1\) - and an _irrelevant_ document with \(y(q,d_{-})\neq 1\). \[L_{\text{siamese}}=L(\mathcal{E}_{\theta}(q),\mathcal{E}_{\theta}(d_{+}), \mathcal{E}_{\theta}(d_{-});\theta)\] (3)
2. **Indexing Similarity**: the second term is essentially a similar contrastive loss over the query, relevant-doc, irrelevant-doc triplet, but where the query and documents are represented using the path-embedding \(\mathcal{T}_{\phi}(\cdot)\) given by the indexer \(\mathcal{I}_{\phi}\). \[L_{\text{indexing}}=L(\mathcal{T}_{\phi}(q),\mathcal{T}_{\phi}(d_{+}), \mathcal{T}_{\phi}(d_{-});\theta,\phi)\] (4)
3. **Intra-leaf Similarity**: to spread out irrelevant docs, the third loss applies a triplet loss over the sampled relevant and irrelevant documents for a query \(q\). Note that we apply the loss only if the two docs are semantically dissimilar according to the latest encoder, i.e., \(\texttt{SIM}(\mathbf{a},\mathbf{b})=\frac{\mathbf{a}^{T}\mathbf{b}}{|\mathbf{a}||\mathbf{b}|}<\tau\) for a pre-specified threshold \(\tau=0.9\). \[L_{\text{intra-leaf}}=\mathbf{1}\{\texttt{SIM}(\mathcal{E}_{\theta}(d_{+}),\mathcal{E}_{\theta}(d_{-}))<\tau\}L(\mathcal{T}_{\phi}(d_{+}),\mathcal{T}_{\phi}(d_{-});\theta,\phi)\] (5)
The final loss function \(\mathcal{L}\) is given as the weighted sum of the above three losses:
\[\mathcal{L}(q,d_{+},d_{-};\theta,\phi)=\lambda_{1}L_{\text{siamese}}+\lambda_ {2}L_{\text{indexing}}+\lambda_{3}L_{\text{intra-leaf}} \tag{6}\]
Here \(\gamma\) is set to 0.3 for all loss components, and \(\lambda_{1},\lambda_{2},\lambda_{3}\) are tuneable hyper-parameters. Our trainer (see Algorithm 1) learns \(\theta\) and \(\phi\) by optimizing \(\mathcal{L}\) using standard techniques; for our implementation we used AdamW (Loshchilov and Hutter, 2017).
Note that the loss function only uses in-batch documents' encoder embeddings and path embeddings, i.e., we are not even required to index all the documents in the tree structure, thus allowing efficient joint training of both encoder and indexer. To ensure fast convergence, we use hard negatives mined from the indexed leaves of a given query \(q\), for which we require documents to be indexed in the tree. But this procedure needs to be done only once every \(r\) steps, where \(r\) is a hyper-parameter set to 5 by default across our experiments. We would like to stress that existing methods like DSI, NCI, or ANCE not only have to use stale indexing of documents, but they also use stale or even fixed indexers - e.g., DSI and NCI learn a fixed semantic structure over docids using one-time hierarchical clustering. In contrast, EHI jointly updates the indexer and the encoder in each iteration, and thus can better align the embeddings with the tree/indexer.
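Putting the pieces together, a sketch of the combined objective for a single triplet is shown below, with cosine similarity as \(\texttt{SIM}\); gating the intra-leaf term with \(d_+\) as the anchor is one plausible reading of equation 5, and `enc`/`path_emb` are assumed callables for \(\mathcal{E}_{\theta}\) and \(\mathcal{T}_{\phi}\).

```python
# Sketch of EHI's training objective (equation 6) for one (q, d+, d-) triplet.
import torch
import torch.nn.functional as F

def triplet(a, p, n, gamma=0.3):
    # [gamma - SIM(a, p) + SIM(a, n)]_+ with cosine similarity as SIM.
    return F.relu(gamma - F.cosine_similarity(a, p, dim=0)
                        + F.cosine_similarity(a, n, dim=0))

def ehi_loss(q, d_pos, d_neg, enc, path_emb, lam=(1.0, 1.0, 1.0), tau=0.9):
    eq, ep, en = enc(q), enc(d_pos), enc(d_neg)            # encoder embeddings
    tq, tp, tn = path_emb(eq), path_emb(ep), path_emb(en)  # path embeddings
    l_siamese = triplet(eq, ep, en)                        # equation 3
    l_indexing = triplet(tq, tp, tn)                       # equation 4
    gate = (F.cosine_similarity(ep, en, dim=0) < tau).float()
    l_intra_leaf = gate * triplet(tp, tp, tn)              # one reading of eq. 5
    return lam[0] * l_siamese + lam[1] * l_indexing + lam[2] * l_intra_leaf

# Toy usage with identity "encoders" over random vectors:
enc = path_emb = lambda v: v
loss = ehi_loss(torch.randn(8), torch.randn(8), torch.randn(8), enc, path_emb)
```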
### Re-ranking and Evaluation
This section describes the re-ranking step and the test-time evaluation process after retrieval. In Section 3.4, we discussed how each document is indexed; we thus have a learned document-to-leaf mapping of size \(d\times l\), where \(d\) is the corpus size and \(l\) is the number of leaves. Given a query at test time, we perform a forward pass similar to the indexing pipeline presented in Section 3.4 and find the top-\(b\) leaves (\(b\) here is the beam size) the given query reaches. We collate all the documents that reached these \(b\) leaves (a set operation, to avoid any repetition of the same documents across multiple leaves) and rank them based on an appropriate similarity metric such as cosine similarity, dot product, Manhattan distance, etc. We use the cosine similarity metric for ranking throughout the experiments presented in Section 4.2.
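A minimal sketch of this re-ranking step, assuming document embeddings are precomputed and the candidate IDs are the union of documents from the top-\(b\) leaves:

```python
# Sketch of re-ranking: score the de-duplicated candidate documents from the
# top-b leaves by cosine similarity to the query embedding. Names illustrative.
import torch
import torch.nn.functional as F

def rerank(q_emb, candidate_ids, doc_embs, top_k=10):
    cands = torch.tensor(sorted(candidate_ids))           # de-duplicated ids
    scores = F.cosine_similarity(doc_embs[cands], q_emb.unsqueeze(0), dim=1)
    order = scores.argsort(descending=True)[:top_k]
    return cands[order].tolist()

print(rerank(torch.randn(32), {3, 17, 42}, torch.randn(100, 32), top_k=2))
```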
## 4 Experiments
In this section, we present an empirical evaluation of EHI on standard dense retrieval benchmarks. The goal of the empirical evaluation is twofold: (a) highlight that training both the encoder and ANNS in an end-to-end fashion (EHI) is more favorable than training them in a disjoint fashion (off-the-shelf indexers such as ScaNN, Faiss-IVF, etc.), and (b) understand EHI's stability w.r.t. various hyper-parameters and how to set them appropriately.
We note that due to the typical scale of retrieval systems, a method's ability to retrieve relevant documents under a strict latency budget is critical and defines its success. So, we would want to compare query throughput against recall/MRR, but obtaining head-to-head latency numbers is challenging as different systems are implemented using different environments and optimizations. Hence, following standard practice in the ANNS community, we use the fraction of documents visited/searched as a proxy for latency (Jayaram Subramanya et al., 2019; Guo et al., 2020). Appendix C provides the exact training hyperparameters of EHI.
### Experimental Setup
Datasets: We evaluate EHI on four standard but diverse retrieval datasets of increasing size: SciFact (Wadden et al., 2020), FIQA (Maia et al., 2018), MS MARCO (Bajaj et al., 2016), and NQ320\(k\) (Kwiatkowski et al., 2019). Appendix A provides additional details about these datasets.
Baselines. We consider five baseline methods to evaluate against EHI. In particular, the baselines DE+Exact-search, DE+ScaNN, and DE+Faiss-IVF are standard dense retrieval methods with a dual-encoder (DE) architecture (Menon et al., 2022) trained using a Siamese loss (Chopra et al., 2005). The three methods use three different ANNS methods for retrieval: Exact-search1, ScaNN (Guo et al., 2020), and Faiss-IVF (Johnson et al., 2019). DSI (Tay et al., 2022) and NCI (Wang et al., 2022) are the remaining two main baselines. We report DSI numbers on MS MARCO using an implementation validated by the authors. However, we note that NCI fails to scale to large datasets like MS MARCO. For EHI and the baseline dual-encoder (DE) models, we use a pre-trained Sentence-BERT (Reimers and Gurevych, 2019) fine-tuned appropriately on the downstream dataset using a contrastive loss. For the DE baselines, only the encoder is fine-tuned, while the ANNS structure (off-the-shelf indexers) is built on top of the learned representations.
Footnote 1: Performance metric when 100% of documents are visited (see Figure 2, Figure 5).
### Results
SciFact. We first start with the small-scale SciFact dataset. Figure 5(a) and Table 8 compare EHI to the three DE baselines. Clearly, EHI's recall-compute curve dominates that of DE+ScaNN and DE+Faiss-IVF. For example, when allowed to visit/search about 10% of documents, EHI obtains up to **+15.64%** higher Recall@\(100\). Furthermore, EHI can outperform DE+Exact Search with a **60%** reduction in latency. Finally, representations from EHI's encoder with exact search can be as much as **4%** more accurate (in terms of Recall@\(100\)) than the baseline dual-encoder + Exact Search, indicating the effectiveness of EHI's integrated hard negative mining.
MS MARCO. Next, we evaluate EHI on the MS MARCO benchmark, both on the standard dev set as well as the TREC DL-19 set (Craswell et al., 2020). We compare against the standard Sentence-BERT model (Huggingface, 2019), fine-tuned on MS MARCO, with Exact Search (see Table 10).
For the _standard dev set_, \(\mathrm{EHI}\) is able to match or surpass the accuracy of baseline Exact Search with an **80%** reduction in the number of documents visited. This is in stark contrast to the baseline DE+ScaNN and DE+Faiss-IVF methods, which require visiting more than double the documents, i.e., almost \(50\%\) of the corpus. Furthermore, when restricted to visiting only \(1\%\) of the documents, \(\mathrm{EHI}\) obtains **0.6%** higher nDCG@10 than DE+ScaNN and DE+Faiss-IVF. Note that such a gain is quite significant for the highly challenging and well-studied MS MARCO dataset. We also compare \(\mathrm{EHI}\) against DSI on this dataset. We note that the DSI base model with 250M parameters is almost _four times_ the size of the current \(\mathrm{EHI}\) model. After multiple weeks of DSI training with docQuery + atomic id + base model, DSI's MRR@10 value is \(26\%\), which is about **6%** lower than \(\mathrm{EHI}\) with just \(1\%\) visited documents. Also note that despite significant efforts, we could not scale the NCI code (Wang et al., 2022) to MS MARCO due to the dataset size; the NCI paper does not provide accuracy numbers on the MS MARCO dataset.
For the _TREC DL-19 set_, \(\mathrm{EHI}\) is able to match or surpass the nDCG@10 of baseline Exact Search with a **78%** reduction in latency. Furthermore, when restricted to visiting 1% of the documents, \(\mathrm{EHI}\) achieves **4.2%** higher nDCG@10 than DE+ScaNN and DE+Faiss-IVF.
For completeness, we compare \(\mathrm{EHI}\)'s accuracy against SOTA methods for this dataset that use a similar-sized encoder. Note that these methods often use techniques complementary and analogous to \(\mathrm{EHI}\), such as multi-vector similarity, and can be combined with \(\mathrm{EHI}\). Nonetheless, we observe that \(\mathrm{EHI}\) is competitive with SOTA techniques like ColBERT with a similar encoder and is significantly more accurate than traditional DE methods like ANCE, HNSW, etc. Appendix D provides a more detailed comparison of the \(\mathrm{EHI}\) encoder against other SOTA encoders on the MS MARCO dataset. Note that any of these encoders could replace the Distilbert model used in \(\mathrm{EHI}\); this comparison only serves to show the efficacy of the learned representations (see Table 1).
**NQ320\(k\).** Finally, we present an evaluation on the standard NQ320k (Kwiatkowski et al., 2019) benchmark, in the setting studied by the NCI paper (Wang et al., 2022). \(\mathrm{EHI}\) matches or surpasses the accuracy of baseline Exact Search with a **60%** reduction in latency. Furthermore, when limited to the same compute budget, \(\mathrm{EHI}\) outperforms DE+ScaNN and DE+Faiss-IVF by up to **0.4%** in Recall@10.
_Comparison to DSI/NCI_: Note that \(\mathrm{EHI}\) is able to significantly outperform DSI and NCI (without query generation) despite NCI utilizing a \(10\times\) larger encoder! Furthermore, even with query generation, NCI is \(0.5\%\) and \(\sim 2\%\) less accurate than \(\mathrm{EHI}\) on Recall@10 and Recall@100 metrics, respectively. (see Table 2)
Our observations about \(\mathrm{EHI}\) are statistically significant, as evidenced by the p-value tests in Appendix E.3. Additional experiments, such as the number of documents per leaf, robustness to initialization, and qualitative analysis of the leaves of the indexer learned by the \(\mathrm{EHI}\) model, are presented in Appendix E.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & **MRR@10** & **nDCG@10** \\ \hline BM25 & 16.7 & 22.8 \\ HNSW & 28.9 & - \\ DyNNIBAL & 33.4 & - \\ DeepCT & 24.3 & 29.6 \\ DSI & 26.0 & 32.28 \\ ANCE & 33.0 & 38.8 \\ TAS-B & - & 40.8 \\ GenQ & - & 40.8 \\ ColBERT & 36.0 & 40.1 \\ \hline \hline EHI (distilbert-cos; **Ours**) & 33.8 & 39.4 \\ EHI (distilbert-dot; **Ours**) & **37.2** & **43.3** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance metrics (\(\%\)) evaluated on the MS MARCO dev dataset. The best value for each metric is indicated in **bold** font. For an exhaustive comparison, refer to Table 5 in the appendix.
\begin{table}
\begin{tabular}{l c c} \hline \hline Method & **R@10** & **R@100** \\ \hline BM25 & 32.48 & 50.54 \\ BERT + BruteForce & 53.42 & 73.16 \\ BM25 + DocT5Query & 61.83 & 76.92 \\ ANCE (MaxP) & 80.38 & 91.31 \\ SEAL (Large) & 81.24 & 91.93 \\ DyNNIBAL & 75.4 & 86.2 \\ DSI & 70.3 & - \\ NCI (Large) & 85.27 & 92.49 \\ NCI _w/ qg-ft_ (Large) & 88.45 & 94.53 \\ \hline EHI (distilbert-cos; **Ours**) & **88.98** & **96.3** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance metrics (\(\%\)) evaluated on the NQ320\(k\) dataset (Kwiatkowski et al., 2019). The best value for each metric is indicated in **bold** font. Please refer to Table 7 for more methods.
### Ablations
In the previous section, we demonstrated the effectiveness of EHI against multiple baselines on diverse benchmarks. In this section, we report results from multiple ablation studies to better understand the behavior of EHI. Additional properties such as load balancing, effect of negative mining refresh factor, and other properties of EHI are discussed in Appendix E.
**Effect of branching factor.** Figure 3(a) shows Recall@\(100\) of EHI on SciFact with varying branching factors. We consider two versions of EHI: one with exact search, i.e., beam size equal to the branching factor (B), and one with beam size \(\approx 0.1\times\) the branching factor, i.e., restricted to about 10% of visited documents. Interestingly, for EHI + Exact Search, the accuracy decreases with a higher branching factor, while it increases for the smaller beam size of \(0.1\times B\). We attribute this to documents in a leaf node being very similar to each other for high branching factors (fewer points per leaf). We hypothesize that EHI is then sampling highly relevant documents as hard negatives, leading to a lower exact-search accuracy.
**Ablation w.r.t loss components.** Next, on the FIQA dataset, we study the performance of EHI when one of the loss components in equation 6 is turned off; see Figure 3(b). First, we observe that EHI outperforms the other three vanilla variants, implying that each loss term contributes non-trivially to the performance of EHI. Next, we observe that removing the document-similarity-based loss term (\(\lambda_{3}\)), equation 5, has the least effect on the performance of EHI, as the other two loss terms already capture some of its desired consequences. However, turning off the contrastive loss on either the encoder embeddings (\(\lambda_{1}\)), equation 3, or the path embeddings (\(\lambda_{2}\)), equation 4, leads to a significant loss in accuracy. This also indicates the importance of jointly and accurately learning both the encoder and indexer parameters.
**Effect of hard negative sampling.** Figure 3(c) shows Recall@100 with and without hard-negative mining using the learned indexer (see Algorithm 1) on FIQA. EHI with hard negative sampling improves Recall@100 significantly, by \(\mathbf{3.1}\%\), clearly demonstrating its importance.
**Effect of height.** We study the accuracy of EHI when extending to multiple heights of the tree structure, to extend its effectiveness in accurately indexing extensive web-scale document collections. Traditional indexing methods that rely on a single-height approach can be computationally impractical and sub-optimal when dealing with billions or more documents. To address this challenge, EHI treats height as a hyperparameter and learns the entire tree structure end-to-end. Our experimental results in Figure 3(d) demonstrate that trees with \(H=2\) exhibit performance on MS MARCO similar to that with \(H=1\). This extension enhances scalability and efficiency when indexing large web-scale datasets. For instance, for EHI trained on SciFact with an equal number of leaves, we notice a significant speedup with increasing height; at \((B=64,H=1)\), \((B=8,H=2)\), and \((B=4,H=3)\), we observe per-query latencies of \(2.48\) ms, \(2.40\) ms, and \(1.99\) ms, respectively, at the same computation budget. This extension to a hierarchical k-ary tree is absolutely necessary for scalability and is discussed in further detail in Appendix E.6.
## 5 Conclusions, Limitations, and Future Work
We presented EHI, a framework and paradigm shift to jointly learn both the query/document encoder and the search indexer to retrieve documents efficiently. EHI is composed of three key components: encoder, indexer, and retriever; the indexer generates compressed, low-dimensional path embeddings of queries/documents in the tree, which is key to the joint training of encoder and indexer. We demonstrated the effectiveness of EHI on a variety of standard benchmarks. Currently, path embeddings are mainly an intuitive construct without formal understanding. In the future, understanding path embeddings and providing rigorous guarantees should be of significant interest. Furthermore, combining EHI with encoders that output hierarchical representations like matryoshka embeddings (Kusupati et al., 2022), or integrating it with RGD (Kumar et al., 2023) to further improve generalization on tail queries, should also be of interest. Finally, this paper addresses an abstract and established problem, so we don't expect any significant additional societal implications from this work.

Figure 3: Ablation study of four major components in EHI to evaluate their contributions towards jointly learned representation and ANNS structure for state-of-the-art dense retrieval.
|
2306.04795
|
Feature Selection using Sparse Adaptive Bottleneck Centroid-Encoder
|
We introduce a novel nonlinear model, Sparse Adaptive Bottleneck
Centroid-Encoder (SABCE), for determining the features that discriminate
between two or more classes. The algorithm aims to extract discriminatory
features in groups while reconstructing the class centroids in the ambient
space and simultaneously use additional penalty terms in the bottleneck layer
to decrease within-class scatter and increase the separation of different class
centroids. The model has a sparsity-promoting layer (SPL) with a one-to-one
connection to the input layer. Along with the primary objective, we minimize
the $l_{2,1}$-norm of the sparse layer, which filters out unnecessary features
from input data. During training, we update class centroids by taking the
Hadamard product of the centroids and weights of the sparse layer, thus
ignoring the irrelevant features from the target. Therefore the proposed method
learns to reconstruct the critical components of class centroids rather than
the whole centroids. The algorithm is applied to various real-world data sets,
including high-dimensional biological, image, speech, and accelerometer sensor
data. We compared our method to different state-of-the-art feature selection
techniques, including supervised Concrete Autoencoders (SCAE), Feature
Selection Networks (FsNet), Stochastic Gates (STG), and LassoNet. We
empirically showed that SABCE features often produced better classification
accuracy than other methods on the sequestered test sets, setting new
state-of-the-art results.
|
Tomojit Ghosh, Michael Kirby
|
2023-06-07T21:37:21Z
|
http://arxiv.org/abs/2306.04795v2
|
# Feature Selection using Sparse Adaptive Bottleneck Centroid-Encoder
###### Abstract
We introduce a novel nonlinear model, Sparse Adaptive Bottleneck Centroid-Encoder (SABCE), for determining the features that discriminate between two or more classes. The algorithm aims to extract discriminatory features in groups while reconstructing the class centroids in the ambient space and simultaneously use additional penalty terms in the bottleneck layer to decrease within-class scatter and increase the separation of different class centroids. The model has a sparsity-promoting layer (SPL) with a one-to-one connection to the input layer. Along with the primary objective, we minimize the \(l_{2,1}\)-norm of the sparse layer, which filters out unnecessary features from input data. During training, we update class centroids by taking the Hadamard product of the centroids and weights of the sparse layer, thus ignoring the irrelevant features from the target. Therefore the proposed method learns to reconstruct the critical components of class centroids rather than the whole centroids. The algorithm is applied to various real-world data sets, including high-dimensional biological, image, speech, and accelerometer sensor data. We compared our method to different state-of-the-art feature selection techniques, including supervised Concrete Autoencoders (SCAE), Feature Selection Networks (FsNet), Stochastic Gates (STG), and LassoNet. We empirically showed that SABCE features often produced better classification accuracy than other methods on the sequestered test sets, setting new state-of-the-art results.
Sparse Adaptive Bottleneck Centroid-Encoder
## 1 Introduction
Technological advancement has made high-dimensional data readily available. For example, in bioinformatics, the researchers seek to understand the gene expression level with microarray or next-generation sequencing techniques where each point consists of over 50,000 measurements (Pease et al., 1994; Shalon et al., 1996; Metzker, 2010; Reuter et al., 2015). The abundance of features demands the development of feature selection algorithms to improve a Machine Learning task, e.g., classification. Another important aspect of feature selection is knowledge discovery from data. Which biomarkers are important to characterize a biological process, e.g., the immune response to infection by respiratory viruses such as
influenza (O'Hara et al., 2013)? Additional benefits of feature selection include improved visualization and understanding of data, reducing storage requirements, and faster algorithm training times.
Feature selection can be accomplished in various ways that can be broadly categorized as filter, wrapper, and embedded methods. In a filter method, each variable is ordered based on a score. After that, a threshold is used to select the relevant features (Lazar et al., 2012). Variables are usually ranked using correlation (Guyon and Elisseeff, 2003; Yu and Liu, 2003), and mutual information (Vergara and Estevez, 2014; Fleuret, 2004). In contrast, a wrapper method uses a model and determines the importance of a feature or a group of features by the generalization performance of the predetermined model (El Aboudi and Benhlima, 2016; Hsu et al., 2002). Since evaluating every possible combination of features becomes an NP-hard problem, heuristics are used to find a subset of features. Wrapper methods are computationally intensive for larger data sets, in which case search techniques like Genetic Algorithm (GA) (Goldberg and Holland, 1988) or Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995) are used. In embedded methods, feature selection criteria are incorporated within the model, i.e., the variables are picked during the training process (Lal et al., 2006). Iterative Feature Removal (IFR) uses the ratio of absolute weights of a Sparse SVM model as a criterion to extract features from the high dimensional biological data set (O'Hara et al., 2013).
Mathematically, the feature selection problem can be posed as an optimization problem on the \(\ell_{0}\)-norm, i.e., how many predictors are required for a machine learning task. As the minimization of \(\ell_{0}\) is intractable (non-convex and non-differentiable), the \(\ell_{1}\)-norm is used instead, which is a convex proxy of \(\ell_{0}\) (Tibshirani, 1996). Although \(\ell_{1}\) has been used for feature selection in the linear (Fonti and Belitser, 2017; Muthukrishnan and Rohini, 2016; Kim and Kim, 2004; O'Hara et al., 2013; Chepushtanova et al., 2014) as well as the nonlinear regime (Li et al., 2016; Scardapane et al., 2017; Li et al., 2020), it has some disadvantages as well. For example, when multi-collinearity exists in the data (i.e., two or more independent features have a high correlation with one another), \(\ell_{1}\) selects one of them and discards the rest, degrading the prediction performance (Zou and Hastie, 2005); this problem can be mitigated using the iterative feature removal scheme proposed by O'Hara et al. (2013). It has also been reported that minimizing the Lasso does not satisfy the Oracle property (Zou, 2006). ElasticNet (Zou and Hastie, 2005), on the other hand, overcomes some limitations of Lasso by combining the \(\ell_{2}\)-norm with the \(\ell_{1}\)-norm.
This paper proposes a new embedded variable selection approach called Sparse Adaptive Bottleneck Centroid-Encoder (SABCE) to extract features when class labels are available. Our method modifies the Centroid-Encoder model (Ghosh et al., 2018; Ghosh and Kirby, 2022) by incorporating two penalty terms in the bottleneck layer to increase class separation and localization. SABCE applies a \(\ell_{2,1}\) penalty to a sparsity-promoting layer between the input and the first hidden layer while reconstructing the class centroids. One key attribute of SABCE is the adaptive centroids update during training, distinguishing it from Centroid-Encoder, which has a fixed class centroid. We evaluate the proposed model on diverse data sets and show that the features produce better generalization than other state-of-the-art techniques.
## 2 Sparse Adaptive Bottleneck Centroid-Encoder (SABCE)
Consider a data set \(X=\{x_{i}\}_{i=1}^{N}\) with \(N\) samples and \(M\) classes, where \(x_{i}\in\mathbb{R}^{d}\). The classes are denoted \(C_{j},\,j=1,\ldots,M\), where the indices of the data associated with class \(C_{j}\) are denoted \(I_{j}\). We define the centroid of each class as \(c_{j}=\frac{1}{|C_{j}|}\sum_{i\in I_{j}}x_{i}\), where \(|C_{j}|\) is the cardinality of class \(C_{j}\).
### Bottleneck Centroid-Encoder (BCE)
Given the setup mentioned above, we define Bottleneck Centroid-Encoder, which is the starting point of our proposed algorithm. The objective function of BCE is given below:
\[\mathcal{L}_{bce}(\theta)=\frac{1}{2N}\sum_{j=1}^{M}\sum_{i\in I_{j}}\left(\|c_{j}-f(x_{i};\theta)\|_{2}^{2}+\mu_{1}\|g(c_{j})-g(x_{i})\|_{2}^{2}\right)+\mu_{2}\sum_{k<l}\frac{1}{1+\|g(c_{k})-g(c_{l})\|_{2}^{2}} \tag{1}\]
The mapping \(f\) is composed of a dimension-reducing mapping \(g\) (encoder) followed by a dimension-increasing reconstruction mapping \(h\) (decoder). The first term of the objective minimizes the square of the distance between \(f(x_{i})\) and its class centroid \(c_{j}\). Therefore the aim is to map the sample \(x_{i}\) to its corresponding class centroid \(c_{j}\), and the mapping function \(f\) is known as Centroid-Encoder (Ghosh and Kirby, 2022). The output of the encoder \(g\) is used as a supervised visualization tool (Ghosh and Kirby, 2022; Ghosh et al., 2018). Centroid-Encoder calculates its cost on the output layer; if the centroids of multiple classes are close in ambient space, the corresponding samples will land close in the reduced space, increasing the error rate. To remedy the situation, we add two more terms to the bottleneck layer, i.e., at the output of the encoder \(g\), which we call Bottleneck Centroid-Encoder (BCE). The term \(\|g(c_{j})-g(x_{i})\|_{2}^{2}\) will further pull a sample \(x_{i}\) towards its centroid, which will improve the class localization in reduced space. Further, to avoid the overlap of classes in the latent bottleneck space, we introduce a term that serves to repel centroids there. We achieve this by maximizing the distances (equivalently, the square of the \(\ell_{2}\)-norm) between all class-pairs of latent centroids; the third term fulfills this purpose. Note that, as the original optimization is a minimization problem, we choose to minimize \(\sum_{k<l}\frac{1}{1+\|g(c_{k})-g(c_{l})\|_{2}^{2}}\), which will ultimately increase the distance between the latent centroids of classes \(k\) and \(l\). We added 1 in the denominator for numerical stability. The hyperparameters \(\mu_{1}\) and \(\mu_{2}\) control the class localization and separation in the embedded space. We use a validation set to determine their values.
### Sparse Adaptive Bottleneck Centroid-Encoder for Robust Feature Selection
The Sparse Bottleneck Centroid-Encoder (SBCE) is a modification of the BCE architecture, as shown in Figure 1. The input layer is connected to the first hidden layer via the sparsity-promoting layer (SPL). Each node of the input layer has a weighted one-to-one connection to the corresponding node of the SPL; the number of nodes in these two layers is the same. The nodes in the SPL don't have any bias or non-linearity. The SPL is fully connected to the first hidden layer, so the weighted input from the SPL is passed to the hidden layer in the same way as in a standard feed-forward network. During training, an \(\ell_{2,1}\) penalty, also known as Elastic Net (Zou and Hastie, 2005), is applied to the weights connecting the input layer and the SPL. The sparsity-promoting \(\ell_{2,1}\) penalty drives most of these weights to near zero, and the corresponding input nodes/features can be discarded. Therefore, the purpose of the SPL is to select important features from the original input. Note that we only apply the \(\ell_{2,1}\) penalty to the parameters of the SPL.
Denote \(\theta_{spl}\) to be the parameters (weights) of the SPL and \(\theta\) to be the parameters of the rest of the network. The cost function of sparse bottleneck centroid-encoder is given by
\[\begin{split}\mathcal{L}_{sbce}(\theta,\theta_{spl})=\frac{1}{2N}\sum_{j=1}^{M}\sum_{i\in I_{j}}\left(\|c_{j}-f(x_{i};\theta)\|_{2}^{2}+\mu_{1}\|g(c_{j})-g(x_{i})\|_{2}^{2}\right)\\ +\mu_{2}\sum_{k<l}\frac{1}{1+\|g(c_{k})-g(c_{l})\|_{2}^{2}}+\lambda_{1}\|\theta_{spl}\|_{1}+\lambda_{2}\|\theta_{spl}\|_{2}^{2}\end{split} \tag{2}\]
where \(\lambda_{1},\lambda_{2}\) are the hyperparameters that control the sparsity. A larger value of \(\lambda_{1}\) promotes higher sparsity, resulting in more near-zero weights in the SPL.
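For concreteness, a minimal PyTorch sketch of equation 2 for one batch is shown below; the encoder \(g\), decoder \(h\), and the per-sample centroid targets are assumed given as callables/tensors, and all hyperparameter values are placeholders.

```python
# PyTorch sketch of equation 2 for one batch (toy values throughout).
import torch

def sabce_loss(x, centroids, g, h, theta_spl, mu1, mu2, lam1, lam2):
    # x: (n, d) batch; centroids: (n, d) centroid c_j of each sample's class.
    z, zc = g(x), g(centroids)                       # bottleneck embeddings
    recon = ((centroids - h(z)) ** 2).sum(1)         # ||c_j - f(x_i)||^2
    pull = mu1 * ((zc - z) ** 2).sum(1)              # mu1 ||g(c_j) - g(x_i)||^2
    uc = g(torch.unique(centroids, dim=0))           # latent class centroids
    d2 = torch.cdist(uc, uc) ** 2
    k, l = torch.triu_indices(len(uc), len(uc), offset=1)
    repel = mu2 * (1.0 / (1.0 + d2[k, l])).sum()     # pairwise repulsion term
    sparsity = lam1 * theta_spl.abs().sum() + lam2 * (theta_spl ** 2).sum()
    return 0.5 * (recon + pull).mean() + repel + sparsity

g = h = torch.nn.Identity()                          # toy encoder/decoder
loss = sabce_loss(torch.randn(6, 4), torch.randn(3, 4).repeat(2, 1), g, h,
                  torch.rand(4), mu1=0.5, mu2=0.1, lam1=1e-3, lam2=1e-3)
```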
#### 2.2.1 Sparsification of the Centroids
The targets of the SBCE are the class centroids, which are pre-computed from the data and labels. For high-dimensional datasets, many features are noisy, redundant, or irrelevant (Alelyani et al., 2018); therefore, feature selection with fixed class centroids, computed in the high-dimensional ambient space, may be impacted by the noise. We can remedy the situation by promoting sparsity in the class centroids during training. In this approach, we start with the \(c_{j}\)'s computed in the ambient space; after that, we change the \(c_{j}\)'s by multiplying them component-wise by \(\theta_{spl}\), i.e., \([c_{j}]_{t}=c_{j}\odot[\theta_{spl}]_{t-1}\), where \(t\) is the current epoch. As \(\theta_{spl}\) sparsifies the input data by eliminating redundant and noisy features, updating the centroids as shown above will reduce noise in the targets, thus improving the discriminative power of the selected features. We call this algorithm Sparse Adaptive Bottleneck Centroid-Encoder
Figure 1: The architecture of Bottleneck Centroid-Encoder and Sparse Bottleneck Centroid-Encoder. Notice that Sparse Bottleneck Centroid-Encoder employs a sparse layer between the input and the first hidden layer to promote feature sparsity using \(\ell_{2,1}\) norm.
(SABCE). In Equation 2, we use \([c_{j}]_{t}\) (instead of \(c_{j}\)) to calculate the cost at each iteration. The comparison in Table 1 shows the performance advantage of SABCE over SBCE.
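The centroid update itself is a single Hadamard product per epoch; a minimal sketch follows (treating the masked centroids as fixed targets within an epoch, hence the `detach`, is our assumption here, not something the text states explicitly).

```python
# Sketch of the adaptive centroid update [c_j]_t = c_j ⊙ [theta_spl]_{t-1}.
import torch

def sparsify_centroids(centroids: torch.Tensor, theta_spl: torch.Tensor):
    # detach: the masked centroids act as regression targets, not parameters.
    return centroids * theta_spl.detach()

centroids_t = sparsify_centroids(torch.randn(5, 100), torch.rand(100))
```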
**Training Details:** We implemented SABCE in PyTorch (Paszke et al., 2017) and ran it on Tesla V100 GPUs. We trained SABCE using Adam (Kingma and Ba, 2015) on the whole training set without minibatches. We pre-train our model for ten epochs and then include the sparse layer (SPL) with the weights initialized to 1. We then train the model for another ten epochs to adjust the weights of the SPL. After that, we perform end-to-end training, applying the \(\ell_{2,1}\) penalty to the SPL, for 1000 epochs. Like any neural network-based model, the hyperparameters of SABCE need to be tuned for optimum performance. Table 2 contains the list with the range of values we used in this paper; we used a validation set to choose the optimal values. Section A of the Appendix has information on reproducibility. We will provide the code with a dataset as supplementary material.
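A minimal sketch of the SPL itself, matching the description above (one-to-one weights initialized to 1, no bias, no non-linearity):

```python
# Sketch of the sparsity-promoting layer (SPL) feeding the first hidden layer.
import torch
import torch.nn as nn

class SparsityPromotingLayer(nn.Module):
    def __init__(self, in_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(in_features))  # initialized to 1

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.weight           # elementwise gate on the input features

spl = SparsityPromotingLayer(784)
gated = spl(torch.randn(32, 784))        # then passed to the first hidden layer
```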
#### 2.2.2 Feature Cut-off
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Models} & \multicolumn{5}{c|}{Data set} \\ \cline{2-6} & Mice Protein & MNIST & FMNIST & GLIOMA & Prostate\_GE \\ \hline SBCE & 95.8 & 92.5 & 85.0 & 65.6 & 88.2 \\ \hline SABCE & **99.8** & **94.0** & **85.4** & **74.2** & **90.2** \\ \hline \end{tabular}
\end{table}
Table 1: Comparison between SBCE and SABCE on five benchmarking data sets using top 50 features. We use the same network architecture and hyper parameters in training. We follow the same experimental set up as in Section 3.
\begin{table}
\begin{tabular}{|c|c|} \hline Hyper parameter & Range of Values \\ \hline \# Hidden Layers (L) & \{1, 2\} \\ \hline \# Hidden Nodes (H) & \{50,100,200,250,500\} \\ \hline Activation Function & Hyperbolic tangent (\(\tanh\)) \\ \hline \(\mu_{1},\mu_{2}\) & \{0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7\} \\ \hline \(\lambda_{1},\lambda_{2}\) & \{0.01, 0.001, 0.0001, 0.0002, 0.0004, 0.0006, 0.0008\} \\ \hline \end{tabular}
\end{table}
Table 2: Hyperparameters for Sparse Adaptive Bottleneck Centroid-Encoder.
Figure 2: Feature selection cut off geometry.
As shown in Figure 2, the \(\ell_{2,1}\)-norm of the sparse layer (SPL) drives a lot of weights to near zero. Often hard thresholding or the ratio of two consecutive weights is used to pick the nonzero weights (O'Hara et al., 2013).

In this article, we take a different approach to selecting the set of discriminatory features, as shown in Figure 2. After training SABCE, we arrange the absolute values of the weights of the sparse layer in descending order, forming a curve (the orange one). We then join the first and the last point of the curve with a straight line (the blue dotted line). We measure the distance of each point on the curve to the straight line; the point with the largest distance is the position (P) of the elbow. We pick all the features whose absolute weight is greater than that of P. Figure 2 demonstrates the approach on the GLI_85 and MNIST sets. The red star indicates the position of point P (the elbow), and the absolute weights of features to the left of P are higher than those to the right, selecting only 796 out of 22,283 features from GLI_85 and 137 out of 784 MNIST pixels.
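A small NumPy sketch of this elbow-based cut-off is given below; whether point P itself is included is a boundary choice, and following the text we keep only features whose absolute weight exceeds that of P.

```python
# Sketch of the elbow cut-off: sort |weights| descending, draw a line from the
# first to the last point, and cut at the point farthest from that line.
import numpy as np

def elbow_cutoff(spl_weights: np.ndarray) -> np.ndarray:
    order = np.argsort(np.abs(spl_weights))[::-1]   # features by |weight|, descending
    w = np.abs(spl_weights)[order]                  # the descending curve
    n = len(w)
    x = np.arange(n, dtype=float)
    p1, p2 = np.array([0.0, w[0]]), np.array([n - 1.0, w[-1]])
    # Perpendicular distance of each curve point to the line through p1 and p2.
    num = np.abs((p2[0] - p1[0]) * (p1[1] - w) - (p1[0] - x) * (p2[1] - p1[1]))
    dist = num / np.linalg.norm(p2 - p1)
    P = int(np.argmax(dist))                        # elbow position
    return order[:P]                                # features with |weight| > w[P]

selected = elbow_cutoff(np.random.rand(1000))
print(len(selected))
```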
### Empirical Analysis of SABCE
In this section we present a series of analyses of the proposed model to understand its behavior.
**1. Analysis of Hyper-parameters \(\mu_{1}\) and \(\mu_{2}\):** The hyper-parameters \(\mu_{1}\) and \(\mu_{2}\) control the class scatter and separation in the bottleneck space. We ran an experiment on the MNIST digits to understand the effect of \(\mu_{1}\) and \(\mu_{2}\) on the model's performance. We put aside 20% of samples from each class as a validation set, and the rest of the data set is used to train the model for each combination of \(\mu_{1}\) and \(\mu_{2}\). The validation set is used to compute the error rate using a 5-NN classifier in the two-dimensional space. Figure 3 shows the errors for different combinations of \(\mu_{1}\) and \(\mu_{2}\) in a heat map.
Observe that the error rate increases with \(\mu_{2}\) when \(\mu_{1}\) is zero. The behavior is not surprising, as setting \(\mu_{1}\) to zero nullifies the effect of the second term (see Equation 2), which would hold the samples tightly around their centroid in reduced space. The gradient from the first term will exert a pulling force to bind the samples around their centroids, but the gradient coming from the third term will dominate the gradient of the first term as \(\mu_{2}\) increases. As an effect, the class scatter increases in low-dimensional space, resulting in misclassifications. As soon as \(\mu_{1}\) increases to 0.1, the error rate decreases significantly. After that, a higher value of \(\mu_{2}\) doesn't change the results too much. The minimum validation error occurs for \(\mu_{1}=0.6\) and \(\mu_{2}=0.1\). The analysis reveals that \(\mu_{1}\) is relatively more important than \(\mu_{2}\).

Figure 3: Analysis of error rate with changes to \(\mu_{1}\) and \(\mu_{2}\).
**2. Analysis of Feature Sparsity:** Here, we study sparsity promotion on six representative data sets. Three datasets contain more samples than features: MNIST, ISOLET, and Human Activity. These data sets are also from different domains, namely image, speech, and accelerometer sensors. We also use three biological data sets: ALLAML, GLI_85, and Prostate_GE, where the sample size is significantly smaller when compared to the number of features (see Table 3). We fit our model on the training partition of each data set and then plot the absolute value of the weights of the sparse layer in descending order as shown in Figure 4. As can be seen, the model promotes sparsity in each case, driving most of the weights in the sparse layer to near zero (\(10^{-4}\) to \(10^{-6}\)). The rest of the features have significantly higher values, and our feature cut-off technique distinguishes them successfully, pointing out the number of selected features in each case.
**3. Effect of \(\lambda_{1}\) and \(\lambda_{2}\) on Feature Sparsity:** In Figure 5, we show how the hyper-parameters \(\lambda_{1}\) and \(\lambda_{2}\) control sparsity on the Human Activity data. We fix \(\lambda_{2}\) to 0.001 and run SABCE with different values of \(\lambda_{1}\), which we show in the first row. Observe that the solution becomes less sparse with the decrease of \(\lambda_{1}\). In the second row, we present a similar plot over different values of \(\lambda_{2}\) while fixing \(\lambda_{1}\) to 0.001. The change of \(\lambda_{2}\) doesn't contribute much to the model's sparsity.
Figure 4: Sparsity analysis of SABCE on six data sets. MNIST, ISOLET, and Human Activity have more samples than features, while ALLAML, GLI_85, and Prostate_GE have more features than data points; see Table 3 for more details. In each case, we plotted the absolute value of the weights of the sparse layer in descending order. In all the experiments, we set \(\lambda_{1},\lambda_{2}\) to 0.001. These plots suggest the model promotes sparsity, driving most of the weights in the sparsity layer to near zero.
**4. Generalization Property of Selected Features:**
To investigate the generalization aspect of the features, we restrict training and validation sets to the selected variables. We fit a one-hidden layer neural network on the training set to predict the class label of the validation samples. Figure 6 shows the accuracy as a function of feature count.
The plot suggests that the selected features can accurately predict unseen samples. We see that the accuracy increases with the number of features for MNIST, ISOLET, Human Activity, and GLI_85; in contrast, adding more features does not change the accuracy for ALLAML and Prostate_GE significantly.
**5. Feature Selection Stability:**
In this experiment, we shed light on the stability of the feature selection process; specifically, we want to compare and contrast the feature sets across several trials. To this end, we run our model five times on two high-dimensional biological data sets, ALLAML and Prostate_GE, and one high-sample-size data set, Human Activity, and then compare the feature sets.
Figure 5: Effect of \(\lambda_{1}\) and \(\lambda_{2}\) on sparsity.

Figure 6: Accuracy on the validation set as a function of the number of features.

Figure 7 shows the results using Venn diagrams. Observe that the numbers of selected features over the five runs are generally close to each other. In each case, there is a significant number of overlapping features. For each data set, we calculate the Jaccard index using the five feature sets to measure their similarity: the Jaccard indices of ALLAML, Prostate_GE, and Human Activity are 0.6254, 0.6828, and 0.6253, respectively. High Jaccard scores indicate that the feature sets have a lot of commonality over different runs.
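The stability score can be computed, for example, as the mean pairwise Jaccard index over the five feature sets; the sketch below uses that convention, though the exact multi-set convention used in the paper is not stated.

```python
# Sketch: mean pairwise Jaccard index over repeated runs' feature sets.
from itertools import combinations

def mean_jaccard(feature_sets):
    pairs = list(combinations(feature_sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

runs = [{1, 2, 3, 5}, {1, 2, 4, 5}, {1, 2, 3, 4}]
print(round(mean_jaccard(runs), 4))   # 0.6
```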
## 3 Experimental Results
We present the comparative evaluation of our model on various data sets using several feature selection techniques.
### Experimental Details
We used twelve data sets from a variety of domains (image, biology, speech, and sensor; see Table 3) and five neural network-based models to run two benchmarking experiments. To this end, we picked the published results from two papers (Lemhadri et al., 2021; Singh et al., 2020) for benchmarking, and we ran Stochastic Gates (Yamada et al., 2020) using the code provided by the authors. We followed the same experimental methodology described in (Lemhadri et al., 2021; Singh et al., 2020) for an apples-to-apples comparison. This approach permitted a direct comparison against LassoNet, FsNet, and Supervised CAE using the authors' best results. Both experiments follow the standard workflow below (a minimal sketch follows the list):
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Dataset & No. Features & No. of Classes & No. of Samples & Domain \\ \hline ALLAML & 7129 & 2 & 72 & Biology \\ GLIOMA & 4434 & 4 & 50 & Biology \\ SMK\_CAN & 19993 & 2 & 187 & Biology \\ Prostate\_GE & 5966 & 2 & 102 & Biology \\ GLI\_85 & 22283 & 2 & 85 & Biology \\ CLL\_SUB & 11340 & 3 & 111 & Biology \\ Mice Protein & 77 & 8 & 975 & Biology \\ COIL20 & 1024 & 20 & 1440 & Image \\ Isolet & 617 & 26 & 7797 & Speech \\ Human Activity & 561 & 6 & 5744 & Accelerometer Sensor \\ MNIST & 784 & 10 & 70000 & Image \\ FMNIST & 784 & 10 & 70000 & Image \\ \hline \end{tabular}
\end{table}
Table 3: Descriptions of the data sets used for benchmarking experiments.
Figure 7: Venn diagram of five sets of features for each of the three high-dimensional data sets.
* Split each data set into training and test partitions.
* Run SABCE on the training set to extract top \(K\in\{10,50\}\) features.
* Using the top \(K\) features train a one hidden layer ANN classifier with \(H\) ReLU units to predict the test samples. The \(H\) is picked using a validation set.
* Repeat the classification 20 times and report average accuracy.
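A minimal scikit-learn-style sketch of this workflow is given below. The per-feature importance scores are assumed to come from a trained SABCE model (e.g., the \(\ell_{2}\) norms of the sparse-layer weight rows); for brevity the features are selected once, whereas the experiments re-run selection on each training partition, and `hidden_units` stands in for the validated \(H\).

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def evaluate_top_k(X, y, scores, K=50, hidden_units=64, repeats=20, seed=0):
    # Keep the K features with the largest importance scores.
    top_k = np.argsort(scores)[::-1][:K]
    accs = []
    for r in range(repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[:, top_k], y, test_size=0.5, random_state=seed + r)
        # One-hidden-layer ReLU ANN classifier, as in the benchmark.
        clf = MLPClassifier(hidden_layer_sizes=(hidden_units,),
                            activation="relu", max_iter=500)
        accs.append(clf.fit(X_tr, y_tr).score(X_te, y_te))
    return float(np.mean(accs))
```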
Now we describe the details of the two experiments.
**Experiment 1:** The first benchmarking experiment is conducted on six publicly available (Li et al., 2018) high-dimensional biological data sets: ALLAML, GLIOMA, SMK_CAN, Prostate_GE, GLI_85, and CLL_SUB1 to compare SABCE with FsNet, Supervised CAE (SCAE), and Stochastic Gates (STG). Following the experimental protocol of Singh et al. (Singh et al., 2020), we randomly partitioned each data set into a 50:50 ratio of train and test and ran SABCE and STG on the training set. After that, we calculated the test accuracy using the top \(K=\{10,50\}\) features. We repeated the experiment 20 times and reported the mean accuracy. We ran a 5-fold cross-validation on the training set to tune the hyperparameters.
Footnote 1: Available at [https://jundongl.github.io/scikit-feature/datasets.html](https://jundongl.github.io/scikit-feature/datasets.html)
**Experiment 2:** In the second benchmarking experiment, we compared our approach with LassoNet (Lemhadri et al., 2021) and Stochastic Gates (Yamada et al., 2020) on six data sets: Mice Protein2, COIL20, Isolet, Human Activity, MNIST, and FMNIST3. Following the experimental setup of Lemhadri et al., we split each data set into a 70:10:20 ratio of training, validation, and test sets. We ran SABCE on the training set to pick the top \(K=50\) features to predict the class labels of the sequestered test set. We extensively used the validation set to tune the hyperparameters.
Footnote 2: There are some missing entries that are imputed by mean feature values.
Footnote 3: Available at UCI Machine Learning repository
### Results
Now we discuss the results of the benchmarking experiments. In Table 4 we present the results of the first experiment, where we compare SABCE, SCAE, STG, and FsNet on six high-dimensional biological data sets. Apart from the results using a subset (10 and 50) of features, we also provide the prediction accuracy using all the features. In most cases, feature selection helps improve classification performance. Generally, SABCE features perform better than SCAE and FsNet; out of the twelve classification tasks, SABCE produces the best result on ten. Notice that the top fifty SABCE features give a better prediction rate than the top ten in all cases. Interestingly, the accuracy of SCAE and FsNet drops significantly on SMK_CAN, GLI_85, and CLL_SUB using the top fifty features.
Now we turn our attention to the results of the second experiment, shown in Table 5. The features of SABCE produce better classification accuracy than LassoNet in all cases; except for COIL20, our model improves accuracy by a margin of \(4\%-6.5\%\). On the other hand, STG performed slightly better (by a margin of \(0.4\%\) to \(0.7\%\)) than SABCE on Mice Protein, COIL20, and MNIST. In contrast, our model is more accurate than STG on FMNIST, ISOLET, and Activity by 1.6% to 4.2%. Note that LassoNet is the worst-performing model. In this experiment, STG performed competitively, unlike in the first, where its performance was significantly worse than that of SABCE. Upon further investigation, it turns out that STG fails to induce feature sparsity on all six high-dimensional biological data sets. We fit the model on the training partition of each data set and then plot the probabilities of the stochastic gates in descending order, which we call the _sparsity plot_. We ran STG over a wide range of \(\lambda\), which controls the sparsity of the model. In Figure 8, we show the result on the ALLAML data. As we can see, the model does not produce a sparse solution over the input features for any of the nine values. Ideally, the probabilities of many variables should be driven to near zero so that those features could be ignored. Changing the activation function, the number of hidden nodes, or the depth of the network does not produce a sparser solution. We provide a similar analysis for the other data sets in the Appendix (Section C).
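A sparsity plot of the kind shown in Figures 4 and 8 takes only a few lines to produce. In the sketch below, `w` is assumed to be the trained sparse-layer weight vector (or, for STG, the per-feature gate probabilities), and the logarithmic axis is our choice for readability, not necessarily the one used in the figures.

```python
import numpy as np
import matplotlib.pyplot as plt

def sparsity_plot(w, label="per-feature magnitude"):
    # Sort the absolute per-feature values in descending order;
    # a sharp drop toward zero indicates that most input features
    # have effectively been switched off by the model.
    mags = np.sort(np.abs(np.asarray(w)))[::-1]
    plt.semilogy(mags, label=label)
    plt.xlabel("feature rank")
    plt.ylabel("magnitude")
    plt.legend()
    plt.show()
```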
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Data set} & \multicolumn{4}{c|}{Top 10 features} & \multicolumn{4}{c|}{Top 50 features} & All \\ \cline{2-9} & FsNet & SCAE & STG & SABCE & FsNet & SCAE & STG & SABCE & Fea. \\ \hline ALLAML & 91.1 & 83.3 & 81.0 & **93.7** & 92.2 & 93.6 & 88.5 & **94.6** & 89.9 \\ \hline Prostate\_GE & 87.1 & 83.5 & 82.3 & **89.9** & 87.8 & 88.4 & 85.0 & **90.1** & 75.9 \\ \hline GLIOMA & 62.4 & 58.4 & 62.0 & **66.8** & 62.4 & 60.4 & 70.4 & **74.2** & 70.3 \\ \hline SMK\_CAN & **69.5** & 68.0 & 65.2 & 68.1 & 64.1 & 66.7 & 68.0 & **69.4** & 65.7 \\ \hline GLI\_85 & 87.4 & **88.4** & 72.2 & 84.7 & 79.5 & 82.2 & 81.0 & **85.7** & 79.5 \\ \hline CLL\_SUB & 64.0 & 57.5 & 54.4 & **70.8** & 58.2 & 55.6 & 63.2 & **72.2** & 56.9 \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of mean classification accuracy of FsNet, SCAE, STG, and SABCE features on six real-world high-dimensional biological data sets. The prediction rates are averaged over twenty runs on the test set. Numbers for FsNet and SCAE are reported from (Singh et al., 2020). The last column reports accuracy using all features with an ANN classifier.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Data set} & \multicolumn{3}{c|}{Top 50 features} & All features \\ \cline{2-4} & LassoNet & STG & SABCE & ANN \\ \hline Mice Protein & 95.8 & **99.8** & 99.4 & 99.00 \\ \hline MNIST & 87.3 & **94.7** & 94.0 & 92.8 \\ \hline FMNIST & 80.0 & 83.8 & **85.4** & 88.30 \\ \hline ISOLET & 88.5 & 90.9 & **93.3** & 95.30 \\ \hline COIL-20 & 99.1 & **99.7** & 99.3 & 99.60 \\ \hline Activity & 84.9 & 86.6 & **90.8** & 85.30 \\ \hline \end{tabular}
\end{table}
Table 5: Classification results using LassoNet, STG, and SABCE features on six publicly available data sets. Numbers for LassoNet and 'All features ANN' are reported from (Lemhadri et al., 2021). All the reported accuracies are measured on the test set.
## 4 Related Work
Feature selection has a long history spread across many fields, including bioinformatics, document classification, data mining, hyperspectral band selection, computer vision, etc. We describe the literature related to the embedded methods, where the selection criteria are part of a model. The model can be either linear or non-linear. Adding an \(\ell_{1}\) penalty to classification and regression methods naturally produces feature selectors for linear models; see (Tibshirani, 1996; Fonti and Belitser, 2017; Muthukrishnan and Rohini, 2016; Kim and Kim, 2004; Zou and Hastie, 2005; Marafino et al., 2015; Shen et al., 2011; Sokolov et al., 2016; Lindenbaum and Steinerberger, 2021; Candes et al., 2008; Daubechies et al., 2010; Bertsimas et al., 2017; Xie and Huang, 2009). Support Vector Machines (Cortes and Vapnik, 1995) have been used extensively for feature selection; see (Marafino et al., 2015; Shen et al., 2011; Sokolov et al., 2016; Guyon et al., 2002; O'Hara et al., 2013; Chepushtanova et al., 2014).
Figure 8: Sparsity analysis of Stochastic Gates on ALLAML data.

While linear models are generally fast and convex, they don't capture non-linear relationships among the input features (unless a kernel trick is applied). Non-linear models based on deep neural networks overcome these limitations. Here, we will briefly discuss a handful of such models. Group Sparse ANN (Scardapane et al., 2017) used group Lasso (Tibshirani, 1996) to impose sparsity on a group of variables instead of a single variable. Li et al. proposed deep feature selection (DFS), which is a multilayer neural network-based feature selection technique (Li et al., 2016). (Kim et al., 2016) proposed a heuristics-based technique to assign importance to each feature. Using the ReLU activation, (Roy et al., 2015) provided a way to measure the contribution of an input feature towards the hidden activation of the next layer. (Han et al., 2018) developed an unsupervised feature selection technique based on the autoencoder architecture. (Taherkhani et al., 2018) proposed an RBM (Hinton et al., 2006; Hinton and Salakhutdinov, 2006) based feature selection model. Also see (Balın et al., 2019; Yamada et al., 2020; Singh et al., 2020).
## 5 Discussion, Conclusion and Limitations
In this paper, we proposed a novel neural network-based feature selection technique, Sparse Adaptive Bottleneck Centroid-Encoder (SABCE). Using a basic multi-layer perceptron encoder-decoder architecture, the model backpropagates the SABCE cost to a feature selection layer that filters out non-discriminating features via \(\ell_{2,1}\)-regularization. This setting allows the feature selection to be data-driven, without needing prior knowledge such as the number of features to be selected or the underlying distribution of the input features. The extensive analysis in Section 2.3 demonstrates that the \(\ell_{2,1}\)-norm induces good feature sparsity without shrinking all the variables. Unlike other methods, e.g., Stochastic Gates, our approach promotes feature sparsity for high-dimension, low-sample-size biological data sets, further demonstrating the value of SABCE as a feature detector. We chose \(\lambda_{1}\) and \(\lambda_{2}\) from a wide range of values using the validation set and observed that smaller values work better for classification. The Venn diagram plots confirm the consistent and stable feature detection ability of SABCE.
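For concreteness, the \(\ell_{2,1}\) norm of a weight matrix is the sum of the \(\ell_{2}\) norms of its rows. A minimal PyTorch-style sketch of such a penalty is given below; the layer name and the placement of \(\lambda_{1}\) are illustrative, not the paper's exact architecture.

```python
import torch

def l21_norm(W: torch.Tensor) -> torch.Tensor:
    # Sum over rows of the row-wise l2 norms. With one row per input
    # feature, the penalty drives whole rows (i.e., whole features)
    # toward zero rather than individual entries, which is what
    # induces feature sparsity without shrinking all variables.
    return W.norm(p=2, dim=1).sum()

# Illustrative use inside a training step (names are placeholders):
# loss = centroid_encoder_loss + lambda_1 * l21_norm(model.select.weight)
```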
The rigorous benchmarking with twelve data sets from diverse domains and four competing methods provides evidence that the features of SABCE produce better generalization performance than other state-of-the-art models. We compared SABCE with FsNet, which is designed mainly for high-dimensional biological data, and found that our proposed method outperformed it in most cases; in fact, our model produced new state-of-the-art results in ten cases out of twelve. The comparison also includes Supervised CAE, which is less accurate than SABCE. On the data sets where the number of observations exceeds the number of variables, SABCE features produce better classification results than LassoNet in all six cases and better than Stochastic Gates on three data sets. The strong generalization performance, coupled with the ability to sparsify input features, establishes the value of our model as a nonlinear feature detector.
Although SABCE produces new state-of-the-art results on diverse data sets, our model won't be the right choice in cases where class centroids make little sense, e.g., natural images. The current scope of the work doesn't allow us to investigate other optimization techniques, e.g., proximal gradient descent or the trimmed Lasso, which we plan to explore in the future.
|
2310.19744
|
Multistable protocells can aid the evolution of prebiotic autocatalytic
sets
|
We present a simple mathematical model that captures the evolutionary
capabilities of a prebiotic compartment or protocell. In the model the
protocell contains an autocatalytic set whose chemical dynamics is coupled to
the growth-division dynamics of the compartment. Bistability in the dynamics of
the autocatalytic set results in a protocell that can exist with two distinct
growth rates. Stochasticity in chemical reactions plays the role of mutations
and causes transitions from one growth regime to another. We show that the
system exhibits `natural selection', where a `mutant' protocell in which the
autocatalytic set is active arises by chance in a population of inactive
protocells, and then takes over the population because of its higher growth
rate or `fitness'. The work integrates three levels of dynamics: intracellular
chemical, single protocell, and population (or ecosystem) of protocells..
|
Angad Yuvraj Singh, Sanjay Jain
|
2023-10-30T17:08:52Z
|
http://arxiv.org/abs/2310.19744v1
|
# Multistable protocells can aid the evolution of prebiotic autocatalytic sets
## Abstract
We present a simple mathematical model that captures the evolutionary capabilities of a prebiotic compartment or protocell. In the model the protocell contains an autocatalytic set whose chemical dynamics is coupled to the growth-division dynamics of the compartment. Bistability in the dynamics of the autocatalytic set results in a protocell that can exist with two distinct growth rates. Stochasticity in chemical reactions plays the role of mutations and causes transitions from one growth regime to another. We show that the system exhibits 'natural selection', where a 'mutant' protocell in which the autocatalytic set is active arises by chance in a population of inactive protocells, and then takes over the population because of its higher growth rate or 'fitness'. The work integrates three levels of dynamics: intracellular chemical, single protocell, and population (or ecosystem) of protocells.
## Introduction
The simplest life forms existing today and plausibly existing at the origin of life are such complex chemical organizations involving small and large molecules, that it is virtually impossible to imagine their origin except through some process of chemical evolution [1, 2]. Imagining plausible steps in chemical evolution that resulted in the increase of complexity of prebiotic chemical organization is therefore an important task.
One significant set of prebiotic scenarios is based on the idea of an autocatalytic set (ACS) of chemical reactions [3, 4, 5], reviewed in [6, 7]. Here we are concerned about the evolution of ACSs. This has been investigated [8, 9, 10] (for reviews, see [11, 12]) largely in the context of ACSs that reside in static well stirred containers. It is recognized that at some stage autocatalytic networks must have evolved inside a spatial compartment or 'protocell' which propagated through growth and division. Consequently, different models of protocells containing ACSs have been proposed [13, 14, 15, 16, 17, 18, 19, 20, 21, 22] where the compartments are modeled after micelles (autocatalytic aggregates of lipid catalysts), vesicles (lipid bilayers permeable only to food molecules enclosing an aqueous environment containing the ACS) or other structures.
These models have considered how the features of Darwinian evolution [23, 24], namely, (i) heredity, (ii) heritable variation, and (iii) differential fitness of the variants, can arise in such protocells. In models of growing-dividing protocells that contain ACSs, daughter protocells inherit the composition of the mother, and this transmission of compositional information is the mechanism of heredity [25, 10, 26] instead of template replication of an information-carrying molecule. The interesting property of 'synchronization' has been shown to arise fairly generically in these models [18, 27] whereby the composition of the protocell at successive divisions remains the same, giving the lineage of protocells a stable compositional identity. As a source of variation needed for evolution, models have considered chemical fluctuations due to the chance occurrence of rare reactions which are enhanced in small volumes, or changes in the environment (e.g., addition or removal of molecular species from the food set) [28, 29, 30, 20]. A large network containing multiple ACSs [31, 32] causes protocells that contain distinct ACSs to grow with different rates [10, 26]. This can give rise to differential fitness of protocells.
Notwithstanding all the above work, a crisp and convincing theoretical demonstration of the Darwinian evolution of a population of ACS containing protocells remains an unfinished task [12]. In this paper we present a new model which explicitly demonstrates the evolution of a population of such protocells in the Darwinian sense (albeit only one step of evolution due to the simplicity of the model). Our work makes use of an interesting feature of certain autocatalytic network topologies: the presence of multi-stability in the dynamics [33, 34, 35, 36, 37]. Our protocell has just two stable states, one in which no ACS is present (inactive state) and the other in which it is (active state). The protocell has a higher growth rate in the active state compared to the inactive state. The variation in a protocell is just the spontaneous transition, due to chemical fluctuation in a small volume, from the inactive to the active state without any change of environment. The evolution exhibited is the establishment, growth and dominance of the active protocells in a population of protocells. The simplicity of the model allows us to quantify the conditions under which this 'natural selection' can take place, in terms of the various dynamically generated timescales of the model. In future work we hope to generalize this to multiple evolutionary steps of increasing complexity.
## 1 The model
The protocell consists of three molecular species, a monomer \(A(1)\) (the food molecule), a dimer \(A(2)\) (assumed to be the enclosure-forming molecule), and a tetramer \(A(4)\) (the catalyst); see Fig. 1. The population of \(A(i)\) (\(i=1,2,4\)) in the protocell is denoted \(X_{i}\); \(x_{i}\equiv X_{i}/V\) is its concentration, where \(V\) is the volume of the protocell. The reactions these molecules can undergo are:
Figure 1: An illustration of a protocell inside an aqueous medium buffered with monomeric food molecules, \(A(1)_{ext}\). The protocell membrane is composed of dimer molecules \(A(2)\).
\[\begin{aligned}
\textbf{Transport}:&\quad A(1)_{ext}\xrightarrow{\;\alpha X_{2}\;}A(1)\\
\textbf{R1 (uncatalyzed)}:&\quad 2A(1)\underset{k_{R}}{\overset{k_{F}}{\rightleftharpoons}}A(2)\\
\textbf{R1 (catalyzed)}:&\quad 2A(1)+A(4)\underset{\kappa k_{R}}{\overset{\kappa k_{F}}{\rightleftharpoons}}A(2)+A(4)\\
\textbf{R2 (uncatalyzed)}:&\quad 2A(2)\underset{k_{R}}{\overset{k_{F}}{\rightleftharpoons}}A(4)\\
\textbf{R2 (catalyzed)}:&\quad 2A(2)+A(4)\underset{\kappa k_{R}}{\overset{\kappa k_{F}}{\rightleftharpoons}}A(4)+A(4)\\
\textbf{Degradation}:&\quad A(2)\xrightarrow{\;\phi\;}\emptyset,\qquad A(4)\xrightarrow{\;\phi\;}\emptyset.
\end{aligned}\]
\(A(1)_{ext}\) denotes the monomer species outside the cell; its concentration is assumed constant. The membrane formed by the dimers is permeable only to monomers; the rate at which monomers come in is proportional to the number of dimers, \(\alpha\) being the proportionality constant. Two monomers can spontaneously ligate to form a dimer and two dimers to form a tetramer, both with the same rate constant \(k_{F}\). The reverse (dissociation) reactions have a spontaneous rate constant \(k_{R}\). These ligation-dissociation reactions are also catalyzed by the tetramer, whose 'catalytic efficiency' is denoted \(\kappa\) (this effectively means that the catalyzed reaction rate is \(\kappa x_{4}\) times the spontaneous rate). The dimer and tetramer are assumed to degrade with rate constant \(\phi\) into a waste product that quickly diffuses out of the protocell. Note that the catalyzed reactions R1 and R2 together with the transport reaction form an ACS starting from the food set \(A(1)_{ext}\).
In this model the dimer does double duty as both the enclosure forming molecule as well as a reactant for catalyst production. In the equations below, we do not introduce separate population variables for the two roles. This is purely for simplicity and is not a crucial assumption. In the Supplementary Material Section 1 we show that in a model with two monomer species in which these two functions are performed by distinct molecules, similar results arise.
Using mass action kinetics, the deterministic rate equations of the model are given by
\[\frac{dx_{1}}{dt} =\alpha x_{2}-\,2(k^{\prime}_{F}x_{1}^{2}-k^{\prime}_{R}x_{2})- \frac{\dot{V}}{V}x_{1}, \tag{1}\] \[\frac{dx_{2}}{dt} =k^{\prime}_{F}x_{1}^{2}-k^{\prime}_{R}x_{2}\] \[\quad-\,2(k^{\prime}_{F}x_{2}^{2}-k^{\prime}_{R}x_{4})\,-\,(\phi+ \frac{\dot{V}}{V})x_{2},\] (2) \[\frac{dx_{4}}{dt} =(k^{\prime}_{F}x_{2}^{2}-k^{\prime}_{R}x_{4})\,-\,(\phi+\frac{ \dot{V}}{V})x_{4},\] (3) \[k^{\prime}_{F} \equiv k_{F}(1+\kappa x_{4}),\;k^{\prime}_{R}\equiv k_{R}(1+ \kappa x_{4}). \tag{4}\]
The \(\dot{V}/V\) terms represent dilution in an expanding volume. Note that when \(V\) is not constant, Eqs. (1-3) do not specify the dynamics completely unless the growth rate \(\dot{V}/V\) is specified. Since here we want an endogenous growth rate, we do not specify \(\dot{V}/V\) exogenously. Instead, we write the model in terms of the populations, and assume a certain functional form for \(V\) in terms of the populations. In terms of \(X_{i}\)
the above equations reduce to
\[\frac{dX_{1}}{dt}=\alpha X_{2}-2\Big(\frac{k_{F}X_{1}^{2}}{V}-k_{R}X_{2}\Big)\Big(1+\kappa\frac{X_{4}}{V}\Big), \tag{5}\]
\[\frac{dX_{2}}{dt}=\Big(\frac{k_{F}X_{1}^{2}}{V}-k_{R}X_{2}\Big)\Big(1+\kappa\frac{X_{4}}{V}\Big)-2\Big(\frac{k_{F}X_{2}^{2}}{V}-k_{R}X_{4}\Big)\Big(1+\kappa\frac{X_{4}}{V}\Big)-\phi X_{2}, \tag{6}\]
\[\frac{dX_{4}}{dt}=\Big(\frac{k_{F}X_{2}^{2}}{V}-k_{R}X_{4}\Big)\Big(1+\kappa\frac{X_{4}}{V}\Big)-\phi X_{4}. \tag{7}\]
For simplicity we take \(V\) to be a linear function of the populations \(X=(X_{1},X_{2},X_{4})\):
\[V(X)=v(X_{1}+2X_{2}+4X_{4}), \tag{8}\]
where \(v\) is a constant. This choice gives the protocell a constant mass density (as observed in bacterial cells [38]) since \(V\) is proportional to the mass of the protocell. This choice is not essential; we have tried other linear functions \(V=v_{1}X_{1}+v_{2}X_{2}+v_{4}X_{4}\) (\(v_{i}\) constant), including \(V=v(X_{1}+X_{2}+X_{4})\). The quantitative results depend on the values of \(v_{i}\) but the qualitative features presented below hold for all the cases considered. We have also considered other versions of the model with the transport term \(\alpha X_{2}\) in Eq. (5) modified to a gradient term \(\alpha X_{2}(x_{1,\mathrm{ext}}-x_{1})\) (where \(x_{1,\mathrm{ext}}\) is the constant concentration of \(A(1)_{ext}\)), certain other autocatalytic reaction topologies, etc. (see Supplementary Material Section 1). The qualitative conclusions seem to be robust to these choices. Without loss of generality, the constants \(k_{R}\) and \(v\) are set to unity by rescaling \(t\to k_{R}t\), \(\alpha\rightarrow\alpha/k_{R}\), \(\phi\rightarrow\phi/k_{R}\), \(k_{F}\to k_{F}/(k_{R}v)\), \(\kappa\rightarrow\kappa/v\), which makes time and the other parameters dimensionless.
The definition of \(V(X)\) and the values of the rescaled parameters \(k_{F},\phi,\alpha,\kappa\) completely define Eqs. (5-7), and one can solve for \(X(t)\) given any initial condition. In a particular trajectory \(V\) may increase or decrease. Protocells larger than a characteristic size may become floppy or unstable and spontaneously break up into smaller entities. We assume that if \(V\) increases to a critical value \(V_{c}\) the cell divides into two identical daughters each containing half of the three chemicals of the mother protocell at division. The dynamics of a daughter after division is again governed by Eqs. (5-8). This division rule and Eqs. (5-8) together completely define the model at the deterministic level.
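As a concrete illustration, the following is a minimal integration sketch of Eqs. (5-8) with the division rule, using the parameter values of Figs. 2-3 (\(k_{F}=1\), \(k_{R}=v=1\), \(\phi=20\), \(\alpha=100\), \(\kappa=2400\), \(V_{c}=1000\)) and starting from a just-divided daughter of IC2 of Fig. 3B; this is a sketch of the deterministic dynamics, not the authors' code.

```python
import numpy as np
from scipy.integrate import solve_ivp

kF, kR, phi, alpha, kappa, Vc = 1.0, 1.0, 20.0, 100.0, 2400.0, 1000.0
vol = lambda X: X[0] + 2 * X[1] + 4 * X[2]   # Eq. (8) with v = 1

def rhs(t, X):
    X1, X2, X4 = X
    V = vol(X)
    c = 1 + kappa * X4 / V                   # catalytic enhancement factor
    r1 = (kF * X1**2 / V - kR * X2) * c      # net rate of reaction R1
    r2 = (kF * X2**2 / V - kR * X4) * c      # net rate of reaction R2
    return [alpha * X2 - 2 * r1,             # Eq. (5)
            r1 - 2 * r2 - phi * X2,          # Eq. (6)
            r2 - phi * X4]                   # Eq. (7)

def divide(t, X):                            # event: V grows up to Vc
    return vol(X) - Vc
divide.terminal, divide.direction = True, 1

X, t = np.array([472.0, 10.0, 2.0]), 0.0     # half of IC2 (active basin)
for _ in range(5):                           # follow five division cycles
    # The window is long enough that a division always occurs within it.
    sol = solve_ivp(rhs, (t, t + 10), X, events=divide, rtol=1e-8)
    t, X = sol.t[-1], sol.y[:, -1] / 2       # halve all contents at division
    print(f"division at t = {t:.3f}")
```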
The dynamics of the ACS consisting of the catalyzed reactions R1 and R2 in a fixed size container but with buffered \(A(1)\) as the food set is given by Eqs. (6-7) with \(V\) and \(X_{1}\) constant. This was studied in [36] at the deterministic level, where a bistability was observed, and in [37] at the stochastic level, where transitions between the attractors were observed. The present model, by adding Eqs. (5), (8) and the division rule, embeds the ACS in a growing-dividing protocell instead of a fixed volume container. It shares the bistability of the fixed volume version, but also possesses qualitatively new properties. These properties (considered along with stochastic dynamics) enable a population of such protocells to mimic (one step of) Darwinian evolution, as will be discussed below.
## 2 Results
### Deterministic dynamics: Bistability with two distinct growth rates
Since \(V\) is a linear function of the populations, \(\dot{V}/V\) can be expressed in terms of the concentrations. Differentiating Eq. (8) w.r.t. \(t\) and using Eqs. (5-7), it follows that
\[\mu\equiv\frac{\dot{V}}{V}=\alpha x_{2}-\phi(2x_{2}+4x_{4}). \tag{9}\]
Eqn. (9) expresses the instantaneous growth rate of the protocell in terms of its chemical composition, a feature that is missing from previous protocell models.
When Eq. (9) is substituted in Eqs. (1-3), the concentration dynamics also becomes completely defined. The resulting system has fixed points. Fig. 2 shows a bifurcation diagram in which the fixed point concentration of \(A(4)\) is plotted by varying the parameter \(\kappa\). The model exhibits bistability for \(\kappa^{I}<\kappa<\kappa^{II}\). Note that the catalyst concentration \(x_{4}\) in the upper stable branch is two orders of magnitude higher than in the lower stable branch. On the lower branch the rates of catalyzed reactions are smaller than the corresponding spontaneous reactions, while on the upper branch they are much higher. We therefore refer to the upper branch as one in which the ACS is _active_ and the lower branch as ACS _inactive_. Depending on the initial condition, for a given \(\kappa\) in the bistable region, the dynamics will settle into either of the two stable attractors, as shown in Fig. 3A for one such \(\kappa\). For \(\kappa<\kappa^{I}\) there is only one attractor (the inactive one), and for \(\kappa>\kappa^{II}\) also only one attractor (the active one).
For each fixed point attractor, the r.h.s. of Eq. (9) is constant. Hence in the attractor, \(V\) grows exponentially, \(V(t)=V(0)e^{\mu t}\) with constant \(\mu\). In other words the protocell has a characteristic growth rate in each attractor given by the expression in Eq (9). This is shown in the inset of Fig. 2. Hence in the bistable region, the protocell can grow with two distinct growth rates depending upon which attractor it is in. The growth rate is many times higher in the active state than in the inactive one.
Once the concentrations have reached their fixed point attractor, Eq. (9) implies that \(V\) grows exponentially, and Eq. (8) then implies that each chemical population must also grow exponentially with the _same_ rate \(\mu\). (Only if all populations grow at the same rate as \(V\) will their concentrations be constant.) Thus in each attractor we have \(X_{i}(t)=X_{i}(0)e^{\mu t}\). In other words, the protocell naturally exhibits _balanced growth_ in each attractor (growth with the ratios of all populations constant [39]). Exponentially growing trajectories in a nonlinear system and this remarkable emergent coordination between the chemicals without any explicit regulatory mechanism are a consequence of (a) the fact that the r.h.s. of Eqs. (5-7) are homogeneous degree one functions of the populations (if all three populations are simultaneously scaled by a factor \(\beta\), \(X_{i}\rightarrow\beta X_{i}\), then the r.h.s. of Eqs. (5-7) also scales by the same factor \(\beta\)), and (b) the fact that the ACS structure couples all chemicals to each other. This is discussed in detail in ref. [40] in the context of models of bacterial physiology.

Figure 2: Bifurcation diagram for the model: Steady state concentration, \(x_{4}\), of the catalyst versus catalytic efficiency, \(\kappa\). The region between \(\kappa^{I}(=1840)\) and \(\kappa^{II}(=3580)\) is the region having three fixed points, two of which are stable (solid black curves) and one is unstable (red dotted curve). **Inset**: Growth rate, \(\mu\), of the protocell, versus \(\kappa\). Parameters: Hereafter, \(k_{R}\) and \(v\) have been set to unity without loss of generality after non-dimensionalizing the model. \(k_{F}=1\), \(\phi=20\), \(\alpha=100\).
Fig. 3B shows, for a protocell, the trajectories of its chemical populations and volume as functions of time for two very close initial conditions (defined by the population of species A(1), A(2) and A(4)) that lie in different attractor basins. They converge to different attractors: ACS-active (upper panel) and inactive (lower panel). After a protocell divides we track one of its daughters. The attractor is a fixed point for concentrations (Fig. 3A) but a limit cycle for populations and the volume (Fig. 3B). The growth phase of the limit cycle has the same constant slope for all populations in a given attractor, signifying exponential growth with the same growth rate for all chemicals in the attractor. The slope is larger (and interdivision time shorter) for the active attractor. At division, since populations and the volume both halve, concentrations do not see any discontinuity.
The existence of bistability is robust in parameter space. It may be noted that a nonzero degradation rate \(\phi\) of the dimer and tetramer is essential for bistability (as also found in the model studied in ref. [36]). A degradation term \(\phi^{\prime}x_{1}\) for the monomer can also be introduced in Eq. (1); however, it is found that \(\phi^{\prime}\) must be sufficiently smaller than \(\phi\) for bistability to exist.

Figure 3: Deterministic trajectories in the bistable region of the model. \(\kappa=2400\), other parameters are as in Fig. 2. **A:** Phase portrait projected onto the \(x_{2}-x_{4}\) plane. Several trajectories starting with different initial conditions are shown; they reach one of two stable fixed points denoted by blue closed dots. All the solid curve trajectories end at the stable fixed point on the top right (ACS active) while all the dotted trajectories end at the stable fixed point on bottom left of the plot (ACS inactive). The red open dot represents an unstable fixed point. The dashed curve is a schematic of the basin boundary between the two stable fixed point attractors. **B:** Deterministic trajectories of populations (in log scale) of species \(A(1)\), \(A(2)\), \(A(4)\) and the protocell volume as functions of time for two initial conditions. \(V_{c}=1000\). Initial conditions: IC1 (lower panel; dotted curves): \(X_{1}=952,X_{2}=20,X_{4}=2\). IC2 (upper panel; solid curves): \(X_{1}=944,X_{2}=20,X_{4}=4\). Protocell starting with IC1 ends up in the _inactive state_ in which the population of the catalyst \(A(4)\) is less than one as seen in dotted red curve in the lower panel. Protocell starting with IC2 ends up in the _active state_ in which the population of the catalyst is high (approximately between 10 and 20). The interdivision times in the inactive and active states are, respectively, \(\tau_{1}=0.269\), \(\tau_{2}=0.075\).
### Stochastic dynamics of a single protocell: transitions between states of different growth rates
We now consider the protocell under the stochastic chemical dynamics framework. The chemical populations are now non-negative integers and each unidirectional reaction occurs with a probability that depends on the populations of the reactants and the values of the rate constants. We simulate the stochastic chemical dynamics of the protocell using the Gillespie algorithm [41]. The reaction probabilities are listed in Appendix A. Whenever a reaction occurs, the populations of its reactants and products are updated. For large populations, when fluctuations are ignored, the above-mentioned probabilities lead to the deterministic Eqs. (1-3) or (5-7). In using the Gillespie algorithm for expanding volumes, the rate of increase of volume needs to be taken into account [42, 43]. In the present work, since volume is treated as a function of populations (8), we assume that it is instantaneously updated when the populations are.
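A minimal sketch of a single Gillespie step for this reaction set, with propensities as in Table 1 of Appendix A; the full simulation would wrap this in a loop together with the division rule and partitioning stochasticity.

```python
import numpy as np

def propensities(X, V, kF=1.0, kR=1.0, phi=20.0, alpha=100.0, kappa=2400.0):
    # Reaction probabilities per unit time, following Table 1;
    # cat = kappa * X4 / V is the catalytic enhancement.
    X1, X2, X4 = X
    cat = kappa * X4 / V
    return np.array([
        alpha * X2,                     # A(1)_ext -> A(1)
        kF * X1 * (X1 - 1) / V,         # 2 A(1) -> A(2), spontaneous
        cat * kF * X1 * (X1 - 1) / V,   # 2 A(1) -> A(2), catalysed
        kF * X2 * (X2 - 1) / V,         # 2 A(2) -> A(4), spontaneous
        cat * kF * X2 * (X2 - 1) / V,   # 2 A(2) -> A(4), catalysed
        kR * X2,                        # A(2) -> 2 A(1), spontaneous
        cat * kR * X2,                  # A(2) -> 2 A(1), catalysed
        kR * X4,                        # A(4) -> 2 A(2), spontaneous
        cat * kR * (X4 - 1),            # A(4) -> 2 A(2), catalysed
        phi * X2,                       # A(2) -> waste
        phi * X4])                      # A(4) -> waste

# Population change (dX1, dX2, dX4) for each reaction listed above.
STOICH = np.array([(1, 0, 0), (-2, 1, 0), (-2, 1, 0), (0, -2, 1), (0, -2, 1),
                   (2, -1, 0), (2, -1, 0), (0, 2, -1), (0, 2, -1),
                   (0, -1, 0), (0, 0, -1)])

def gillespie_step(X, rng):
    V = X[0] + 2 * X[1] + 4 * X[2]      # Eq. (8), instantaneously updated
    a = propensities(X, V)
    a0 = a.sum()
    dt = rng.exponential(1.0 / a0)      # waiting time to the next reaction
    j = rng.choice(len(a), p=a / a0)    # which reaction fires
    return X + STOICH[j], dt
```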
Fig. 4 shows a simulation run of the stochastic chemical dynamics of a single growing and dividing protocell. At the volume threshold \(V_{c}\), when the protocell divides into two daughter protocells, we implement partitioning stochasticity, namely, each molecule in the mother is given equal probability of going into either daughter. In Fig. 4, at each division we randomly choose one of the two daughters for further tracking and discard the other, in order to display a single-cell trajectory over several divisions (effectively it is the trajectory of a single lineage of protocells).
Starting from the initial condition shown, where the protocell is composed of only \(A(1)\) and \(A(2)\), the protocell initially grows and divides in the inactive state. The first \(A(4)\) molecule is produced by the chance occurrence of the uncatalyzed reaction R2. Production of a sufficient number of \(A(4)\) molecules triggers a transition to the active state, where the population of \(A(4)\) is significantly larger than in the inactive state. As in Fig. 3 for the deterministic case, so also in Fig. 4 it can be seen that the protocell in the active state grows and divides faster than in the inactive state. However, unlike the deterministic case, we also see transitions between the inactive and active states. These transitions occur because for a small protocell (\(500\leq V\leq 1000\) for the protocell in Fig. 4), chance production or depletion of a few molecules of \(A(4)\) is enough to push its concentration into the basin of the other attractor. Note that in Fig. 4 the protocell lineage spends more time in the inactive state than the active. The residence times of a protocell lineage in the two attractors (\(T_{1},T_{2}\)) have distributions (see Fig. 6 in Appendix B) that vary with parameters.
Note that typically a daughter naturally inherits the state of the mother protocell: since the two daughters have roughly half the number of molecules of each type as the mother, and hence also half the volume, they have the same concentration of each chemical as the mother. Partitioning stochasticity occasionally results in a daughter losing the mother's state.
### Protocell population dynamics: Dominance of the autocatalytic state
Fig. 5 shows the time evolution of a population of such protocells. At \(t=0\) we start from a single protocell in the inactive state, whose dynamics was shown in Fig. 4. However, in this simulation, when a protocell divides, instead of discarding a daughter, we keep it in the simulation until the total population of protocells reaches an externally imposed ceiling \(K\). After the total number of cells reaches \(K\), the total population is kept constant. This is done by removing one randomly chosen protocell from among the \(K+1\) protocells whenever any protocell divides. Each protocell in the population is independently simulated by the single cell stochastic dynamics (Gillespie algorithm). Fig. 5 tracks only the number of protocells in each state (active or inactive) as a function of time.
The number of protocells in the inactive state increases whenever one of them divides. Eventually one of them makes a stochastic transition to the active state, whereupon the number of active protocells jumps from zero to one. Active protocells also make stochastic transitions to the inactive state on a certain time scale. However, since active protocells divide faster (as seen in Fig. 4), their number grows faster and their population catches up with and overtakes the inactive population in Fig. 5. Eventually the active protocells dominate the population.
The curves in Fig. 5 represent the net result of stochastic transitions and proliferation by division. The fraction of protocells in each state is expected to reach a stochastic steady state (see below) that represents a balance between proliferation and transition. In the simulations we find that the fraction of inactive cells declines when the total population hits \(K\) (see Fig. 5). It eventually reaches its steady state fraction. A decline is seen at the time the total population hits \(K\), because in this simulation at that time the fraction of inactive cells is higher than its steady state fraction (this is a consequence of the initial condition, the fact that at \(t=0\) we started from a single protocell in the _inactive_ state). In Supplementary Material Section 2 a similar qualitative behaviour can be seen for other values of \(\kappa\) within the bistability region.
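The population dynamics of Fig. 5 can be caricatured at the level of the two protocell states alone, using the single-cell rates extracted from Fig. 4 (\(\mu=\ln 2/\langle\tau\rangle\), \(\lambda=1/\langle T\rangle\)). Below is a minimal sketch of such a two-state birth/transition process with ceiling \(K\); it is a coarse-grained stand-in for the full chemical simulation, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
# Rates from the averages in Fig. 4: mu = ln2/<tau>, lam = 1/<T>.
mu  = {0: np.log(2) / 0.295, 1: np.log(2) / 0.077}   # division rates
lam = {0: 1 / 3.413, 1: 1 / 1.916}                   # transition rates
K, cells, t = 100, [0], 0.0                          # start: one inactive cell
while t < 20.0:
    rates = np.array([mu[s] + lam[s] for s in cells])
    total = rates.sum()
    t += rng.exponential(1.0 / total)
    i = int(rng.choice(len(cells), p=rates / total)) # which cell acts next
    s = cells[i]
    if rng.random() < mu[s] / (mu[s] + lam[s]):      # the cell divides
        cells.append(s)                              # daughter inherits state
        if len(cells) > K:                           # ceiling reached:
            cells.pop(rng.integers(len(cells)))      # remove a random cell
    else:                                            # state transition
        cells[i] = 1 - s
print("steady-state fraction active:", np.mean(cells))
```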
An approximate (mean field) model of the protocell population dynamics (valid for large populations) with no ceiling (\(K\rightarrow\infty\)) is the following:
\[\frac{dn_{1}}{dt}= \mu_{1}n_{1}-\lambda_{1}n_{1}+\lambda_{2}n_{2}, \tag{10}\] \[\frac{dn_{2}}{dt}= \mu_{2}n_{2}-\lambda_{2}n_{2}+\lambda_{1}n_{1}, \tag{11}\]
where \(n_{1}\) (\(n_{2}\)) is the population of protocells in the inactive (active) state, \(\mu_{1}=\frac{\ln 2}{\langle\tau_{1}\rangle}\) and \(\mu_{2}=\frac{\ln 2}{\langle\tau_{2}\rangle}\) are the average growth rates of the protocell in the inactive and active states respectively, and \(\lambda_{1}=\frac{1}{\langle T_{1}\rangle}\) and \(\lambda_{2}=\frac{1}{\langle T_{2}\rangle}\) are the transition rates, respectively, from the inactive to active and active to inactive states. This is a linear dynamical system \(\frac{dn}{dt}=An\), where \(n=(n_{1}\;\;n_{2})^{T}\) is the column vector of protocell populations, and
\[A=\begin{pmatrix}\mu_{1}-\lambda_{1}&\lambda_{2}\\ \lambda_{1}&\mu_{2}-\lambda_{2}\end{pmatrix}. \tag{12}\]
Eqns. (10-11) for the populations of inactive and active protocells are identical to the model used to describe the populations of persister and normal cells of bacteria [44].
Figure 4: Stochastic simulation of the populations of species A(1), A(2) and A(4) for a single protocell lineage in the model. Parameter values are as in Fig. 3, \(V_{c}=1000\). Initial condition: \(X_{1}=480,\;X_{2}=10,\;X_{4}=0\). Note the transitions of the protocell between the inactive and active states. From a long such simulation we find that the average interdivision times in the inactive and active states are, respectively, \(\langle\tau_{1}\rangle=0.295\), \(\langle\tau_{2}\rangle=0.077\), while the average residence times in the two states are \(\langle T_{1}\rangle=3.413\), \(\langle T_{2}\rangle=1.916\).

The steady state fraction \(f\equiv n_{2}/(n_{1}+n_{2})\) of active protocells in the population can be computed from the eigenvector of \(A\) corresponding to its largest eigenvalue, \(e_{1}\). The result is:
\[f=\frac{\lambda_{1}}{e_{1}+\lambda_{1}+\lambda_{2}-\mu_{2}}, \tag{13}\]
where \(e_{1}=\frac{1}{2}[\text{tr}(A)+\sqrt{(\text{tr}(A))^{2}-4\det(A)}]\), \(\text{tr}(A)=\mu_{1}-\lambda_{1}+\mu_{2}-\lambda_{2}\) and \(\det(A)=(\mu_{1}-\lambda_{1})(\mu_{2}-\lambda_{2})\ -\ \lambda_{1}\lambda_{2}.\) A calculation of \(f\) for a finite but large ceiling \(K\) is given in the Appendix C.1 and yields the same answer as (13), independent of \(K\).
Using the averages given in the caption of Fig. 4 to determine the components of \(A\), this calculation yields \(f=0.925\pm 0.014\) (mean \(\pm\) standard error), with the error arising from the finite sample estimation of the averages. This agrees with the fraction found (over long times) in the stochastic steady state of the simulation of Fig. 5, namely \(0.937\pm 0.019\) (mean \(\pm\) standard deviation). The Supplementary Material Section 3 shows the agreement between simulations and the mean field model at other values of \(\kappa\).
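The number quoted above can be reproduced directly from Eq. (13) and the averages in the caption of Fig. 4; a quick numerical check:

```python
import numpy as np

tau1, tau2 = 0.295, 0.077                    # mean interdivision times (Fig. 4)
T1, T2 = 3.413, 1.916                        # mean residence times (Fig. 4)
mu1, mu2 = np.log(2) / tau1, np.log(2) / tau2
lam1, lam2 = 1 / T1, 1 / T2

A = np.array([[mu1 - lam1, lam2],
              [lam1, mu2 - lam2]])
tr, det = np.trace(A), np.linalg.det(A)
e1 = 0.5 * (tr + np.sqrt(tr**2 - 4 * det))   # largest eigenvalue of A
f = lam1 / (e1 + lam1 + lam2 - mu2)          # Eq. (13)
print(f"f = {f:.3f}")                        # prints f = 0.925
```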
Note in Fig. 5 that even though the active protocells have a higher growth rate than the inactive, a finite fraction of the inactive still survives in the steady state. This is because of the nonzero transition probability \(\lambda_{2}\) from the active to the inactive state. If \(\lambda_{2}\) had been zero, the eigenvector of \(A\) corresponding to its largest eigenvalue would have been \((0\ \ 1)^{T}\) implying that the inactive state is extinct in the steady state. When \(\lambda_{2}\neq 0\), one can show (see Appendix C.2) that if
\[\lambda_{2}\ll\mu_{2}-\mu_{1}+\lambda_{1}, \tag{14}\]
then \(f\simeq 1-\frac{\lambda_{2}}{\mu_{2}-\mu_{1}+\lambda_{1}}\) is close to unity. The quantity \(1/(\mu_{2}-\mu_{1}+\lambda_{1})\) defines a time scale of the single protocell dynamics. The above condition means that if the average lifetime \(\langle T_{2}\rangle\) (\(=\frac{1}{\lambda_{2}}\)) of the active state is much larger than this time scale, ACS active protocells will come to dominate the population. Another way of writing this condition is \(\mu_{2}\langle T_{2}\rangle-\mu_{1}\langle T_{2}\rangle+\frac{\langle T_{2}\rangle}{\langle T_{1}\rangle}\gg 1\). Therefore a sufficient condition for active protocells to dominate is that the active protocell divides many times in its typical lifetime (\(\mu_{2}\langle T_{2}\rangle\gg 1\)) _and_ grows much faster than the inactive protocell (\(\mu_{2}\gg\mu_{1}\)).
Note also that a nonzero \(\lambda_{1}\) is what ensures that even if we start with a zero population of active protocells, one active protocell will sooner or later be produced by chance, leading eventually to a fraction \(f\) of active protocells.
## Discussion
In this work we have constructed an example that shows (i) how autocatalytic sets of reactions inside protocells can spontaneously boost themselves into saliency and enhance the populations of their product molecules including catalysts, and (ii) how such protocells (where the ACS is active) can come to dominate in a population of protocells. Encasement within protocells serves two important functions. (i) The small size of a protocell allows a small number fluctuation of the catalyst molecules to take their concentration past the basin boundary of the attractor in which the ACS is inactive into the basin of the active attractor, thereby causing the protocell to transition from an inactive to active state. A large container would require a larger number fluctuation to achieve the same transition, which is more unlikely. (ii) Protocells in the active state grow at a faster rate than the inactive state, thereby eventually dominating in population. The differential growth rate is a consequence of the fact that the protocell size depends upon its internal chemical populations, a possibility that is precluded when we discuss chemical dynamics in a fixed size container. Therefore, in this example, protocells aid both the generation and the amplification of autocatalytic sets.
The differential growth rates of the two states are not posited exogenously, but arise endogenously within the model from the underlying chemical dynamics defined by Eqs (5-8) (and their stochastic version). The additional assumption made is that upon reaching a critical size a protocell divides into two daughters that share its contents. This property can arise naturally due to some physical instability. Collectively these assumptions lead to the properties of heredity, heritable variation (the variation is heritable because once the fluctuation pushes it into a new basin of attraction a protocell typically descends into its new attractor in a short time), and differential fitness in a purely physico-chemical
system. This leads to the dynamics of the two subpopulations of protocells shown in Fig 5 which is similar to that of natural selection. (A difference is that the slower growing subpopulation never goes completely extinct, due to the non-zero probability of transition of a faster growing protocell into a slower growing one.)
The process of going from an initial state with no ACS to its establishment in a population of protocells, discussed here, might be considered the first step in the evolution of the ACS. One might wonder how the ACS would evolve further from there. It has been shown that chemistries containing ACSs exhibit multistability in fixed sized containers. In some of these chemistries simpler ACSs involving small catalyst molecules are nested inside more complex ACSs having larger and more efficient catalyst molecules [36]. The multiple attractor states correspond to ACSs with progressively larger molecules and higher level of complexity being active. It is possible that by embedding such chemistries within protocells, the mechanism discussed here could allow one to realize a punctuated evolutionary path through sequentially more complex ACS attractors to a state of high chemical complexity from an initial state that only contains small molecules and no ACS. This is a task for the future.
The specific artificial chemistry and protocell properties studied here are highly idealized ones. The object was to demonstrate a mechanism in principle. However, we believe the mechanism is quite general and it should be possible to demonstrate it in other models (e.g., [10, 26]) provided multistability in a fixed environment and the emergence of distinct timescales, as discussed in the present work, can be established. We remark that though we have been primarily thinking of protocells as vesicles (motivated by similar models of bacterial physiology), some of our methods might also be useful in the context of micelles. Recently Kahana et al. [30] presented a model of the stochastic dynamics of lipid micelles which had multiple attractors corresponding to distinct composomes. It would be interesting to compare the growth rates of micelles in different attractors as well as the transition rates between the attractors in their model.

Figure 5: Time evolution of a population of protocells starting from a single protocell in the inactive state. Shown is the number of protocells in the inactive state (green), active state (orange), and their sum (blue). As inactive protocells grow and divide, their population increases. The orange curve departs from zero when one of the inactive protocells makes a stochastic transition to the active state. The two populations have different growth rates. After the total population reaches an externally imposed ceiling \(K\) (=100 in this figure), upon each cell division a randomly chosen protocell is removed from the population. The population eventually settles down in a stochastic steady state dominated by the active protocells. This is the natural selection of an autocatalytic state. Parameter values are as in Fig. 4, \(K=100\).
We note that there have been independent experimental developments in constructing bistable autocatalytic chemistries [45] and self-replicating protocells [21]. It is also established that small peptides exhibit catalytic properties [46] and they can be encapsulated within protocells to promote protocellular growth [47]. A recent paper also shows the coupling of a simple autocatalytic reaction with the compartment growth and division [48]. A synthesis of these approaches might result in the experimental realization of the mechanism described in the present work.
We have considered dynamics at three levels: One is the chemical dynamics of molecules within a single protocell. This depends upon molecular parameters such as rate constants, efficiency of the catalyst molecule, etc. From this we extracted effective parameters at the second level: that of a single protocell (growth rates of the two protocell states, residence times, etc.). These were then used to derive the dynamics at the third level consisting of the population of protocells. This enabled an understanding of the conditions under which active protocells would dominate. Such an approach might be useful in other settings, for example in understanding certain aspects of bacterial ecology from molecular models of single bacterial cells.
## Acknowledgements
This research was partially supported by the Indo French Centre for the Promotion of Advanced Research (IFCPAR) project No. 5904-3. AYS would like to thank the University Grants Commission, India for a Senior Research Fellowship and a Junior Research Fellowship. We thank Sandeep Krishna, Philippe Nghe, Parth Pratim Pandey, Shagun Nagpal Sethi, Yashika Sethi and Atiyah Zafar for fruitful discussions. We would like to acknowledge the hospitality of the International Centre for Theoretical Sciences, Bengaluru and the International Centre for Theoretical Physics, Trieste, where part of this work was done.
## Appendix A Reaction probabilities used in Gillespie Algorithm

\begin{table}
\begin{tabular}{c l c l} \hline \hline Reaction & Reaction Type & Reaction Probability & Deterministic \\ & & per unit time & rate of reaction \\ \hline \hline \(A(1)_{ext}+A(2)\xrightarrow{\alpha X_{2}}A(1)+A(2)\) & transport & \(\alpha X_{2}\) & \(\alpha X_{2}\) \\ \hline \(A(1)+A(1)\xrightarrow{k_{F}}A(2)\) & spontaneous & \(k_{F}X_{1}(X_{1}-1)V^{-1}\) & \(k_{F}X_{1}^{2}V^{-1}\) \\ \hline \(A(1)+A(1)+A(4)\xrightarrow{\kappa k_{F}}A(2)+A(4)\) & catalysed & \(\kappa k_{F}X_{4}V^{-1}X_{1}(X_{1}-1)V^{-1}\) & \(\kappa k_{F}X_{4}V^{-1}X_{1}^{2}V^{-1}\) \\ \hline \(A(2)+A(2)\xrightarrow{k_{F}}A(4)\) & spontaneous & \(k_{F}X_{2}(X_{2}-1)V^{-1}\) & \(k_{F}X_{2}^{2}V^{-1}\) \\ \hline \(A(2)+A(2)+A(4)\xrightarrow{\kappa k_{F}}A(4)+A(4)\) & catalysed & \(\kappa k_{F}X_{4}V^{-1}X_{2}(X_{2}-1)V^{-1}\) & \(\kappa k_{F}X_{4}V^{-1}X_{2}^{2}V^{-1}\) \\ \hline \(A(2)\xrightarrow{k_{R}}A(1)+A(1)\) & spontaneous & \(k_{R}X_{2}\) & \(k_{R}X_{2}\) \\ \hline \(A(2)+A(4)\xrightarrow{\kappa k_{R}}A(1)+A(1)+A(4)\) & catalysed & \(\kappa k_{R}X_{4}V^{-1}X_{2}\) & \(\kappa k_{R}X_{4}V^{-1}X_{2}\) \\ \hline \(A(4)\xrightarrow{k_{R}}A(2)+A(2)\) & spontaneous & \(k_{R}X_{4}\) & \(k_{R}X_{4}\) \\ \hline \(A(4)+A(4)\xrightarrow{\kappa k_{R}}A(2)+A(2)+A(4)\) & catalysed & \(\kappa k_{R}X_{4}V^{-1}(X_{4}-1)\) & \(\kappa k_{R}X_{4}^{2}V^{-1}\) \\ \hline \(A(2)\xrightarrow{\phi}\emptyset\) & degradation & \(\phi X_{2}\) & \(\phi X_{2}\) \\ \hline \(A(4)\xrightarrow{\phi}\emptyset\) & degradation & \(\phi X_{4}\) & \(\phi X_{4}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of unidirectional reactions in the model and their reaction probabilities per unit time. \(V\) appearing in the table is given by the r.h.s. of Eq. (8) in the main paper text. Deterministic rates of the reactions in the last column are as in Eqs. (5-7) of the main paper.

## Appendix B Single cell residence time and interdivision time distributions for active/inactive states of the protocell

### Definition of an active/inactive state of a protocell

In order to obtain residence times and inter-division times in the active and inactive states of the protocell, an inference has to be made from the intracellular populations about the current state of the cell: whether it is active or inactive. The protocell was defined to be in the active (inactive) state if its concentration profile was in the basin of attraction of the active (inactive) attractor. The two basins are separated by a basin boundary (as shown by the black dashed curve in Fig. 3A of the main paper). One might choose an alternative criterion based on 'closeness' to the attractor state, but for the purposes of the present work, the above definition is useful. In practice, for simplicity in the present work, the concentration of the catalyst molecule (\(x_{4}\)) at the unstable fixed point (through which the basin boundary passes) was taken to be the threshold value for determining the state of the protocell. If \(x_{4}\) was above this value, the cell was labelled as active; otherwise it was labelled as inactive. This is an approximate implementation of the above definition. While the actual values of transition times would change when the above definition is implemented exactly, we do not expect our qualitative conclusions to depend significantly on this approximation.

### Definition of residence time and interdivision time

While tracking the trajectory of a single lineage of cells (as shown in Fig. 4 of the main paper), the state of the protocell (1 if active; 0 if inactive) was determined for the mother protocell at every division along with the time of the division event. (For more details on data generation, see Section 4 of the Supplementary Material.) A long such trajectory gave a sequence of division times and a corresponding sequence of ones and zeros. A contiguous subsequence consisting of only ones bordered by zeros (only zeros bordered by ones) at both ends of the subsequence was declared to be an instance of residence in the active state (inactive state). The duration of such a subsequence (equal to the difference between the ending and starting times of the subsequence as measured by the corresponding division times) was taken to be the lifetime of the state. Within an active or inactive subsequence, the difference between two consecutive division times was taken to be an instance of an interdivision time in that state.
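A minimal sketch of this bookkeeping, assuming the lineage simulation has produced a list of division times and the corresponding 0/1 state of the mother at each division:

```python
from itertools import groupby

def residence_and_interdivision(div_times, states):
    # Split the lineage record into maximal runs of constant state.
    runs, i = [], 0
    for state, grp in groupby(states):
        n = len(list(grp))
        runs.append((state, div_times[i:i + n]))
        i += n
    # Residence time of a run: last minus first division time in it
    # (runs with a single division carry no duration and are skipped).
    residence = [(s, ts[-1] - ts[0]) for s, ts in runs if len(ts) > 1]
    # Interdivision times: gaps between consecutive divisions in a run.
    interdiv = [(s, b - a) for s, ts in runs for a, b in zip(ts, ts[1:])]
    return residence, interdiv
```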
Fig. 6 shows the histograms of the residence times and interdivision times in the active and inactive states using the above definitions, for one set of parameter values.
Figure 6: Distribution of residence times (time spent) and inter-division time in active and inactive states. Data was collected by simulating a single lineage of growing and dividing protocells over 2000 division cycles. Parameter values: \(\kappa=2400\), \(k_{F}=1,\ \phi=20,\ \alpha=100\). The average values of the interdivision times are \(\langle\tau_{1}\rangle=0.2947\), \(\langle\tau_{2}\rangle=0.0772\), while the average residence times in the two states are \(\langle T_{1}\rangle=3.413\), \(\langle T_{2}\rangle=1.916\).
## Appendix C The steady state fraction of ACS Active protocells (\(f\)) in the protocell population
An expression was derived for the asymptotic fraction of active protocells in the protocell population dynamics (Eq. (13) of the main paper). The derivation used mean field equations for the populations of the active and inactive protocells and assumed indefinite growth of the two populations. Here we show that the same expression follows if we truncate the total population of protocells at a large ceiling \(K\). We analyze the conditions under which this fraction is close to unity. We also present numerical evidence that the fraction so obtained agrees with the actual stochastic simulations of protocell population dynamics at different values of \(\kappa\).
### Calculation of \(f\) for a system with finite ceiling \(K\) on the total population
In our stochastic simulations of protocell population dynamics, the total number of protocells increases until it reaches the ceiling \(K\). After that it becomes constant because whenever a protocell divides one protocell chosen at random is removed from the population. Consider the dynamics of \(n_{1}\) and \(n_{2}\) (populations of the inactive and active protocells respectively) after the total population \(n_{1}+n_{2}\) has reached this constant value \(K\). If \(K\) is sufficiently large, we can use the same equations as before (namely, Eqs. (10) and (11) of the main text) modified by the addition of a death term on the right hand side. In other words,
\[\dot{n}_{1}= \mu_{1}n_{1}-\lambda_{1}n_{1}+\lambda_{2}n_{2}-\beta n_{1} \tag{15}\] \[\dot{n}_{2}= \mu_{2}n_{2}-\lambda_{2}n_{2}+\lambda_{1}n_{1}-\beta n_{2}, \tag{16}\]
where the last term in both equations accounts for the removal of active or inactive protocells in proportion to their existing population (the average effect of the random removal of a protocell from the population). \(\beta\) is chosen so that the total population is constant, i.e., \(n_{1}+n_{2}=K\). Then, using \(\dot{n}_{1}+\dot{n}_{2}=0\), we get
\[\beta=\frac{\mu_{1}n_{1}+\mu_{2}n_{2}}{n_{1}+n_{2}}=\frac{\mu_{1}n_{1}+\mu_{2} n_{2}}{K}. \tag{17}\]
Eliminating \(n_{1}=K-n_{2}\) from the \(\dot{n}_{2}\) equation, and setting \(\dot{n}_{2}=0\) to obtain a fixed point, we obtain a quadratic equation for the fixed-point value of \(n_{2}\):
\[\frac{(\mu_{2}-\mu_{1})}{K}n_{2}^{2}-(\mu_{2}-\mu_{1}-\lambda_{2}-\lambda_{1}) n_{2}-\lambda_{1}K=0.\]
This has the solution
\[n_{2}=\frac{K}{2(\mu_{2}-\mu_{1})}\left[(\mu_{2}-\mu_{1}-\lambda_{2}-\lambda_{1})\pm\sqrt{(\mu_{2}-\mu_{1}-\lambda_{2}-\lambda_{1})^{2}+4\lambda_{1}(\mu_{2}-\mu_{1})}\right]. \tag{18}\]
When \(\mu_{2}-\mu_{1}\) is positive (as is the case in our simulations), the positive root must be chosen to get a physical solution (non-negative value of \(n_{2}\)). This yields
\[f\equiv\frac{n_{2}}{K}=\frac{1}{2(\mu_{2}-\mu_{1})}\left[(\mu_{2}-\mu_{1}- \lambda_{2}-\lambda_{1})+\sqrt{(\mu_{2}-\mu_{1}-\lambda_{2}-\lambda_{1})^{2}+ 4\lambda_{1}(\mu_{2}-\mu_{1})}\right]. \tag{19}\]
The expression of \(f\) is independent of \(K\). A bit of algebra shows that this expression is identical to that in Eq. (13) of the main paper.
### Condition for active protocells to dominate the population
The above expression for \(f\) can be written as
\[f =\frac{1}{2}\bigg{[}1-\frac{\lambda_{1}+\lambda_{2}}{\mu_{2}-\mu_{1 }}+\sqrt{1+\bigg{(}\frac{\lambda_{1}+\lambda_{2}}{\mu_{2}-\mu_{1}}\bigg{)}^{2}+ \frac{2(\lambda_{1}-\lambda_{2})}{\mu_{2}-\mu_{1}}}\bigg{]} \tag{20}\] \[=\frac{1}{2}[1-s+\sqrt{1+s^{2}+2d}], \tag{21}\]
where \(s=\frac{\lambda_{1}+\lambda_{2}}{\mu_{2}-\mu_{1}}\) and \(d=\frac{\lambda_{1}-\lambda_{2}}{\mu_{2}-\mu_{1}}\). This shows that \(f\) only depends upon the two dimensionless combinations \(s\) and \(d\) of the four parameters.
From the above expression it immediately follows that \(f=1\) when \(\lambda_{2}=0\), as already mentioned in the main text. We can also ask: How small should \(\lambda_{2}\) be for \(f\) to be close to unity? To see this it is useful to introduce the combinations \(z=\frac{\lambda_{1}}{\mu_{2}-\mu_{1}}\) and \(x=\frac{\lambda_{2}}{\mu_{2}-\mu_{1}}\). Then
\[f=\frac{1}{2}[1-(z+x)+\sqrt{1+(z+x)^{2}+2(z-x)}]=\frac{1}{2}\bigg{[}1-(z+x)+(z +1)\sqrt{1+\frac{2(z-1)x+x^{2}}{(z+1)^{2}}}\bigg{]}. \tag{22}\]
When \(\frac{x}{z+1}\ll 1\), the second term inside the square root is much smaller than unity. Performing a Taylor expansion, we get \(f\simeq 1-\frac{x}{z+1}\) to leading order in \(\frac{x}{z+1}\). This shows that the condition for the active protocells to dominate in the steady state of the population dynamics is
\[\frac{x}{z+1}=\frac{\lambda_{2}}{\mu_{2}-\mu_{1}+\lambda_{1}}\ll 1. \tag{23}\]
We remark that, as a special case, if \(x\equiv\frac{\lambda_{2}}{\mu_{2}-\mu_{1}}\ll 1\), then the above condition holds (since \(\lambda_{1}\geq 0\)) and active protocells dominate. However, inequality (23) gives a more general condition for active protocell domination.
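The quality of this leading-order approximation is easy to verify numerically; a short check (in Python, with arbitrary values of \(z\) and \(x\)):

```python
import numpy as np

def f_exact(z, x):                  # Eq. (22)
    return 0.5 * (1 - (z + x) + np.sqrt(1 + (z + x) ** 2 + 2 * (z - x)))

z = 0.5
for x in (0.3, 0.1, 0.01):          # approximation improves as x/(z+1) -> 0
    print(x, f_exact(z, x), 1 - x / (z + 1))
```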
|
2303.01755
|
Kittel law and domain formation mechanism in PbTiO$_3$/SrTiO$_3$
superlattices
|
We report second-principles simulations on the structural and energetic
properties of domains in (PbTiO$_{3}$)$_{n}$/(SrTiO$_{3}$)$_{n}$ superlattices.
For the explored layer thicknesses ($n$ ranging between 8 and 16 unit cells) and
lateral sizes of the domains, the most stable configuration corresponds to
polar domains separated by a sequence of counter-rotating vortices
(clockwise/counterclockwise) perpendicular to the stacking direction and acting
as domain walls. The balance between the domain wall energy and the
electrostatic energy yields an optimal domain period $\omega$ that is
proportional to the square-root of the thickness of the PbTiO$_{3}$ layer,
following the Kittel law. For a given lateral size of the simulation box,
suboptimal domain structures (with a width larger than the one predicted by the
Kittel law) can be obtained in a metastable form. However, at finite
temperature, molecular dynamics simulations show the spontaneous change of
periodicity, which implies the formation of new domains whose generation is
initiated by the nucleation of vortices and antivortices at the interface
between the SrTiO$_{3}$ and the PbTiO$_{3}$ layers. The vortices progressively
elongate and eventually annihilate with the antivortices yielding the formation
of new domains to comply with the Kittel law via a topological phase transition.
|
Fernando Gómez-Ortiz, Hugo Aramberri, Juan M. López, Pablo García-Fernández, Jorge Íñiguez, Javier Junquera
|
2023-03-03T07:41:45Z
|
http://arxiv.org/abs/2303.01755v1
|
# Kittel law and domain formation mechanism in PbTiO\({}_{3}\)/SrTiO\({}_{3}\) superlattices
###### Abstract
We report second-principles simulations on the structural and energetic properties of domains in (PbTiO\({}_{3}\))\({}_{n}\)/(SrTiO\({}_{3}\))\({}_{n}\) superlattices. For the explored layer thicknesses (\(n\) ranging between 8 and 16 unit cells) and lateral sizes of the domains, the most stable configuration corresponds to polar domains separated by a sequence of counter-rotating vortices (clockwise/counterclockwise) perpendicular to the stacking direction and acting as domain walls. The balance between the domain wall energy and the electrostatic energy yields an optimal domain period \(\omega\) that is proportional to the square root of the thickness of the PbTiO\({}_{3}\) layer, following the Kittel law. For a given lateral size of the simulation box, suboptimal domain structures (with a width larger than the one predicted by the Kittel law) can be obtained in a metastable form. However, at finite temperature, molecular dynamics simulations show the spontaneous change of periodicity, which implies the formation of new domains whose generation is initiated by the nucleation of vortices and antivortices at the interface between the SrTiO\({}_{3}\) and the PbTiO\({}_{3}\) layers. The vortices progressively elongate and eventually annihilate with the antivortices, yielding the formation of new domains to comply with the Kittel law via a topological phase transition.
## I Introduction
A common feature among the family of ferroelectric materials is the formation of domain structures, regions of space with different polarization separated by boundaries called domain walls [1; 2]. Domains of opposite polarization lead to an overall charge neutrality at the surfaces reducing the depolarization field and the associated electrostatic energy.
The structure and energetics of domains in ferroic materials were first addressed by Landau and Lifshitz [3], and one decade later by Kittel [4; 5] in his studies on ferromagnetic domains. The delicate balance between the energy of the boundary between domains, the magnetic field energy of the configuration, and the anisotropy energy of the spin orientation determines the relationship between the width of the domains, \(\omega\), and the thickness of the material, \(d\) [4]. Adding up all the energy costs and minimizing the total with respect to the domain size leads to a square-root dependence of \(\omega\) as a function of \(d\). This is the so-called Landau-Kittel law, \(\frac{\omega^{2}}{\delta}=A\cdot d\), where \(A\) is a dimensionless proportionality constant and \(\delta\) is the thickness of the domain wall. This law was extended to ferroelectric materials by Mitsui and Furuichi [6] in their study of the domain structure of the Rochelle salt, where the electrostatic, elastic, and gradient energies determine the number and width of the domains for a given thickness of the material. The square-root dependence was further generalised under specific periodicities and screening conditions to the case of ultrathin ferroelectric layers [7] and to the case of superlattices with paraelectric materials [8]. Moreover, Roitburd extended it to ferroelastic thin films under epitaxial strain [9]. Therefore, it seems that the Landau-Kittel dependence of stripe domain width on film thickness is a general property of all ferroics [2].
Beyond the analytical derivations, the validity of the law has been confirmed by first-principles-based studies in ferroelectric [10] and multiferroic [11] thin films with thicknesses down to three unit cells. Following the spirit of these two works, we widen the applicability of the Kittel law to the case of ferroelectric/dielectric superlattices characterized by a ground state consisting of polar domains separated by a sequence of counter-rotating vortices (clockwise/counterclockwise) acting as domain walls [12; 13; 14]. Using second-principles simulations, we validate the law for this complex polarization texture. Very interestingly, we show that when the system is initialized from a metastable state, in which the density of domains is smaller than that predicted by the Kittel law, it evolves upon heating to the ground state via a topological phase transition. The driving mechanism for the generation and closure of new domains is the recombination of vortex and antivortex defects generated at the interface between PbTiO\({}_{3}\) and SrTiO\({}_{3}\).
## II Methodology
The second-principles simulations were performed using the same methodology presented in previous
works [15; 16], as implemented in the SCALE-UP package [15; 17]. The second-principles parameters of both materials were fitted from density functional theory, imposing a hydrostatic pressure of \(-11.2\) GPa to counter the underestimation of the cubic lattice constant (taken as the reference structure) by the local density approximation. We imposed an epitaxial constraint assuming in-plane lattice constants of \(a=b=3.901\) Å, forming an angle \(\gamma=90^{\circ}\), mimicking the conditions of a SrTiO\({}_{3}\) substrate. The interatomic potentials, and the approach to simulate the interface, are the ones first introduced in Ref. [16]. For a given value of the supercell periodicity \(n\) in \((\text{PbTiO}_{3})_{n}/(\text{SrTiO}_{3})_{n}\), several values of the lateral size \(L\) were relaxed, making them commensurate with the number of simulated domains. We solved the models by running Monte Carlo simulated annealing from 60 K down to very low temperatures, typically comprising 20,000 relaxation sweeps. Regular Langevin molecular dynamics at a constant temperature of \(T=90\) K was also used to solve the models in order to follow the dynamics of the emergent domains. For computational feasibility, we have focused on a simulation supercell made of a periodic repetition of \(L\times 1\times 2n\) elemental perovskite unit cells for the Monte Carlo simulated annealings and of \(L\times 10\times 2n\) for following the dynamics of the domains. As proven in [18], at low temperatures (\(T<73\) K), the vortices do not vary along the axial \(y\)-direction. Therefore, the simplification of taking one unit cell along this direction does not affect the validity of the model while it speeds up the calculations. However, when \(T=90\) K, a sufficiently high number of unit cells along the \(y\)-direction must be considered in order to account for the variation along the axial direction.
We performed the force-constant band calculations using the direct supercell approach as implemented in the phonopy package [19]. To this end, we considered the high-symmetry unit cell of the superlattices (in which the atoms in the PbTiO\({}_{3}\) and SrTiO\({}_{3}\) layers occupy the cubic-like perovskite positions), and repeated it 4\(\times\)4 times in the \(xy\)-plane to build the supercell for the calculations, which we found to be large enough to yield well-converged results. We included the non-analytical contribution to the bands, which accounts for the splitting between the longitudinal and transverse polar bands, as implemented in [20].
Local polarizations are computed, within a linear approximation, as the product of the Born effective charge tensor and the atomic displacements from the reference structure positions, divided by the volume of the unit cell.
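As an illustration, a minimal sketch of this linear-order estimate (Python/NumPy; the array names and shapes are our own conventions, while the actual implementation lives inside the second-principles code):

```python
import numpy as np

def local_polarization(born_charges, displacements, volume):
    """P_i = (1/V) * sum_a Z*_{a,ij} u_{a,j} for one perovskite unit cell.

    born_charges: (n_atoms, 3, 3) Born effective charge tensors Z*,
    displacements: (n_atoms, 3) displacements u from the reference structure,
    volume: unit-cell volume.
    """
    return np.einsum('aij,aj->i', born_charges, displacements) / volume
```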
## III Results
### Validation of Kittel law
We have checked the validity of the Kittel law in our superlattices following a recipe similar to that of Refs. [10; 11]. For different layer thicknesses \(n\) with constant and equal dielectric/ferroelectric ratio, ranging between 8 and 16 unit cells, the lateral size of the supercell to host two domains was optimized [see Fig. 1(a)]. In order to achieve this goal, for every value of \(n\), different lengths of the supercell along the \(x\)-direction, \(L\), were simulated. Once \(n\) and \(L\) were fixed, the initial atomic positions were chosen to mimic a couple of pure Ising domains, where the polarization changes abruptly from pointing upwards (up-domain) to downwards (down-domain) along the \(z\)-direction in just one unit cell, as shown in Fig. 1(b). This configuration was taken as the starting point of the Monte Carlo annealing. The resulting typical dipole configuration, a local minimum at low temperature, is shown in Fig. 1(c). The spontaneous formation of alternating pairs of clockwise/counterclockwise vortices along the \(x\)-direction is clearly visible, together with the development of an axial component of the polarization along the \(y\)-direction. These vortices were already theoretically predicted from phenomenological theories [21; 22], first-principles-based effective Hamiltonians [23], second-principles simulations [14; 16; 18] or full first-principles calculations [12], and experimentally demonstrated [13] in PbTiO\({}_{3}\)/SrTiO\({}_{3}\) superlattices.
For a given layer thickness, the energy per five-atom unit cell as a function of the lateral size of the supercell, always assuming the presence of two domains in the simulation box, is shown in Fig. 2(a). The first observation that can be drawn is that the larger the layer thickness, \(n\), the smaller the energy per unit cell. This fact stems from two different causes. On the one hand, the polarization in the PbTiO\({}_{3}\) layers increases with \(n\), tending to the bulk polarization value and approaching the ground state of the domain. On the other hand, the larger the layer thickness, the smaller the polarization within the SrTiO\({}_{3}\) layer; the system undergoes a transition from an electrostatically "coupled" regime for small \(n\) to a "decoupled" regime for large \(n\) [24; 25; 26]. Since the SrTiO\({}_{3}\) layers will be closer to the ground-state unpolarized configuration, the energy is also reduced. The second observation that can be drawn is that the energy curve of the two-domain structure as a function of the lateral size \(L\) presents a minimum corresponding to the most stable geometry. The larger the layer thickness, the shallower the minimum, which is located at larger values of \(L\). From these minima, we can infer the optimal width of the domain, \(\omega\). Assuming that the domain wall is one unit cell thick [in good agreement with our simulations; see Fig. 1(c) for a typical case], \(\omega=L/2-1\) unit cells. Plotting the square of these optimal widths against the thickness of the PbTiO\({}_{3}\) layer, red dots in Fig. 2(b), we recover a linear behavior as predicted by the Kittel law, a tendency also shown in BiFeO\({}_{3}\) [11] and Pb(Zr,Ti)O\({}_{3}\) ultrathin films [10].
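As a quick illustration of this fitting procedure, the sketch below performs the Kittel-law fit on hypothetical \((n,\omega)\) pairs; the numbers merely stand in for the optima read off Fig. 2(a) and should be replaced by the actual measured minima:

```python
import numpy as np

# hypothetical optimal domain widths (in unit cells) for each periodicity n
n = np.array([8, 10, 12, 14, 16])
omega = np.array([4.0, 4.6, 5.1, 5.5, 5.9])

# Kittel law: omega^2 / delta = A * d; with delta = 1 u.c. and d = n,
# omega^2 should be linear in the PbTiO3 layer thickness
A, intercept = np.polyfit(n, omega**2, 1)
print(f"A ~ {A:.2f}, intercept ~ {intercept:.2f}")
```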
Alternatively, one can try to predict the optimal width of the multidomain structure by studying force-constant
bands like the ones presented in Fig. 3(a). Following the unstable modes along \(\Gamma-X\), we can identify \(q_{\rm min}\) as the wave vector associated with the strongest instability, describing vortex structures like the ones presented in Fig. 3(b). From this value we can infer the optimal width as \(\omega=\frac{1}{2q_{\rm min}}-1\) [see blue squares in Fig. 2(b)]. Interestingly, while the square-root dependence of the domain width as a function of the layer periodicity is nicely reproduced, there exists a discrepancy with the results obtained via the first method. The (harmonic) force-constant analysis predicts narrower domains than the ones obtained from a full energy minimization. This difference must be due to anharmonic effects and/or the fact that the fully relaxed structures feature a combination of phonon mode distortions to optimize the energy. Thus, for example, the development of a slight offset coupled with an in-plane component of the polarization, such as the one shown in Fig. 1(c) and experimentally attained [13], although not captured in the force-constant analysis [see how vortices are centered in Fig. 3(b)], reduces the component of the polarization normal to the surface. The tilt of the polarization reduces the depolarization charges at the surface and allows the widening of the domains, resulting in the underestimation of the domain width by the harmonic analysis.
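For reference, the harmonic estimate for the \(n=9\) superlattice of Fig. 3 reduces to a one-line computation:

```python
q_min = 0.123                  # strongest instability along Gamma-X, Fig. 3(a)
omega = 1 / (2 * q_min) - 1    # half the modulation period minus one wall cell
print(omega)                   # ~3.1 u.c. predicted by the harmonic analysis
```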
### Domain formation
Up to now, the existence of two domains in the simulation box has been imposed in the calculations. In the following we shall study the phase competition between different configurations under suboptimal domain widths. This is shown in Fig. 4, where we compare the energy per unit cell considering two or four domains for a given layer thickness, \(n\), and lateral size of the supercell, \(L\). Again, for each pair \((n,L)\) a Monte Carlo annealing was carried out starting from purely \(180^{\circ}\) Ising-like structures containing two or four domains in the simulation box. In every case, the final relaxed structures display the typical polar vortex configuration, similar to the one shown in Fig. 1(c). In Fig. 4 we plot the evolution of the energy profile per five-atom unit cell as a function of the lateral size of the supercell. As already discussed in Fig. 2(a), for a given number of domain walls in the supercell, the larger the layer thickness \(n\), the smaller the
Figure 1: Structural relaxation of a two-domain structure in (PbTiO\({}_{3}\))\({}_{n}\)/(SrTiO\({}_{3}\))\({}_{n}\) superlattices. (a) Schematic view of the simulation cell periodically repeated in the three directions of the space. Red (blue) regions indicate the positive (negative) polarization domains along the \(z\) direction. (b) Local dipoles of the initial Ising-like configuration presenting a clear two-domain structure for a layer thickness of \(n=12\), and a lateral size \(L=12\) u.c. (c) Corresponding pattern of local dipoles after relaxation, showing an alternating clockwise/counterclockwise polar vortex configuration [14] along \(x\). The domain wall thickness \(\delta\) extends over one unit cell. Colors represent the axial component of the polarization, perpendicular to the plane defined by the vortices.
Figure 2: Optimization of the lateral size of a two-domain structure in (PbTiO\({}_{3}\))\({}_{n}\)/(SrTiO\({}_{3}\))\({}_{n}\) superlattices. (a) Energy per five-atom unit cell as a function of the lateral size of the supercell for different periodicities: red dots (\(n=8\)), blue squares (\(n=10\)), green up-triangles (\(n=12\)), black diamonds (\(n=14\)), and magenta down-triangles (\(n=16\)). (b) Linear fit of the squared optimized width of the domain as a function of the layer thickness by two procedures. Red dots correspond to minimizing the total energy of the supercell, while blue squares correspond to the minimum of the force-constant bands along \(\Gamma-X\).
total energy. But the most important conclusion that can be drawn from Fig. 4 is that, for a given \(n\), a crossover between the two- and four- domain configurations is found for a critical length \(L_{\rm c}\), whose value increases with \(n\), as marked by the filled squares in Fig. 4.
Studying lateral sizes far from the coexistence regions (below or above \(L_{\rm c}\)), different characteristic features can be observed for the evolution of the unstable phases. In Fig. 5 we show the behaviour for a layer thickness of \(n=12\). Below \(L_{\rm c}\) (short lateral sizes) the four-domain structure is only metastable, since the large penalty coming from the gradient energy term is not compensated by the saving in electrostatic energy. Indeed, in this metastable regime the system tends to reduce the gradient energy contribution by forming polarization waves [27] [Fig. 5(a)] along the \([100]_{\rm pc}\) direction, with the concomitant onset of a net in-plane polarization, and the displacement of the center of the vortices in the PbTiO\({}_{3}\) layer towards the interface with SrTiO\({}_{3}\). The smaller the lateral size of the supercell \(L\), the larger the offset between neighboring vortex cores, which do not fit in the center of the PbTiO\({}_{3}\) layer as in the case of larger lateral sizes [see Fig. 5(d)]. This phenomenon has also been observed in BiFeO\({}_{3}\) ultrathin films [11].
The energetically most favourable configuration presents only two domains [Fig. 5(c)], with larger domain sizes and a smaller number of domain walls. However, the energy of this configuration increases with \(L\). Above \(L_{\rm c}\) (long lateral sizes), the two-domain structure [Fig. 5(d)] becomes metastable since the electrostatic energy penalty starts to grow and become dominant. Therefore, we observe new patterns containing two vortices and two antivortices at the interface with SrTiO\({}_{3}\), as will be further discussed in Fig. 6. Indeed, increasing the temperature we observe how the system is able to
Figure 4: Energy profile per five atom unit cell as a function of the lateral size of the supercell for the two- (filled dots) and four- (open dots) domain structures. Different layer thicknesses are indicated by colors: red (\(n=8\)), blue (\(n=10\)), green (\(n=12\)), black (\(n=14\)) and magenta (\(n=16\)). Filled black squares indicate the crossing point where the four-domain structure becomes more stable. Numbers 1-4 label the different dipole patterns plotted in Fig. 5.
Figure 3: (a) Force-constant bands along \(\Gamma-X\) obtained after diagonalization of the force-constant matrix in a (PbTiO\({}_{3}\))\({}_{9}\)/(SrTiO\({}_{3}\))\({}_{9}\) supercell. Dashed line indicates the energy of the centrosymmetric configuration, which is taken as the reference. Red dot indicates the position of the strongest instability, located at \(q_{\rm min}=0.123\) in fractional units. (b) Relaxed structure following the strongest instability of the force-constant bands.
escape from this metastable configuration and transit to a state with four domains, as shown in Fig. 5(b).
The new vortices formed at the interface between the PbTiO\({}_{3}\) and SrTiO\({}_{3}\) layers, shown in Fig. 5(d), serve as nucleation points of new down (respectively, up) domains that divide the already existing up (respectively, down) polarization regions. The formation of these new domains reduces the polarization charges generated at the interface.
For the in-plane lattice constant of SrTiO\({}_{3}\) assumed in our simulations [14], and at low enough temperatures (\(T<50\) K), this state is a long-lived metastable phase. The new topological defects formed at the interface are not able to propagate and close the new domain. Increasing the temperature (beyond 90 K), or inducing compressive strain (beyond -0.5 %) on the sample, the energy barrier can be overcome and we observe the formation of new domains. Interestingly, the transition to the optimal domain configuration is not observed at high domain densities. Below \(L_{\rm c}\) the system is trapped in a polarization wave and cannot transit to a lower-density domain configuration by increasing the temperature. This asymmetry is a consequence of the different nature of the electrostatic and gradient energy penalties.
Molecular dynamics simulations at constant temperature show how the recombination of vortex-antivortex pairs is the driving mechanism for the domain propagation through the sample until the new domain is completely formed.
In Fig. 6(a) we show in detail the case of an \(n=14\), \(L=28\) supercell at a constant temperature \(T=90\) K and a slight compressive strain of \(-0.5\%\). There we can notice the balance of vortex and antivortex defects, resulting in zero net vorticity over the supercell, as stated by the Poincaré-Hopf theorem for our specific periodic boundary conditions. The antivortex textures are mostly formed at the SrTiO\({}_{3}\) layers, where the magnitude of the polarization and the concomitant electrostatic energy of head-to-head and tail-to-tail domains is smaller. This is in accordance with first-principles calculations [12].
In Fig. 6(b) we analyze the time evolution of a portion within the up domain of the same superlattice [see dashed square in Fig. 6(a)], in a region where new polarization vortices have been formed at the interfaces between the PbTiO\({}_{3}\) and the SrTiO\({}_{3}\) layers. The presence of two vortices (red circles) and an antivortex (light-blue circle) is clearly observed both at the top and the bottom interface. Starting from this configuration, these vortices and antivortices change their shapes in order to reduce the total energy of the system. Initially, the vortices elongate, while keeping their centers essentially at the same positions. This has two main consequences. First, locally, the number of unit cells with down polarization increases with time [from three at the initial configuration to four
Figure 5: Polarization map of the relaxed structures labeled from 1 to 4 in Fig. 4 for different lateral sizes \(L\) and a layer thickness of \(n=12\). Black arrows indicate the local dipoles, projected onto the \((x,y)\) plane. The axial component of the polarization along the \([010]_{\rm pc}\) direction is represented by the green and magenta color map. Units of the dipoles in \(e\times\) Bohr, where \(e\) is the electron charge.
Figure 6: Polarization map for an \(n=14\), \(L=28\) supercell at finite temperature (\(T=90\) K) and compressive epitaxial strain of \(-0.5\%\). (a) Initial two-domain structure configuration. Dashed square delimits the upwards domain studied for the dynamics in (b); dashed arrows within the SrTiO\({}_{3}\) are a guide to the eye to locate one of the different antivortex structures. (b) Temporal evolution obtained by molecular dynamics simulations at finite temperature. Red dots indicate the locations of vortices, while light- and dark-blue dots indicate the locations of antivortices of vorticity \(-1\) and \(-2\), respectively. Meaning of the arrows and colors as in Fig. 5.
at 300 or 400 fs, or even five at 550 fs; see dashed ovals in Fig. 6(b)]. Second, the region where the local polarization points in-plane to close the vortices moves towards the center of the PbTiO\({}_{3}\) layer, and so does the center of the antivortices. At 550 fs, the two antivortices merge to form an antivortex with topological charge -2 at the center of the PbTiO\({}_{3}\) layer [dark-blue point in Fig. 6(b)]. The field disturbance doubles its charge with a high energetic cost [in accordance with the Kosterlitz-Thouless analysis of the simple XY model [28; 29], where the energy of the vortices increases (quadratically) with the vorticity]. This is the reason why this state is very short-lived in the molecular dynamics simulations. Only 50 fs later, it annihilates with two vortices. In this process a new domain with down polarization is formed, together with two new elongated clockwise/counterclockwise pairs. Finally, the new domain widens until it recovers the optimal lateral size determined by the Kittel law.
## IV Conclusions
In summary, we theoretically extend the application of the Kittel law to the polar vortex phase in (PbTiO\({}_{3}\))\({}_{n}\)/(SrTiO\({}_{3}\))\({}_{n}\) superlattices. For the explored layer thicknesses, the square-root dependence of the domain period on the thickness of PbTiO\({}_{3}\) is recovered by two different procedures: (_i_) full minimization of the energy where all possible interactions are considered, and (_ii_) analysis of the harmonic force-constant bands. We find that the harmonic approach predicts narrower domains, which is consistent with the fact that anharmonic effects, like the development of an offset, tend to reduce the depolarizing fields in the structure.
Moreover, studying the phase competition under suboptimal domain widths, we showed how, at low domain densities, new domains can form to relax the electrostatic constraints. These domains nucleate as vortex/antivortex pair defects at the interfaces with SrTiO\({}_{3}\) and propagate through the lattice by means of recombination until the new domains are completely formed. This vortex/antivortex recombination is driven by the high energy cost of polarization patterns containing vortex/antivortex pairs.
###### Acknowledgements.
F.G.-O., P.G.-F., and J.J. acknowledge financial support from Grant No. PGC2018-096955-B-C41 funded by MCIN/AEI/10.13039/501100011033 and by ERDF "A way of making Europe" by the European Union. F.G.-O. acknowledges financial support from Grant No. FPU18/04661 funded by MCIN/AEI/10.13039/501100011033. J.M.L. was supported by Grant No. PID2021-125543NB-I00 funded by MCIN/AEI/10.13039/501100011033 and by ERDF "A way of making Europe" by the European Union. H.A. and J.I. were funded by the Luxembourg National Research Fund through Grant C21/MS/15799044/FERRODYNAMICS. The authors thankfully acknowledge the computing time at the Altamira supercomputer and the technical support provided by the Instituto de Física de Cantabria (IFCA) and Universidad de Cantabria (UC). The authors would also like to thank José Ángel Herrero for his valuable assistance with the supercomputing environment HPC/HTC cluster "Calderón", supported by datacenter 3Mares, from Universidad de Cantabria.
|
2307.02973
|
Pruning vs Quantization: Which is Better?
|
Neural network pruning and quantization techniques are almost as old as
neural networks themselves. However, to date only ad-hoc comparisons between
the two have been published. In this paper, we set out to answer the question
of which is better: neural network quantization or pruning? By answering this
question, we hope to inform design decisions made on neural network hardware
going forward. We provide an extensive comparison between the two techniques
for compressing deep neural networks. First, we give an analytical comparison
of expected quantization and pruning error for general data distributions.
Then, we provide lower bounds for the per-layer pruning and quantization error
in trained networks, and compare these to empirical error after optimization.
Finally, we provide an extensive experimental comparison for training 8
large-scale models on 3 tasks. Our results show that in most cases quantization
outperforms pruning. Only in some scenarios with very high compression ratios,
pruning might be beneficial from an accuracy standpoint.
|
Andrey Kuzmin, Markus Nagel, Mart van Baalen, Arash Behboodi, Tijmen Blankevoort
|
2023-07-06T13:18:44Z
|
http://arxiv.org/abs/2307.02973v2
|
# Pruning vs Quantization: Which is Better?
###### Abstract
Neural network pruning and quantization techniques are almost as old as neural networks themselves. However, to date only ad-hoc comparisons between the two have been published. In this paper, we set out to answer the question of which is better: neural network quantization or pruning? By answering this question, we hope to inform design decisions made on neural network hardware going forward. We provide an extensive comparison between the two techniques for compressing deep neural networks. First, we give an analytical comparison of expected quantization and pruning error for general data distributions. Then, we provide lower bounds for the per-layer pruning and quantization error in trained networks, and compare these to empirical error after optimization. Finally, we provide an extensive experimental comparison for training 8 large-scale models on 3 tasks. Our results show that in most cases quantization outperforms pruning. Only in some scenarios with very high compression ratios might pruning be beneficial from an accuracy standpoint.
## 1 Introduction
Recent advances in deep learning have led to performance exceeding human level in many tasks, including computer vision, machine translation, voice recognition, and language understanding. Real-world applications of DNNs rely heavily on their efficiency. Both mobile and cloud platforms greatly benefit from the reduced latency and energy efficiency achieved by some form of model compression. In this work, we consider two mainstream techniques used in practice: pruning and quantization.
Pruning methods remove individual weights [65; 23], or sometimes groups of weights [26; 44]. This procedure can reduce the memory footprint. Furthermore, not having to perform the computations with weights that are zeroed out can make network inference more efficient. On the other hand, quantization reduces the bit-width used for both the weights and the computation used in networks, leading to both predictable memory savings and reductions in the necessary compute. In both scenarios, the hardware used for making use of these optimization schemes needs to take them into account.
Depending on the availability of training data and computing budget, most methods for pruning and quantization fall into one of two families. The first family includes fine-tuning approaches, namely quantization-aware training (QAT) and fine-tuning with pruning in the loop. The second family includes post-training approaches such as post-training quantization (PTQ). Previously, pruning techniques primarily relied on fine-tuning; however, some post-training pruning methods appeared recently as fine-tuning is not desirable for large language models [17].
Despite the importance of model efficiency and the plethora of approaches for pruning and quantization, the two fields are mostly disjoint. The literature presents little insight into which of the two
techniques is more accurate. In practice, there is only limited time to compress a network and limited energy to spend on making deep learning inference hardware. For this reason, we ask the question: Should one focus on quantization or pruning for compression?
We present an extensive study comparing pruning and quantization in equal settings. First, we consider different data distributions and analyze the conditions under which each method is preferable. We match our findings with real weight tensors from pre-trained models. Second, we consider a post-training scenario and evaluate single-layer output errors for both methods. Because the comparison might depend on the specific choice of optimization method, we compare the two with theoretical bounds that apply regardless of the optimization method. Finally, we provide a full-model comparison for the most common scenario of fine-tuning networks after either pruning or quantization.
In our comparison, we intentionally avoid considering the hardware aspects of pruning and quantization. Instead, we focus solely on the accuracy of both methods, given similar theoretical compression ratios. A coarse discussion on the hardware necessary for both methods can be found in section 6.
## 2 Assumptions
In our work, we assume FP16 as the basic data type and measure any gains in compression with respect to it. Using FP16 for inference generally does not lead to a loss in accuracy. Neural networks are also very commonly trained with FP16, making it a common baseline. Thus, we compare 50% pruning sparsity to INT8 quantization, 75% sparsity to INT4 quantization and so forth. We also assume no overhead on storing the sparsity mask for pruning and relegate such hardware-specific implementations to section 6.
For the pruning experiments, we consider magnitude pruning. It is common to do fine-tuning after or during pruning [65]. Several works have independently shown that despite its simplicity, it is tough to improve upon magnitude pruning and fine-tuning [18; 3]. To our knowledge, no pruning algorithm exists that consistently outperforms this method.
For the quantization experiments, we use symmetric uniform quantization, which is defined by just the quantization scale factor and the bit-width. The scale is represented as a floating-point number and is used to map floating-point values to the integer grid. Further details on symmetric uniform quantization can be found in [46]. Uniform quantization is the standard in the quantization literature, and symmetric quantization is mostly employed for the weights. In all our experiments, we use a quantization range estimator minimizing the mean-squared error on weights by grid search [46].
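As a concrete sketch of this setup (Python/NumPy; an illustration under our assumptions, not the code used in the paper), a symmetric uniform quantizer with an MSE-optimal scale found by grid search can be written as:

```python
import numpy as np

def quantize_symmetric(w, bits, scale):
    # integer grid i in {-2^(b-1), ..., 2^(b-1)-1}; w_hat = scale * i
    lo, hi = -2 ** (bits - 1), 2 ** (bits - 1) - 1
    return scale * np.clip(np.round(w / scale), lo, hi)

def mse_optimal_scale(w, bits, n_grid=200):
    # grid search over candidate scales, minimizing the MSE on the weights
    max_scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    grid = np.linspace(0.1, 1.0, n_grid) * max_scale
    errors = [np.mean((w - quantize_symmetric(w, bits, s)) ** 2) for s in grid]
    return grid[int(np.argmin(errors))]

w = np.random.default_rng(0).standard_normal(100_000)
scale = mse_optimal_scale(w, bits=4)
w_q = quantize_symmetric(w, bits=4, scale=scale)
print(10 * np.log10(np.mean(w**2) / np.mean((w - w_q) ** 2)))  # ~19 dB, cf. Fig. 1
```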
## 3 Comparison on statistical distributions
Before diving into comparison results, we first describe theoretically what the quantization error and pruning error are. Looking at this with a theoretical lens helps with understanding the later experimental difference between the two methods. We start off by describing and analyzing both methods on simple data distributions.
In order to compare the error of pruning and quantization, we will frequently use the signal-to-noise ratio measure defined in the log scale: \(\text{SNR}_{dB}=10\log_{10}\left(\mathbb{E}\left[W^{2}\right]/\mathbb{E}\left[ (W-F(W))^{2}\right]\right)\), where \(F(W)\) is the quantization or pruning function. This measure is the same as a scaled logarithm of
Figure 1: Comparison for a standard normal distribution. (left) Distributions after pruning and quantization for INT4 and 75% pruning. (middle) The squared error weighted by probability. (right) SNR for different compression ratios.
an MSE measure. Both are often employed to analyze the sensitivity of neural network layers to quantization, and they are theoretically well-founded to correlate with network performance [38; 45].
### Quantization error
For quantization, we consider symmetric uniform quantization, which is also called integer quantization. Given a bit-width \(b\) and the scale \(\delta\), the grid nodes are defined as \(q_{i}=\delta i,\;i\in\{-2^{b-1},\ldots,0,\ldots,2^{b-1}-1\}\). The round-to-nearest quantization operation \(Q(w)\) and the corresponding quantization error \(R(w)\) are defined as:
\[Q(w)=q_{i},\;i=\operatorname*{arg\,min}_{i}|w-q_{i}|,\qquad\qquad\qquad R(w)= Q(w)-w. \tag{1}\]
Following [33] we model neural network weights as a random variable \(W\sim p(w)\). The expected value of the quantization MSE can be expressed as follows:
\[\mathbb{E}\left[\left(Q(W)-W)^{2}\right)\right]=\int\limits_{q_{min}}^{q_{max} }R^{2}(w)p(w)dw+\int\limits_{-\infty}^{q_{min}}(w-q_{min})^{2}p(w)dw+\int \limits_{q_{max}}^{\infty}(q_{max}-w)^{2}p(w)dw, \tag{2}\]
where \(q_{min}=\min_{i}q_{i}\) and \(q_{max}=\max_{i}q_{i}\) are the quantization range limits. The first term corresponds to the rounding error, and the remaining two terms correspond to the clipping error. We use this analytic formulation for our distribution results below; the details are given in appendix A.
### Pruning error
We consider magnitude pruning, which simply sets the values closest to zero to actual zero; the part removed by pruning is \(T(x)=x\cdot\mathds{1}_{-t\leq x\leq t}\). Given this, the expected pruning error is expressed as follows:
\[\mathbb{E}\left[T(W)^{2}\right]=\int\limits_{-t}^{t}w^{2}p(w)dw, \tag{3}\]
where \(t\) is the threshold value that controls how much is pruned. Given the compression ratio \(c\in(0,1)\), we find the threshold value which satisfies \(P(-t\leq W\leq t)=c\). In the case of a symmetric zero-mean distribution, the threshold can be expressed as \(t=F_{W}^{-1}\left(\frac{1}{2}+\frac{c}{2}\right)\), where \(F_{W}(w)=P(W\leq w)\) is the CDF and \(F_{W}^{-1}(p)\) is its inverse. The expected pruning error in equation 3 is similar to the clipping error for quantization (see the second and the third term in equation 2), and can also be computed analytically. We also use this formulation for our results below.
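For a standard normal, both the threshold and the integral in equation 3 have closed forms; a short numerical check (Python/SciPy, our own sketch):

```python
import numpy as np
from scipy import stats

def pruning_snr_gaussian(c):
    """Analytic pruning SNR in dB for W ~ N(0, 1) at compression ratio c."""
    t = stats.norm.ppf(0.5 + c / 2)        # threshold with P(-t <= W <= t) = c
    # E[W^2; |W| <= t] = c - 2*t*phi(t) for the standard normal (E[W^2] = 1)
    error = c - 2 * t * stats.norm.pdf(t)
    return 10 * np.log10(1.0 / error)

print(pruning_snr_gaussian(0.75))          # ~5.6 dB, matching Fig. 1 (right)
```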
### Analytical comparison
**Standard normal distribution.** Let us first look at a standard normal distribution. As many weights in neural networks are roughly Gaussian-shaped, this distribution is useful for our understanding of
Figure 2: Comparing the error of pruning and quantization for a Student-t distribution, simulating the presence of significant outliers. We plot the results for different magnitudes of the outliers, as per the kurtosis on the x-axis. (left) The pruning error, which does not change under the presence of more severe outliers. (middle) The quantization SNR, which is reduced greatly when outliers increase. (right) The trade-off regions where quantization and pruning are better.
the comparison. As we can see from figure 1 (middle), the errors for both methods have very different behavior. The quantization error oscillates between the quantization nodes and has a moderate range. The pruning error effectively corresponds to rounding many weights to zero and thus has a higher error. As we can see in figure 1 (right), this results in a higher SNR for quantization, e.g. 19.1 dB for INT4 quantization versus only 5.6 dB for 75% pruning. We see similar results for different compression ratios. For this distribution, quantization achieves a much higher signal-to-noise ratio.
**Distributions with heavy tails.** The trade-off is expected to change when more significant outliers are introduced. The quantization grid is expected to be affected strongly by outliers, as they increase the size of the quantization grid, whereas the pruning method is expected to be hardly affected by outliers, as it only touches weights around zero. We thus analyze both quantization and pruning errors in the presence of many outliers. To simulate a distribution with outliers, we use a truncated Student's-t distribution with \(\nu=2\), and a symmetric range \((-r,r)\) (the PDF is defined in appendix B). This distribution is nice as it gives a non-trivial weight to the tail ends of the distribution close to \(r\). The wider the range \(r\) is, the heavier the tails of the distribution.
In order to introduce a quantitative measure of the number of outliers, we will use the distribution's kurtosis given by \(\text{Kurt}[X]=\mathbb{E}\left[(X-\mu)^{4}\right]/\left(\mathbb{E}\left[(X- \mu)^{2}\right]\right)^{2}\), where \(\mu\) is the mean. We will see later that this kurtosis measure is predictive of quantization and pruning performance for real layers. To increase the number of outliers, we will increase the range \(r\). The results are given in figure 2. The kurtosis range is chosen so that it includes most of the weights from the model zoo. We see that despite the significant outliers and high kurtosis, quantization still has higher SNR in most of the cases for moderate compression. Pruning is better however in the region of high clipping range and very high compression rate, e.g. 2-3 bits per value (see figure 2 on the right).
### Experiments on real weight tensors
The previous discussion was mostly theoretical. We set out to see what happens when we do a similar analysis on real neural network weights. In order to investigate this, we compare the pruning and quantization SNR on the weight tensors for all the pre-trained models from the PyTorch model zoo2 (46 models in total; the details are given in appendix E). Each tensor is quantized using an integer grid of bit widths from 2 to 8. The results are shown in figure 3 (left). We see a trend similar to our previous discussion: pruning becomes more beneficial for lower bit-widths/higher sparsity ratios.
Footnote 2: [https://pytorch.org/serve/model_zoo.html](https://pytorch.org/serve/model_zoo.html).
In order to match the analytical results from figure 2, we consider the sample kurtosis of every weight tensor, given by \(k=\frac{1}{n}\sum_{i=1}^{n}(x_{i}-\overline{x})^{4}/\left[\frac{1}{n}\sum_{i =1}^{n}(x_{i}-\overline{x})^{2}\right]^{2}\). See figure 3 (right). We consider a range of kurtosis values for every quantization bit-width. Using a kernel density estimator, we compute the probability density of encountering a tensor for which pruning has a higher SNR than quantization, compare it with the corresponding density for quantization, and thus determine the region where each method is preferable. The results are given in figure 3 on the right. We see that the results from the previous theoretical section (figure 2 on the right) hold very nicely. We can also see that, as predicted, the kurtosis is indeed a good metric for predicting whether a tensor should be quantized or pruned for optimal accuracy.
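Since the sample kurtosis is a two-line computation, it makes for a cheap screening rule (a sketch in Python/NumPy; the decision boundary has to be read off figure 3 (right) for each bit-width):

```python
import numpy as np

def sample_kurtosis(w):
    d = w.ravel() - w.mean()
    return np.mean(d**4) / np.mean(d**2) ** 2

# a Gaussian-like tensor has kurtosis ~3: quantization is preferable; only
# heavy-tailed tensors at very high compression fall in the pruning region
w = np.random.default_rng(0).standard_normal(10_000)
print(sample_kurtosis(w))
```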
Figure 3: Comparison on all the weights from PyTorch model zoo (46 models). (left) Pruning SNR versus quantization SNR for every tensor. (right) Pruning is preferable at high compression ratios for tensors with high sample kurtosis values.
## 4 Per-layer comparison
Most PTQ methods compress the model layer by layer. Given one layer, we use the mean-squared error of the output activations as an objective for optimization. As [45] shows, minimizing per layer MSE on the output activations of each layer is a computationally affordable second-order approximation of the loss function. The local MSE objective correlates well with the task loss and is often used in practice in DNN compression and quantization literature [29; 37; 63]. Our experiments in appendix D confirm this. For the experiments in this section, we will use SNR as it represents a normalized version of MSE. As opposed to section 3 where we used SNR on weights, in this section, we will use SNR on the output activations instead.
The goal of a PTQ method is to minimize the error in the output activations of the compressed layer by optimizing over the quantized weights subject to integer range constraints. Similarly, for pruning, the weights are optimized subject to a sparsity constraint. As the underlying combinatorial optimization problem for both methods is NP-hard [52; 13], in practice, each method relies on some form of heuristic providing a reasonably good solution given a realistic compute budget. This means that any practical comparison between pruning and quantization would depend on the choice of the method for both and would be open to debate about the optimality of the algorithm. In order to eliminate this dependence, we provide a tight lower bound on the output errors for quantization. For pruning we provide a way to solve the problem exactly for moderate dimensionalities. This way, we can provide a comparison that holds regardless of the algorithm used for each method.
### Post-training quantization
We set out to formulate a way by which we can get relatively tight bounds for comparison when quantizing a single layer with the MSE as the objective. The upper bound is simple to obtain from the solution of a heuristic quantization algorithm, but for the lower bound, we have to reformulate the problem. The mean-squared error of the output activations of a quantized layer can be expressed as:
\[\min_{\mathbf{w}}E(\mathbf{w})= \left\|\mathbf{X}\delta\mathbf{w}-\mathbf{X}\mathbf{w}_{orig}\right\|_{2}^{2}\] (4) s.t. \[\mathbf{w}\in\mathbb{Z}^{n},\] \[w_{min}\leq w_{i}\leq w_{max},\]
where \(\mathbf{X}\) is the input data in an unfolded form, and \(\mathbf{w}_{orig}\) are the floating-point weights. The quantized weights are computed as the product of the quantization scale \(\delta\) and the integer weights \(\mathbf{w}\). \(w_{min}\) and \(w_{max}\) are the integer limits. We ignore the averaging operation to simplify the notation, as it is not important for optimization. We also note that this problem can be solved independently for each output channel of a convolution or every row of a fully-connected layer weight.
Figure 4: Comparison in the post-training scenario. Each box corresponds to a subset of one of 10 layers from the 4 different models that were used, with 7 different bit-width comparison points. The ranges of the box indicate the lower and upper bounds found by the algorithms.
This problem is an instance of a mixed-integer quadratic program:
\[\tilde{E}(\mathbf{w})= \frac{1}{2}\mathbf{w}^{T}\mathbf{P}\mathbf{w}-\mathbf{q}^{T}\mathbf{w},\] (5) s.t. \[\mathbf{w}\in\mathbb{Z}^{n},\] \[w_{min}\leq w_{i}\leq w_{max},\]
where \(\mathbf{P}=2\delta^{2}\mathbf{X}^{T}\mathbf{X}\), \(\mathbf{q}=2(\mathbf{w}_{orig}^{T}\mathbf{X}^{T})\mathbf{X}\delta\). In order to simplify the objective, we can omit the constant term that is irrelevant for the optimization, \(c=\left\|\mathbf{X}\mathbf{w}_{orig}\right\|_{2}^{2}\), i.e., \(\tilde{E}(\mathbf{w})=E(\mathbf{w})-c\).
In order to find the lower bound of the objective, we follow [51] and relax the integer constraint to \(w_{i}(w_{i}-1)\geq 0\), a quadratic constraint that every integer satisfies and that only excludes values strictly between 0 and 1. In order to obtain the lower bound, we consider the dual version of the relaxed problem:
\[L(\mathbf{\lambda})= \max-\gamma,\] (6) s.t. \[\begin{bmatrix}\mathbf{P}-\text{diag}(\mathbf{\lambda})&\mathbf{q}+\frac{1}{2} \mathbf{\lambda}\\ \left(\mathbf{q}+\frac{1}{2}\mathbf{\lambda}\right)^{T}&\gamma\end{bmatrix}\succeq 0,\] \[\mathbf{\lambda}\geq 0,\]
where \(\mathbf{\lambda}\in\mathbb{R}^{n}\), \(\gamma\in\mathbb{R}\). The dual problem is convex, and its solution can be used as a lower bound on the solution of the original problem, i.e., \(\tilde{E}(\mathbf{w})\geq L(\mathbf{\lambda})\). The dual has a semi-definite constraint, so it can be solved with a semi-definite programming (SDP) solver with \(\mathcal{O}(n^{3})\) complexity. In our work, we used the CVX solver [19]. As discussed in [51], this bound is a computationally efficient alternative to branch-and-bound approaches, while its tightness is better than that of the alternative methods introduced in [4]. We use this approach for estimating the lower bound for MSE on the output activations for PTQ below.
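A sketch of this dual bound is given below, using CVXPY as a stand-in for the CVX solver used in the paper (the function name and solver choice are ours):

```python
import numpy as np
import cvxpy as cp

def quantization_error_lower_bound(X, w_orig, delta):
    """Lower bound on E(w) of Eq. (4) via the dual SDP of Eq. (6)."""
    n = w_orig.size
    P = 2 * delta**2 * X.T @ X
    q = 2 * delta * X.T @ (X @ w_orig)
    lam = cp.Variable(n, nonneg=True)
    gamma = cp.Variable()
    v = cp.reshape(q + lam / 2, (n, 1))
    M = cp.bmat([[P - cp.diag(lam), v],
                 [v.T, cp.reshape(gamma, (1, 1))]])
    # symmetrize so the solver recognizes the PSD constraint
    prob = cp.Problem(cp.Maximize(-gamma), [(M + M.T) / 2 >> 0])
    prob.solve(solver=cp.SCS)
    # add back the constant c dropped in Eq. (5): E(w) >= L(lambda) + ||X w_orig||^2
    return prob.value + float(np.sum((X @ w_orig) ** 2))
```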
### Post-training pruning
We also need a similar lower bound for pruning for the comparison. To the best of our knowledge, there is no known way to obtain a tight lower bound for this problem; we therefore formulate a way to solve the problem exactly for moderate dimensionalities. Similar to quantization, post-training pruning of one layer of the network can be expressed mathematically as solving the following optimization problem:
\[E=\min_{\hat{\mathbf{w}}}\left\|\mathbf{X}\hat{\mathbf{w}}-\mathbf{X}\mathbf{w}_{orig }\right\|_{2}^{2}\] (7) s.t. \[\left\|\hat{\mathbf{w}}\right\|_{0}\leq s,\]
where the number of non-zero elements \(s\) in the solution is theoretically constrained by using the \(L_{0}\) norm, which is non-convex and not smooth. In order to solve the problem, we introduce the sparsity mask \(\mathbf{m}\in\{0,1\}^{n}\):
\[E(\mathbf{w})= \min_{\mathbf{w},\mathbf{m}}\left\|\mathbf{X}(\mathbf{m}\odot\mathbf{w})-\mathbf{X}\mathbf{w} _{orig}\right\|_{2}^{2},\] (8) s.t. \[\left\|\mathbf{m}\right\|_{1}=s,\] \[-\mathbf{m}\odot l\leq\mathbf{w}\leq\mathbf{m}\odot u\] \[l,u>0,m_{i}\in\{0,1\},\]
where \(\odot\) is an element-wise product operation, and \(l,u\in\mathbb{R}\) are constants chosen such that any solution satisfies the constraint \(-\mathbf{m}\odot l\leq\mathbf{w}\leq\mathbf{m}\odot u\). We solve this problem using the branch-and-bound method implemented in the Gurobi solver [21], which gives the global solution.
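A sketch of this mixed-integer program with the gurobipy matrix API is given below (an illustration, not the authors' code); the constant `bound` plays the role of \(l=u\) in equation 8 and must be chosen large enough not to cut off the optimum:

```python
import gurobipy as gp
from gurobipy import GRB

def prune_layer_exact(X, w_orig, s, bound=10.0):
    """Globally optimal post-training pruning of one output row, per Eq. (8)."""
    n = w_orig.size
    y = X @ w_orig
    Q = X.T @ X
    model = gp.Model()
    model.Params.OutputFlag = 0
    w = model.addMVar(n, lb=-bound, ub=bound)   # pruned weights
    m = model.addMVar(n, vtype=GRB.BINARY)      # sparsity mask
    model.addConstr(w <= bound * m)             # m_i = 0 forces w_i = 0
    model.addConstr(w >= -bound * m)
    model.addConstr(m.sum() == s)               # keep exactly s weights
    # ||Xw - y||^2 expanded, up to the constant y^T y
    model.setObjective(w @ Q @ w - 2 * (y @ X) @ w, GRB.MINIMIZE)
    model.optimize()
    return w.X
```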
### Experiments
With our algorithms in the bag, we can now compare quantization versus pruning in the post-training setting with theoretical bounds. In each case, we analyze individual layers of several networks. Given a batch of input data, we optimize the pruned or quantized weights to minimize the error between the output activations and the output of the uncompressed layer. We provide a range between two SNR values for each method in each case. The performance of the heuristic method gives the first
value, and the second value is given by the error lower bound or the global solution, which translates into an SNR upper bound.
As a heuristic method for pruning, we use magnitude pruning with a fixed sparsity mask \(m\) and data-optimized weights \(\mathbf{w}\) given by \(\mathbf{w}=\underset{\mathbf{w}}{\text{argmin}}\left\|\mathbf{X}(\mathbf{m}\odot\mathbf{w})-\mathbf{X} \mathbf{w}_{orig}\right\|_{2}^{2}\). This is a convex problem and has a unique solution. As a heuristic method for quantization, we use the mixed-integer solver introduced in [51]. We clip every sample in order to satisfy the integer quantization range constraint.
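For reference, this pruning heuristic amounts to only a few lines (Python/NumPy):

```python
import numpy as np

def magnitude_prune_refit(X, w_orig, sparsity):
    """Magnitude mask plus least-squares re-fit of the surviving weights."""
    n = w_orig.size
    keep = np.argsort(np.abs(w_orig))[int(round(sparsity * n)):]
    w = np.zeros(n)
    # convex sub-problem with a unique minimizer of ||X(m . w) - X w_orig||^2
    w[keep], *_ = np.linalg.lstsq(X[:, keep], X @ w_orig, rcond=None)
    return w
```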
We chose a representative set of 10 layers, including 9 convolutional layers (one 3x3 convolutional layer and 8 point-wise convolutions) from MobileNet-V2, EfficientNet-lite, and Resnet-18, and one fully-connected layer from ViT. The full details for reproducing the experiments are given in appendix F. Due to the high computational complexity of the global solution for pruning, the layers had to be split into chunks. A slice of 4 input channels over all output channels was used for the 3x3 convolutions. In the case of linear layers and point-wise convolutions, slices of 36 input features over all the output features were used.
The results are shown in figure 4 grouped by bit-width. The rectangles indicate the full range of the pruning and quantization methods between the heuristic solution and the error lower bound or the global solution. Whenever a rectangle for each chunk intersects the diagonal line, the ranking of the two methods could depend on the optimization method, while in cases below or above the diagonal, the ranking is guaranteed regardless of the optimizer. We see that quantization mostly outperforms pruning for moderate compression, while methods become more comparable for higher compression ratios.
## 5 Full-model comparison
Now that we have seen the comparison between the methods in the PTQ setting, we turn to fine-tuning quantized and pruned models. This is the setting in which pruning is most often applied, and it is possible that fine-tuning changes the models significantly enough that the relative performance of the two methods changes.
In order to provide a fair comparison of pruning and quantization, we chose the two most commonly used methods with performance competitive with the state of the art. For quantization-aware training, we used the widely adopted LSQ method suggested in [11; 2]. Following this approach, we jointly learn the weights and quantization scales, keep the batch norm layers unfolded, and re-estimate the batch norm statistics after training to avoid wrong running estimates due to oscillations [48]. We use the method suggested in [20] for pruning, which gradually increases the sparsity during fine-tuning and re-estimates the batch norm statistics after training.
In our experiments we used a set of 8 models trained for 3 tasks including Resnet18, Resnet50 [25], MobileNet-V2 [54], MobileNet-V3-small [27], EfficientNet-lite [56], and ViT [10] trained on ImageNet classification [53]; DeepLab-V3 [6] with MobileNet-V2 backbone trained for semantic segmentation on Pascal VOC [12]; EfficientDet [57] trained for object detection on MS COCO [40].
For a fair comparison, we used the same number of fine-tuning epochs for each method (full details on the hyperparameters are given in appendix G). The results given in Table 1 suggest that pruning almost never leads to higher accuracy than quantization if an equal compression rate is considered. The differences are sufficiently large that the small purported improvements by some methods [55] will likely not close the gap.
## 6 Discussion
**Other types of pruning** While we focused solely on unstructured pruning, in which individual weights are removed, our results translate to semi-structured and structured pruning. Unstructured pruning has more degrees of freedom and is a strict superset of what can be represented by (semi-)structured pruning. Therefore, unstructured pruning gives an upper bound on the accuracy of all pruning methods. This means that for the cases in which quantization is better than unstructured pruning, quantization will also be better than (semi-)structured pruning. However, we cannot make any claims for (semi-)structured pruning in the few scenarios in which pruning is better than quantization.
**Natural sparsity in quantized tensors** In our comparison, we used a theoretical compression ratio for quantization, which depends on the bit-width. However, we also observe that quantized tensors naturally contain many zeros; for example, 8-bit tensors from the PyTorch model zoo have an average sparsity of 13%, while 4-bit tensors are 35% sparse. We give more details on this in appendix C.
**Representations learned in the compressed models** To provide insights into the representations learned during pruning or QAT, we studied the evolution of the models during fine-tuning. We found that fine-tuning after pruning tends to recover the original representation, while quantization-aware training leads to learning completely new representations. We provide further details on these experiments in appendix H.
**Hardware implications** So far, we have deliberately avoided discussing the hardware implementations of pruning and quantization and focused solely on the accuracy of both methods at the same ideal compression rates. However, in practice, the hardware considerations do matter for the usability of the methods.
The analysis above assumed an idealistic case for pruning in terms of memory size and data transfer. Since the pruning is unstructured, in order to achieve memory savings in practice, one would need at least 1 bit of information for each weight indicating whether the weight is pruned or not. On top of 16-bit weights, this gives a 6.25% storage overhead at a minimum. Quantization does not have this overhead, as INT8 simply uses 8 bits instead of 16, and the only storage overhead is a single scaling factor per tensor (or channel).
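This storage accounting is simple enough to spell out (an illustration that ignores the per-tensor scale and any compression of the mask):

```python
def pruned_bits_per_weight(sparsity, wbits=16):
    # dense storage of the kept weights plus a 1-bit mask per position
    return (1 - sparsity) * wbits + 1

def quantized_bits_per_weight(bits):
    return bits

print(pruned_bits_per_weight(0.50))    # 9.0 -> 0.5625 of FP16, not 0.5
print(quantized_bits_per_weight(8))    # 8   -> exactly 0.5 of FP16
```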
Also, in terms of the cost of computations done by the hardware, there is a difference between the two methods. For pruning, any hardware would have to take the densely stored weights and mask and either decompress them to the dense format with all weights and many 0s or take the pruning into account in the compute itself. No compute benefits are gained in the former, as the dense calculations are done in the uncompressed number format. In the latter, dedicated hardware to take into account the 0s is necessary. The overhead for this is generally non-trivial, leading vendors to implement more semi-structured pruning schemes [44]. Similarly, it is rare to see unstructured activation compression for the same reason that this needs to happen algorithmically on-the-fly. In contrast, quantization gives quadratic improvements in the compute. Going from INT8 to INT4 theoretically improves the compute performance by a factor of 4, although practical gains depend on the memory overhead (which improves by only a factor of 2) and the existence of other formats in the same hardware compute unit.
| Model | Orig. | Metric | Method | 8b | 7b | 6b | 5b | 4b | 3b | 2b |
|---|---|---|---|---|---|---|---|---|---|---|
| Resnet-18 | 69.7 | acc. | quant. | **70.5** | **70.5** | **70.6** | **70.3** | **70.0** | **68.9** | **67.3** |
| | | | pruning | 70.3 | 70.1 | 69.9 | 69.5 | 69.3 | 68.3 | 66.8 |
| Resnet-50 | 76.1 | acc. | quant. | 76.4 | **76.4** | **76.4** | **76.3** | **76.2** | **75.5** | 72.3 |
| | | | pruning | **76.6** | **76.4** | 76.2 | 76.1 | 75.9 | 75.4 | **74.3** |
| MobileNet-V2 | 71.7 | acc. | quant. | **71.9** | **72.0** | **71.7** | **71.6** | **70.9** | **68.6** | **59.1** |
| | | | pruning | 68.1 | 65.6 | 61.9 | 56.3 | 48.0 | 34.0 | 21.2 |
| EfficientNet | 75.4 | acc. | quant. | **75.2** | **75.3** | **75.0** | **74.6** | **74.0** | **71.5** | **60.9** |
| | | | pruning | 72.5 | 70.9 | 68.1 | 63.6 | 56.4 | 44.5 | 27.1 |
| MobileNet-V3 | 67.4 | acc. | quant. | **67.7** | **67.6** | **67.1** | **66.3** | **64.7** | **60.8** | **50.5** |
| | | | pruning | 65.6 | 64.4 | 62.4 | 60.2 | 56.1 | 31.7 | 0.0 |
| ViT | 81.3 | acc. | quant. | **81.5** | **81.4** | **81.4** | **81.0** | **80.4** | **78.4** | **72.2** |
| | | | pruning | 76.6 | 76.6 | 76.2 | 73.1 | 72.4 | 71.5 | 69.4 |
| DeepLab-V3 | 72.9 | mIoU | quant. | **72.3** | **72.3** | **72.4** | **71.9** | **70.8** | **63.2** | **17.6** |
| | | | pruning | 65.2 | 62.8 | 56.8 | 47.7 | 32.9 | 18.6 | 10.0 |
| EfficientDet | 40.2 | mAP | quant. | **39.6** | **39.6** | **39.6** | **39.2** | **37.8** | **33.5** | **15.5** |
| | | | pruning | 34.5 | 33.0 | 30.9 | 27.9 | 24.2 | 17.9 | 8.0 |

Table 1: Comparison of QAT and magnitude pruning with fine-tuning, given equal model size and equal number of epochs of fine-tuning. Bold marks the better method for each model and bitwidth.
**Impact** Using pruning or quantization leads to power reductions on many architectures and enables new applications on mobile platforms. On the whole, we see only a positive impact from this.
**Limitations** First, our work has not extensively considered the hardware implications of pruning or quantization. Second, we do not study combinations of pruning and quantization, apart from analyzing the sparsity that quantization inherently induces. We leave this for future work. Finally, we consider only uniform quantization and ignore other formats, such as low-precision floating-point or logarithmic quantization, although these are unlikely to change the results presented in this paper.
## 7 Related work
**Quantization** Integer quantization, or fixed-point quantization, is one of the most widely used techniques for efficient inference, allowing reduced latency and improved energy efficiency. There are two main families of methods for model quantization. The first family includes post-training quantization (PTQ) methods [39; 49; 9; 1; 8; 5; 45; 37], which improve the model accuracy based on per-layer optimization of the quantized weights in a data-optimized fashion. The second family includes quantization-aware training methods [20; 31; 64; 7; 41; 11; 32; 2; 59; 48], which usually fine-tune the model with quantization in the loop, using the straight-through estimator (STE) to compute the gradient of rounding operations. A more comprehensive overview of quantization methods can be found in [47].
**Pruning** Neural network pruning is one of the oldest methods to compress neural networks [34; 24]. A central problem in pruning is how to choose which weights to prune. Approaches published in the literature include binary gating, in which a binary gate is learned for each individual weight [42; 43; 60]; sensitivity-based methods [36; 35; 62; 16; 17], in which a sensitivity score based on a weight's gradient or Hessian diagonal value is used; and magnitude pruning [22; 50; 65; 44; 55]. While conceptually simple, magnitude-based methods have been shown to consistently outperform more intricate methods at scale [18; 3]. Weight re-initialization schemes [14; 15] or mask re-initialization [55] yield additional minor improvements. While most pruning approaches require fine-tuning and yield unsatisfactory results in post-training scenarios, recent adaptations of Hessian-based sensitivity approaches [34; 24], in which the Hessian of a layerwise reconstruction loss is used instead of the task-loss Hessian, show good results in post-training pruning of large language models [16; 17].
**Combining pruning and quantization** A number of works study combinations of pruning and quantization at different levels of granularity [22; 60; 28; 61; 58].
**Comparing pruning and quantization** Despite the large amount of work on pruning, quantization, and combining them, there is little literature comparing the two methods. To the best of our knowledge, there is only one work that performs a comparison of pruning versus non-uniform quantization [30]. That work considers only small-scale models and provides only an empirical comparison with no further analysis.
## 8 Conclusion
We have seen in this paper that, across several settings, unstructured pruning outperforms quantization only in rare cases. In our theoretical analysis of weight distributions and on real layer data, pruning is better than quantization only when compressing the network to an equivalent of 2 or 3 bits, and this amount of compression comes with such a large drop in performance that it is rarely used in practice. The post-training quantization results are also informative: in the setting without fine-tuning, we have shown with theoretical bounds on many layers of neural networks that quantization is almost always provably better than pruning. Our hypothesis is that quantized layers are more accurate than pruned ones, as shown in the theoretical and PTQ settings, and that the outcome of fine-tuning a network is still highly dependent on this. This is in line with the fine-tuning results, in which, for many networks trained under the same conditions, quantization nearly always has higher performance than pruning.
The conclusion is clear: quantization generally outperforms pruning for neural networks. Taking into account the unfavorable hardware implications of pruning described above, it could be argued that the conclusion holds even more strongly. Based on this research, we recommend quantizing neural networks when efficiency is required, before pruning is explored.
|
2301.10021
|
Photo-assisted current in the fractional quantum Hall effect as a probe
of the quasiparticle operator scaling dimension
|
We study photo-assisted transport for the edge states of a two dimensional
electron gas in the fractional quantum Hall regime, pinched by a single quantum
point contact. We provide a general expression of the photo-assisted current
using a Keldysh-Floquet approach, when the AC drive is applied either directly
to the edge states, or when it modulates the tunneling amplitude at the quantum
point contact. Strikingly, for a simple cosine modulation of the tunneling
amplitude, the phase shift of the second harmonic of the photoassisted current
is directly related to the scaling dimension of the quasiparticle operators
describing the fractional excitations. As the scaling dimension is intimately
related to the statistics, our proposal of a gate modulation of the
backscattered current provides a diagnosis of the statistics of Laughlin
quasiparticles using a simple quantum point contact geometry.
|
B. Bertin-Johannet, L. Raymond, J. Rech, T. Jonckheere, B. Grémaud, D. C. Glattli, T. Martin
|
2023-01-24T14:09:33Z
|
http://arxiv.org/abs/2301.10021v2
|
Photo-assisted current in the fractional quantum Hall effect as a probe of the quasiparticle operator scaling dimension
###### Abstract
We study photo-assisted transport for the edge states of a two dimensional electron gas in the fractional quantum Hall regime, pinched by a single quantum point contact. We provide a general expression of the photo-assisted current using a Keldysh-Floquet approach, when the AC drive is applied either directly to the edge states, or when it modulates the tunneling amplitude at the quantum point contact. Strikingly, for a simple cosine modulation of the tunneling amplitude, the phase shift of the second harmonic of the photoassisted current is directly related to the scaling dimension of the quasiparticle operators describing the fractional excitations. As the scaling dimension is intimately related to the statistics, our proposal of a gate modulation of the backscattered current provides a diagnosis of the statistics of Laughlin quasiparticles using a simple quantum point contact geometry.
## I Introduction
The Fractional Quantum Hall Effect (FQHE) is a strongly correlated state of a two dimensional electron gas (2DEG) under strong magnetic field in the presence of Coulomb interaction. It has generated considerable interest on the theoretical and the experimental side since its discovery [1; 2]. A key feature resides in the fact that excitations in the FQHE are quasiparticles which carry a fractional charge and which bear fractional statistics, which is intermediate between that of fermions and of bosons, thus the terminology "anyons". In the pioneering work of Ref. [2] this new state of matter was shown to appear for filling factors \(\nu\) equal to the inverse of an odd number. The fractional charge is then specified by \(e^{*}=\nu e\) and the statistical angle equals \(\pi\nu\).
It is therefore challenging to identify an experiment which measures the statistical angle of anyons independently of their charge. The charge of anyons has been successfully identified via the measurement of the Fano factor [3; 4; 5; 6], or via the identification of the Josephson frequency \(\omega_{J}=e^{*}V/\hbar\), which is accessed either through a photo-assisted transport noise measurement at zero frequency [7; 8], or through a direct finite frequency noise measurement [9]. All the above experiments were performed in the weak backscattering regime of a single quantum point contact (QPC), where the accepted paradigm is that quasiparticle exchange between two opposite edge states constitutes the dominant tunneling process.
The detection of fractional statistics has proved to represent an even greater challenge. On the theoretical side, several proposals considered setups with several QPCs in a Hanbury Brown and Twiss geometry [10] where noise crossed correlations were computed [11; 12; 13; 14]. Alternatively, more recently, proposals using either Fabry Perot interferometry or Hong-Ou-Mandel collisions of quasiparticles were put forward [15; 16; 17]. These proposals generated considerable attention from the experimental community as illustrated by two recent pioneering experiments with a strong claim that the measurement of the statistical angle has been achieved [18; 19]. Nevertheless, the detection of the statistical angle of anyons continues to generate a lot of excitement as illustrated by recent theoretical works, some involving anyon braiding [20; 21; 22], or thermal tunneling noise [23].
One obvious drawback of both theoretical proposals and experimental detection schemes for the statistical angle of quasiparticles resides in the fact that the setups typically involve several QPCs, which constitutes both a theoretical and an experimental challenge, and typically requires a (rather involved) noise measurement. Here we address the issue of whether some signatures of the statistics would already be present in a measurement of specific features of the time-dependent current.
Indeed, in the hydrodynamical picture of the FQHE of Ref. [24], which characterizes the edge excitations of a FQHE fluid by a chiral Luttinger liquid model, the statistical angle of anyons is intimately tied to the scaling dimension of the quasiparticle operator: the exponent which characterizes the time decay of the quasiparticle operator correlation function. This scaling dimension turns out to provide an upper bound for the statistical angle, the two being equal when all the modes of a given edge have the same chirality (up to a factor \(\pi\)). Naively speaking, one could argue that the scaling dimension of anyons should be accessible via a DC transport measurement in the weak backscattering regime of a single QPC, provided that one measures carefully the backscattering DC current-voltage characteristics. This unfortunately does not seem so realistic due to the complicated analytics which describe the dependence on both the temperature and the DC voltage of the backscattering current [25]. In practice, a quantitative agreement between the backscattering current predicted by theory
and experiments seems hard to achieve [26; 27] as it relies on identifying a power-law behavior, which not only requires varying the voltage over several decades but is also easily blurred by non-universal effects.
In this paper, we argue that the scaling dimension of the quasiparticle operator could in principle be detected via the careful measurement of a phase shift of the photo-assisted current. Photo-assisted transport (PAT), i.e., electric current in the presence of an additional time-dependent drive, typically achieved by shining microwaves on an otherwise DC voltage biased device, was pioneered in Ref. [28]. It has provided condensed matter physicists with new tools to probe fundamental properties of physical systems. Early proposals in mesoscopic devices considered normal metal systems [29; 30], hybrid superconducting systems [31; 32; 33], and the FQHE [7; 9]. More recently photo-assisted transport has gained importance in the field of electron quantum optics, where minimal excitation states, dubbed "Levitons" [34; 35; 36; 8; 37] and carrying an integer electron charge, can be designed to generate pure electron excitations above the Fermi sea, devoid of unwanted electron-hole pairs.
In the present context of a single, DC voltage biased QPC in the weak backscattering regime of the FQHE, PAT can be envisioned in two different ways. Either one varies the gate voltage applied to the QPC, thus modulating the tunnel amplitude for quasiparticles between the two edge states, or an AC signal is added to the voltage drive applied to the two edge states. Interestingly, in the FQHE, to our knowledge, attention has focused so far on the latter scenario, while the AC gate drive has remained largely unexplored.
The first important result of the present work is to show that both types of drives (gate drive or voltage drive), despite their fundamental differences, can be described analytically with the same unified formalism. Unfortunately, for both types of drives, the measurement of the PAT current averaged over a period of the drive does not provide any striking dependence on the scaling dimension of quasiparticles. Nevertheless, we point out that a complete description of the PAT current requires the knowledge of all of its harmonics at the AC drive frequency. In practice, this is precisely where the two types of drive lead to significantly different results.
Quite importantly, it turns out that the gate drive is the better suited of the two to detect the scaling dimension of FQHE quasiparticles. The central result of this work is indeed to show that, for a simple sinusoidal modulation of the gate, this scaling dimension is directly accessible via the measurement of the second harmonic of the PAT current (conversely, for an AC voltage drive, we find that a similar analysis seems difficult to achieve). We thus find an appreciable range of parameters where the scaling dimension - which is tied to the statistical angle - can be easily extracted from a phase shift of the second harmonic of the current.
The paper is organized as follows. In Sec. II we introduce the model for the FQHE and recall its main properties. The current operator through the QPC is derived, and we show that the computation of the current under a periodic voltage drive and a periodic gate drive can be carried out with the same formalism. In Sec. III we find a general analytic formula for the backscattered current as a function of time that holds for arbitrary values of \(\nu>0\). Sec. IV is devoted to finding the formula for the backscattered current in the weak backscattering regime. The harmonic structure of this current in the weak backscattering regime is studied in Sec. V, depending on the type of drive. In particular, we show the specificity of the harmonic gate drive, which allows one to isolate the filling factor in the second harmonic of the current. In Sec. VI, we present a computation of the backscattered current in the Fermi liquid limit through two different routes, which allows us to check the consistency of our treatment. We then compute the current in the strong backscattering regime in Sec. VII, deriving its particular behavior analytically. Finally, we conclude in Sec. VIII and propose further research tracks.
## II Luttinger liquid basics
### Model
We propose a brief summary of the hydrodynamical approach to the description of the FQHE put forward by Wen in Ref. [24]. In this approach, edge excitations are identified as surface waves of an incompressible, irrotational two-dimensional quantum liquid in a perpendicular magnetic field \(B\). We restrict ourselves to the Laughlin states of the FQHE, see Ref. [2], which amounts to assuming that there is only one fractional edge state.
Defining the filling factor \(\nu\) as the fraction of the lowest Landau level which is filled, the Hamiltonian for the FQHE describing the edge excitations reads:
\[\mathcal{H}=\frac{v}{4\pi}\int_{0}^{L}\mathrm{d}x\,\left[\partial_{x}\phi(x) \right]^{2}\,, \tag{1}\]
where \(v\) is the drift/Fermi velocity along the edges and \(\phi(x)\) is a bosonic field which is related to the one dimensional electron density \(\rho(x)=\sqrt{\nu}\partial_{x}\phi(x)/2\pi\). Using the fermion anticommutation relation we establish a relation between the electron creation operator \(\Psi^{\dagger}\) and the latter's density \(\rho(x)\):
\[\left[\rho(x),\,\Psi^{\dagger}(x^{\prime})\right]=\delta(x-x^{\prime})\Psi^{ \dagger}(x)\,. \tag{2}\]
This means that the electron creation operator can be written as
\[\Psi^{\dagger}(x)=\frac{1}{\sqrt{2\pi a}}e^{\frac{i}{\sqrt{\nu}}\phi(x)}\,, \tag{3}\]
where \(a\) is a short distance cutoff. Using the Kac-Moody commutation relation
\[\left[\phi(x),\phi(x^{\prime})\right]=-i\pi\mathrm{sgn}(x-x^{\prime}) \tag{4}\]
and the Baker Campbell Hausdorff identity, it can be shown that:
\[\Psi(x)\Psi(x^{\prime})=e^{-i\frac{\pi}{\nu}\mathrm{sgn}(x-x^{\prime})}\Psi(x^{ \prime})\Psi(x)\,, \tag{5}\]
so electron operators obey anticommutation relations only if \(\nu=1,1/3,1/5,1/7,...\), i.e., for Laughlin filling factors. Note that other rational values of \(\nu\) are physically attainable and exhibit the FQHE, but the above derivation needs to be generalized in order to include the presence of several bosonic fields [24]. In what follows, we choose to focus on the simpler situation of the Laughlin series for the sake of clarity. The main results can be readily obtained for the case of a general Abelian FQH edge involving multiple bosonic modes, but do not fundamentally differ from the Laughlin filling factors. In Appendix B, we provide some elements of the model and the main derivations for this more general case.
The quasiparticle operator is a local vertex operator and it is required to commute with the electron operator, which justifies the choice:
\[\psi(x)=\frac{1}{\sqrt{2\pi a}}e^{i\sqrt{\nu}\phi(x)}\,. \tag{6}\]
where \(a\) is a short distance cutoff. At zero temperature, the resulting correlation function, taken at position \(x=0\), follows a power-law decay in time as
\[\left\langle\psi(0,\tau)\psi^{\dagger}(0,0)\right\rangle=e^{\nu\mathcal{G}( \tau)}\sim\tau^{-\nu}\,, \tag{7}\]
where, as explained in Appendix A, \(\mathcal{G}(\tau)\) is the chiral bosonic field Green's function defined there. This allows us to define the scaling dimension \(\nu_{D}\) of the quasiparticle operator [38], which, in the case of Laughlin filling factors, reduces to \(\nu_{D}=\nu\).
From Eq. (6), it is also possible to define the statistical angle \(\Theta\). Indeed, focusing on a given time \(\tau=0\), one readily obtains a nontrivial phase factor when exchanging two quasiparticles in real space, namely
\[\psi(x)\psi(x^{\prime})=e^{i\pi\nu\mathrm{sgn}(x-x^{\prime})}\psi(x^{\prime}) \psi(x)\,, \tag{8}\]
which is a clear illustration of anyonic statistics, with a statistical angle specified by \(\Theta=\pi\nu_{D}=\pi\nu\).
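For completeness, the exchange phase in Eq. (8) can be checked in one line (our own consistency check) using the Baker Campbell Hausdorff identity \(e^{A}e^{B}=e^{B}e^{A}e^{[A,B]}\), valid here since the commutator (4) is a c-number. With \(A=i\sqrt{\nu}\phi(x)\) and \(B=i\sqrt{\nu}\phi(x^{\prime})\) from Eq. (6),

\[e^{[A,B]}=e^{-\nu\left[\phi(x),\phi(x^{\prime})\right]}=e^{i\pi\nu\,\mathrm{sgn}(x-x^{\prime})}\,,\]

which is precisely the phase factor of Eq. (8); the same argument with \(1/\sqrt{\nu}\) in the exponent recovers the electronic relation (5), using \(e^{2i\pi/\nu}=1\) for Laughlin fractions.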
Note that so far, the fractional charge \(e^{*}=\nu e\) has not been discussed, as it typically appears in discussions where electromagnetic fields/voltage biases are involved.
### QPC current operator
The simplest quantum transport setup in the FQHE consists of a quantum Hall bar, along which edge excitations, denoted left- and right-movers, propagate, further equipped with a QPC (see Fig. 1). Voltage sources can be connected to either edge in order to impose a potential difference between edge states, and the QPC can be tuned at will, with special emphasis on two specific regimes. First, in the weak backscattering regime, the QPC is weakly pinched, the quantum Hall fluid spreads over the whole bar, and the dominant charge transfer process between the top and bottom edge is provided by quasiparticle excitations. In this situation, it is typically the backscattering current \(I_{\mathrm{T}}\) which is computed/measured. This regime is depicted in both panels of Fig. 1. Second, in the opposite limit, called the strong backscattering regime, the QPC is strongly pinched and the quantum Hall fluid is split in two (not shown). Only electrons can then tunnel between the left- and right-moving edges, as they have to cross a vacuum region. The measured current then corresponds to that flowing between the left and right sides of the split Hall fluid.
Here, to obtain the scaling dimension of quasiparticles via a photo-assisted current measurement, we focus mainly on a weak backscattering situation, but results in the opposite regime of strong backscattering will also be presented for completeness.
Assuming that a DC voltage \(V_{\mathrm{DC}}\) is imposed on the right-moving edge (see Fig. 1), the tunnel, or backscattering Hamiltonian reads (see Ref. [39]):
\[H_{\mathrm{T}}=\sum_{\epsilon=\pm}\left[\lambda(t)e^{i\omega_{0}^{*}t}\psi_{R}^{ \dagger}(0)\psi_{L}(0)\right]^{\epsilon}\,, \tag{9}\]
where \(\epsilon=+\) leaves the expression unchanged, while \(\epsilon=-\) specifies the Hermitian conjugate. \(\lambda(t)\) is the (time dependent, see below) tunnel coupling amplitude and \(\omega_{0}^{*}=e^{*}V_{\mathrm{DC}}\) in the weak backscattering regime. Below we consider two distinct setups for photo-assisted transport (upper and lower panels of Fig. 1), which can both be described by a general form \(\lambda(t)\) of the tunnel coupling. In full generality, it is assumed to be a (complex valued) periodic function of time.
Figure 1: Sketch of the setup, a fractional quantum Hall bar pinched off by a quantum point contact. Top: AC gate drive setup where a DC voltage is applied between edge states and the tunnel coupling is time dependent. Bottom: AC voltage drive setup where a periodic voltage is applied on top of the DC voltage and the tunnel coupling is constant.
With these conventions, the backscattered current reads:
\[I_{\mathrm{T}}(t)=ie^{*}\sum_{\epsilon=\pm}\epsilon\left[\lambda(t)e^{i\omega_{0 }^{*}t}\psi_{R}^{\dagger}(0,t)\psi_{L}(0,t)\right]^{\epsilon}\,. \tag{10}\]
We thus see that in the weak backscattering regime, the fractional charge \(e^{*}=\nu e\) appears both as a prefactor of the current, and through the definition of the DC bias frequency \(\omega_{0}^{*}\).
The strong backscattering equivalent of the tunnel Hamiltonian and current operator are achieved with the duality transformation \(e^{*}\to e\), \(\omega_{0}^{*}\to eV_{\mathrm{DC}}\), \(\psi\to\Psi\).
There are in fact two ways to achieve photo-assisted transport, both starting from a constant gate voltage and a constant voltage bias \(V_{\mathrm{DC}}\): 1) one can apply an AC modulation to the QPC gate (this setup was discussed in Ref. [40] in a very different context); 2) one can directly superpose an AC component onto the DC voltage drive. We call the former the gate drive and the latter the voltage drive, even though they both contain a DC voltage component. The voltage drive was studied theoretically and experimentally for superconducting hybrid junctions in Refs. [37; 30; 33], and for a QPC in the FQHE regime in Refs. [7; 8; 9; 39]. We note that the results of Refs. [32; 29; 31] for both normal metal junctions and normal metal/superconducting hybrid junctions fall in this category, although they do not directly consider a voltage drive.
Interestingly, the gate voltage modulation scenario has received little attention so far in the FQHE. In the core of this paper we wish to stress that it is especially relevant in the search for manifestations of the scaling dimension of quasiparticles. Both setups are depicted in Fig. 1.
Note that both types of AC modulations can effectively be included in the tunnel amplitude. For the gate modulation, one adds an oscillating contribution \(\lambda_{1}(t)\) to the bare tunnel amplitude \(\lambda_{0}\), while for the voltage drive modulation, one incorporates the AC voltage drive via the Peierls substitution, i.e., as an additional phase, yielding
\[\lambda(t)=\begin{cases}\lambda_{0}(1+\lambda_{1}(t))&\text{gate drive},\\ \lambda_{0}\exp\left(ie^{*}\int_{-\infty}^{t}dt^{\prime}V_{ac}(t^{\prime}) \right)&\text{voltage drive}\,,\end{cases} \tag{11}\]
where \(\lambda=0\) corresponds to an infinite barrier. Concerning the gate drive, without loss of generality one can choose it to be real valued. It then only makes physical sense to choose \(|\lambda_{1}(t)|<1\).
Assuming that both drives are periodic, and in order to stick to previous conventions [37], we specify the Fourier decomposition of the tunnel coupling as:
\[\lambda(t)=\lambda_{0}\sum_{l}\overline{p_{l}}e^{il\Omega t}\,. \tag{12}\]
where the drive frequency is \(\Omega\), and \(\overline{x}\) is the complex conjugate of \(x\). Then, provided that no assumptions are made on the \(p_{l}\), the computation can be carried out simultaneously for the voltage drive or the gate drive. However, it is worth mentioning that the Fourier coefficients \(p_{l}\) do not bear the same physical meaning when describing the two different drives. For the gate drive, there is typically a finite number of Fourier coefficients, which satisfy \(\overline{p_{l}}=p_{-l}\) to represent a real valued modulation of the tunnel amplitude. However, for a voltage drive, the \(p_{l}\)'s are the Fourier coefficients of a complex number of modulus one. As a consequence, there is an infinite set of coefficients \(p_{l}\) in this case, which obey specific sum rules [36; 37].
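As an aside worth making explicit (a standard property, stated here for completeness), the unit-modulus condition \(|\lambda(t)|=\lambda_{0}\) characterizing the voltage drive translates into the sum rules

\[\sum_{m}\overline{p_{m+k}}\,p_{m}=\delta_{k,0}\,,\]

obtained by inserting Eq. (12) into \(\lambda(t)\overline{\lambda(t)}=\lambda_{0}^{2}\) and matching Fourier components; no such constraint applies to the gate drive, whose coefficients merely satisfy \(\overline{p_{l}}=p_{-l}\).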
## III Current for arbitrary \(\nu\)
The aim of this section is to provide a general derivation of the backscattering current while making minimal assumptions on the Fourier coefficients \(p_{l}\) of either drive. Also, specifically for this section, we keep in mind that at any moment the duality transformation to the strong backscattering regime can be applied. Furthermore, we explicitly write the scaling dimension as \(\nu_{D}\), although in the Laughlin case it reduces to \(\nu_{D}=\nu\). This allows us to track the effects which are specific to the scaling dimension. This choice is further reinforced by the general case detailed in Appendix B, where the degeneracy between scaling dimension and filling factor is lifted.
The photo-assisted backscattering current can be computed to second order in the tunnel coupling \(\lambda(t)\), using the Keldysh formalism [41]. From Eq. (6) and Ref. [25], it reads
\[\langle I_{\mathrm{T}}(t)\rangle= \frac{e^{*}}{2}\left(\frac{1}{2\pi a}\right)^{2}\sum_{\epsilon= \pm}\epsilon\int\mathrm{d}t^{\prime}e^{i\epsilon\omega_{0}^{*}(t-t^{\prime})}\] \[\times\left[\lambda(t)\right]^{\epsilon}\left[\lambda(t^{\prime})\right]^{-\epsilon}\sum_{\eta,\eta^{\prime}}\eta^{\prime}e^{2\nu_{D}\mathcal{G}^{\eta\eta^{\prime}}\left(t-t^{\prime}\right)}\,, \tag{13}\]
where \(\eta\), \(\eta^{\prime}\) are Keldysh contour indices and \(\mathcal{G}^{\eta\eta^{\prime}}\) is the corresponding bosonic Keldysh Green's function defined in Appendix A. At this stage, it is important to stress that this expression of the tunneling current readily generalizes to any Abelian edge theory comprising multiple bosonic modes, where it then depends on the effective charge and scaling dimension of the quasiparticle \(\psi_{\mathbf{g}^{*}}\) involved in the leading tunneling process at the QPC, as we show in detail in Appendix B,
\[\langle I_{T}(t)\rangle=\frac{1}{2}Q_{\mathbf{g}^{*}}\sum_{\epsilon}\epsilon\int dt ^{\prime}e^{i\epsilon Q_{\mathbf{g}^{*}}V_{\mathrm{DC}}(t-t^{\prime})}\left[\Gamma_{ \mathbf{g}^{*}}(t)\right]^{\epsilon}\left[\Gamma_{\mathbf{g}^{*}}(t^{\prime}) \right]^{-\epsilon}\sum_{\eta\eta^{\prime}}\eta^{\prime}e^{2\delta_{\mathbf{g}^ {*}}\mathcal{G}^{\eta\eta^{\prime}}(t-t^{\prime})}, \tag{14}\]
where \(Q_{\mathbf{g}^{*}}\) and \(\delta_{\mathbf{g}^{*}}\) are respectively the effective charge and scaling dimension of the leading tunneling quasiparticle \(\psi_{\mathbf{g}^{*}}\), while \(\Gamma_{\mathbf{g}^{*}}\) is the corresponding tunneling amplitude. In particular, this expression underlines the importance of the distinction we put forward between scaling dimension and filling factor (as the two only turn out to be equal in the Laughlin case), and further emphasizes the major role played by the scaling dimension in our derivation.
The general calculation of the backscattering current at finite temperature and for arbitrary periodic drives is quite cumbersome, and details of the derivation are provided in Appendix C. First, the summation over the Keldysh indices is performed explicitly using the symmetry properties of the chiral bosonic Green's function components. Next, the time integral is performed and written in terms of Gauss' hypergeometric function \({}_{2}F_{1}\). The result reads
\[\begin{split}\langle I_{\mathrm{T}}(t)\rangle=e^{*}\left(2v \tau_{0}\right)^{-2}\pi^{-3}\beta\lambda_{0}^{2}\sum_{l,m}\overline{p_{l}}p_{m }e^{i(l-m)\Omega t}\\ \times\sum_{\eta=\pm}\eta\Bigg{[}\frac{-i\eta\sin\left(\frac{ \pi}{\beta}\tau_{0}\right)\exp\left(i\eta\frac{\pi}{\beta}\tau_{0}\right)}{ \nu_{D}-i\frac{m+q}{2\pi\theta}}{}_{2}F_{1}\left(1,1-\nu_{D}-i\frac{m+q}{2\pi \theta};1+\nu_{D}-i\frac{m+q}{2\pi\theta};\exp\left(2i\eta\frac{\pi}{\beta} \tau_{0}\right)\right)\\ -(m,q)\rightarrow(-l,-q)\Bigg{]}\,,\end{split} \tag{15}\]
where \(\tau_{0}=a/v\) is the short time cutoff, \(\beta\) is the inverse temperature, \(\theta=(\beta\Omega)^{-1}\) is the reduced temperature and \(q=\omega_{0}^{*}/\Omega\). \({}_{2}F_{1}\) is the Gauss hypergeometric function, and the shorthand notation \(f(a,b)-(a,b)\rightarrow(c,d)\equiv f(a,b)-f(c,d)\) has been used. An important advantage of this expression is that it remains valid for arbitrary values of the scaling dimension \(\nu_{D}>0\), allowing us to also obtain the current in various limiting cases, including in the strong backscattering limit using the duality transformation. However, the convergence properties of the resulting hypergeometric function significantly depend on the value of the scaling dimension. For this reason, all physically relevant cases, corresponding to the Laughlin fractions for weak backscattering, the Fermi liquid case, or the Laughlin fractions for strong backscattering, have to be discussed separately. Note that this subtlety does not occur in the standard computation of the current in the presence of a DC voltage; it is specific to photo-assisted transport at finite temperature.
## IV Weak backscattering regime
Here, we focus on filling factors \(0<\nu<1\) in Eq. (15), corresponding solely to the weak backscattering regime dominated by quasiparticle transfer through the quantum Hall fluid. One can perform an expansion of the hypergeometric function \({}_{2}F_{1}\), specific to these filling factors, to leading order in \(\tau_{0}/\beta\) (we recall that \(\tau_{0}\) is the short time cutoff of the chiral Luttinger liquid theory). This is achieved in Appendix D. Furthermore, without loss of generality (due to the choice of time origin), we can safely assume that the Fourier coefficients \(p_{l}\) are real. This leads to the general expression for the current in terms of cosine and sine harmonics at the drive frequency:
\[\begin{split}\langle I_{\mathrm{T}}(t)\rangle=I_{0}+\mathcal{I} \sum_{l>0}\Bigg{[}&\cos(l\Omega t)\sum_{m}\left|\Gamma\left(\nu_{ D}+i\frac{m+q}{2\theta\pi}\right)\right|^{2}(p_{m-l}p_{m}+p_{m}p_{l+m})\sinh \left(\frac{m+q}{2\theta}\right)\\ &+\sin(l\Omega t)\tan(\pi\nu_{D})\sum_{m}\left|\Gamma\left(\nu_{ D}+i\frac{m+q}{2\theta\pi}\right)\right|^{2}(p_{m}p_{l+m}-p_{m-l}p_{m})\cosh \left(\frac{m+q}{2\theta}\right)\Bigg{]}\,,\end{split} \tag{16}\]
with the prefactor:
\[\mathcal{I}=\frac{e^{*}\Omega}{\pi}\left(\frac{\lambda_{0}}{v}\right)^{2} \left(\frac{2\pi\theta}{\Lambda}\right)^{2\nu_{D}-2}\frac{\theta}{\Gamma(2\nu _{D})}\,, \tag{17}\]
where \(\Lambda=(\Omega\tau_{0})^{-1}\) is the reduced high energy cutoff, and the zeroth harmonic is
\[I_{0}=\mathcal{I}\sum_{m}\left|\Gamma\left(\nu_{D}+i\frac{m+q}{2\theta\pi} \right)\right|^{2}p_{m}^{2}\sinh\left(\frac{m+q}{2\theta}\right)\,. \tag{18}\]
A few comments are in order at this stage. First, we stress that this formula for the fully time-dependent current is valid for both a voltage drive and a gate drive. Second, the zeroth harmonic contribution introduced in Eq. (18) corresponds naturally to the current averaged over one period of the drive and, in the voltage drive case, satisfies a Tien-Gordon-like formula [28] as it corresponds to a weighted sum of DC contributions with a shifted voltage \(e^{*}V_{\mathrm{DC}}\to e^{*}V_{\mathrm{DC}}+m\Omega\) and probability \(p_{m}^{2}\). Finally, while all harmonics of the current depend on the scaling dimension in a nontrivial way, it turns out that the sine harmonics, in \(\sin(l\Omega t)\), all carry a prefactor \(\tan(\pi\nu_{D})\), which constitutes a striking dependence on the scaling dimension worth exploring further. Note that a similar-looking dephasing in the time-dependent current has been obtained previously in some related cases [7; 40] but remained unexploited. Indeed, it does not seem obvious to easily isolate this factor from the backscattering current, as the latter involves many contributions of the same order of magnitude.
To this end, the current can be rewritten as
\[\langle I_{\mathrm{T}}(t)\rangle=\sum_{n=0}^{\infty}I_{n}(t)\,, \tag{19}\]
where
\[I_{n\neq 0}(t)=\mathcal{I}C_{n}\cos(n\Omega t+\varphi_{n})\,, \tag{20}\]
and the formulas for \(\varphi_{n}\) and \(C_{n}\) are given in Appendix D, see Eq. (D7).
We mention in passing that the analytical continuation of Eq. (16) for \(\nu_{D}=1\) holds, allowing us to retrieve the Fermi liquid behavior discussed below (see Sec. VI), although the computation steps are not quite valid in this regime.
## V Harmonics of the current
This section is devoted to the analysis of the current and its different harmonics in the weak backscattering regime, as defined in Eq. (19). We start by analyzing a cosine voltage drive, showing that there is no simple way to extract the scaling dimension of the quasiparticles from the current or its harmonics in this setting. On the other hand, for a cosine gate drive, we establish, in a second subsection, a proportionality relation between the phase shift of the second harmonic of the current and the scaling dimension of the quasiparticle operator.
### Voltage drive
When a voltage drive is applied, the tunnel coupling is modified according to Eq. (11). The drive is defined as
\[V(t)=V_{\mathrm{DC}}+V_{\mathrm{AC}}\cos\Omega t\,, \tag{21}\]
where the normalized modulation amplitude is \(\alpha=e^{*}V_{\mathrm{AC}}/\Omega\). The Fourier coefficients of the tunnel coupling, see Eq. (12), read [39]
\[p_{l}=J_{l}(-\alpha)\,, \tag{22}\]
where \(J_{l}\) are Bessel functions of the first kind. One can readily check that these coefficients are real so that the expression for the time-dependent current, Eq. (16), can be used as is.
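As a numerical illustration (our own sketch, using the conventions above; `scipy.special.gamma` accepts complex arguments), one can check the completeness of the \(p_{l}\) and evaluate the averaged current \(I_{0}\) of Eq. (18) up to its overall prefactor \(\mathcal{I}\):

```python
import numpy as np
from scipy.special import jv, gamma

nu_D, theta, alpha = 1 / 3, 0.1, 1.0   # scaling dimension, reduced temperature, e*V_AC/Omega
ls = np.arange(-40, 41)                # photo-assisted sideband indices l
p = jv(ls, -alpha)                     # Eq. (22)
print("sum rule:", p @ p)              # -> 1.0, completeness of the p_l

def I0_over_I(q):
    # Averaged (Tien-Gordon-like) current of Eq. (18), in units of the prefactor I of Eq. (17).
    x = (ls + q) / (2 * np.pi * theta)
    return np.sum(np.abs(gamma(nu_D + 1j * x)) ** 2 * p ** 2
                  * np.sinh((ls + q) / (2 * theta)))

for q in (0.2, 0.5, 0.8):
    print(f"q = {q}: I0/I = {I0_over_I(q):.4g}")
```

The rapid decay of \(J_{l}(-\alpha)\) for \(|l|\gg\alpha\) makes the truncation of the sideband sum harmless here.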
It follows from Eq. (22) that the Fourier coefficients \(p_{l}\) are nonzero for all \(l\). The harmonics \(I_{n}\) of the current are therefore written as infinite sums involving all Fourier coefficients, which significantly complicates the resulting expressions. As a result, in this voltage drive regime, we have been unable to extract a simple signature of the scaling dimension of the quasiparticle operator from the harmonics of the current. This situation is not specific to the present choice of a cosine drive, but instead arises from the time dependence of the tunnel coupling, which appears as an exponential of a periodic function.
An illustration of the fully time-dependent current is proposed in Fig. 2. We show the AC part of the current and its first two harmonics as a function of time over a full period for various values of the reduced DC voltage \(q<1\). We remark that the current displays a rich behavior, in particular, both the amplitude and the phase of the two first harmonics of the current depend on \(q\) in a nontrivial way. Indeed, each harmonic involves a large number of Fourier components of the tunnel coupling, making it impractical to extract any valuable information.
### Gate drive
The tunnel coupling under a gate drive, as defined in Eq. (11), reads, for a cosine drive,
\[\lambda(t)=\lambda_{0}\left[1+\lambda_{1}\cos(\Omega t)\right]\,. \tag{23}\]
Therefore, its Fourier coefficients are given by
\[p_{0}=1\,,\qquad p_{\pm 1}=\frac{\lambda_{1}}{2}\,,\qquad p_{|n|>1}=0\,. \tag{24}\]
These coefficients are real, allowing us to use the expression for the fully time-dependent current of Eq. (16). More importantly, there is only a finite subset of coefficients that are nonzero (two in the present case). This is a major difference between the gate drive and the voltage drive. While in the latter case, the proliferation of nonzero coefficients did not allow us to obtain a simple self-contained expression of the current, in the present case of a gate drive, the internal summations over \(l\) and \(m\) in Eq. (16) can be readily performed. The resulting expression for the fully time-dependent current is still quite cumbersome. However, working out explicitly the expression for the amplitudes \(C_{n}\) and phases \(\varphi_{n}\) [see Eq. (20)], one can show that
\[\tan\varphi_{2}=\tan\pi\nu_{D}\frac{\left|\Gamma\left(\nu_{D}+i\frac{q+1}{2\theta \pi}\right)\right|^{2}\cosh\left(\frac{q+1}{2\theta}\right)-\left|\Gamma\left( \nu_{D}+i\frac{q-1}{2\theta\pi}\right)\right|^{2}\cosh\left(\frac{q-1}{2\theta }\right)}{\left|\Gamma\left(\nu_{D}+i\frac{q+1}{2\theta\pi}\right)\right|^{2} \sinh\left(\frac{q+1}{2\theta}\right)+\left|\Gamma\left(\nu_{D}+i\frac{q-1}{2 \theta\pi}\right)\right|^{2}\sinh\left(\frac{q-1}{2\theta}\right)} \tag{25}\]
which, in the low temperature regime (\(\theta\ll 1\)), further reduces to
\[\varphi_{2}=\pi\nu_{D} \tag{26}\]
provided that the reduced DC voltage satisfies \(q<1\). Quite astonishingly, the phase shift of the second harmonic of the current induced by a cosine gate drive is, in the low temperature limit, exactly equal to \(\pi\) times the scaling dimension of the quasiparticle operator.
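Equation (25) is straightforward to evaluate; a minimal numerical sketch (our own check, with the principal arctan branch, appropriate for \(0<\nu_{D}<1/2\)) confirms that \(\varphi_{2}\to\pi\nu_{D}\) at low reduced temperature for \(q<1\):

```python
import numpy as np
from scipy.special import gamma

def phi2(nu_D, q, theta):
    # Phase of the second harmonic, Eq. (25); principal arctan branch,
    # which matches Eq. (26) for 0 < nu_D < 1/2 (e.g. Laughlin nu = 1/3).
    def g(s):  # |Gamma(nu_D + i (q+s)/(2 pi theta))|^2
        return np.abs(gamma(nu_D + 1j * (q + s) / (2 * np.pi * theta))) ** 2
    num = g(+1) * np.cosh((q + 1) / (2 * theta)) - g(-1) * np.cosh((q - 1) / (2 * theta))
    den = g(+1) * np.sinh((q + 1) / (2 * theta)) + g(-1) * np.sinh((q - 1) / (2 * theta))
    return np.arctan(np.tan(np.pi * nu_D) * num / den)

for theta in (0.2, 0.1, 0.02):
    print(theta, [round(phi2(1 / 3, q, theta) / np.pi, 3) for q in (0.2, 0.5, 0.8)])
# phi2/pi -> nu_D = 1/3 as theta -> 0 for q < 1, with the largest finite-T
# deviations at high q, as in Fig. 4
```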
The same representation of the current as that adopted for the voltage drive (see Fig. 2) is proposed in Fig. 3, for a finite reduced temperature \(\theta=0.1\). In the lower right panel we remark that the phase shift of the second harmonic is indeed equal to \(\pi\nu_{D}\) for a large range of \(q\) (the slight discrepancy for the highest \(q=0.8\) disappears at lower temperature). Indeed, from Eq. (20), one readily sees that this phase shift can be recast as a shift in time by an amount \(t/T=-\varphi_{2}/(4\pi)=-\nu_{D}/4\) (taking into account that \(C_{2}>0\)).
The robustness of this result for finite temperature is explored in Fig. 4. It displays the evolution of \(\varphi_{2}\) as a function of \(q\) for different values of \(\nu_{D}\) and at two experimentally realistic reduced temperatures, \(\theta=0.1\) and \(\theta=0.2\). Note that in actual experimental realizations, a reduced temperature \(\theta=0.1\) would correspond to an actual temperature of 50mK for a drive frequency of 10GHz. For this value of the reduced temperature, the results of Fig. 4 show a good agreement between the phase shift \(\varphi_{2}\) and the scaling dimension, over a large range of DC voltage. Increasing the temperature leads to a small departure between the two, which further grows as one increases the reduced voltage \(q\) (as already observed for \(q=0.8\) in Fig. 3).
We stress that the identification of the phase shift, which gives direct access to the quasiparticle operator scaling dimension, requires the experimental measurement of the _time-dependent current_, rather than the measurement of its average value over the period of the drive. Experimentally, it would therefore be necessary to measure the _harmonics_ of the current, for instance by multiplying the current signal by a chosen, specific, periodic signal, and subsequently performing the average over the period of the drive.
Figure 2: **Voltage drive case:** Average current through the QPC in the weak backscattering regime under the voltage drive \(V(t)=V_{\rm DC}+V_{\rm AC}\cos\Omega t\) and for various values of the average transmitted charge \(q=e^{*}V_{\rm DC}/\Omega\). The filling factor is \(\nu=1/3\), the AC normalized amplitude is \(\alpha=1\), and the temperature is \(\theta=0.1\). The top panel shows the AC part of the current as a function of time, over one period. The lower left panel displays the first harmonic and the lower right the second one.
The present prediction for the phase shift as a signature of the scaling dimension of the quasiparticle operator for a gate voltage modulation constitutes the central result of this work. Based on the generalized derivation presented in Appendix B, and the assumptions underlying the above computations, it should hold for a broad range of filling factors, the only requirement being that the scaling dimension of the quasiparticle involved in the leading tunneling process satisfies \(0<\nu_{D}<1\) with \(\nu_{D}\neq 1/2\).
## VI Fermi liquid computation
In this section we propose a computation of the Fermi liquid limit: first, using standard Fermi liquid theory, and second, taking the limit \(\nu=1\) of the chiral Luttinger liquid theory presented above. This fulfills two purposes: it extends the usual Fermi liquid computation of the current through a QPC to periodic drives, and it allows us to perform consistency checks of the Fermi liquid limit of our Luttinger liquid computation.
### Fermi liquid formalism
In the Fermi liquid picture, the Hamiltonian is written in analogy with Eq. (9), in terms of electron creation and annihilation operators \(\Psi_{\rm L,R}\) at position \(x=0\) in the left or right leads. It reads
\[\mathcal{H}_{\rm T}=\lambda(t)\Psi_{\rm L}^{\dagger}\Psi_{\rm R}+{\rm H.c.} \tag{27}\]
Thus the current operator is
\[I_{\rm T}=ie\lambda(t)\Psi_{\rm L}^{\dagger}\Psi_{\rm R}+{\rm H.c.} \tag{28}\]
We define Keldysh Green's functions for electron operators as
\[G^{\eta\eta^{\prime}}_{\rm ss^{\prime}}=-i\left\langle\mathcal{T}_{K}\Psi_{\rm s}(t_{ \eta})\Psi_{\rm s^{\prime}}^{\dagger}(t_{\eta^{\prime}}^{\prime})\right\rangle\,, \tag{29}\]
where \(\mathcal{T}_{\rm K}\) is the time ordering operator along the Keldysh contour, s and s' can be either L or R, and the Keldysh contour indices \(\eta\) and \(\eta^{\prime}\) can be \(+\) or \(-\). The average current can be written with the Keldysh Green's functions for electron operators:
\[\langle I_{\mathrm{T}}(t)\rangle=-e\left[\lambda(t)G_{\mathrm{RL}}^{+-}(t,t)- \lambda^{*}(t)G_{\mathrm{LR}}^{+-}(t,t)\right]\,. \tag{30}\]
Figure 3: **Gate drive case:** Average current through the QPC in the weak backscattering regime under the gate drive Eq. (23) and for various values of \(q\). The figure was obtained with the following parameters: the filling factor is \(\nu=1/3\), the tunneling amplitude modulation is \(\lambda_{1}=1\) and the reduced temperature is \(\theta=0.1\). The top panel shows the AC part of the current as a function of time, over one period. The lower left panel displays the first harmonic of the current and the lower right panel the second harmonic of the current, where all curves depict the same phase shift set by the scaling dimension \(\nu_{D}\).
Working in the wide band limit, the expression for the current can be obtained using standard tools (see Appendix E for details) and simplifies to
\[\langle I_{\mathrm{T}}(t)\rangle= \frac{\lambda_{0}^{2}e}{4\pi v_{\mathrm{F}}^{2}}\sum_{l,m}\overline {p_{l}}p_{m}e^{i(l-m)\Omega t}\] \[\times\int\mathrm{d}\omega\left[f(\omega-\mu_{\mathrm{R}})-f( \omega+l\Omega-\mu_{\mathrm{L}})\right.\] \[\left.-f(\omega-\mu_{\mathrm{L}})+f(\omega+m\Omega-\mu_{\mathrm{ R}})\right]\,. \tag{31}\]
where \(f(x)\) is the usual Fermi distribution and \(\mu_{R/L}\) is the chemical potential of edge \(R/L\). Performing the integration yields
\[\langle I_{\mathrm{T}}(t)\rangle=\frac{e\Omega\lambda_{0}^{2}}{4\pi v_{F}^{2} }\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t}(2q+l+m)\,. \tag{32}\]
Using Eq. (12) to write the derivative \(\partial_{t}\lambda(t)=i\Omega\lambda_{0}\sum_{l}l\,\overline{p_{l}}e^{il\Omega t}\), the current can be rewritten as
\[\langle I_{\mathrm{T}}(t)\rangle=\frac{e\Omega}{4\pi v_{\mathrm{F}}^{2}}\left[ 2q\left|\lambda(t)\right|^{2}+\frac{1}{i\Omega}\left(\overline{\lambda(t)}\,\partial_{t}\lambda(t)-\lambda(t)\,\partial_{t}\overline{\lambda(t)}\right)\right]\,. \tag{33}\]
Finally, substituting the expression for the time-dependent tunnel coupling \(\lambda(t)\) for the two types of drive, one has
\[\langle I_{\mathrm{T}}(t)\rangle=\frac{e^{2}}{2\pi v_{\mathrm{F}}^{2}}\left\{ \begin{aligned} \lambda(t)^{2}V_{\mathrm{DC}}&&\text{for a gate drive}\\ \lambda_{0}^{2}V(t)&&\text{for a voltage drive}\end{aligned}\right. \tag{34}\]
which is the straightforward time-dependent generalization of the Landauer formula for PAT. The absence of a temperature dependence is a consequence of the wide band approximation.
### Luttinger liquid approach at \(\nu=1\)
Finding the current in the Fermi liquid limit of the Luttinger theory can be done by setting \(\nu_{D}=\nu=1\) in the general formula for the current, Eq. (15). In this case, the expansion of Eq. (15) performed in Sec. IV does not hold. However, as shown in Appendix F, a logarithmic expansion can be carried out to lowest order in \(\tau_{0}/\beta\). This expansion yields two sums, which we call Fermi/DC [Eq. (F3)] and correlated-AC [Eq. (F4)], for reasons that will become clear below. In the particular case of \(\nu_{D}=1\), the leading term is the first term of the Fermi/DC sum, see Eq. (F5), and the current is identical to that obtained from the Fermi liquid approach, Eq. (32). We have thus checked the consistency of our chiral Luttinger liquid approach in the Fermi liquid regime.
## VII Strong backscattering regime
In this section we describe the behavior of the current in the strong backscattering regime, which is obtained from a duality transformation. The latter only holds for filling factors in the Laughlin series, so that, for clarity, we revert to a description in terms of the filling factor \(\nu\). This regime is obtained by setting \(\nu\to\nu^{-1}\), \(e^{*}\to e\) and \(\omega_{0}^{*}\to eV_{\mathrm{DC}}\) in the expression for the current, Eq. (13). As \(\nu^{-1}\in\mathbbm{N}\) for Laughlin fractions, one has to exploit the expansion of Eq. (15) to leading order in the cutoff, which is valid for all integers \(\nu^{-1}>0\), i.e., the logarithmic expansion, Eq. (F2).
As already pointed out in Section VI, this expansion consists of two sums. In the case \(\nu=1\), the leading term in \(\tau_{0}/\beta\) belongs to the Fermi/DC sum. However, here, the expansion in orders of \(\tau_{0}/\beta\) favors another term, belonging to the second sum, which we call the correlated-AC sum, see Eq. (F4). More precisely, the leading term is of first order in \(\tau_{0}/\beta\) and yields a current:
\[\begin{split}\langle I_{\mathrm{T}}(t)\rangle\approx& \frac{-e}{(1-2\nu^{-1})_{3}}\left(\frac{\lambda_{0}}{\pi v}\right)^{2}\frac{ \Omega}{\Lambda}\\ &\times\Im\left[\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t }\left(m+q\right)^{2}\right]\,,\end{split} \tag{35}\]
where \((x)_{n}\) is the Pochhammer symbol, defined in Appendix F. This expression can take a simpler form since, following the steps used to obtain Eq. (33), one can write
\[\langle I_{\rm T}(t)\rangle=\frac{e}{(2\nu^{-1}-1)(2\nu^{-1}-2)(2\nu^{-1}-3)} \frac{1}{\pi^{2}v^{2}\Omega\Lambda}\Im\left[\lambda_{0}^{2}e^{2}V_{\rm DC}^{2}+2 ieV_{\rm DC}\lambda(t)\partial_{t}\overline{\lambda(t)}-\lambda(t)\partial_{t}^{2} \overline{\lambda(t)}\right]\,. \tag{36}\]
Figure 4: Phase of the second harmonic [as defined in Eqs. (D7) and (25)] of the current through the QPC under the gate drive Eq. (23), in the weak backscattering regime, as a function of \(q\), for various filling factors \(\nu\). Full lines are for reduced temperature \(\theta=0.1\) and dashed lines for \(\theta=0.2\).
Finally, computing explicitly the imaginary part, one can express the end result in a unified way for both types of drives defined in Eq. (11) as
\[\langle I_{\rm T}(t)\rangle=\frac{e}{\left(\frac{2}{\nu}-1\right)\left(\frac{ 2}{\nu}-2\right)\left(\frac{2}{\nu}-3\right)}\frac{1}{\pi^{2}v^{2}\Omega \Lambda}\partial_{t}\left(\lambda^{2}V\right)\,. \tag{37}\]
This result is quite intriguing, as it involves the time derivative of the AC Landauer formula, Eq. (34). As in the Fermi liquid computation, there is no temperature dependence to this order; this reflects the wide band limit of the Luttinger model.
In the case of a voltage drive, the junction in the strong backscattering regime behaves as a standard capacitor, i.e.,
\[\langle I_{\rm T}(t)\rangle=C\frac{{\rm d}V(t)}{{\rm d}t}\,, \tag{38}\]
with a capacitance \(C=\frac{e^{2}}{\left(\frac{2}{\nu}-1\right)\left(\frac{2}{\nu}-2\right)\left( \frac{2}{\nu}-3\right)}\frac{\lambda_{0}^{2}}{\pi^{2}v^{2}\Omega\Lambda}\), which, after restoring the proper powers of \(\hbar\), can be further recast as \(C=c\frac{2\pi a}{\left(\frac{2}{\nu}-1\right)\left(\frac{2}{\nu}-2\right)\left( \frac{2}{\nu}-3\right)}\left(\frac{\lambda_{0}}{\hbar v}\right)^{2}\), where \(c=e^{2}/(\hbar v)\) is the quantum capacitance by unit length.
In the case of a gate drive the situation is different: defining the transmission of the junction as \(\tau(t)=4\lambda^{2}(t)\), the current reads
\[\langle I_{\rm T}(t)\rangle=\frac{e^{2}V_{\rm DC}}{\left(\frac{2}{\nu}-1 \right)\left(\frac{2}{\nu}-2\right)\left(\frac{2}{\nu}-3\right)\pi^{2}v^{2} \Omega\Lambda}\frac{{\rm d}\tau(t)}{{\rm d}t}\,. \tag{39}\]
To summarize, the general expansion of the hypergeometric function in \(\tau_{0}/\beta\) for positive integer \(\nu_{D}^{-1}\) yields a current consisting of two sums, see Eq. (F2). We have to consider three different situations: when the junction is driven by a DC drive only; when the junction is driven by an AC drive and is in the Fermi liquid regime, \(\nu=1\); or when the junction is driven by an AC drive and correlations are present, i.e., \(\nu^{-1}>1\).
When the junction is in the Fermi liquid regime (\(\nu=1\)) or solely driven by a DC drive, the leading terms of the expansion belong to the same sum, which we therefore call the Fermi/DC sum, see Eq. (F3). In the DC case, this term yields the duality transformation of the already known weak backscattering DC result, see [39]. In the Fermi liquid case we find a straightforward extension of the Landauer formula, which we call the AC Landauer formula.
When the junction is driven by both a DC and an AC drive (applied to either the edge or the gate) and correlations are present, i.e., \(\nu^{-1}>1\), the leading contribution to the current comes from another term, belonging to what we therefore call the correlated-AC sum, Eq. (F4). This term yields a current proportional to the time derivative of the AC Landauer formula, Eq. (37).
## VIII Conclusion
In the introduction of this paper, we stressed the considerable theoretical investment in the search for signatures of the statistics of anyons of the FQHE in the context of electronic quantum transport setups. Most of these setups indeed require quite complicated geometries or types of measurements. Noting that the statistical angle of anyonic quasiparticles is intimately tied to the scaling dimension of the quasiparticle operator, we have asked a naive question: can this scaling dimension be detected via a careful measurement of the time-dependent backscattering current in the weak backscattering regime?
For this purpose, we have reexamined the theory of photo-assisted transport in the FQHE. We have noted that such PAT can be achieved in two distinct ways. Either one modulates the gate voltage applied to the QPC (to our knowledge this type of drive has not received much attention in the context of PAT), or one adds an AC modulation on top of the DC voltage drive (as was proposed theoretically in Refs. [7; 39]).
Our first task was to show that both drives can be described with a unified approach: the only difference between the two drives resides in the details of the Fourier decomposition of the tunnel amplitude describing them. The time-dependent current can then be computed analytically in terms of sums over these Fourier coefficients, further involving a Gauss hypergeometric function, a result which is quite abstract in nature. In order to make progress, expansions to leading order in the cutoff need to be subsequently performed. Interestingly, these expansions depend crucially on the value of the scaling dimension \(\nu_{D}\) and whether the QPC is in the weak or strong backscattering regime.
For the weak backscattering regime, we obtained expressions which allow us to characterize the time-dependent current as a constant term accompanied by its harmonics at multiples of the drive frequency. It is precisely in these harmonics that we believe it is possible to isolate the scaling dimension of the quasiparticle tunneling operator. Indeed, by choosing a simple cosine modulation for a gate drive, we were able to show that the phase shift of the second harmonic of the time-dependent current is directly proportional to the scaling dimension at low temperature, which constitutes the central message of this paper. We stressed that this connection is robust at finite temperature, rendering it accessible to experimental observation. A typical experiment would require multiplying the time-dependent current signal by a suitable harmonic reference at twice the drive frequency to detect this phase shift.
For completeness, we further explored the expansion
properties of hypergeometric functions which appear in the general expression of the time dependent current, in order to derive results for both the Fermi liquid limit \(\nu=1\) and for the strong backscattering limit. In the former case, we performed an independent Fermi liquid calculation using the Dyson equation for fermionic Keldysh Green's function and derived a time dependent generalization of the Landauer formula. We further showed that this AC Landauer formula is in agreement with the \(\nu=1\) limit of the chiral Luttinger model. In the case of strong backscattering, we derived an expression of the current in terms of time derivative of the drives.
Our main result about the phase shift suggests that a careful measurement of the time-dependent current for a device containing a _single_ quantum point contact could provide (albeit indirect) insight into the detection of fractional statistics in the fractional quantum Hall effect.
###### Acknowledgements.
We are grateful to A. Crépieux, G. Fève and M. Hashisaka for useful discussions. This work received support from the French government under the France 2030 investment plan, as part of the Initiative d'Excellence d'Aix-Marseille Université - A*MIDEX. We acknowledge support from the institutes IPhU (AMX-19-IET-008) and AMUtech (AMX-19-IET-01X). D.C.G. acknowledges the ANR FullyQuantum 16-CE30-0015-01 grant and the H2020 FET-OPEN UltraFastNano No. 862683 grant.
## Appendix A Keldysh Green's functions relations
In this appendix we introduce the bosonic Keldysh Green's functions used in the main text, see Eq. (13). They are defined as
\[\mathcal{G}^{\eta\eta^{\prime}}(t-t^{\prime})=\left\langle\ \mathcal{T}_{K}\phi\left(t_{\eta},x=0\right)\phi\left(t^{\prime}_{\eta^{ \prime}},x=0\right)\right\rangle\,, \tag{16}\]
where \(\mathcal{T}_{K}\) denotes ordering along the Keldysh contour (see Ref. [41]). The four different Keldysh Green's functions can be summarized by a single one (which has no Keldysh indices)
\[\begin{split}\mathcal{G}^{++}(t-t^{\prime})&= \mathcal{G}(|t-t^{\prime}|)\\ \mathcal{G}^{--}(t-t^{\prime})&=\mathcal{G}(-|t-t^{ \prime}|)\\ \mathcal{G}^{+-}(t-t^{\prime})&=\mathcal{G}(t^{ \prime}-t)\\ \mathcal{G}^{-+}(t-t^{\prime})&=\mathcal{G}(t-t^{ \prime})\,,\end{split} \tag{17}\]
where the 'modified' Green's function is defined as
\[\begin{split}\mathcal{G}(t-t^{\prime})&=\left\langle \phi_{R(L)}(t)\phi_{R(L)}(t^{\prime})\right\rangle\\ &-\left\langle\phi_{R(L)}(t)^{2}\right\rangle/2-\left\langle \phi_{R(L)}(t^{\prime})^{2}\right\rangle/2\,,\end{split} \tag{18}\]
and reads [25]
\[\mathcal{G}(\tau)=-\log\left[\frac{\sinh\left(\frac{\pi}{\beta}(i\tau_{0}- \tau)\right)}{\sinh\left(i\frac{\pi}{\beta}\tau_{0}\right)}\right]\,, \tag{19}\]
where \(\beta\) is the inverse temperature and \(\tau_{0}\equiv a/v\) is the short time cutoff of the chiral Luttinger model.
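For orientation (our addition, following directly from Eq. (19)), at zero temperature (\(\beta\to\infty\)) the hyperbolic sines linearize and the correlator reduces to the familiar power law of the chiral Luttinger liquid,

\[e^{2\nu_{D}\mathcal{G}(\tau)}\xrightarrow[\beta\to\infty]{}\left(\frac{i\tau_{0}}{i\tau_{0}-\tau}\right)^{2\nu_{D}}=\left(\frac{\tau_{0}}{\tau_{0}+i\tau}\right)^{2\nu_{D}}\,,\]

which decays as \(|\tau|^{-2\nu_{D}}\) at long times.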
## Appendix B General model for Abelian fractional quantum Hall edge states
In this appendix, we consider a general model for Abelian fractional quantum Hall edge states and show that it leads to a tunneling current of the same form as the one obtained in Eq. (13).
Our starting point is the action of the general Abelian FQH edge
\[S=\frac{1}{4\pi}\int dxdt\sum_{l=1}^{N}\left[-\chi_{l}\partial_{x}\phi_{l} \partial_{t}\phi_{l}-v_{l}\left(\partial_{x}\phi_{l}\right)^{2}\right], \tag{20}\]
where \(\phi_{l}\) are a set of bosonic modes (with \(l=1,...,N\)), with chirality \(\chi_{l}=\pm 1\) and velocity \(v_{l}\). In the case of a single mode (\(N=1\)) this reduces to the standard description of the Laughlin series.
These fields satisfy the commutation relation
\[[\phi_{l}(x,t),\phi_{l^{\prime}}(x^{\prime},t^{\prime})]=i\pi\chi_{l}\delta_{ll^{\prime}}\,\text{Sgn}\left(x-x^{\prime}-\chi_{l}v_{l}t+\chi_{l^{\prime}}v_{l^{\prime}}t^{\prime}\right), \tag{10}\]
and help defining the density operator as
\[\rho=\frac{1}{2\pi}\sum_{l}q_{l}\partial_{x}\phi_{l}, \tag{11}\]
where the set of coefficients \(q_{l}\) encode the contribution of the \(l^{\text{th}}\) mode to charge transport. These coefficients are related to the filling factor in a nontrivial way as they satisfy the sum rule
\[\sum_{l}\chi_{l}q_{l}^{2}=\nu. \tag{12}\]
In analogy with the Laughlin case, the edge supports quasiparticles, whose creation/annihilation operators involve a linear combination of all bosonic modes, namely
\[\psi_{\mathbf{g}}(x,t)\propto\exp\left[i\sum_{l=1}^{N}g_{l}\phi_{l}(x,t)\right]. \tag{13}\]
For a given vector \(\mathbf{g}=\{g_{1},...,g_{N}\}\), the corresponding quasiparticle \(\psi_{\mathbf{g}}\) is characterized by three important physical quantities:

\[Q_{\mathbf{g}}=e\sum_{l}\chi_{l}q_{l}g_{l}\quad\text{(its effective charge)}, \tag{14}\]

\[\delta_{\mathbf{g}}=\sum_{l}g_{l}^{2}\quad\text{(its scaling dimension)}, \tag{15}\]

\[\Theta_{\mathbf{g}}=\pi\sum_{l}\chi_{l}g_{l}^{2}\quad\text{(its statistical angle)}. \tag{16}\]
Note that in all generality, the statistical angle is bounded by the scaling dimension, \(|\Theta_{\mathbf{g}}|\leq\pi\delta_{\mathbf{g}}\), and even reduces to \(|\Theta_{\mathbf{g}}|=\pi\delta_{\mathbf{g}}\) in the special situation where all modes have the same chirality, \(\chi_{l}=\chi\), \(\forall l\).
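The following minimal sketch (our addition; the single-mode example data are illustrative) evaluates these three quantities for a given quasiparticle vector, with \(q_{l}\) chosen to satisfy the sum rule of Eq. (12):

```python
# Sketch: effective charge, scaling dimension and statistical angle of a
# quasiparticle g = (g_1, ..., g_N), per Eqs. (14)-(16).
import numpy as np

def qp_quantities(chi, q, g, e=1.0):
    chi, q, g = map(np.asarray, (chi, q, g))
    Q = e * np.sum(chi * q * g)           # effective charge, Eq. (14)
    delta = np.sum(g**2)                  # scaling dimension, Eq. (15)
    Theta = np.pi * np.sum(chi * g**2)    # statistical angle, Eq. (16)
    return Q, delta, Theta

# Single chiral mode describing the Laughlin state at nu = 1/3:
nu = 1.0 / 3.0
chi, q, g = [1], [np.sqrt(nu)], [np.sqrt(nu)]   # sum rule: chi * q^2 = nu
print(qp_quantities(chi, q, g))  # -> (1/3, 1/3, pi/3): e* = nu*e, Theta = pi*nu
```

Since this single mode is chiral, \(|\Theta_{\mathbf{g}}|=\pi\delta_{\mathbf{g}}\) here, saturating the bound quoted above.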
The QPC is set in the weak backscattering regime and is thus modeled by a Hamiltonian describing the tunneling of quasiparticles between the two edges as
\[H_{T}=\sum_{\mathbf{g}}\Gamma_{\mathbf{g}}{\psi^{(u)}_{\mathbf{g}}}^{\dagger }(0)\psi^{(d)}_{\mathbf{g}}(0)+\text{H.c.} \tag{17}\]
where \((u)/(d)\) label the upper and lower edges (the standard \(R/L\) designation being ill-defined in the presence of non-chiral modes). In all generality, one would need to account for all possible tunneling events, i.e. ones involving all possible quasiparticles. In practice, however, it makes sense to favor the one with the lowest scaling dimension, as it is the most relevant perturbation in the RG sense. In what follows, we label this leading quasiparticle with the vector \(\mathbf{g}^{*}\).
From the expression of the tunneling Hamiltonian, one readily obtains the tunneling current operator at the location of the QPC, as
\[I_{T}(t)=iQ_{\mathbf{g}^{*}}\left[\Gamma_{\mathbf{g}^{*}}(t)e^{iQ_{\mathbf{g} ^{*}}V_{\text{D}\text{C}}t}{\psi^{(u)}_{\mathbf{g}^{*}}}^{\dagger}(0,t)\psi^ {(d)}_{\mathbf{g}^{*}}(0,t)-\text{H.c.}\right], \tag{18}\]
where we introduced the effect of an applied DC voltage between the edges as well as a time-dependent tunnel coupling, along the same lines as we did in the text, leading to Eq. (11).
Using the decomposition of the quasiparticle operators in terms of the bosonic fields \(\phi_{l}\), one can express the thermal average of the tunneling current in terms of the bosonic Green's function \(\mathcal{G}^{\eta\eta^{\prime}}_{l}(t-t^{\prime})\) yielding
\[\langle I_{T}(t)\rangle=\frac{1}{2}Q_{\mathbf{g}^{*}}\sum_{\eta\eta^{\prime}}\eta^{\prime}\int dt^{\prime}\left[\Gamma_{\mathbf{g}^{*}}(t)\overline{\Gamma_{\mathbf{g}^{*}}(t^{\prime})}e^{iQ_{\mathbf{g}^{*}}V_{\mathrm{DC}}(t-t^{\prime})}-\overline{\Gamma_{\mathbf{g}^{*}}(t)}\Gamma_{\mathbf{g}^{*}}(t^{\prime})e^{-iQ_{\mathbf{g}^{*}}V_{\mathrm{DC}}(t-t^{\prime})}\right]\prod_{l}e^{2g_{l}^{*2}\mathcal{G}_{l}^{\eta\eta^{\prime}}(t-t^{\prime})}. \tag{19}\]
At this stage, it is important to keep in mind that \(\mathcal{G}_{l}^{\eta\eta^{\prime}}(t-t^{\prime})\) is a trivial generalization of the one presented in Appendix A. In particular, its Keldysh components follow the same relations as the ones introduced in Eq. (10), with the corresponding Green's function \(\mathcal{G}_{l}(\tau)\) given by
\[\mathcal{G}_{l}(\tau)=-\log\left[\frac{\sinh\left(\frac{\pi}{\beta}\left(i\tau _{l}-\tau\right)\right)}{\sinh\left(i\frac{\pi}{\beta}\tau_{l}\right)}\right]\,, \tag{12}\]
with \(\tau_{l}=a/v_{l}\). It follows from this that the term involving the bosonic Green's function in Eq. (12) can be further rewritten as
\[\prod_{l}e^{2g_{l}^{*2}\mathcal{G}_{l}(\tau)}=\prod_{l}e^{2g_{l}^{*2}\log\left[\frac{\sinh\left(i\frac{\pi}{\beta}\tau_{l}\right)}{\sinh\left(\frac{\pi}{\beta}\left(i\tau_{l}-\tau\right)\right)}\right]}=\prod_{l}e^{2g_{l}^{*2}\log\left[\frac{\sinh\left(i\frac{\pi}{\beta}\tau_{0}\right)}{\sinh\left(\frac{\pi}{\beta}\left(i\tau_{0}-\tau\right)\right)}\right]+2g_{l}^{*2}\log\left(\frac{\tau_{l}}{\tau_{0}}\right)}=e^{2\sum_{l}g_{l}^{*2}\mathcal{G}(\tau)}\prod_{l}\left(\frac{\tau_{l}}{\tau_{0}}\right)^{2g_{l}^{*2}} \tag{13}\]
where we used that the short time cutoff \(\tau_{l}\) in the denominator only serves as a regularization and can be replaced by any infinitesimal. This allows us to drop the \(l\) dependence in the bosonic Green's function and to perform the product over \(l\), letting the scaling dimension appear naturally.
The tunneling current can now be rewritten as
\[\langle I_{T}(t)\rangle=\frac{1}{2}Q_{\mathbf{g}^{*}}\sum_{\epsilon}\epsilon\int dt^{\prime}e^{i\epsilon Q_{\mathbf{g}^{*}}V_{\mathrm{DC}}(t-t^{\prime})}\left[\Gamma_{\mathbf{g}^{*}}(t)\right]^{\epsilon}\left[\Gamma_{\mathbf{g}^{*}}(t^{\prime})\right]^{-\epsilon}\sum_{\eta\eta^{\prime}}\eta^{\prime}e^{2\delta_{\mathbf{g}^{*}}\mathcal{G}^{\eta\eta^{\prime}}(t-t^{\prime})}, \tag{14}\]
where, for convenience and without loss of generality, we reabsorbed the prefactor in \(\tau_{l}/\tau_{0}\) into the definition of the tunneling amplitude. This expression perfectly mirrors the one obtained for the Laughlin case in Eq. (13), where the effective charge \(e^{*}\) and scaling dimension \(\nu_{D}\) of the Laughlin quasiparticle are replaced with the corresponding effective charge \(Q_{\mathbf{g}^{*}}\) and scaling dimension \(\delta_{\mathbf{g}^{*}}\) of the leading tunneling quasiparticle.
## Appendix C Computation steps for the current
In this section we derive a general formula for the average current Eq. (13) without any assumptions on the value of \(\nu_{D}\) other than it being positive. In particular, we obtain Eq. (15).
We start by performing the sum over \(\eta\) in Eq. (13), using
\[\sum_{\eta,\eta^{\prime}=\pm}\eta^{\prime}e^{2\nu_{D}\mathcal{G}^{\eta\eta^{ \prime}}\left(t-t^{\prime}\right)}=2\left[e^{2\nu_{D}\mathcal{G}(\tau)}-e^{2 \nu_{D}\mathcal{G}(-\tau)}\right]\Theta(\tau)\,, \tag{15}\]
where \(\tau=t-t^{\prime}\). Then the sum over \(\epsilon\) can be performed as
\[\sum_{\epsilon=\pm}\epsilon e^{i\epsilon\omega_{0}^{*}(t-t^{\prime})}\left[\lambda(t)\right]^{\epsilon}\left[\lambda(t^{\prime})\right]^{-\epsilon}=\lambda_{0}^{2}\sum_{lm}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t}\left(e^{i(m+q)\Omega\tau}-e^{-i(l+q)\Omega\tau}\right)\,, \tag{16}\]
where \(q=\frac{\omega_{0}^{*}}{\Omega}\). Inserting Eq. (15) and (16) in Eq. (13) gives
\[\langle I_{\mathrm{T}}(t)\rangle=e^{*}\left(\frac{1}{2\pi a}\right)^{2}\lambda _{0}^{2}\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t}\int_{0}^{+\infty} \mathrm{d}\tau\,\left(e^{i(m+q)\Omega\tau}-e^{-i(l+q)\Omega\tau}\right)\left[e ^{2\nu_{D}\mathcal{G}(\tau)}-e^{2\nu_{D}\mathcal{G}(-\tau)}\right]\,. \tag{17}\]
The next step is to simplify the expression of the Green's function \(\mathcal{G}(\tau)\), see Eq. (11), denoting \(\eta=\pm\),
\[e^{2\nu_{D}\mathcal{G}(\eta\tau)}=(-i\eta)^{2\nu_{D}}\tanh\left(\frac{\pi}{ \beta}\tau_{0}\right)^{2\nu_{D}}\frac{\cosh\left(\frac{\pi}{\beta}\tau\right)^ {-2\nu_{D}}}{\left[\tanh\left(\frac{\pi}{\beta}\tau\right)-i\eta\tan\left( \frac{\pi}{\beta}\tau_{0}\right)\right]^{2\nu_{D}}}\,. \tag{18}\]
Thus, the current reads
\[\begin{split}\langle I_{\mathrm{T}}(t)\rangle=e^{*}\left(2v\tau_{ 0}\right)^{-2}&\pi^{-3}\beta\lambda_{0}^{2}\sum_{l,m}\overline{p_{l }}p_{m}e^{i(l-m)\Omega t}\sum_{\eta=\pm}\eta(-i\eta)^{2\nu_{D}}\tanh\left( \frac{\pi}{\beta}\tau_{0}\right)^{2\nu_{D}}\\ &\times\int_{0}^{+\infty}\mathrm{d}x\,\left[\exp\left(i\frac{m+q} {\pi\theta}x\right)-\exp\left(-i\frac{l+q}{\pi\theta}x\right)\right]\frac{ \cosh\left(x\right)^{-2\nu_{D}}}{\left[\tanh\left(x\right)-i\eta\tan\left( \frac{\pi}{\beta}\tau_{0}\right)\right]^{2\nu_{D}}}\,,\end{split} \tag{19}\]
where \(\theta=\left(\beta\Omega\right)^{-1}\) is the reduced temperature. Changing variables to \(y=\tanh(x)\), one is left with
\[\begin{split}\left\langle I_{\mathrm{T}}(t)\right\rangle& =e^{*}\left(2v\tau_{0}\right)^{-2}\pi^{-3}\beta\lambda_{0}^{2} \sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t}\sum_{\eta=\pm}\eta\times\\ &\quad\times\int_{0}^{1}\mathrm{d}y\,\left[(1-y)^{\nu_{D}-1-i\frac {m+q}{2\pi\theta}}(1+y)^{\nu_{D}-1+i\frac{m+q}{2\pi\theta}}\left(1+i\eta\tan \left(\frac{\pi}{\beta}\tau_{0}\right)^{-1}y\right)^{-2\nu_{D}}-(m,q)\to(-l,-q) \right]\,.\end{split} \tag{100}\]
where the shorthand \(f(a,b)-(a,b)\to(c,d)\) stands for \(f(a,b)-f(c,d)\). This integral can be expressed in terms of the first Appell hypergeometric series \(F_{1}\), see Eq. (3.312) of Ref. [42], as long as \(\nu_{D}\) is positive. The current therefore reads
\[\begin{split}\left\langle I_{\mathrm{T}}(t)\right\rangle&=e^{*}\left(2v\tau_{0}\right)^{-2}\pi^{-3}\beta\lambda_{0}^{2}\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t}\sum_{\eta=\pm}\eta\times\\&\qquad\left[\mathrm{B}\left(\nu_{D}-i\frac{m+q}{2\pi\theta},1\right)F_{1}\left(1,1-\nu_{D}-i\frac{m+q}{2\pi\theta},2\nu_{D},1+\nu_{D}-i\frac{m+q}{2\pi\theta};-1;-i\eta\tan\left(\frac{\pi}{\beta}\tau_{0}\right)^{-1}\right)\right.\\&\qquad\left.-\,(m,q)\rightarrow(-l,-q)\right]\,,\end{split}\]

where \(\mathrm{B}(x,y)\) denotes the Beta function.
Using the fact that \(i\eta=\exp\left(i\eta\frac{\pi}{2}\right)\), Euler's reflection formula for the Gamma function [see [42], Eq. (8.384.1)] and the fact that \(\Gamma(z^{*})=\Gamma(z)^{*}\) [which can be deduced from [42], Eq. (8.334.3)], the current is reduced to
\[\langle I_{\rm T}(t)\rangle=-i\frac{\mathcal{I}}{2\cos(\pi\nu_{D})}\sum_{l,m} \overline{p_{l}}p_{m}e^{i(l-m)\Omega t}\left[\left|\Gamma\left(\nu_{D}-i\frac{ m+q}{2\pi\theta}\right)\right|^{2}\sin\left(\pi\nu_{D}+i\frac{m+q}{2\theta} \right)-(m,q)\rightarrow(-l,-q)\right]\,. \tag{103}\]
Finally, after a change of indices, one is left with
\[\langle I_{\rm T}(t)\rangle=\mathcal{I}\sum_{l,m}\left|\Gamma\left(\nu_{D}+i\frac{m+q}{2\pi\theta}\right)\right|^{2}\Im\left\{\overline{p_{m}}p_{m-l}e^{il\Omega t}\left[\tan\left(\pi\nu_{D}\right)\cosh\left(\frac{m+q}{2\theta}\right)+i\sinh\left(\frac{m+q}{2\theta}\right)\right]\right\}\,, \tag{104}\]
where \(\Im(x)\) denotes the imaginary part of \(x\). Assuming that \(p_{l}\in\mathbb{R}\), this reduces to
\[\langle I_{\rm T}(t)\rangle=\mathcal{I}\sum_{l>0} \left[\left.\cos(l\Omega t)\sum_{m}\left|\Gamma\left(\nu_{D}+i \frac{m+q}{2\theta\pi}\right)\right|^{2}\left(p_{m-l}p_{m}+p_{m}p_{l+m}\right) \sinh\left(\frac{m+q}{2\theta}\right)\right.\right. \tag{105}\] \[\left.\left.+\sin(l\Omega t)\tan(\pi\nu_{D})\sum_{m}\left|\Gamma \left(\nu_{D}+i\frac{m+q}{2\theta\pi}\right)\right|^{2}\left(p_{m}p_{l+m}-p_{ m-l}p_{m}\right)\cosh\left(\frac{m+q}{2\theta}\right)\right]\right.\] \[\left.+\mathcal{I}\sum_{m}\left|\Gamma\left(\nu_{D}+i\frac{m+q}{ 2\theta\pi}\right)\right|^{2}p_{m}^{2}\sinh\left(\frac{m+q}{2\theta}\right)\,.\]
The current can finally be rewritten as
\[\langle I_{\rm T}(t)\rangle=I_{0}+\sum_{n>0}\mathcal{I}C_{n}\cos(n\Omega t+ \varphi_{n})\,, \tag{106}\]
where
\[\varphi_{n} =\arctan\left(\frac{B_{n}}{A_{n}}\right)\] \[A_{l} =\sum_{m}\left|\Gamma\left(\nu_{D}+i\frac{m+q}{2\theta\pi} \right)\right|^{2}\left(p_{m-l}p_{m}+p_{l+m}p_{m}\right)\sinh\left(\frac{m+q}{ 2\theta}\right)\] \[B_{l} =\tan(\pi\nu_{D})\sum_{m}\left|\Gamma\left(\nu_{D}+i\frac{m+q}{ 2\theta\pi}\right)\right|^{2}\left(p_{m-l}p_{m}-p_{l+m}p_{m}\right)\cosh\left( \frac{m+q}{2\theta}\right) \tag{107}\] \[C_{n} =\frac{A_{n}}{\cos(\varphi_{n})}\] \[I_{0} =\mathcal{I}\sum_{m}\left|\Gamma\left(\nu_{D}+i\frac{m+q}{2 \theta\pi}\right)\right|^{2}p_{m}^{2}\sinh\left(\frac{m+q}{2\theta}\right)\,.\]
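The harmonic content is straightforward to evaluate numerically. The sketch below (our addition) implements Eq. (107) for a cosine gate drive, assuming the coefficients \(p_{0}=1\), \(p_{\pm 1}=\varepsilon/2\) and illustrative parameter values; at low reduced temperature \(\theta\), \(\cosh\simeq|\sinh|\) in each surviving term, so that \(\tan\varphi_{2}\to\tan(\pi\nu_{D})\), consistent with the phase-shift relation discussed in the main text.

```python
# Sketch: numerical evaluation of the harmonic phases phi_l of Eq. (107).
import numpy as np
from scipy.special import gamma  # supports complex arguments

def harmonics(l, p, nuD, q, theta):
    """Return A_l, B_l and phi_l = arctan(B_l/A_l); p is a dict {m: p_m}."""
    A = B = 0.0
    for m, pm in p.items():
        w = abs(gamma(nuD + 1j * (m + q) / (2 * np.pi * theta)))**2
        pml, plm = p.get(m - l, 0.0), p.get(l + m, 0.0)
        A += w * (pml * pm + plm * pm) * np.sinh((m + q) / (2 * theta))
        B += w * (pml * pm - plm * pm) * np.cosh((m + q) / (2 * theta))
    B *= np.tan(np.pi * nuD)
    return A, B, np.arctan(B / A)

eps, nuD, q, theta = 0.2, 1.0 / 3.0, 0.7, 0.05   # illustrative values
p = {-1: eps / 2, 0: 1.0, 1: eps / 2}            # cosine gate drive
A2, B2, phi2 = harmonics(2, p, nuD, q, theta)
print("phi_2 =", phi2, "  pi*nu_D =", np.pi * nuD)   # nearly equal at low theta
```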
## Appendix E Fermi liquid calculation
In this appendix, we provide some details of the derivation of the current in the Fermi liquid picture. Our starting point is the expression for the average current in terms of the Keldysh Green's functions for the electron operators, namely
\[\langle I_{\rm T}(t)\rangle=-e\left[\lambda(t)G_{\rm RL}^{+-}(t,t)-\lambda^{*}( t)G_{\rm LR}^{+-}(t,t)\right]\,. \tag{108}\]
To leading order in the tunnel coupling \(\lambda\), using Dyson equation, the dressed Green's function reads
\[G_{ss^{\prime}}^{+-}(t,t)=\int\mathrm{d}t^{\prime}\left[g_{\rm ss}^{+-}(t-t^{\prime})\lambda^{*}(t^{\prime})g_{\rm s^{\prime}s^{\prime}}^{a}(t^{\prime}-t)+g_{\rm ss}^{r}(t-t^{\prime})\lambda^{*}(t^{\prime})g_{\rm s^{\prime}s^{\prime}}^{+-}(t^{\prime}-t)\right]\,, \tag{109}\]
where \(g_{ss^{\prime}}^{\eta\eta^{\prime}}(t)\) are the bare Keldysh Green's functions (in the absence of tunneling), and \(g_{ss^{\prime}}^{r/a}(t)=g_{ss^{\prime}}^{++}(t)-g_{ss^{\prime}}^{\pm\mp}(t)\).
Going to Fourier space, i.e., performing a double Fourier transform and keeping in mind that we compute the Green's functions at equal times, we can write
\[G_{\mathrm{RL}}^{+-}(t,t) =\lambda_{0}\sum_{l}\overline{p_{l}}e^{il\Omega t}\int\frac{\mathrm{ d}\omega}{2\pi}\left[g_{\mathrm{RR}}^{+-}(\omega)g_{\mathrm{LL}}^{a}(\omega+l \Omega)+g_{\mathrm{RR}}^{r}(\omega)g_{\mathrm{LL}}^{+-}(\omega+l\Omega)\right] \tag{10}\] \[G_{\mathrm{LR}}^{+-}(t,t) =\lambda_{0}\sum_{l}p_{l}e^{-il\Omega t}\int\frac{\mathrm{d}\omega }{2\pi}\left[g_{\mathrm{LL}}^{+-}(\omega)g_{\mathrm{RR}}^{a}(\omega+l\Omega)+ g_{\mathrm{LL}}^{r}(\omega)g_{\mathrm{RR}}^{+-}(\omega+l\Omega)\right]\,. \tag{11}\]
The current can then be readily written as
\[\begin{split}\langle I_{\mathrm{T}}(t)\rangle=-e\lambda_{0}^{2} \sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t}\int\frac{\mathrm{d}\omega}{2 \pi}\Big{[}g_{\mathrm{RR}}^{+-}(\omega)g_{\mathrm{LL}}^{a}(\omega+l\Omega)+g_ {\mathrm{RR}}^{r}(\omega)g_{\mathrm{LL}}^{+-}(\omega+l\Omega)\\ -g_{\mathrm{LL}}^{+-}(\omega)g_{\mathrm{RR}}^{a}(\omega+m\Omega) -g_{\mathrm{LL}}^{r}(\omega)g_{\mathrm{RR}}^{+-}(\omega+m\Omega)\Big{]}\,.\end{split} \tag{12}\]
In order to (later on) make a correspondence with the \(\nu_{D}=1\) limit of the chiral Luttinger liquid calculation, we need to make consistent assumptions between the two models. In the present case, this means that we have to work in the wide band limit. This limit is implemented by setting
\[\begin{split} g_{\mathrm{ss}}^{r/a}(\omega)&=\mp i (2v_{\mathrm{F}})^{-1}\\ g_{\mathrm{ss}}^{+-}(\omega)&=2i\Im\left[g_{\mathrm{ ss}}^{a}(\omega)\right]f(\omega-\mu_{s})\,,\end{split} \tag{13}\]
with \(\mu_{\mathrm{s}}\) the chemical potential on side \(s\) and \(f(x)\) the Fermi distribution. The current can therefore be simplified into
\[\langle I_{\mathrm{T}}(t)\rangle=\frac{\lambda_{0}^{2}e}{4\pi v_{\mathrm{F}}^ {2}}\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t}\int\mathrm{d}\omega\Big{[} f(\omega-\mu_{\mathrm{R}})-f(\omega+l\Omega-\mu_{\mathrm{L}})-f(\omega-\mu_{ \mathrm{L}})+f(\omega+m\Omega-\mu_{\mathrm{R}})\Big{]}\,. \tag{14}\]
Performing the integration yields
\[\langle I_{\mathrm{T}}(t)\rangle=\frac{e\Omega\lambda_{0}^{2}}{4\pi v_{F}^{2} }\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t}(2q+l+m)\,. \tag{15}\]
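As a simple consistency check (our addition; we assume \(\omega_{0}=eV_{\rm DC}\) in units \(\hbar=1\), consistent with \(q=\omega_{0}/\Omega\)), a purely DC bias corresponds to \(p_{l}=\delta_{l0}\), for which the formula above collapses to the static Landauer result,

\[\langle I_{\rm T}\rangle=\frac{e\Omega\lambda_{0}^{2}}{4\pi v_{F}^{2}}\,2q=\frac{e\lambda_{0}^{2}\,\omega_{0}}{2\pi v_{F}^{2}}=\frac{e^{2}\lambda_{0}^{2}}{2\pi v_{F}^{2}}\,V_{\rm DC}\,,\]

i.e., an Ohmic current linear in the DC bias, as expected for a weak tunneling contact between Fermi liquids.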
## Appendix F Strong backscattering regime
In this appendix, we derive a formula for the current in the strong backscattering regime. Applying the duality transformation to Eq. (15) taken for the Laughlin series, one has for the current in this regime
\[\begin{split}\langle I_{\mathrm{T}}(t)\rangle=e\left(2v\tau_{0} \right)^{-2}\pi^{-3}\beta\lambda_{0}^{2}\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l -m)\Omega t}\\ \times\sum_{\eta=\pm}\eta\Bigg{[}\frac{-i\eta\sin\left(\frac{ \pi}{\beta}\tau_{0}\right)\exp\left(i\eta\frac{\pi}{\beta}\tau_{0}\right)}{ \nu^{-1}-i\frac{m+q}{2\pi\theta}}{}_{2}F_{1}\left(1,1-\nu^{-1}-i\frac{m+q}{2 \pi\theta};1+\nu^{-1}-i\frac{m+q}{2\pi\theta};\exp\left(2i\eta\frac{\pi}{ \beta}\tau_{0}\right)\right)\\ -(m,q)\rightarrow(-l,-q)\Bigg{]}\,,\end{split} \tag{16}\]
where \(q=\omega_{0}/\Omega\).
We now want to perform an expansion at small \(\tau_{0}/\beta\) for the Laughlin series (where \(\nu^{-1}\) is an odd integer). The technique used in the case \(0<\nu_{D}<1\) (see Appendix D) cannot be used here, as \(1-2\nu^{-1}\) is an integer and Eq. (10.1) of Ref. [43] therefore does not hold. However, another route is possible and, as we will show, the leading contribution is of first order or less in \(\tau_{0}\).
Following Prudnikov et al. [see Eq. (7.3.1.31) of Ref. [44]], we perform a logarithmic expansion of the hypergeometric function and remove terms of order higher than three. The current can then be written as the sum of two terms:
\[\langle I_{\mathrm{T}}(t)\rangle=\langle I_{\mathrm{Fermi/DC}}(t)\rangle+ \langle I_{\mathrm{cor-AC}}(t)\rangle\,, \tag{17}\]
where the term containing the DC behavior as well as the Fermi limit is
\[\langle I_{\text{Fermi/DC}}(t)\rangle\approx ie\left(2v\tau_{0}\right) ^{-2}\pi^{-3}\beta\lambda_{0}^{2}\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m) \Omega t}\sum_{\eta=\pm}\sin\left(\frac{\pi}{\beta}\tau_{0}\right)\frac{\Gamma \left(\nu^{-1}-i\frac{m+q}{2\pi\theta}\right)}{\Gamma\left(1-\nu^{-1}-i\frac{ m+q}{2\pi\theta}\right)}\] \[\times\sum_{k,r=0}^{\infty}\sum_{s=0}^{2\nu^{-1}-1+k}\left\{(-1)^ {k}2^{k+2\nu^{-1}-1}(i\eta)^{2k-s-2}\frac{(k+2\nu^{-1}-1)_{k}(2\nu^{-1})_{k} \left(\nu^{-1}-i\frac{m+q}{2\pi\theta}\right)_{k}\left(\frac{\pi}{\beta}\tau_{ 0}\right)^{2k-s+4\nu^{-1}-2}}{r!s!(k+2\nu^{-1}-1-s)!}\right. \tag{100}\] \[\times\left[\log\left(2\frac{\pi}{\beta}\tau_{0}\right)+i\eta \frac{\pi}{2}-\psi(k+1)+\psi\left(\nu^{-1}+k-i\frac{m+q}{2\pi\theta}\right) \right]-(m,q)\rightarrow(-l,-q)\right\},\]
while the term corresponding to the AC current when correlations are present reads
\[\langle I_{\text{cor-AC}}(t)\rangle\approx-ie\left(2v\tau_{0} \right)^{-2}\pi^{-3}\beta\lambda_{0}^{2}\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l- m)\Omega t}\sum_{\eta=\pm}\sin\left(\frac{\pi}{\beta}\tau_{0}\right)\frac{1}{2 \nu^{-1}-1}\] \[\times\sum_{r=0}^{\infty}\sum_{k=0}^{2\nu^{-1}-2}\sum_{s=0}^{k} \left[(-1)^{k}(i\eta)^{r+2k-s}2^{k}\frac{(1)_{k}(1-\nu^{-1}-i\frac{m+q}{2\pi \theta})_{k}\left(\frac{\pi}{\beta}\tau_{0}\right)^{r+2k-s}}{r!s!(k-s)!(2-2\nu ^{-1})_{k}}-(m,q)\rightarrow(-l,-q)\right], \tag{101}\]
where \((x)_{n}=\prod_{k=0}^{n-1}(x+k)\) is the Pochhammer symbol.
One sees indeed that for \(\nu=1\) Eq. (101) vanishes, while for \(\nu^{-1}>1\) all terms in Eq. (100) are subleading. We therefore present the computation in two separate subsections.
### Fermi liquid limit, \(\nu=1\)
The Fermi liquid limit is obtained by setting \(\nu^{-1}=\nu=1\). The first sum over \(k\) in Eq. (101) vanishes and only the terms of Eq. (100) contribute. Performing the sum over \(\eta\) removes all terms containing odd powers of \(\eta\), i.e., to lowest order in \(\tau_{0}/\beta\) (\(s=1\), \(k=r=0\)) the current reads
\[\langle I_{\text{T}}(t)\rangle=\frac{e\lambda_{0}^{2}\Omega}{4\pi v_{\text{F} }^{2}}\sum_{l,m}\overline{p_{l}}p_{m}e^{i(l-m)\Omega t}(2q+l+m)\,. \tag{102}\]
### Filling factors, \(\nu^{-1}>1\)
The general case of integer \(\nu^{-1}\) greater than one is obtained by remarking that the leading term in Eq. (100) is a polynomial of order five or more in \(\tau_{0}/\beta\). Therefore, we can extract the main contribution to the current from Eq. (101) alone. It can be shown that the term of order minus one in \(\tau_{0}/\beta\) does not depend on \(q\), \(l\), or \(m\), and thus does not contribute to the current. The sum over \(\eta\) removes the zeroth-order term in \(\tau_{0}/\beta\), so we are left with the first-order contributions. They arise when the indices satisfy \(r+2k-s=2\), i.e., there are four possibilities:
* \(r=0\), \(k=s=2\);
* \(r=0\), \(k=1\), \(s=0\);
* \(r=k=s=1\);
* \(r=2\), \(k=s=0\).
We remark that the last possibility gives rise to a term independent of \(q\), \(l\), \(m\), which thus vanishes. Selecting only the first three possibilities and removing the Fermi/DC term in Eq. (100), the current ultimately reads
\[\langle I_{\text{T}}(t)\rangle\approx\frac{-e}{\pi^{2}(1-2\nu^{-1})_{3}}\left( \frac{\lambda_{0}}{v}\right)^{2}\frac{\Omega}{\Lambda}\Im\left[\sum_{l,m} \overline{p_{l}}p_{m}e^{i(l-m)\Omega t}\left(m+q\right)^{2}\right]\,. \tag{103}\]
|
2310.06102
|
Colossal c-axis response and lack of rotational symmetry breaking within
the kagome plane of the CsV$_3$Sb$_5$ superconductor
|
The kagome materials AV$_3$Sb$_5$ (A = K, Rb, Cs) host an intriguing
interplay between unconventional superconductivity and charge-density-waves.
Here, we investigate CsV$_3$Sb$_5$ by combining high-resolution
thermal-expansion, heat-capacity and electrical resistance under strain
measurements. We directly unveil that the superconducting and charge-ordered
states strongly compete, and that this competition is dramatically influenced
by tuning the crystallographic c-axis. In addition, we report the absence of
additional bulk phase transitions within the charge-ordered state, notably
associated with rotational symmetry-breaking within the kagome planes. This
suggests that any breaking of the C$_6$ invariance occurs via different
stacking of C$_6$-symmetric kagome patterns. Finally, we find that the
charge-density-wave phase exhibits an enhanced A$_{1g}$-symmetric
elastoresistance coefficient, whose large increase at low temperature is driven
by electronic degrees of freedom.
|
Mehdi Frachet, Liran Wang, Wei Xia, Yanfeng Guo, Mingquan He, Nour Maraytta, Rolf Heid, Amir-Abbas Haghighirad, Michael Merz, Christoph Meingast, Frederic Hardy
|
2023-10-09T19:22:48Z
|
http://arxiv.org/abs/2310.06102v1
|
# Colossal _c_-axis response and lack of rotational symmetry breaking within the kagome plane of the CsV\({}_{3}\)Sb\({}_{5}\) superconductor
###### Abstract
The kagome materials AV\({}_{3}\)Sb\({}_{5}\) (A = K, Rb, Cs) host an intriguing interplay between unconventional superconductivity and charge-density-waves. Here, we investigate CsV\({}_{3}\)Sb\({}_{5}\) by combining high-resolution thermal-expansion, heat-capacity and electrical resistance under strain measurements. We directly unveil that the superconducting and charge-ordered states strongly compete, and that this competition is dramatically influenced by tuning the crystallographic _c_-axis. In addition, we report the absence of additional bulk phase transitions within the charge-ordered state, notably associated with rotational symmetry-breaking within the kagome planes. This suggests that any breaking of the \(C_{6}\) invariance occurs via different stacking of \(C_{6}\)-symmetric kagome patterns. Finally, we find that the charge-density-wave phase exhibits an enhanced \(A_{1g}\)-symmetric elastoresistance coefficient, whose large increase at low temperature is driven by electronic degrees of freedom.
The unique electronic band structure of delocalized electrons in kagome lattices features Dirac points, flat bands, and multiple van Hove singularities (vHS) close to the Fermi level [1]. Theoretical studies of kagome lattices demonstrate that the large density of state near van Hove filling can promote various exotic electronic orders, including charge-bond order, chiral charge-density-wave, orbital-current order, and superconducting states of various gap symmetries [2; 3; 4].
In this context, the family of recently discovered kagome metals AV\({}_{3}\)Sb\({}_{5}\) (A = K, Rb, Cs), crystallizing in the \(P6/mmm\) hexagonal space-group with perfect vanadium kagome networks, has emerged as an exciting realization of such physics with nontrivial topological properties, unconventional superconductivity and intertwined symmetry-broken states [5; 6]. Experimentally, two electronic instabilities are well established in all AV\({}_{3}\)Sb\({}_{5}\), _i.e._ a charge-density wave (CDW) below \(T_{\rm CDW}\approx 100\) K, and bulk superconductivity (SC) that reaches \(\,T_{\rm c}\approx 2.5\) K in CsV\({}_{3}\)Sb\({}_{5}\). The CDW state features a triple-**q**-modulation with wave-vector connecting the three inequivalent sublattices and the corresponding M saddle-points vHS. Below \(\,T_{\rm CDW}\), the translational symmetry of the crystal lattice is broken, but the _c_-axis periodicity remains highly debated as, _e.g._, \(2\times 2\times 2\)[7; 8] and \(2\times 2\times 4\)[9] superstructures, or a combination thereof [10; 11], are reported.
The fate of the six-fold rotational invariance of the hexagonal lattice is controversial. Several experiments including x-ray diffraction (XRD), nuclear magnetic resonance (NMR) and scanning tunneling microscopy (STM) point to a lowering to \(C_{2}\) rotational symmetry [12; 13; 11; 14]. In addition, in CsV\({}_{3}\)Sb\({}_{5}\), measurements of the electrical resistance under strain, namely elastoresistance, have been interpreted as an evidence for a growing electronic nematic susceptibility within the \(E_{2g}\) (\(x^{2}-y^{2}\)) symmetry channel, ultimately leading to an ordered nematic state at \(T_{\rm nem}=35\) K [14; 15]. However, different experiments suggest different critical temperatures for the \(C_{6}\)-symmetry breaking, ranging from \(T_{\rm nem}\) to \(\,T_{\rm CDW}\), although no thermodynamic evidence for such transition has been found. Further, conflicting results regarding a possible time-reversal symmetry-breaking at \(\,T_{\rm CDW}\,\) were reported [16; 17; 18; 19], such that it remains unsettled whether AV\({}_{3}\)Sb\({}_{5}\) could be the hosts of, _e.g._, a long sought loop current order [18; 19; 20].
Although a conventional mechanism is unable to explain the superconducting state of AV\({}_{3}\)Sb\({}_{5}\)[8], its nature remains unsettled. No consensus has been reached concerning the gap symmetry and the existence of gap nodes [21; 22; 23]. Further, it has been proposed that the SC and CDW states conspire to form a pair-density-wave [24], and, importantly, that electronic nematicity plays a key role in the mechanism of superconductivity in _e.g._ Cs(V\({}_{1-x}\)Ti\({}_{x}\))\({}_{3}\)Sb\({}_{5}\)[15].
In this Letter, we use a powerful combination of bulk thermodynamic measurements, including high-resolution thermal-expansion and heat-capacity, with elastoresistance on CsV\({}_{3}\)Sb\({}_{5}\) single crystals from two different sources to gain further insights into the CDW state and its connection with superconductivity. Our results directly demonstrate (i) a strong competition between the CDW and the SC states, dramatically influenced by \(c-\)axis tuning, (ii) the absence of an orthorhombic distortion for \(T\leq\,T_{\rm CDW}\), implying that either the CDW does
not break 6-fold symmetry or that rotational symmetry-breaking is decoupled from anisotropic strain and (iii) that the CDW is characterized by a strongly enhanced \(A_{\rm 1g}\)-symmetric elastoresistance, which further increases with decreasing temperature.
Single crystals of CsV\({}_{3}\)Sb\({}_{5}\) were grown in Shanghai (batch A) and Karlsruhe (batch B) by the flux method and characterized by x-ray diffraction and energy-dispersive x-ray analysis (see Supplemental Material). In-plane thermal-expansion measurements were carried out using a home-built high-resolution capacitive dilatometer on a large single crystal from batch A. Because of the large aspect ratio, \(c\)-axis measurements were performed using a stack of 20 smaller crystals from batch B, glued together with GE7031 varnish to a thickness of \(\approx 2\) mm. Elastoresistance measurements were carried out by gluing samples from batch B on a piezoelectric stack (Pst 150/5x5/7 from Piezomechanik) using DevCon 5 min two-component epoxy (Part No. X0039), as described in Ref. [25]. To extract the symmetry-resolved elastoresistance coefficients we assumed a temperature-independent Poisson ratio \(\nu_{p}=-\epsilon_{yy}/\epsilon_{xx}\approx 0.43\)[26] for the piezoelectric stack (\(x\) is the poling direction). Heat-capacity measurements \(C(T)\) were made on the same sample from batch A in a Physical Property Measurement System from Quantum Design.
Figure 1(a) shows the relative length changes, \(\Delta L/L\), of our sample as a function of temperature. A clear first-order discontinuity, for both [100] and [001] crystal axes (in the following we use \(P6/mmm\) space group notations), accompanied by a large peak in the specific heat (see Fig. 1(e)), is observed at \(T_{\rm CDW}\approx 93.5\) K, marking the transition to the CDW state. Although the CDW transition is clearly of first order, we also observe significant fluctuation effects both above and below the transition in an appropriate Grüneisen parameter (see Supplemental Material). At lower temperature, second-order discontinuities are clearly resolved at \(T_{\rm c}=2.5\) K in both the thermal-expansion coefficient, \(\alpha_{i}(T)=1/L_{i}\,(dL_{i}/dT)\) with \(i=\{[100],[001]\}\), and \(C(T)\), as illustrated in Figs. 1(d) and 1(f), respectively. However, we find no evidence of a phase transition around 60 K (see Figs. 1(a) and 1(c)), especially in the \(c\)-axis thermal expansion, where sharp changes in the intensity of the superstructure reflections, accompanying the change of interlayer ordering, were observed by XRD [11]. This is rather surprising since band folding, resulting from a changing superstructure, is expected to substantially modify the Fermi surface and therefore the electronic entropy. Yet, no signature of this transition is resolved in either \(C(T)\) or \(\alpha(T)\), which measure the \(T\)- and \(p\)-derivatives of the entropy, respectively.
Our measurements, however, clearly demonstrate a huge dependence of both CDW and SC on uniaxial pressure. Table 1 summarizes the initial uniaxial- and hydrostatic-pressure dependences of \(T_{\rm CDW}\) and \(T_{\rm c}\) inferred from the application of the Clausius-Clapeyron and Ehrenfest relations, respectively. The largest effect is found for the \(c\)-axis, where \(dT_{\rm CDW}/dp_{c}\) amounts to \(\approx-120\) K GPa\({}^{-1}\). This demonstrates that the large hydrostatic-pressure sensitivity of the CDW instability of CsV\({}_{3}\)Sb\({}_{5}\)[27; 28] originates predominantly from \(c\)-axis stress and highlights the importance of the apical Sb-bonds and Sb-derived bands [29; 30]. This is equally true for SC, which also exhibits large uniaxial pressure dependences but opposite in sign, confirming that both
Figure 1: **(a)** Relative length changes, \(\Delta L/L\), along the hexagonal directions and corresponding volume change as a function of temperature. The black vertical dashed line indicates \(T_{\rm CDW}\). **(b)** Comparison of \(\Delta L_{i}/L_{i}\) measured along the orthogonal [100] and [210] hexagonal directions. **(c)** Corresponding thermal-expansion coefficient, \(\alpha=1/L\,(dL/dT)\). **(d)** shows the thermal-expansion data for \(T<4\)K on a magnified view. **(e)** Heat capacity \(C(T)\) showing the first-order transition at \(T_{\rm CDW}\) and the corresponding entropy discontinuity (inset). **(f)** shows the superconducting transition in the specific heat on an extended view. The red vertical dashed line indicates \(T_{\rm c}\).
orders are competing for the same electronic states [31].
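For reference, the standard forms of these relations (our addition; normalizations may differ slightly from the authors' conventions) are

\[\frac{dT_{\rm CDW}}{dp_{i}}=V_{m}\,\frac{\Delta(\Delta L_{i}/L_{i})}{\Delta S}\qquad\text{(Clausius--Clapeyron, first-order transition)},\]

\[\frac{dT_{\rm c}}{dp_{i}}=V_{m}\,T_{\rm c}\,\frac{\Delta\alpha_{i}}{\Delta C}\qquad\text{(Ehrenfest, second-order transition)},\]

where \(\Delta\) denotes the discontinuity at the transition of the spontaneous strain, entropy, thermal-expansion coefficient or specific heat, and \(V_{m}\) is the molar volume.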
Remarkably, the relative change of \(T_{\rm c}\) with \(c\)-axis stress, \(1/T_{\rm c}\,(dT_{\rm c}/dp_{c})\), is roughly a factor 4 greater than that of \(T_{\rm CDW}\). Furthermore, the relative changes of the Sommerfeld coefficient \(\gamma\) and \(T_{\rm c}\) with \(c\)-axis pressure are both positive. Thus, the increase of \(T_{\rm c}\) under \(c\)-axis pressure is likely explained by an increase in the density of states due to the reduction of the CDW gap. Interestingly, this correlates with the convergence of the M saddle-point vHS toward the Fermi level [32].
Interestingly, we find no evidence for an additional phase transition, in contrast to the reports of \(C_{6}\)-symmetry breaking at \(T_{\rm nem}=35\) K by elastoresistance and NMR [14; 15], or directly below \(T_{\rm CDW}\) by x-ray diffraction [11]. Such symmetry breaking should be detected using our high-resolution capacitive dilatometer by comparing the strains measured along the [100] and the orthogonal [210] directions, as has been demonstrated for several Fe-based superconductors [33; 34]. This is because our spring-loaded dilatometer exerts a non-negligible stress along the measurement direction. Thus, for a measurement along the hexagonal [210] direction, the population of possible structural domains with the shorter orthorhombic axis should be favored, resulting in an in-situ detwinning of the sample below \(T_{\rm CDW}\), if the crystal symmetry were lowered. On the other hand, the twin population would remain unaffected by the applied force for measurements along the [100] direction, which probe a mixture of both orthorhombic axes. As illustrated in Figure 1(b), we find no discernible difference between the two measurements, suggesting either that the CDW does not break \(C_{6}\) symmetry or that its broken rotational symmetry is decoupled from anisotropic strain.
The lack of evidence for \(C_{6}\)-symmetry breaking in our thermal-expansion measurements motivates a closer inspection of the changes of electrical resistance, \(\Delta R_{ii}=R_{ii}(\epsilon_{\rm xx})-R_{ii}(\epsilon_{\rm xx}=0)\) with \(i=\{{\rm x},{\rm y}\}\), in response to applied strain \(\epsilon_{\rm xx}\). Here, we induce a small symmetry-breaking strain in our crystals using the technique introduced in Ref. [35], as depicted in the inset of Fig. 2(a). With the [100] axis of CsV\({}_{3}\)Sb\({}_{5}\) aligned with the piezoelectric poling direction x, we extract,
\[\left(\frac{\Delta R}{R-R^{0}}\right)_{\text{xx}}-\left(\frac{\Delta R}{R-R^{ 0}}\right)_{\text{yy}}=m_{\text{E}_{\text{2g}}}(\epsilon_{\text{xx}}-\epsilon _{\text{yy}}), \tag{1}\]
\[\left(\frac{\Delta R}{R-R^{0}}\right)_{\text{xx}}+\left(\frac{\Delta R}{R-R^{ 0}}\right)_{\text{yy}}=m_{\text{A}_{\text{1g}}}(\epsilon_{\text{xx}}+\epsilon _{\text{yy}}), \tag{2}\]
where \(R_{ii}^{0}\) is the \(T\to 0\) residual resistance and \(R_{\text{xx}}\) and \(R_{\text{yy}}\) correspond to resistance measurements along and transverse to the poling direction, _i.e._ parallel and perpendicular to the [100] hexagonal axis, respectively. Hereafter, we denote them longitudinal and transverse measurements, respectively. \(m_{\text{A}_{\text{1g}}}\) and \(m_{\text{E}_{\text{2g}}}\) represent the elastoresistance coefficients that transform according to the \(A_{\text{1g}}\) and \(E_{\text{2g}}\) irreducible representations of the \(D_{\text{6h}}\)
\begin{table}
\begin{tabular}{c|c|c||c} & a & c & Volume \\ \hline \hline \(d\ln\,T_{\text{c}}/dp_{\text{i}}\) [GPa\({}^{-1}\)] & -1.3 & +4.7 & +2.1 \\ \hline \(d\ln T_{\text{CDW}}/dp_{\text{i}}\) [GPa\({}^{-1}\)] & +0.24 & -1.3 & -0.81 \\ \hline \(d\ln\gamma/dp_{\text{i}}\) [GPa\({}^{-1}\)] & -0.06 & +0.73 & +0.58 \\ \end{tabular}
\end{table}
Table 1: Relative variations of _T_c, _T_CDW, and the Sommerfeld coefficient \(\gamma\) with uniaxial pressure, calculated using the Ehrenfest, Clausius-Clapeyron and Maxwell relations, respectively.
Figure 2: **(a)** Linear slopes of the resistance versus strain curves, \(1/\left(R_{ii}-R_{0}\right)\left(dR_{ii}/d\epsilon_{xx}\right)\), in the longitudinal (\(i=x\), red symbols) and transverse (\(i=y\), blue symbols) channels. The empty symbols correspond to resistance variation relative to the total resistance (including the residual term), \(1/R_{\text{ii}}\left(dR_{\text{ii}}/d\epsilon_{\text{xx}}\right)\), as discussed in earlier works [14; 15]. The inset shows a sketch of the experimental setup with the sample (black) glued on the top side of a piezoelectric stack (gray) with the crystallographic \(a\)-axis aligned along the poling direction (\(x\)). **(b)** Corresponding \(E_{\text{2g}}\) and \(A_{\text{1g}}\) symmetry-resolved elastoresistance coefficients. The vertical line indicates _T_CDW.
point group, respectively (see Supplemental Material for details). Importantly, we normalize \(\Delta R_{ii}\) by \(\left(R_{ii}-R_{ii}^{0}\right)\) instead of \(R_{ii}\) in order to obtain physically meaningful results at low temperatures, as discussed in Ref.[36].
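A minimal sketch (our addition; the numerical slopes are illustrative, not data from the paper) of the symmetry decomposition of Eqs. (1)-(2), using the quoted piezo Poisson ratio \(\nu_{p}=0.43\):

```python
# Sketch: extract m_A1g and m_E2g from longitudinal/transverse slopes
# d[dR/(R - R0)]/d(eps_xx), assuming eps_yy = -nu_p * eps_xx.
def symmetry_coefficients(slope_xx, slope_yy, nu_p=0.43):
    # (xx - yy) channel: (eps_xx - eps_yy) = (1 + nu_p) * eps_xx
    m_E2g = (slope_xx - slope_yy) / (1.0 + nu_p)
    # (xx + yy) channel: (eps_xx + eps_yy) = (1 - nu_p) * eps_xx
    m_A1g = (slope_xx + slope_yy) / (1.0 - nu_p)
    return m_A1g, m_E2g

print(symmetry_coefficients(35.0, 33.0))  # illustrative slopes
```

A nearly isotropic in-plane response (similar longitudinal and transverse slopes) thus translates into a large \(m_{\rm A_{1g}}\) and a small \(m_{\rm E_{2g}}\), as reported in Fig. 2(b).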
In Fig. 2, we report the results of our elastoresistance measurements on CsV\({}_{3}\)Sb\({}_{5}\) (see Supplemental Material for the raw data). The linear slopes of the resistance versus strain curves, \(1/\left(R_{ii}-R_{0}\right)\left(dR_{ii}/d\epsilon_{xx}\right)\), are shown in Fig. 2(a). For \(T>T_{\rm CDW}\), the response to strain is weakly temperature dependent and amounts to \(\approx 2-4\), as expected in an ordinary metal. At \(T\approx T_{\rm CDW}\), a sharp peak is resolved in both directions. This peak, which has not been resolved in any AV\({}_{3}\)Sb\({}_{5}\) before [14; 15], is, however, naturally expected given the strong uniaxial-pressure dependence of \(T_{\rm CDW}\), according to
\[\left(\frac{dR}{d\epsilon_{\rm xx}}\right)_{T_{\rm CDW}}\approx\left(\frac{ \partial R}{\partial T}\right)_{T_{\rm CDW}}\left(\frac{dT_{\rm CDW}}{dp_{a }}\right). \tag{3}\]
The validity of Eq. (3) follows from the essentially similar strain conditions achieved under uniaxial pressure and under the in-plane biaxial, anisotropic strain induced by the piezostack (see details in the Supplemental Material). Thus, a positive elastoresistance peak implies that \(dT_{\rm CDW}/dp_{a}>0\), in agreement with our thermal-expansion measurements (see Table 1) and previous direct uniaxial-stress experiments [31].
For \(T\lesssim\,T_{\rm CDW}\), the elastoresistance does not turn back to a typical metallic value, but it is significantly enhanced [14; 15] for both longitudinal and transverse channels, as illustrated in Fig.2(a). Hence, the electronic properties of CsV\({}_{3}\)Sb\({}_{5}\) in the CDW state are mainly sensitive to a symmetry-preserving stress, in excellent accord with our thermodynamic results. In contrast to previous reports [14; 15], both our longitudinal and transverse measurements were carried out on the same sample, _i.e._ under similar strain-transmission conditions, which is crucial for extracting the symmetry-resolved elastoresistance coefficients shown in Fig.2(b). Specifically, the enhanced elastoresistance response within the CDW phase is totally dominated by the symmetry-preserving \(A_{\rm 1g}\) channel. The \(E_{\rm 2g}\) response, in contrast, is weak over the entire temperature range studied, in agreement with both the absence of in-plane \(C_{6}\)-symmetry breaking and the direct uniaxial strain measurements of Qian et al. [31]. In light of our thermodynamic results, the large \(m_{\rm A_{\rm 1g}}\) likely originates from a dominant \(c-\)axis contribution.
Strikingly, the \(A_{\rm 1g}\)-symmetric elastoresistance increases further in the CDW phase and reaches extremely high values at low temperature, with \(m_{\rm A_{1g}}(20\,{\rm K})\approx 120\). This arises from the predominance of electron-electron scattering at low temperature: the electron-phonon contribution to \(\left(R_{xx}-R_{xx}^{0}\right)\) decreases faster than the electron-electron contribution, such that \(m_{\rm A_{1g}}\) effectively increases. This is again in line with our thermodynamic data, since this increased \(m_{\rm A_{1g}}\) correlates with the decrease of \(\gamma\) with [100] uniaxial pressure (see Table 1) by virtue of the Kadowaki-Woods relation, \(A\propto\gamma^{2}\), which relates \(\gamma\) to the quadratic term of the resistivity in a Fermi liquid [36]. Finally, no downturn of the elastoresistance is found below \(T_{\rm nem}\approx 35\) K [37], strongly suggesting that its observation in previous works [14; 15] is a direct consequence of not correctly accounting for the residual resistivity contribution [36] (see open symbols in Fig. 2(a)).
In conclusion, we have demonstrated that the CDW in CsV\({}_{3}\)Sb\({}_{5}\) exhibits a colossal response to \(c\)-axis stress and strongly competes with SC for the same electronic states. We provide direct thermodynamic evidence that the hydrostatic-pressure dependence of these electronic instabilities originates almost entirely from \(c\)-axis tuning, as suggested by uniaxial strain experiments [31]. The enhancement of \(T_{\rm c}\) and decrease of \(T_{\rm CDW}\) under \(c\)-axis compression is in line with the strong shift of the apical Sb-derived vHS towards the Fermi energy [30; 32], highlighting the importance of Sb-derived bands in any minimal microscopic description of this system [20; 29; 30]. Besides CDW and SC, we find no thermodynamic evidence of an additional bulk phase transition within the charge-ordered state that could be related to the changes of the \(c\)-axis periodicity, as reported by x-ray diffraction. The lack of an orthorhombic distortion and the negligibly small \(m_{\rm E_{2g}}\) response for \(T\leq T_{\rm CDW}\) consistently demonstrate (i) the absence of a broken rotational symmetry that couples to anisotropic strain and (ii) that six-fold symmetry is preserved within the individual V\({}_{3}\)Sb\({}_{5}\) layers. Our results remain, however, consistent with x-ray diffraction [11] if the reported breaking of \(C_{6}\) invariance arises from a stacking of different CDW patterns along the \(c\)-axis, as suggested theoretically in Ref. [30]. Our data further show that the large elastoresistance, previously assigned to the nematic \(E_{\rm 2g}\) channel, definitively originates from an enhanced \(A_{\rm 1g}\) symmetry-preserving channel, which emerges from electron-electron scattering within the CDW phase. A careful comparison of thermodynamic and spectroscopic experiments under \(c\)-axis compression is a promising way to shed light on the microscopic origin of the CDW and SC formation in AV\({}_{3}\)Sb\({}_{5}\).
**Note added in proof.** After the completion of the manuscript, we became aware of the preprints of Liu _et al._[38] and Asaba _et al._[39]. We share the conclusions of Liu _et al._[38] about the absence of nematicity within the CDW state of CsV\({}_{3}\)Sb\({}_{5}\). However, our thermal-expansion results and the elastocaloric measurements of Liu _et al._[38] are at odds concerning the putative crystal-symmetry breaking well above the CDW transition reported by Asaba _et al._[39].
We acknowledge fruitful discussions with R. M. Fernandes. Work at KIT was partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) TRR 288-422213477
(Projects A02 and B03) and the Chinesisch-Deutsches Mobilitätsprogramm of the Chinesisch-Deutsches Zentrum für Wissenschaftsförderung (Grant No. M-0496). M.F. acknowledges funding by the Alexander von Humboldt Foundation and the Young Investigator preparation program of the Karlsruhe Institute of Technology. Y. G. acknowledges the support of the National Natural Science Foundation of China (Grant No. 920651). W. X. thanks the support of the Shanghai Sailing Program (23YF1426900).
|
2302.09020
|
Resonance fluorescence of two asymmetrically pumped and coupled
two-level systems
|
We study a driven-dissipative duo of two-level systems in an open quantum
systems approach, modelling a pair of atoms or (more generally) meta-atoms.
Allowing for complex-valued couplings in the setup, which are of both a
coherent and incoherent character, gives rise to a diverse coupling landscape.
We consider several points on this landscape, for example where the coupling
between the two coupled two-level systems is dominated by coherent, incoherent,
unsymmetrical and even unidirectional interactions. Traversing the coupling
terrain leads to remarkable features in the populations of the pair,
correlations and optical spectra. Most notably, the famous Mollow triplet
spectrum for a single atom may be superseded for a pair by a Mollow quintuplet
(or even by a spectral singlet) and the setup allows for population trapping to
arise, all depending upon the precise nature of the coupling between the
two-level systems.
|
C. A. Downing, E. del Valle, A. I. Fernández-Domínguez
|
2023-02-17T17:38:48Z
|
http://arxiv.org/abs/2302.09020v1
|
# Resonance fluorescence of two asymmetrically pumped and coupled two-level systems
###### Abstract
We study a driven-dissipative duo of two-level systems in an open quantum systems approach, modelling a pair of atoms or (more generally) meta-atoms. Allowing for complex-valued couplings in the setup, which are of both a coherent and incoherent character, gives rise to a diverse coupling landscape. We consider several points on this landscape, for example where the coupling between the two coupled two-level systems is dominated by coherent, incoherent, unsymmetrical and even unidirectional interactions. Traversing the coupling terrain leads to remarkable features in the populations of the pair, correlations and optical spectra. Most notably, the famous Mollow triplet spectrum for a single atom may be superseded for a pair by a Mollow quintuplet (or even by a spectral singlet) and the setup allows for population trapping to arise, all depending upon the precise nature of the coupling between the two-level systems.
## I Introduction
The theory of resonance fluorescence, which describes the emission of an atom driven resonantly by an external field, has fascinated quantum opticians since the 1960's [1; 2; 3; 4; 5; 6]. Strikingly, the resulting resonance fluorescence spectrum is a so-called Mollow triplet: a central peak at resonance, with two smaller satellite peaks either side [3]. This captivating structure was first seen experimentally in the 1970's [7; 8; 9; 10], before being later observed in single dye molecules [11] and semiconductor quantum dots [12; 13; 14; 15; 16; 17]. More recently, artificial atoms in superconducting circuits [18; 19; 20] and hybrid spin-nanomechanical systems [21] have been shown to display some remarkable aspects of Mollow physics.
Resonance fluorescence in two-atom [22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37] and indeed many-atom [38; 39; 40; 41] systems was inevitably studied theoretically soon afterwards in order to elucidate the influence of cooperative effects, including the emergence of additional sidebands in the optical spectrum. More latterly, modern experiments with two artificial atoms in superconducting circuits have offered the control and tunability required to study the properties of quantum dimers under coherent excitation [42].
Here we investigate theoretically a pair of two-level systems (2LSs) as sketched in Fig. 1 (a), and in particular the interplay between cooperative resonance fluorescence and the concept of chirality [43; 44]. By chirality, we mean to refer to an asymmetry in the coupling between the two 2LSs, which arises from the competition between the considered coherent and incoherent (or dissipative) coupling [45; 46; 47]. In an important limiting case, we treat the extreme asymmetry of unidirectional (or one-way) coupling in the pair, where all backaction is excluded by design [48; 49; 50]. In this way, we explore the full gamut of Mollow and chiral physics within perhaps the simplest possible coupled system, with a view to building intuition about larger collections of qubits and quantum networks. In particular, chiral quantum networks could transmit information highly efficiently and without information backflow, while ultracompact chiral devices (acting like circulators and isolators) are necessary to build nanoscale circuits [43].
Our simple model considers the two coupled 2LSs in an open quantum systems approach. We study how the mean populations in the pair, as well as the correlations [51; 52; 53; 54] and the optical spectra, evolve as one navigates the complex coupling landscape. Most interestingly, the famous Mollow triplet spectrum for a single atom may be superseded for a pair of 2LSs by a range of spectra, from a Mollow quintuplet to a standard Lorentzian singlet, all depending upon the exact nature of the coupling between the pair. We also discover an example of a population trapping effect, where the system is essentially protected from the dissipative environment in a specific part of the coupling landscape.
The remainder of this paper is assembled in the following manner. We expound the driven-dissipative theory in Sec. 2, before focusing on the populations, correlations and spectra of the system in the coherent [Sec. 3], dissipative [Sec. 4], and unidirectional [Sec. 5] coupling regimes. Sec. 6 contains a discussion of the most important conclusions. Some supporting results for a single 2LS [Appendix A], extra calculational details for the 2LS pair [Appendix B], and a brief survey of asymmetric coupling regime [Appendix C] are provided in the three appendices.
## II Model
Our model is composed of a Hamiltonian contribution which describes the coherent coupling and coherent driving (as introduced in Sec. II.1) and dissipation which is introduced via a quantum master equation (as defined in Sec. II.2). The theoretical framework is somewhat analogous to the series of works given by Refs. [55; 56; 57; 58; 59; 60] on celebrated models of open
|
2304.04852
|
The Kraft--Barmpalias--Lewis-Pye lemma revisited
|
This note provides a simplified exposition of the proof of hierarchical Kraft
lemma proven by Barmpalias and Lewis-Pye and its consequences for the oracle
use in the Ku\v{c}era--G\'acs theorem (saying that every sequence is Turing
reducible to a random one).
|
Alexander Shen
|
2023-04-10T20:13:43Z
|
http://arxiv.org/abs/2304.04852v2
|
# The Kraft-Barmpalias-Lewis-Pye lemma revisited
###### Abstract
This note provides a simplified exposition of the proof of hierarchical Kraft lemma proven by Barmpalias and Lewis-Pye [1] and its consequences for the oracle use in the Kucera-Gacs theorem (saying that every sequence is Turing reducible to a random one).
## 1 Kraft's lemma and its online version
The following statement from coding theory is sometimes called _Kraft's lemma_1:
Footnote 1: More precisely, the statement is that the _Kraft inequality_ mentioned here is necessary and sufficient for the existence of a prefix-free code, see, e.g., [4, Theorem 3.2.1].
_for every \(n\) integers \(l_{1},\ldots,l_{n}\geqslant 1\) such that \(\sum_{i}2^{-l_{i}}\leqslant 1\), there exist binary strings \(x_{1},\ldots,x_{n}\) of lengths \(l_{1},\ldots,l_{n}\) that form a prefix-free code._
The prefix-free requirement means that strings \(x_{1},\ldots,x_{n}\) are incomparable: none of them is a prefix of another one.
It is convenient to identify strings with _aligned intervals_ inside \([0,1]\): let \(0\) be the left half, \(01\) be the second quarter (i.e., \([\frac{1}{4},\frac{1}{2}]\)), etc. Formally, a string \(x\) corresponds to the interval that contains numbers whose binary representations start with \(x\). Then the statement can be reformulated in terms of space allocation. Each \(l_{i}\) is interpreted as the request to allocate an aligned interval inside \([0,1]\) of length \(2^{-l_{i}}\) in such a way that all intervals are disjoint. This shows immediately that the condition \(\sum 2^{-l_{i}}\leqslant 1\) is necessary for the existence of a prefix code (the total space is bounded). To prove that the Kraft inequality is sufficient, we may allocate the intervals in the order of decreasing length (= increasing \(l_{i}\)), from left to right. The decreasing length condition guarantees correct alignment.
However, this allocation strategy needs to know the entire list \(l_{1},\ldots,l_{n}\) in advance. A simple change in the allocation strategy makes it _on-line_ (getting the next \(l_{i}\), we choose the next \(x_{i}\) and this choice is final). For that, we keep at every moment the representation of the free space as a _union of disjoint aligned intervals of different sizes_. Initially we have one interval of size \(1\). When a new \(l_{i}\) arrives, we look for an interval of size \(2^{-l_{i}}\) in the free space list. If there is one, we allocate it (and delete it from the list). If not, we take the minimal larger interval in the free space list, and split it into halves, then one half into two halves, etc., until we get two intervals of size \(2^{-l_{i}}\). One of those intervals is allocated, and all other new parts (including the second interval of size \(2^{-l_{i}}\)) are added to the free list. The minimality guarantees that there are no intervals of that size already in the list. There is only one remaining question: _why the free list contains at least one interval of size at least \(2^{-l_{i}}\)?_ If not, all free intervals are strictly smaller than \(2^{-l_{i}}\) and have different sizes that are powers of \(2\), so the sum of their lengths is less than \(2^{-l_{i}}\), and that contradicts Kraft's inequality (note that the free space is \(1\) minus the total length of already allocated intervals).
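A compact implementation of this on-line strategy (our addition; a sketch, not code from the paper) encodes aligned intervals as binary strings and keeps the free space as a list of intervals indexed by size:

```python
# Sketch of the on-line Kraft-Chaitin allocation described above.
def kraft_chaitin(lengths):
    """Given l_1, l_2, ... with sum of 2^{-l_i} <= 1, yield on-line a
    prefix-free code word x_i of length l_i for each request."""
    free = {0: [""]}              # size exponent k -> free intervals of size 2^{-k}
    for l in lengths:
        # take an interval of size exactly 2^{-l} if available,
        # otherwise the minimal larger one (largest k < l with free space)
        k = max((k for k in free if k <= l and free[k]), default=None)
        if k is None:
            raise ValueError("Kraft inequality violated")
        x = free[k].pop()
        while k < l:              # split into halves until size 2^{-l} remains
            free.setdefault(k + 1, []).append(x + "1")   # free the right half
            x += "0"                                     # keep the left half
            k += 1
        yield x

print(list(kraft_chaitin([1, 2, 3, 3])))   # -> ['0', '10', '110', '111']
```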
This algorithm works for infinite sequences as well, so we get a corollary:
_For every computable sequence of natural numbers \(l_{i}\geqslant 1\) such that \(\sum_{i}2^{-l_{i}}\leqslant 1\), there exists a computable sequence of incomparable strings \(x_{i}\) of lengths \(l_{i}\)._
This result was used by Chaitin to prove the properties of prefix complexity and appears in his paper with the proof presented above (ascribed to N. Pippenger), see [5, Theorem 3.2, p. 333]. Now it is often called the _Kraft-Chaitin lemma_.
Later (also for algorithmic information theory purposes) George Barmpalias and Andrew Lewis-Pye generalized this statement to the case of hierarchical requests [1]. In the rest of this note we try to provide an easy-to-read exposition of this result (based on the discussion at the Kolmogorov seminar on complexity; the metaphor of reselling the space was suggested by Bruno Bauwens).
## 2 Kraft-Barmpalias-Lewis-Pye lemma
In the generalized version of Kraft's lemma, formulated and proven by Barmpalias and Lewis-Pye [1], the requests (still labeled by natural numbers \(l_{i}\)) are structured hierarchically. When a new request arrives, it may be declared as a _son_ of one of the previous requests. This means that the interval allocated for it should be a part of the father's interval (instead of being disjoint from all previous intervals). Later this son may get sons of its own, etc.
In other words, requests now form a tree. We add a dummy root for this tree; it will become the father of all requests that had no father. Those requests (of level 1, sons of the root) should get disjoint intervals. The requests of level 2 have fathers of level 1 (that appeared earlier), etc. The tree grows when a new request arrives: a new leaf is attached to one of the existing vertices (the new request becomes a son of some existing request, or a son of the root, if it had no father). Every tree vertex, including the dummy root, may become the father of a new request.
Formally, each request consists of the natural number \(l_{i}\geqslant 1\) (its label) and the reference to one of the previous requests or the dummy (root) one.
Now we have to say more precisely what kind of objects should be constructed to satisfy these requests. Let us note first that the space allocation would be simple if the total space requested by all the sons of a vertex never exceeded the space requested by the vertex itself. Then we could use Kraft-Chaitin's allocation process (as described above) at every vertex: each vertex \(v\) would take care of the requests of all its sons and give them space inside its own space. The only difference is that instead of a unit interval each vertex gets an interval of some size and the requests from its sons do not exceed that size in total. Note that the requests from the grandsons will be fulfilled by their fathers inside the space allocated to them, so \(v\) will not need to worry about them.
We have a quite different setting: we do _not_ require that the sum of the requests for sons of some vertex is bounded by the request for the vertex itself. Let us explain the changes needed to adapt the Kraft lemma to this situation.
Recall that the requests form a growing tree, and every request has a positive integer label \(l_{i}\) that means that an aligned interval of size \(2^{-l_{i}}\) is requested. The labels \(l_{i}\) may be arbitrary: for example, a vertex \(v\) can have a son \(w\) whose request is bigger than the request for \(v\); or \(v\) can have many sons with total requested size bigger than the request for \(v\). The only restriction for the labels is that the total size \(\sum 2^{-l_{i}}\) is bounded by 1. Note that this sum includes, for example, both the requests from a vertex \(v\) and its son \(w\), even if the size of the \(w\)-request is small and it can be fulfilled inside the \(v\)-space. So this condition is much stronger than necessary for the case we discussed (when sons' requests fit into the father's one).
We make the following changes for the hierarchical version of the lemma:
* The allocation process is more complex. Initially for a request with label \(l_{i}\) an aligned interval of size \(2^{-l_{i}}\) is allocated. But later more aligned intervals could be allocated to the same request (to the same vertex of the requests' tree). _All these intervals should be of size \(2^{-l_{i}}\) or bigger._ Additional intervals can be allocated at any stage, so the space allocated to a vertex is a (growing) list of disjoint intervals of size at least \(2^{-l_{i}}\) each.
* We have two requirements for allocations. The first says that _the son's space is always inside the father's space_: each interval allocated to the son is a part of some interval allocated to the father.
* The second requirement says that _brothers have disjoint space_: if \(w\) and \(w^{\prime}\) are sons of some vertex \(v\), then intervals allocated to \(w\) should be disjoint with intervals allocated to \(w^{\prime}\).
Note that for the one-layer tree (root and its sons) we get essentially the statement of the original Kraft lemma, because additional allocated intervals are not helpful in any way.
In other words, the generalized allocation process goes as follows: a new request with label \(l_{i}\) arrives (thus extending the tree); then a new space (an interval\({}^{2}\) of size \(2^{-l_{i}}\)) is allocated for this request and some new intervals may be added to the space allocated to other vertices (in fact, this happens only for the tree ancestors of the new request) in such a way that all the conditions mentioned above are satisfied.
Footnote 2: Or even several intervals of size at least \(2^{-l_{i}}\), though in our construction this would never happen.
**Lemma** (Barmpalias-Lewis-Pye).: _A space allocation algorithm that guarantees these properties (assuming that \(\sum_{i}2^{-l_{i}}\leqslant 1\)) exists._
## 3 Proof of the Kraft-Barmpalias-Lewis-Pye lemma
The allocation algorithm works in a hierarchical way. In the root vertex we have a Kraft space allocator that works as described above. Each vertex of the first level asks the root for an interval of the required size and gets it. If no vertices of higher levels appear, that is all. Vertices of higher levels request space from their fathers (so the root allocator does not deal with their requests directly).
It is useful to describe this process in terms of buying and reselling space for a fixed price (an aligned interval of size \(2^{-l}\) costs \(2^{-l}\)). If there are no hierarchical requests, we are in the situation of the standard Kraft-Chaitin lemma: the root allocator sells space inside \([0,1]\) to customers (i.e., requests of level 1), keeping the information about the remaining free space as a list of disjoint intervals. Initially the root allocator has no money, but owns the entire interval (the free list contains one interval of size 1). Gradually the amount of space in its possession decreases, and the amount of money increases. The sum (space + money) always remains the same (1). The root allocator never runs out of space because \(\sum 2^{-l_{i}}\leqslant 1\).
In this case (no hierarchical requests) no reselling is happening, and each customer comes to the root allocator only once. Both things change when hierarchical requests appear. As we mentioned, a request (a node of the requests' tree) of level greater than 1 never talks directly with the root allocator. Instead, it speaks only with its father, who may resell some space bought earlier, or -- if needed -- may buy more space from its own father and resell all or part of this space to its son. In this scheme a node with label \(l\) can sequentially request several intervals from its father, but none of these intervals should be smaller than \(2^{-l}\) (the size of the original request). Therefore, to satisfy small requests of its sons, a node should aggregate their requests, buying the space in big chunks and reselling it in smaller ones.
Let us note that
* each seller makes no distinction between new and old customers: all requests are processed in the same way;
* a vertex that does not have enough space requests additional space from its father, so one request may trigger a chain of actions (that may propagate to the root).
To finish the proof, we should describe the aggregation algorithm and prove its correctness (this means that no vertex will run out of money or space).
Initially, a request with label \(l\) has \(2^{-l}\) units of money. It uses this money to buy an (aligned) interval of size \(2^{-l}\) from its father. After that it has no money, only the space. Then it starts reselling the space to its children (when/if they arrive). In this process it may need to buy an additional amount of space from the father. Here is the algorithm for the vertex (a code sketch is given after this list):
* Keep the information about the space you own as the list of disjoint aligned intervals of different sizes; note that the size of this space plus the amount of money you have is always \(2^{-l}\), where \(l\) is your label.
* If an interval of some size is requested, and an interval of exactly this size exists in the list, then sell this interval (and delete it from the list). The space reserve decreases and the money reserve increases (by the same value, the length of the resold interval).
* If there is no interval of the requested size in the free list, but there is a bigger free interval, then split this bigger interval into two halves, split one half in two halves, etc., until two intervals of the requested size appear. Sell one of them, and keep the other (and all bigger new intervals) in the free list. Again, the space reserve decreases, and the money reserve increases.
* It may happen also that you get a request of size \(2^{-l}\) or bigger. In this case buy an interval of the requested size from your father and immediately resell it to your son. (Free space and the amount of money remain the same.)
* Finally, it may happen that an interval of size smaller than \(2^{-l}\) is requested but all free intervals are smaller than the requested one. Since free intervals are of different sizes, this implies that the total amount of the free space is smaller than the requested interval. In this case you are low in space, but high in money: the amount that is missing for buying a new interval of size \(2^{-l}\) (this missing amount is equal to the size of the free space) is smaller than the size (=price) of the requested interval. Use the money reserves plus the customer payment to get a new interval of size \(2^{-l}\) and split it as before, then give one part to the customer, and put the rest in the free list. (Since the existing free intervals were smaller than the request, the list again will consist of intervals of different sizes.)
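The following Python sketch (ours, with illustrative names) implements this vertex algorithm; a node buys space from its father through the same `sell` interface that its own sons use, and the money accounting is deliberately omitted, in line with the remark below.

```python
from fractions import Fraction

class Node:
    """A vertex of the requests' tree.  `free` maps a size to the left
    endpoint of a free aligned interval; by the invariant there is at most
    one free interval of each size.  The root is Node() and owns [0, 1)."""
    def __init__(self, l=0, father=None):
        self.size = Fraction(1, 2 ** l)          # the requested size 2^-l
        self.father = father
        if father is None:                       # dummy root owns everything
            self.free = {Fraction(1): Fraction(0)}
        else:                                    # buy the initial interval
            self.free = {self.size: father.sell(self.size)}

    def sell(self, s):
        """Sell an aligned interval of size s; return its left endpoint."""
        if s in self.free:                       # exact fit
            return self.free.pop(s)
        bigger = [t for t in self.free if t > s]
        if bigger:                               # split the minimal larger one
            t = min(bigger)
            return self._split(self.free.pop(t), t, s)
        if self.father is None:                  # cannot happen if sum of 2^-l <= 1
            raise RuntimeError("root ran out of space")
        if s >= self.size:                       # big request: buy and resell
            return self.father.sell(s)
        # small request, no free space: buy a fresh 2^-l chunk and split it
        # (the accounting in the text shows this purchase is affordable)
        return self._split(self.father.sell(self.size), self.size, s)

    def _split(self, left, t, s):
        while t > s:                             # halve down to size s,
            t /= 2
            self.free[t] = left + t              # keeping each right half free
        return left

root = Node()
v = Node(3, root)    # a level-1 request of size 1/8
w = Node(5, v)       # a son of v, of size 1/32, carved inside v's space
```

If the sons of `w` later request more than \(1/32\) in total, `w` transparently buys more space from `v`, and `v`, if needed, from the root.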
In this scheme no cash is injected except for the \(2^{-l_{i}}\) amounts initially given to the requests, so the root allocator will never run out of space (since \(\sum 2^{-l_{i}}\leqslant 1\)). Note that the description of the allocation algorithm does not refer to money at all; all this accounting (similar to what is done sometimes for amortized analysis) is needed only to prove that the root allocator will never run out of space.
The Kraft-Barmpalias-Lewis-Pye lemma is proven.
_Remark_.: We considered the case of the binary alphabet. If we have \(m\) letters, we get a tree with branching factor \(m\), and the Kraft inequality has the form \(\sum m^{-l_{i}}\leqslant 1\). Both the original Kraft-Chaitin argument and the proof of the Kraft-Barmpalias-Lewis-Pye lemma can be easily adapted to this case. Now the invariant is that the list of free intervals may contain at most \(m-1\) copies of intervals of the same size. (For \(m=2\) we get the previous requirement: all intervals have different sizes.) In this way, the numbers of intervals of each size correspond to the digits in the \(m\)-ary representation of the amount of free space. The allocation algorithm remains essentially the same: if there is an interval of the required size, allocate it; if not, take the minimal bigger free interval and split it into \(m\) pieces, then do the same for one of the pieces, etc. This process corresponds to subtracting \(1\) from the \(m\)-ary number \(100\ldots 0\): we get \(m-1\) new free intervals of each intermediate size.
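For completeness, here is a sketch of the \(m\)-ary variant (again ours); the free list now keeps up to \(m-1\) intervals of each size, exactly as the remark above describes.

```python
from fractions import Fraction

def kraft_m(lengths, m):
    """m-ary on-line allocation: the free list keeps at most m-1 intervals
    of each size, mirroring the base-m digits of the free space."""
    free = {Fraction(1): [Fraction(0)]}          # size -> left endpoints
    for l in lengths:
        s = Fraction(1, m ** l)
        if free.get(s):                          # an interval of the right size
            yield free[s].pop()
            continue
        bigger = sorted(t for t in free if t > s and free[t])
        if not bigger:
            raise ValueError("sum of m^-l exceeds 1")
        t, left = bigger[0], free[bigger[0]].pop()
        while t > s:                             # split into m pieces, keep m-1
            t /= m
            free.setdefault(t, []).extend(left + k * t for k in range(1, m))
        yield left
```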
## 4 Positive result: efficient coding
The Kraft lemma has a natural interpretation in terms of coding; it allows us to construct a prefix code for \(k\) letters with codewords of lengths \(l_{1},\ldots,l_{k}\) assuming that \(\sum_{i=1}^{k}2^{-l_{i}}\leqslant 1\). The Kraft-Chaitin lemma extends this result to countably many letters. The Kraft-Barmpalias-Lewis-Pye lemma provides _hierarchical coding_: we require that the codes of some letters are extensions of the codes of other letters, so the tree structure of the letters should be preserved in the tree of codes. Using a compactness argument, we may get a similar conclusion for _infinite_ branches. Note that the Kraft-Barmpalias-Lewis-Pye lemma is valid both for finite and infinite sequences of requests. In the latter case the requests' tree grows when new requests arrive, and we can consider the limit (infinite) tree of requests which includes all the requests.
**Lemma**.: _Assume that the tree of requests has an infinite branch \(r_{1},r_{2},\ldots\), where \(r_{1}\) is the request of the first level, and \(r_{i+1}\) is a son of \(r_{i}\). Then there exists a sequence of strings \(x_{1},x_{2},\ldots\) such that \(x_{i}\) is a prefix of \(x_{i+1}\) for all \(i\), and every \(x_{i}\) is one of the codes of \(r_{i}\) obtained by Barmpalias-Lewis-Pye construction (and, therefore, the length of \(x_{i}\) does not exceed the label of the request \(r_{i}\))._
Note that several strings (intervals) may correspond to the same request \(r_{i}\), and we claim that one can choose one of them (\(x_{i}\)) for every \(i\) in such a way that \(x_{i}\) is a prefix of \(x_{i+1}\). This choice is not effective, though.
Proof.: Let \(x_{i}\) be an arbitrary string that is allocated to \(r_{i}\) during the construction. Then the corresponding interval is inside the space allocated to \(r_{i-1}\), so \(x_{i}\) has some prefix \(x_{i-1}\) that was earlier allocated to \(r_{i-1}\). Then we can find some prefix \(x_{i-2}\) of \(x_{i-1}\) allocated to \(r_{i-2}\), etc.
The only problem is that for different \(i\) we get different sequences \(x_{1},\ldots,x_{i}\), so we do not get directly an infinite sequence \(x_{1},x_{2},\ldots\) of strings allocated to \(r_{1},r_{2},\ldots\). We need to use a compactness argument (König's lemma). Note that every request \(r_{i}\) has only finitely many strings allocated to it (at most \(2^{l}\) if the label is \(l\), since the allocated intervals are disjoint and of size at least \(2^{-l}\)). So some \(x_{1}\) appears for infinitely many \(i\). Choosing this \(x_{1}\) and retaining only the values of \(i\) when this \(x_{1}\) is used, we then choose some \(x_{2}\) that appears infinitely many times, etc.
Note that this argument does not provide a _computable_ sequence of \(x_{i}\) even if both the sequence of requests and the branch \(r_{1},r_{2},\ldots\) are computable.
This lemma implies the following result (which was one of the main goals of [1]).
**Theorem 1**.: _Let \(K\) be a total computable function on binary strings such that \(\sum_{x}2^{-K(x)}\leqslant 1\). Then there exists an oracle machine \(M\) with the following property: for every bit sequence \(\alpha\) there exists a bit sequence \(\beta\) such that \(M\) computes \(\alpha\) with oracle \(\beta\), and the oracle use when computing the prefix \(\alpha\!\upharpoonright\!n\) is at most \(K(\alpha\!\upharpoonright\!n)\)._
The oracle machine has an _input tape_ where an infinite sequence of zeros and ones (the _oracle_) is written. The machine reads the input tape bit by bit, while performing some other computations, and writes the output bit sequence (one bit at a time). For a given oracle \(\beta\) the output sequence \(\alpha\) may be finite or infinite; for every prefix \(\alpha\!\upharpoonright\!n\) of \(\alpha\) we consider the number of input bits read up to the moment when \(n\) output bits were produced, and this number is called the _oracle use_.
Proof.: Let us consider strings \(0\) and \(1\) as two requests of the first level with labels \(K(0)\) and \(K(1)\); then \(00\) and \(01\) are requests of level \(2\) that are sons of the request \(0\) and have labels \(K(00)\) and \(K(01)\); in the same way \(10\) and \(11\) are sons of \(1\), etc. Applying Kraft-Barmpalias-Lewis-Pye lemma, we get a computable sequence of allocations.
Let us first look at the strings allocated to the requests of the first level (\(0\) and \(1\)). They form an enumerable prefix-free set of strings that consists of two disjoint parts: codewords for \(0\) and codewords for \(1\). We need to construct an oracle machine that outputs \(0\) if the oracle has a prefix of the first type, and outputs \(1\) if the oracle has a prefix of the second type. It would be trivial without the additional requirements (just read the oracle bits and enumerate these two parts in parallel), but we want the machine _not to read any bits after the codeword_. To satisfy this additional requirement, we may delay reading the next bit _until some codeword appears that is a proper extension of an already read prefix \(z\) of the oracle_. If this never happens, either \(z\) is a codeword itself (and we will find this out at some point, and produce the output bit as required), or \(z\) is not a prefix of any codeword (then we produce no output, and this is the right behavior). If this happens at some point, then we know that (because of the prefix-free requirement) \(z\) is not a codeword, so we can safely read the next bit, and continue in the same way. (Formally we maintain the following invariant: any proper prefix of the already read part of the oracle is not a codeword, see [6, Theorem 50, p. 86] for the details.)
After a codeword for 0 or 1 is read, we perform the same operation for the next bit. For example, if the codeword for 0 is read, we are looking for its extensions that are codewords for 00 and 01 in the same way. The theorem is proven.
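A toy Python rendering of this reading discipline may help (our illustration; the real construction works with an enumerable set of codewords, which we simulate by an iterator of `(codeword, output_bit)` pairs):

```python
def decode_bit(oracle_bits, code_enum):
    """Read oracle bits one at a time, never reading past a complete
    codeword: a bit is read only when some already-enumerated codeword
    properly extends the prefix z read so far, which certifies (by
    prefix-freeness) that z itself is not a codeword."""
    z, seen, it = "", {}, iter(oracle_bits)
    for w, b in code_enum:                  # simulate the enumeration
        seen[w] = b
        while True:
            if z in seen:                   # z is a codeword: output its bit;
                return seen[z], len(z)      # the oracle use equals len(z)
            if any(v != z and v.startswith(z) for v in seen):
                z += next(it)               # safe to read one more bit
            else:
                break                       # wait for more codewords
    return None                             # the oracle hits no codeword

# Example: the code 0 -> 0, 10 -> 1, 11 -> 1, enumerated in arbitrary order.
print(decode_bit("110...", [("0", 0), ("10", 1), ("11", 1)]))   # prints (1, 2)
```

Note that on the oracle `"110..."` only two bits are ever read, so the trailing characters (standing for the unread rest of the oracle) are never touched.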
_Remarks_.:
1. Note that the oracle use is monotone, so the bound implies that the oracle use is at most \(\min_{i\geqslant n}K(\alpha\!\upharpoonright i)\).
2. The same argument works if \(K\) is a partial computable function (with natural values) defined on a subtree of the full binary tree: a new vertex is added to the requests' tree when the value of \(K\) on the corresponding string is computed. Moreover, if \(K\) is a computable function with an arbitrary domain, we may restrict \(K\) to the maximal subtree inside the domain of \(K\) (by checking whether \(K\) is defined on all prefixes). In this way we get a similar statement where \(K\) is an arbitrary partial computable function and we additionally assume that \(K(x)\) is defined for all prefixes \(x\) of a given sequence \(\alpha\) (that can now be finite or infinite).
## 5 Contaminated space and reduction to random sequences
A stronger version of Theorem 1 guarantees that the sequence \(\beta\) is Martin-Löf random\({}^{3}\). According to the definition, the set of all non-random sequences is contained in an effectively open set of arbitrarily small measure. Now we assume that \(\sum_{x}2^{-K(x)}<1\) (strict inequality) and take an effectively open set \(U\) whose measure is smaller than the gap between \(\sum_{x}2^{-K(x)}\) and \(1\). Then we use the same argument with the following stronger version of the Barmpalias-Lewis-Pye lemma.
Footnote 3: From now on we assume that the reader is familiar with algorithmic randomness and Kolmogorov complexity theory; all the needed notions and results can be found, e.g., in [6].
Again, we consider a sequence of hierarchical requests. In addition we have a parallel process that enumerates a set of aligned intervals that are considered as "contaminated". So at every moment we have a clopen contaminated subset of the unit interval that increases with time. (Only this subset matters; we do not care how this subset is split into a union of intervals.) We assume that the contaminated part remains small; namely, we assume that the total size of all the requests _plus the size of the contaminated part_ never exceeds 1.
**Lemma**.: _In this case one can arrange the allocation process for all the requests with the following additional requirement: when an interval is allocated, it is not completely contaminated, i.e., is not covered by the part of the contaminated space known at the moment of the allocation._
_Remark_.: Note that the statement of the lemma does not prevent the following cases:
* part of the interval just allocated is contaminated;
* at some later stage the entire interval will get contaminated.
Proof.: Let us add to our picture the following insurance service: if an allocated interval (obtained from the father node) turns out to lie entirely in the contaminated space at the moment of its allocation, the insurance reimburses you for the amount paid for this interval (i.e., its length), but you cannot use this interval later for reselling. With the insurance money you can then buy another interval of the same length, and if it again turns out to be completely inside the contaminated space, you again get the money back, and then buy one more interval, etc. This process will stop at some point (since there are only finitely many intervals of the same length).
Note that you cannot get reimbursement later (only at the moment of allocation and only if the allocated interval is completely in the already contaminated zone).
The only thing to check is that the total amount of money paid by all clients during the allocation is at most 1. This money comes from two sources: the initial money the clients have, and the money paid by the insurance. Note that the insurance service never reimburses the same space twice: if an interval is reimbursed, it is blocked for future use, and neither its owner nor its brothers can use it. So when an interval is reimbursed, it is disjoint from all previously reimbursed intervals.
Now this lemma can be used to prove the following stronger version of the coding theorem:
**Theorem 2**.: _Let \(K\) be a partial computable function on binary strings such that \(\sum 2^{-K(x)}<1\). Then there exists an oracle machine \(M\) with the following property: for every bit sequence \(\alpha\) such that \(K(\alpha\!\upharpoonright\!n)\) is defined for all \(n\), there exists a Martin-Lof random bit sequence \(\beta\) such that \(M\) computes \(\alpha\) with oracle \(\beta\), and the oracle use when computing the prefix \(\alpha\!\upharpoonright\!n\) is at most \(K(\alpha\!\upharpoonright\!n)\)._
Note that the inequality is strict here, and the sequence \(\beta\) is guaranteed to be random. This is a strong version of the Kučera-Gács theorem saying that every sequence is Turing reducible to a random one; see [3] for the (rather long) history of improvements on the oracle use bound in this theorem and related results.
Proof.: The proof goes as before, but we start by taking an effectively open set \(U\) that covers all non-random sequences and whose measure is small enough, so that together with \(\sum 2^{-K(x)}\) it does not exceed \(1\). Then we can apply the lemma, using the enumeration of this effectively open set to generate the contaminated space, and get in the same way a sequence \(\beta\) that computes \(\alpha\). We only need to show that \(\beta\) (the limit sequence) does not belong to \(U\). Assume that \(\beta\) belongs to \(U\); then some prefix \(b\) of \(\beta\) has the corresponding interval completely covered by \(U\) (since \(U\) is open). Then at some point (as a compactness argument shows) the interval corresponding to \(b\) will be completely contaminated. Starting from this moment, all the smaller intervals are also fully contaminated and cannot be allocated. On the other hand, only finitely many prefixes of \(\beta\) are allocated before this moment, so we get a contradiction.
## 6 Prefix complexity and oracle use
We assumed that the function \(K\) (the upper bound for the oracle use) was computable; the natural question is whether this result can be extended to upper semicomputable functions, or, equivalently, whether a bound with prefix complexity is valid.
This question is mentioned as open in the paper of Barmpalias and Lewis-Pye [2, p. 4484, Conjecture]. Here is the exact statement. Let \(\operatorname{K}\) be the prefix complexity function (not to be confused with the italic \(K\) that appeared earlier and was a computable upper bound for \(\operatorname{K}\)).
Let \(\alpha\) be an arbitrary sequence. Can we always construct a sequence \(\beta\) and an oracle machine \(M\) such that \(\alpha\) is computed by \(M\) with oracle \(\beta\) and the oracle use for the \(n\)-bit prefix \(\alpha\!\upharpoonright\!n\) of \(\alpha\) is bounded by \(\min_{i\geqslant n}\operatorname{K}(\alpha\!\upharpoonright\!i)+c\) for some \(c\) and all \(n\)?
A weaker result\({}^{4}\) appears in the same paper as Theorem I.5: it replaces \(\operatorname{K}(\alpha\!\upharpoonright\!i)\) by \(\operatorname{K}(\alpha\!\upharpoonright\!i)+\log_{2}i\).
Footnote 4: It would be interesting to find an easy proof of this result by adapting the arguments explained above.
One can consider a stronger conjecture where the bound is replaced by \(\min\operatorname{K}(y)\) over all \(y\) that are extensions of \(\alpha\!\upharpoonright\!n\) (and not only over prefixes of \(\alpha\)). For this stronger conjecture the answer is negative. The counterexample, a sequence \(\alpha\) that does not have this property, can be (as proven by Mikhail Raskin, who kindly permitted us to include his argument) constructed diagonally. At each step we extend the existing prefix \(a\) of \(\alpha\) to some longer \(a^{\prime}\) preventing some machine \(M\) from satisfying the requirement with some constant \(c\). (There are countably many pairs \(M,c\), so we can diagonalize against all of them.)
So let us assume that \(a\), \(M\) and \(c\) are fixed. We want to find some \(b\) that extends \(a\) and has the following property: machine \(M\) cannot compute any infinite extension of \(b\) with the required bound for the oracle use. To achieve this goal, we select some computable infinite sequence that starts with \(a\) (for example, we can write all zeros after \(a\)) and let \(a^{\prime}\) be a long prefix of this sequence whose prefix complexity is negligible compared to its length. (To specify a prefix of a computable sequence, we use finitely many bits to specify the program, and the remaining bits are used to specify its length, which can be very large compared to the complexity.)
Now \(\operatorname{K}(a^{\prime})\) is fixed, and we may try all strings of length \(\operatorname{K}(a^{\prime})+c\) as oracles for machine \(M\). Each of them computes some sequence (finite or infinite), and we compare all these sequences with \(a\) and \(a^{\prime}\). Several cases are possible:
* Some of these sequences do not go through \(a\) at all.
* Some of them start with \(a\) and then stop or deviate from \(a^{\prime}\).
* Finally, some others could reach \(a^{\prime}\).
In any case, since there are many prefixes between \(a\) and \(a^{\prime}\), one can find some \(a^{\prime\prime}\) with the following property: any program that reaches \(a^{\prime\prime}\) makes one more step in the direction of \(a^{\prime}\). Finally, we let \(b\) be the extension of \(a^{\prime\prime}\) that deviates from the path to \(a^{\prime}\).
The chosen \(b\) has the required property. Let \(\beta\) be any infinite extension of \(b\). Assume that \(M\) computes \(\beta\) with some oracle \(\gamma\) within the required bound on the oracle use. Since \(a^{\prime\prime}\) is a prefix of \(\beta\) and at the same time a prefix of \(a^{\prime}\), the oracle use for \(a^{\prime\prime}\) should be at most \(\operatorname{K}(a^{\prime})+c\). But all oracles that compute \(a^{\prime\prime}\) with this oracle use also compute the next bit of \(a^{\prime}\) and are therefore unsuitable for \(b\).
_Remark_.: A more accurate accounting disproves a weaker conjecture where \(\min\operatorname{K}(y)\) over all extensions of \(\alpha\!\upharpoonright\!n\) is replaced by \(\min[\operatorname{K}(y)+0.99\log|y|]\) over the same extensions. Indeed, we need that the number of possible oracle prefixes of length \(\operatorname{K}(a^{\prime})+0.99\log|a^{\prime}|+c\) is smaller than the difference between the lengths of \(a^{\prime}\) and \(a\) (so that we can find some \(a^{\prime\prime}\) with the required properties). The length of \(a\) is fixed, so we need that \(2^{\operatorname{K}(a^{\prime})+0.99\log|a^{\prime}|+c}\ll|a^{\prime}|\), and this is possible since \(\operatorname{K}(a^{\prime})+c\) can be small compared to \(0.01\log|a^{\prime}|\).
Note that this example also shows that one cannot always find \(\beta\) that computes \(\alpha\) with oracle use \(\operatorname{KM}(\alpha\!\upharpoonright\!n)\), where \(\operatorname{KM}\) stands for monotone complexity. Indeed, since \(\operatorname{KM}(z)\leqslant\operatorname{K}(z)\) for all \(z\) and \(\operatorname{KM}\) is monotone, we have \(\operatorname{KM}(x)\leqslant\min\operatorname{K}(z)\) when the minimum is taken over all extensions \(z\) of \(x\). (This question is natural since \(\operatorname{KM}(\alpha\!\upharpoonright\!n)\) is an obvious lower bound for the oracle use.)
## Acknowledgements
This paper is based on the discussions with George Barmpalias, Bruno Bauwens, Laurent Bienvenu, Michael Raskin, Mikhail Vyalyi and other participants of _Kolmogorov seminar_ on description complexity. I thank the (anonymous) reviewers of the CCR2023 conference for their comments.
|
2302.06313
|
Sufficient conditions yielding the Rayleigh Conjecture for the clamped
plate
|
The Rayleigh Conjecture for the bilaplacian consists in showing that the
clamped plate with least principal eigenvalue is the ball. The conjecture has
been shown to hold in 1995 by Nadirashvili in dimension $2$ and by Ashbaugh and
Benguria in dimension $3$. Since then, the conjecture remains open in dimension
$d\geq 4$. In this paper, we contribute to answer this question, and show that
the conjecture is true in any dimension as long as some special condition holds
on the principal eigenfunction of an optimal shape. This condition regards the
mean value of the eigenfunction, asking it to be in some sense minimal. This
main result is based on an order reduction principle allowing to convert the
initial fourth order linear problem into a second order affine problem, for
which the classical machinery of shape optimization and elliptic theory is
available. The order reduction principle turns out to be a general tool. In
particular, it is used to derive another sufficient condition for the
conjecture to hold, which is a second main result. This condition requires the
Laplacian of the optimal eigenfunction to have constant normal derivative on
the boundary. Besides our main two results, we detail shape derivation tools
allowing to prove simplicity for the principal eigenvalue of an optimal shape
and to derive optimality conditions. Eventually, because our first result
involves the principal eigenfunction of a ball, we are led to compute it
explicitly.
|
Roméo Leylekian
|
2023-02-13T12:25:29Z
|
http://arxiv.org/abs/2302.06313v1
|
# Sufficient conditions yielding the Rayleigh Conjecture for the clamped plate
###### Abstract
The Rayleigh Conjecture for the bilaplacian consists in showing that the clamped plate with least principal eigenvalue is the ball. The conjecture has been shown to hold in 1995 by Nadirashvili [13] in dimension 2 and by Ashbaugh and Benguria [1] in dimension 3. Since then, the conjecture remains open in dimension \(d\geq 4\). In this paper, we contribute to answer this question, and show that the conjecture is true in any dimension as long as some special condition holds on the principal eigenfunction of an optimal shape. This condition regards the mean value of the eigenfunction, asking it to be in some sense minimal. This main result is based on an order reduction principle allowing to convert the initial fourth order linear problem into a second order affine problem, for which the classical machinery of shape optimization and elliptic theory is available. The order reduction principle turns out to be a general tool. In particular, it is used to derive another sufficient condition for the conjecture to hold, which is a second main result. This condition requires the Laplacian of the optimal eigenfunction to have constant normal derivative on the boundary. Besides our main two results, we detail shape derivation tools allowing to prove simplicity for the principal eigenvalue of an optimal shape and to derive optimality conditions. Eventually, because our first result involves the principal eigenfunction of a ball, we are led to compute it explicitly.
## 1 Introduction
In 1894, at the same time he was formulating his famous conjecture regarding fixed membranes, Rayleigh stated that the principal frequency of a clamped plate should be minimal when the plate is circular. Let us explain more precisely the terms of this claim. The principal frequency of a clamped plate involves the eigenvalue problem related to the bilaplacian with Dirichlet boundary conditions (also referred to as the Dirichlet bilaplacian), which is the following eigenvalue problem.
\[\left\{\begin{array}{rcll}\Delta^{2}u&=&\Gamma u&in&\Omega,\\ u&=&0&on&\partial\Omega,\\ \partial_{n}u&=&0&on&\partial\Omega.\end{array}\right. \tag{1}\]
Here \(\Omega\subseteq\mathbb{R}^{d}\) (\(d\in\mathbb{N}^{*}\)) stands for an arbitrary bounded open set, \(u\in H^{2}_{0}(\Omega)\), \(\Gamma\) is a real number, and \(\partial_{n}=\vec{n}\cdot\nabla\) is the partial derivative in the direction of the outward normal unit vector \(\vec{n}\). It turns out that problem (1) admits countably many (nontrivial) eigencouples \((u,\Gamma)\), and that the sequence of eigenvalues is positive and grows up to infinity. This occurs since the resolvent of the Dirichlet bilaplacian is compact, positive and self-adjoint when seen as an operator acting on \(L^{2}(\Omega)\) (see [10] for a collection of general facts regarding the bilaplacian and, more generally, polyharmonic operators). The principal eigenvalue of the clamped plate is nothing else but the lowest of these eigenvalues, which we will denote \(\Gamma(\Omega)\) in the rest of the document in order to emphasize its dependence on the open set \(\Omega\). As for any eigenvalue of a self-adjoint operator, \(\Gamma(\Omega)\) admits a variational characterization, which is the following:
\[\Gamma(\Omega)=\min_{\begin{subarray}{c}u\in H^{2}_{0}(\Omega)\\ u\neq 0\end{subarray}}\frac{\int_{\Omega}(\Delta u)^{2}}{\int_{\Omega}u^{2}}. \tag{2}\]
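Although no numerics appear in this paper, the structure of (2) can be illustrated on its one-dimensional analogue, the clamped beam \(u^{\prime\prime\prime\prime}=\Gamma u\) on \((0,1)\) with \(u=u^{\prime}=0\) at both endpoints. The following finite-difference sketch in Python is ours (the grid size and the ghost-point treatment of \(u^{\prime}=0\) are standard textbook choices):

```python
import numpy as np

n = 400                                   # interior grid points on (0, 1)
h = 1.0 / (n + 1)
A = np.zeros((n, n))
stencil = {-2: 1.0, -1: -4.0, 0: 6.0, 1: -4.0, 2: 1.0}
for i in range(1, n + 1):                 # unknowns u_1..u_n; u_0 = u_{n+1} = 0
    for off, c in stencil.items():
        j = i + off
        if j == -1:
            j = 1                         # ghost point u_{-1} = u_1   (u'(0) = 0)
        if j == n + 2:
            j = n                         # ghost point u_{n+2} = u_n  (u'(1) = 0)
        if 1 <= j <= n:                   # u_0 and u_{n+1} drop out   (u(0) = u(1) = 0)
            A[i - 1, j - 1] += c / h**4
print(np.linalg.eigvalsh(A).min())        # ~500.56, i.e. beta^4 with beta ~ 4.7300
```

The printed value approaches \(\beta^{4}\), where \(\beta\approx 4.7300\) is the smallest positive root of \(\cos\beta\cosh\beta=1\), the classical principal frequency of the clamped beam.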
Initially stated in the context of subsets of \(\mathbb{R}^{2}\) only, the Rayleigh Conjecture deals with the problem of determining the open set with least principal eigenvalue among all open sets having the same measure. Like its counterpart for the Dirichlet Laplacian, the conjecture claims that such a set exists, is "almost" unique, and is given by the Euclidean ball fitting the volume constraint. Note that plain uniqueness does not hold since \(\Gamma(\Omega)\) is invariant under isometries of \(\Omega\) and under removing a set of zero \(H^{2}\)-capacity from \(\Omega\) (see sections 3.3 and 3.8.1 of [13] for the definition of capacity). In other words, if \(|.|\) denotes the \(d\)-dimensional Lebesgue measure,
**Conjecture**.: _Let \(\Omega\) be a bounded open subset of \(\,\mathbb{R}^{d}\) and \(B\) a ball such that \(|B|=|\Omega|\). Then,_
\[\Gamma(\Omega)\geq\Gamma(B). \tag{3}\]
_Moreover there is equality if and only if \(\,\Omega\) is a ball (up to a set of zero \(H^{2}\)-capacity)._
After its publication in 1894, one of the first serious results on the conjecture is due to Szegő [15]; it states, based on symmetrisation arguments, that as soon as the eigenfunction associated with the first eigenvalue on a set \(\Omega\) is of fixed sign, the Faber-Krahn type inequality (3) holds. However, one of the main challenges when working with fourth and higher order elliptic operators is the failure of the maximum principle on arbitrary domains. This means that, unlike for the Dirichlet Laplacian, the one-sign property of the principal eigenfunction is no longer guaranteed, as a consequence of the non-applicability of the Krein-Rutman Theorem. Indeed, the first - and maybe the most famous - example of domains in which this one-sign property fails was found in 1952 to be annuli with small inner radius [14, 15]. This situation is troublesome in the sense that, at first glance, it deprives us of our principal tool in shape optimization, which is symmetrisation.
Nevertheless, using perturbation techniques, Mohr [16] showed in 1975 that any planar optimal regular shape, if it exists, has to be the ball. It seems however that the approach of Mohr strongly relies on the planarity of the shapes involved. Moreover, this result was finally outshone by a series of papers beginning with [13] in 1976, in which Talenti proved his famous comparison principle. An astute adaptation of this principle allowed him to find in 1981 a lower bound on the optimal eigenvalue depending on the dimension (see [13]). Following this strategy, Nadirashvili solved the conjecture in \(\mathbb{R}^{2}\) in 1995 in [12]. Subsequently, still in the wake of Talenti's approach, Ashbaugh and Benguria proved the conjecture in \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\) in 1995 (see [1]). Furthermore, in 1996, Ashbaugh and Laugesen [1] completely solved Talenti's "two-ball problem" (see [1, equation (26)] for details) in any dimension. As a result, they showed on the one hand that the plain approach of Talenti could not answer the Rayleigh Conjecture when \(d\geq 4\), but, on the other hand, gave a very precise lower bound on the optimal eigenvalue. Since then, to our knowledge, no significant breakthrough has been made regarding the actual optimal shape or the actual optimal eigenvalue in high dimension. Let us however mention the interesting papers of Kristály [12, 13] dealing with the conjecture in a non-Euclidean setting.
The goal of the present document is to contribute to a better understanding of the conditions of validity of the Rayleigh Conjecture. More precisely, assuming existence and regularity of an optimal shape, we will show that the conjecture is true in any dimension whenever the principal eigenfunction satisfies some special condition. This is explained in the following lines. First, we need to assume that there exists a solution with \(C^{4}\) **regular connected boundary** to the problem
\[\min\{\Gamma(\Omega):\Omega\subseteq\mathbb{R}^{d}\text{ open set, }|\Omega|=c\}, \tag{4}\]
where \(c\) is a fixed positive real number. Here, we recall that the question of the existence of an optimal shape is still open (see however the recent work [14] dealing with this issue for domains
contained in a given large box). In the rest of the document, we will denote by \(\Omega\) a \(C^{4}\) regular solution to (4). The regularity assumption on \(\Omega\) will be used for invoking shape derivation. Indeed, it guarantees that the eigenfunctions are \(H^{4}(\Omega)\) (see [10, Theorem 2.20]). However, besides \(H^{4}\) regularity, at some point we will need more regularity for the principal eigenfunction. The \(L^{p}\) regularity theory (see again [10, Theorem 2.20]) will answer this need by providing \(W^{4,p}(\Omega)\) regularity, and then (thanks to Sobolev embeddings) \(C^{3,\alpha}(\overline{\Omega})\) regularity for the eigenfunction. On the other hand, the assumption on the geometry of the boundary is technical as we shall see in the proof of our main theorem. We stress the properties of regularity and geometry enjoyed by \(\Omega\) by stating the assumption
\[\Omega\text{ is }C^{4}\text{ and }\partial\Omega\text{ is connected}.\] (RG)
Apart from (RG), we will need another special assumption to run our proof. This condition consists in a relation, in terms of mean value, between the first eigenfunction in \(\Omega\), that will be denoted \(u\), and the first eigenfunction in a ball \(B\) of same volume, that will be denoted \(u_{B}\):
\[\left|\int_{\Omega}u\right|\leq\left|\int_{B}u_{B}\right|.\] (M)
Then, the main conclusion of the present document is the theorem stated below.
**Theorem 1**.: _Let \(\Omega\) be an optimal shape for problem (4) satisfying (RG) and \(B\) a ball such that \(|\Omega|=|B|\). Let \(u\) be a first \(L^{2}\)-normalised eigenfunction in \(\Omega\) and \(u_{B}\) a first \(L^{2}\)-normalised eigenfunction in \(B\). Then (M) holds if and only if \(\Omega=B\) (up to a translation), in which case (M) is an equality._
**Remark**.: _Roughly speaking, Theorem 1 tells us that an optimal shape whose principal eigenfunction has minimal mean is a ball. Therefore, one is led to wonder whether the minimality of the \(H^{2}_{0}\) norm of an eigenfunction implies the minimality of its mean. Among others, this question will be addressed in section 6._
The proof of Theorem 1 is based on a procedure that we shall call the "order reduction principle", which turns the fourth order eigenvalue problem (1) into a second order affine problem, for which a more sophisticated machinery is available. In particular, it becomes possible to use symmetrisation techniques, which are the other main ingredient for proving Theorem 1. However, we would like to emphasize that the order reduction principle paves the way for using many other tools from the theory of second order elliptic operators. In order to illustrate this fact, we derive another main result, which is based on the theory of overdetermined problems stemming from the historical [11], and reads as follows.
**Theorem 2**.: _Let \(\Omega\) be an optimal shape for problem (4) satisfying (RG). Let \(u\) be a first eigenfunction on \(\Omega\) such that \(\partial_{n}\Delta u\) is constant on \(\partial\Omega\). Then, \(\Omega\) is a ball._
Actually, the proofs of Theorem 1 and Theorem 2 do not appeal to the order reduction principle alone. Indeed, to reveal its potential, the order reduction principle needs to be combined with the optimality condition satisfied by an optimal shape \(\Omega\). Such an optimality condition can be derived only when the eigenvalue \(\Gamma(\Omega)\) is simple. Even though the question of simplicity of the optimal eigenvalue had already been tackled in [14], one of the main results of the present work is to propose a thorough proof of this fact and to derive the subsequent optimality condition, which is made precise in the next theorem.
**Theorem 3**.: _Let \(\Omega\) be a \(C^{4}\) open set solving (4). Then, \(\Gamma(\Omega)\) is simple. Moreover, if \(u\) denotes an \(L^{2}\)-normalised eigenfunction associated with \(\Gamma(\Omega)\), \(\Delta u\) is a.e. constant equal to \(\pm\alpha\) on any connected component of \(\partial\Omega\), where_
\[\alpha:=\sqrt{\frac{4\Gamma(\Omega)}{d|\Omega|}}.\]
In the remainder of this document we will detail the proofs of Theorem 1, Theorem 2 and Theorem 3. In section 2, we present our main tool, which is the order reduction principle, roughly explained in the previous lines. Section 3 gathers some results about derivation of simple and multiple eigenvalues of the Dirichlet bilaplacian. Using these tools, in section 4, we prove Theorem 3. Section 5 is devoted to the proofs of Theorem 1 and Theorem 2. Section 6 discusses two consequences of Theorem 1.
## 2 Order reduction principle
The order reduction principle, from which arise Theorem 1 and Theorem 2, is an algebraic trick leading to an "eigenvalue problem" involving a differential operator of order lower than the bilaplacian, that is, the Laplacian. The counterpart to the reduction of the order is that the "eigenvalue problem" is not linear anymore. The precise statement is encapsulated in the next proposition.
**Proposition 4**.: _Let \(\Omega\) be a \(C^{4}\) bounded open set, and \(u\in H^{2}_{0}(\Omega)\) an eigenfunction of the bilaplacian in \(\Omega\) associated with an eigenvalue \(\mu\), so that \(\Delta u\) has trace in \(H^{\frac{3}{2}}(\partial\Omega)\). Finally, let \(g_{u}\) satisfy_
\[\left\{\begin{array}{rcll}\Delta g_{u}&=&0&in&\Omega,\\ g_{u}&=&\frac{\Delta}{\sqrt{\mu}}u&on&\partial\Omega.\end{array}\right.\]
_Then, the function \(z_{u}:=\frac{\Delta}{\sqrt{\mu}}u+u-g_{u}\) solves the equation_
\[\left\{\begin{array}{rcll}\Delta z_{u}&=&\sqrt{\mu}(z_{u}+g_{u})&in&\Omega,\\ z_{u}&=&0&on&\partial\Omega.\end{array}\right. \tag{5}\]
_In particular, \(z_{u}\) solves the following problem, the value of which is \(\frac{1}{\sqrt{\mu}}\):_
\[\frac{1}{\sqrt{\mu}}=-\min_{\begin{subarray}{c}z\in H^{1}_{0}(\Omega)\\ z\neq 0\end{subarray}}\frac{\int_{\Omega}z^{2}+\int_{\Omega}g_{u}(2z-z_{u})}{ \int_{\Omega}|\nabla z|^{2}} \tag{6}\]
_Moreover, if \(g_{u}\geq 0\), then \(z_{u}<0\)._
Proof.: The eigenfunction \(u\) satisfies by definition
\[\left\{\begin{array}{rcll}(\Delta^{2}-\mu)u&=&0&in&\Omega,\\ u&=&0&on&\partial\Omega,\\ \partial_{n}u&=&0&on&\partial\Omega.\end{array}\right.\]
The idea now relies on observing that \((\Delta^{2}-\mu)=(\Delta-\sqrt{\mu})(\Delta+\sqrt{\mu})\). Hence, setting \(y=\left(\frac{\Delta}{\sqrt{\mu}}+1\right)u\), \(y\) verifies \(\Delta y=\sqrt{\mu}y\) in \(\Omega\). Nevertheless, the boundary condition for \(y\) is \(y=\frac{\Delta}{\sqrt{\mu}}u\) on \(\partial\Omega\). Note that \(\frac{\Delta}{\sqrt{\mu}}u\in H^{\frac{3}{2}}(\partial\Omega)\) since \(\Delta u\in H^{2}(\Omega)\) thanks to the regularity assumption made on \(\partial\Omega\) (see [1, Theorem 2.20]). But if \(g_{u}\) is the solution to the Dirichlet problem \(\Delta g_{u}=0\) in \(\Omega\) and \(g_{u}=\frac{\Delta}{\sqrt{\mu}}u\) on the boundary, setting \(z_{u}:=y-g_{u}=\frac{\Delta}{\sqrt{\mu}}u+u-g_{u}\), one gets that \(z_{u}\) is an \(H^{1}_{0}(\Omega)\cap H^{2}(\Omega)\) function satisfying
\[\Delta z_{u}=\sqrt{\mu}(z_{u}+g_{u}).\]
In particular \(z_{u}\) is a critical point of the functional \(E_{\mu}\) defined on \(H^{1}_{0}(\Omega)\) and given by
\[E_{\mu}(z)=\int_{\Omega}|\nabla z|^{2}+\sqrt{\mu}\int_{\Omega}z^{2}+2\sqrt{\mu }\int_{\Omega}g_{u}z.\]
Moreover, \(E_{\mu}\) being strictly convex, \(z_{u}\) is the unique minimiser. But, from the equation involving \(z_{u}\), we derive the identity \(E_{\mu}(z_{u})=\sqrt{\mu}\int g_{u}z_{u}\). In this context, the relation
\[\int_{\Omega}|\nabla z|^{2}+\sqrt{\mu}\int_{\Omega}z^{2}+2\sqrt{\mu}\int_{ \Omega}g_{u}z\geq\sqrt{\mu}\int_{\Omega}g_{u}z_{u},\]
holding for all \(z\in H^{1}_{0}(\Omega)\), is an equality if and only if \(z=z_{u}\). Moreover, thanks to elementary manipulations, this inequality can be turned into the next one, which, as before, is attained if and only if \(z=z_{u}\).
\[-\frac{\int_{\Omega}z^{2}+\int_{\Omega}g_{u}(2z-z_{u})}{\int_{\Omega}|\nabla z|^ {2}}\leq\frac{1}{\sqrt{\mu}}.\]
This completes the proof of (6). Eventually, if \(g_{u}\geq 0\), the strong maximum principle applied to the operator \(\Delta-\sqrt{\mu}\) in (5) shows that \(z_{u}<0\) unless \(z_{u}\) vanishes identically in \(\Omega\). But if \(z_{u}=0\), due to (5), \(g_{u}=0\), and in turn \(-\Delta u=\sqrt{\mu}u\) in \(\Omega\). But since \(u\) does not vanish identically in \(\Omega\), neither does it in any open subset of \(\Omega\) by analyticity. In particular, one of the sets \(\{u>0\}\) and \(\{u<0\}\) meets \(\partial\Omega\) at some point \(p\). Then, Hopf boundary Lemma applies at \(p\) (recall that \(\Omega\), being \(C^{4}\), satisfies an interior ball condition at \(p\)) yielding \(\partial_{n}u(p)\neq 0\), which is a contradiction since \(u\in H^{2}_{0}(\Omega)\). Therefore, we conclude that \(z_{u}<0\).
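The factorisation \(\Delta^{2}-\mu=(\Delta-\sqrt{\mu})(\Delta+\sqrt{\mu})\) on which the proof hinges can be checked symbolically; here is a one-dimensional sanity check with sympy (ours, purely illustrative):

```python
import sympy as sp

x = sp.symbols('x')
mu = sp.symbols('mu', positive=True)
u = sp.Function('u')(x)
r = sp.sqrt(mu)

y = sp.diff(u, x, 2) + r * u              # y = (Delta + sqrt(mu)) u
lhs = sp.diff(u, x, 4) - mu * u           # (Delta^2 - mu) u
rhs = sp.diff(y, x, 2) - r * y            # (Delta - sqrt(mu)) y
print(sp.simplify(lhs - rhs))             # 0: the factorisation holds
```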
**Remark**.:
1. _Setting_ \(y=\left(\frac{\Delta}{\sqrt{\mu}}-1\right)u\) _instead of_ \(y=\left(\frac{\Delta}{\sqrt{\mu}}+1\right)u\)_, we see that the function_ \(z_{u}^{\prime}:=\frac{\Delta}{\sqrt{\mu}}u-u-g_{u}\) _is_ \(H^{1}_{0}(\Omega)\cap H^{2}(\Omega)\) _and satisfies_ \(-\Delta z_{u}^{\prime}=\sqrt{\mu}(z_{u}^{\prime}+g_{u})\)_. However, we cannot obtain a variational formulation similar to (_6_) involving_ \(z_{u}^{\prime}\) _since, unlike_ \(E_{\mu}\)_, the energy functional of which_ \(z_{u}^{\prime}\) _is a critical point is not convex._
2. _Note that the system (_5_) is linear with respect to_ \((z_{u},g_{u})\)_. As a consequence, the variational formula (_6_) remains true when replacing_ \(z_{u}\) _and_ \(g_{u}\) _respectively with_ \(\gamma z_{u}\) _and_ \(\gamma g_{u}\) _for any_ \(\gamma\in\mathbb{R}\setminus\{0\}\)_._
Surprisingly, Proposition 4 will not only serve to prove Theorem 1 and Theorem 2. Indeed, it has the following consequence, which will be very useful to prove the simplicity of the optimal first eigenvalue. First, let us recall that, in the case of fourth order equations, it is not known whether having \(u=\partial_{n}u=\partial_{n}^{2}u=0\) on some arbitrary portion \(\gamma\) of \(\partial\Omega\) yields \(u=0\) in a neighbourhood of \(\gamma\). The lack of this property (called unique continuation) is due to the fact that neither the Holmgren principle nor the Hopf boundary Lemma applies in this framework (see however Theorem 1.1 of [10] and the discussion above and below its statement).
**Corollary 5**.: _Let \(\Omega\) be a \(C^{4}\) bounded open set, and \(u\in H^{2}_{0}(\Omega)\) satisfy \(\Delta^{2}u=\mu u\) for some \(\mu>0\). Assume that \(\Delta u=0\) on \(\partial\Omega\). Then, \(u=0\) in \(\Omega\)._
Proof.: Assume that \(u\) does not vanish identically, so that it is an eigenfunction. The hypothesis \(\Delta u=0\) on \(\partial\Omega\) reads \(g_{u}=0\) on \(\partial\Omega\) and then in \(\Omega\), where \(g_{u}\) is defined as in Proposition 4. Then, the function \(z_{u}\) satisfies \(\Delta z_{u}=\sqrt{\mu}z_{u}\). This means that either \(z_{u}=0\), or \(-\sqrt{\mu}\) is an eigenvalue of the Dirichlet Laplacian. As the latter cannot hold, \(z_{u}=0\), and hence \(-\Delta u=\sqrt{\mu}u\), so that \(u\) is an eigenfunction of the Dirichlet Laplacian. Because \(u\in H^{2}_{0}(\Omega)\), we run into a contradiction using Hopf boundary Lemma as in the end of the proof of Proposition 4.
## 3 Shape derivatives
In order to fully exploit Proposition 4, one needs to gain information on the function \(g_{u}\) (defined in the statement of Proposition 4) when \(\Omega\) is an optimal shape. As \(g_{u}\) depends on the value of \(\Delta u\) on \(\partial\Omega\), one might use shape derivatives. Shape derivatives for eigenvalues of polyharmonic operators are less famous than their counterparts for the Laplacian, for which one might refer to the classical textbook [12]. Note moreover that this reference does not deal in detail with the derivative of multiple eigenvalues. For a framework on the derivation of simple and multiple eigenvalues of a general abstract operator see [13]. For the concrete shape derivation of simple and multiple eigenvalues of the bilaplacian and Dirichlet polyharmonic operators, we found only a few references. Indeed, see [1, section 5] and the references therein, and in particular [10],
that we shall refer to in this section. For that purpose, assume \(\Omega\) to be arbitrary, and let \(\Gamma_{k}^{\Omega}\) be the functional defined on \(W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) by
\[\Gamma_{k}^{\Omega}(V)=\Gamma_{k}((\operatorname{id}+V)\Omega). \tag{7}\]
Here, \(\Gamma_{k}(\Omega)\) denotes the \(k\)-th eigenvalue of the bilaplacian on \(\Omega\), **counted with multiplicity**. We will also use the notation \(u_{k}\) to designate one of the two \(L^{2}\)-normalised eigenfunctions associated with \(\Gamma_{k}(\Omega)\). Then, if \(\Gamma_{k}(\Omega)\) is of multiplicity \(p\in\mathbb{N}^{*}\), and if \(\Gamma_{k}(\Omega)=...=\Gamma_{k+p-1}(\Omega)\), we have that, in a neighbourhood \(\mathcal{W}\) of \(0\) in \(W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\), the set \(\{\Gamma_{k+i-1}^{\Omega}(V):1\leq i\leq p,V\in\mathcal{W}\}\) is made of the union of \(p\) analytic branches. The derivatives of these branches at \(0\) obey the formula given in the following result (see Theorem 3.5, Lemma 4.1 and formula (4.4) of [1]).
**Theorem 6**.: _Let \(\Omega\) be a \(C^{4}\) bounded open set and \(k,p\in\mathbb{N}^{*}\). Assume that \(\Gamma_{k-1}(\Omega)<\Gamma_{k}(\Omega)=...=\Gamma_{k+p-1}(\Omega)<\Gamma_{k+p}(\Omega)\) (if \(k=1\), set \(\Gamma_{k-1}(\Omega)=0\)). Then, the functionals \(\Gamma_{k}^{\Omega},...,\Gamma_{k+p-1}^{\Omega}\) defined in (7) are Gâteaux-differentiable at \(0\) both on the right and on the left, and their partial derivatives in the direction of a vector field \(V\in W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) (both on the right and on the left) can be mapped in a bijective way to the (possibly multi)set_
\[\mathcal{D}_{V}:=\left\{-\int_{\partial\Omega}\left(\Delta u_{k+i-1}\right)^{2}V\cdot\vec{n},\quad 1\leq i\leq p\right\}. \tag{8}\]
**Remark**.: _Note that it is not true in general that \(\Gamma_{k}^{\Omega},...,\Gamma_{k+p-1}^{\Omega}\) are differentiable, even though they are differentiable both on the left and on the right. Indeed, their derivatives on the left and on the right might not coincide, since the bijection with \(\mathcal{D}_{V}\) changes when one differentiates on the left or on the right. However, when \(p=1\), there is no permutation of \(\mathcal{D}_{V}\) other than the identity._
**Corollary 7**.: _Let \(\Omega\) be a \(C^{4}\) bounded open set and \(k\in\mathbb{N}^{*}\). Assume that \(\Gamma_{k}(\Omega)\) is simple. Then, the functional \(\Gamma_{k}^{\Omega}\) defined in (7) is Gâteaux-differentiable at \(0\), and its partial derivative in the direction of a vector field \(V\in W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) is_
\[\partial_{V}\Gamma_{k}^{\Omega}(0)=-\int_{\partial\Omega}\left(\Delta u_{k} \right)^{2}V\cdot\vec{n}. \tag{9}\]
This result shows that the shape derivative of the first eigenvalue precisely involves the values of the Laplacian of the first eigenfunction (as long as it is unique) on the boundary. But two issues remain. The first is to deal with the volume constraint appearing in (4). To do so, we define the volume functional \(\mathcal{V}^{\Omega}:W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\to\mathbb{R}\) by
\[\mathcal{V}^{\Omega}(V)=|(\operatorname{id}+V)\Omega|. \tag{10}\]
Then, we build from \(\Gamma_{k}^{\Omega}\) the functional \(G_{k}^{\Omega}\) on \(W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) by setting
\[G_{k}^{\Omega}=\left(\mathcal{V}^{\Omega}\right)^{\frac{4}{d}}\Gamma_{k}^{\Omega}. \tag{11}\]
It is classical to introduce \(G_{k}^{\Omega}\) as it essentially behaves as \(\Gamma_{k}^{\Omega}\) but has the property that \(\omega\mapsto G_{k}^{\omega}(0)\) is scale-invariant, hence if \(\Omega\) is an optimal shape for (4), \(0\) minimizes \(G_{k}^{\Omega}\). Moreover, since the derivative of \(\mathcal{V}^{\Omega}\) is known ([1, Theorem 5.2.2]), we end up with the next corollary.
**Corollary 8**.: _With the hypotheses of Corollary 7, the functional \(G_{k}^{\Omega}\) defined in (11) is Gâteaux-differentiable at \(0\), and its partial derivative in the direction of a vector field \(V\in W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) is_
\[\partial_{V}G_{k}^{\Omega}(0)=\left[\int_{\partial\Omega}\frac{4\Gamma_{k}(\Omega)}{d|\Omega|}V\cdot\vec{n}-\int_{\partial\Omega}\left(\Delta u_{k}\right)^{2}V\cdot\vec{n}\right]|\Omega|^{\frac{4}{d}}. \tag{12}\]
The second issue regarding Corollary 7 is the assumption on the simplicity of \(\Gamma_{k}(\Omega)\). Indeed, as already mentioned, in the context of fourth order elliptic operators, the lack of positivity prevents us from using the Krein-Rutman Theorem. As a result, one is unable to prove simplicity of the first eigenvalue, which actually fails in general (see [1, Theorem 3.9]). Fortunately, as roughly justified in [16], it can be proved that simplicity holds for the principal eigenvalue on a domain with minimal eigenvalue. The proof of this fact is obtained by contradiction, using the derivative of a multiple eigenvalue. It will be a consequence of the next proposition.
**Proposition 9**.: _Let \(\Omega\) be a \(C^{4}\) bounded open set and \(k,p\in\mathbb{N}^{*}\), \(p>1\). Assume that \(\Gamma_{k-1}(\Omega)<\Gamma_{k}(\Omega)=...=\Gamma_{k+p-1}(\Omega)<\Gamma_{k+p}(\Omega)\) (if \(k=1\), set \(\Gamma_{k-1}(\Omega)=0\) for instance). Then, there exists \(V_{+},V_{-}\in W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) such that_
\[\partial_{V_{-}}^{+}\Gamma_{k}^{\Omega}(0) <0<\partial_{V_{+}}^{+}\Gamma_{k+p-1}^{\Omega}(0),\] \[\partial_{V_{\pm}}\mathcal{V}^{\Omega}(0) =0.\]
_Here, \(\partial_{V}^{+}\) (resp. \(\partial_{V}^{-}\)) denotes the derivative in the direction \(V\) on the right (resp. on the left)._
Proof.: We draw our inspiration from [10, Lemma 2.5.9]. Let \(V\in W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) be such that \(\int_{\partial\Omega}V\cdot\vec{n}=0\). Thanks to Theorem 6, we know that \(\partial_{V}^{+}\Gamma_{k}^{\Omega}(0),...,\partial_{V}^{+}\Gamma_{k+p-1}^{ \Omega}(0)\) are the elements of the multiset
\[\mathcal{D}_{V}=\left\{-\int_{\partial\Omega}\left(\Delta u_{k}\right)^{2}V \cdot\vec{n},...,-\int_{\partial\Omega}\left(\Delta u_{k+p-1}\right)^{2}V\cdot \vec{n}\right\}.\]
Assume by contradiction that for each such \(V\), only \(0\) belongs to \(\mathcal{D}_{V}\). From this fact, one can conclude that, for any \(1\leq i\leq p\), \(\left(\Delta u_{k+i-1}\right)^{2}\) is constant on \(\partial\Omega\). To see this, we first need to show that the set \(\{V\cdot\vec{n}:V\in W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d}),\int_{\partial\Omega}V\cdot\vec{n}=0\}\) is dense in \(C_{m}(\partial\Omega):=\{\varphi\in C(\partial\Omega):\int_{\partial\Omega}\varphi=0\}\). Indeed, this would yield that, for all \(1\leq i\leq p\),
\[\forall\varphi\in C_{m}(\partial\Omega),\qquad\int_{\partial\Omega}\left( \Delta u_{k+i-1}\right)^{2}\varphi=0,\]
from which we deduce (thanks to the Riesz-Markov Theorem) that the function \(\left(\Delta u_{k+i-1}\right)^{2}\) coincides a.e. with its mean value, in other words is constant.
To show the expected density result, we let \(\varphi\in C_{m}(\partial\Omega)\). Extending \(\varphi\) and \(\vec{n}\) into \(C_{c}(\mathbb{R}^{d})\) functions (see [10, formula (5.39)]), we define \(V:=\varphi\vec{n}\), which is then \(C_{c}(\mathbb{R}^{d},\mathbb{R}^{d})\) as well. Moreover, \(V\cdot\vec{n}=\varphi\) on \(\partial\Omega\), and hence \(\int_{\partial\Omega}V\cdot\vec{n}=0\). Unfortunately, in general \(V\notin W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\). However, by density there exists a sequence \(V_{k}\in C_{c}^{\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) converging toward \(V\) uniformly in \(\mathbb{R}^{d}\). Now let \(W\in C_{c}^{\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) be such that \(\int_{\partial\Omega}W\cdot\vec{n}=c\neq 0\), and set \(\delta_{k}:=\int_{\partial\Omega}V_{k}\cdot\vec{n}\). Then, \(\delta_{k}\to 0\), hence \(\frac{\delta_{k}}{c}W\) converges to \(0\) uniformly. Consequently, putting \(W_{k}:=V_{k}-\frac{\delta_{k}}{c}W\), we obtain that \(W_{k}\cdot\vec{n}\) belongs to \(\{V\cdot\vec{n}:V\in W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d}),\int_{ \partial\Omega}V\cdot\vec{n}=0\}\) and converges toward \(\varphi\) uniformly.
We have shown that for each \(1\leq i\leq p\), there exists \(c_{i}\) such that \(\Delta u_{k+i-1}(x)=\pm c_{i}\) for a.e. \(x\in\partial\Omega\). But as, for any measurable set \(A\subseteq\mathbb{R}^{d-1}\) that is a.e. proper (neither negligible nor of negligible complement), the function \(\mathbb{1}_{A}\) belongs to \(H^{s}\) if and only if \(s<\frac{1}{2}\), and as \(\Delta u_{k+i-1}\in H^{\frac{3}{2}}(\partial\Omega)\) (because \(u_{k+i-1}\in H^{4}(\Omega)\)), we see that \(\Delta u_{k+i-1}\) is actually a.e. constant. Considering its opposite if needed, we may assume that \(\Delta u_{k+i-1}=c_{i}\) on \(\partial\Omega\) for all \(1\leq i\leq p\). But then for \(1\leq i<j\leq p\), \(u_{ij}:=c_{j}u_{k+i-1}-c_{i}u_{k+j-1}\) is an eigenfunction of the bilaplacian in \(\Omega\), and it satisfies the additional condition \(\Delta u_{ij}=0\) on \(\partial\Omega\). Corollary 5 asserts that this cannot occur unless \(u_{k+i-1}\) and \(u_{k+j-1}\) are collinear, which is impossible.
We have shown that there exist \(1\leq i_{0}\leq p\) and \(V\in W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) such that \(\partial_{V}\mathcal{V}^{\Omega}(0)=0\) and \(\partial_{V}^{+}\Gamma_{k+i_{0}-1}^{\Omega}(0)\neq 0\). Replacing \(V\) by \(-V\), there actually exist \(1\leq i_{+},i_{-}\leq p\) and \(V_{+},V_{-}\in W^{5,\infty}(\mathbb{R}^{d},\mathbb{R}^{d})\) such that \(\partial_{V_{\pm}}\mathcal{V}^{\Omega}(0)=0\) and \(\pm\partial_{V_{\pm}}^{+}\Gamma_{k+i_{\pm}-1}^{\Omega}(0)>0\). Note that as \(\partial_{V_{+}}^{+}\Gamma_{k+p-1}^{\Omega}(0)\) corresponds to the greatest value in \(\mathcal{D}_{V_{+}}\), whereas \(\partial_{V_{-}}^{+}\Gamma_{k}^{\Omega}(0)\) is the lowest value in \(\mathcal{D}_{V_{-}}\), we end up with \(\partial_{V_{-}}^{+}\Gamma_{k}^{\Omega}(0)<0\) and \(0<\partial_{V_{+}}^{+}\Gamma_{k+p-1}^{\Omega}(0)\).
The conclusions of the present section can be combined to obtain information on the function \(g_{u}\) defined in Proposition 4 in the case of an optimal domain \(\Omega\). This is the purpose of the next section.
## 4 Simplicity of the eigenvalue and optimality conditions
In this section we prove Theorem 3, and discuss the corresponding optimality condition.
Proof of Theorem 3.: Let \(\Omega\) be a \(C^{4}\) optimal shape for problem (4). The simplicity of \(\Gamma(\Omega)\) is a direct consequence of Proposition 9: if one had \(\Gamma_{1}(\Omega)=\Gamma_{2}(\Omega)\), there would exist a vector field \(V\) such that \(\partial_{V}\mathcal{V}^{\Omega}(0)=0\) and \(\partial_{V}^{+}\Gamma_{1}^{\Omega}(0)<0\). Then, we would have \(\partial_{V}^{+}G_{1}^{\Omega}(0)<0\), hence \(\Omega\) would not minimise \(\omega\mapsto|\omega|^{\frac{4}{d}}\Gamma_{1}(\omega)\), so it would not solve (4).
Now that simplicity has been proved, we shall invoke Corollary 8. Indeed, as \(\Omega\) is an optimal shape, \(0\) is a critical point of \(G_{1}^{\Omega}\), hence we get the optimality condition
\[0=\int_{\partial\Omega}\left[\frac{4\Gamma(\Omega)}{d|\Omega|}-\left(\Delta u \right)^{2}\right]V\cdot\vec{n},\qquad\forall V\in W^{5,\infty}(\mathbb{R}^{d },\mathbb{R}^{d}).\]
We conclude that \(\Delta u=\pm\alpha\) a.e. on \(\partial\Omega\), where \(\alpha:=\sqrt{\frac{4\Gamma(\Omega)}{d|\Omega|}}\). Moreover, as \(\Delta u\) is \(H^{\frac{3}{2}}(\partial\Omega)\), it is a.e. constant on each connected component of \(\partial\Omega\) (otherwise, as in the proof of Proposition 9, it would be \(H^{s}(\partial\Omega)\) only for \(s<1/2\)).
Note that the optimality condition given in Theorem 3 is actually fulfilled by any \(C^{4}\) regular shape \(\Omega\) with simple principal eigenvalue and such that \(0\) is a critical point for \(G_{1}^{\Omega}\). This motivates the following definition.
**Definition 10**.: _An open set \(\Omega\) is a critical shape (for the principal eigenvalue) if any \(L^{2}\)-normalised first eigenfunction \(u\) on \(\Omega\) is such that \(\Delta u\) is a.e. constant equal to \(\pm\sqrt{\frac{4\Gamma(\Omega)}{d|\Omega|}}\) on each connected component of \(\partial\Omega\)._
**Remark**.: _Any ball \(B\) is a critical shape (derive \(G_{1}^{B}\) in the direction of a radially symmetric vector field)._
Considering the order reduction principle proved in Section 2 and the optimality condition derived in the present section, we are equipped for proving Theorem 1 and Theorem 2.
## 5 Proofs of Theorem 1 and Theorem 2
In this section, we combine the order reduction principle (Proposition 4) and the optimality condition (Theorem 3) to provide proofs for Theorem 1 and Theorem 2. Let us begin with the most straightforward, which is undoubtedly Theorem 2. With Theorem 3 in mind, we see that it is enough to prove Theorem 2 for critical shapes, which is performed below.
**Theorem 11**.: _Let \(\Omega\) be a critical shape satisfying (RG). Let \(u\) be a first eigenfunction on \(\Omega\) such that \(\partial_{n}\Delta u\) is constant on \(\partial\Omega\). Then, \(\Omega\) is a ball._
Proof.: Without loss of generality, we assume \(u\) to be \(L^{2}\)-normalised. Since \(\Omega\) is a critical shape, we know that \(\Delta u\) is a.e. constant on each connected component of \(\partial\Omega\). But \(\partial\Omega\) is assumed connected, hence \(\Delta u\) is constant on \(\partial\Omega\), equal to \(\pm\alpha\), where \(\alpha=\sqrt{\frac{4\Gamma(\Omega)}{d|\Omega|}}\). Considering \(-u\) if needed, we shall assume that \(\Delta u=\alpha\) a.e. on \(\partial\Omega\), and, consequently, \(g_{u}=\sqrt{\frac{4}{d|\Omega|}}>0\) a.e. not only on \(\partial\Omega\) but in the whole \(\Omega\). Applying the order reduction principle (Proposition 4), we obtain that \(z_{u}=\frac{\Delta u}{\sqrt{\mu}}+u-g_{u}\) is negative and satisfies (5). Moreover, the fact that \(\partial_{n}\Delta u\) remains constant on the boundary (combined with the fact that \(g_{u}\) is constant) shows that \(\partial_{n}z_{u}\) is constant on \(\partial\Omega\). Thus \(z_{u}<0\) satisfies an overdetermined problem of order 2, and we conclude applying Serrin's Theorem [12, Theorem 2].
We now turn to the proof of Theorem 1. To do so, we use the variational formulation of the first eigenvalue involving \(z_{u}\) given by Proposition 4. This new expression is interesting in the sense that it allows using symmetrisation techniques available for one-signed \(H^{1}_{0}(\Omega)\) functions. That is why we recall the Schwarz symmetrisation (see the classical [12] for a general discussion on level set rearrangements).
**Definition 12**.: _Let \(\Omega\) be an open set and \(u\) be a measurable function on \(\Omega\). Let \(B\) be a ball of the same volume as \(\Omega\). The nonincreasing spherically symmetric rearrangement (also called Schwarz symmetrisation) of \(u\) is the measurable function \(u^{*}\) defined on \(B\) such that its radial part is the generalised inverse of the distribution function \(\mu_{u}\) of \(u\), that is_
\[u^{*}(x):=\mu_{u}^{[-1]}(|B_{|x|}|)=\inf\{t:\mu_{u}(t)\leq|B_{|x|}|\}=\inf\{t:| \{u>t\}|\leq|B_{|x|}|\},\]
_where \(B_{r}\) denotes the ball of radius \(r\) and of same center as \(B\). We recall that \(u\) and \(u^{*}\) are equimeasurable and that if \(u\in H^{1}_{0}(\Omega)\) is nonnegative, \(u^{*}\in H^{1}_{0}(B)\). Moreover, for any \(z\in H^{1}_{0}(\Omega)\), we define \(z^{\#}:=-(-z)^{*}\)._
Then, Theorem 1 will be a consequence of the following result.
**Theorem 13**.: _Let \(\Omega\) be a critical shape satisfying (RG) and \(B\) a ball such that \(|\Omega|=|B|\). Let \(u\) be a first \(L^{2}\)-normalised eigenfunction on \(\Omega\) and \(u_{B}\) a first \(L^{2}\)-normalised eigenfunction on \(B\). Assume that_
\[\left|\int_{\Omega}u\right|\leq\left|\int_{B}u_{B}\right|.\] (M)
_Then, inequality (3) holds. Moreover, if (M) is strict, (3) is also strict. Finally, if \(\,\Gamma(\Omega)=\Gamma(B)\), \(\Omega\) has to be a translation of \(B\)._
Proof.: Proceeding as in the beginning of the proof of Theorem 11, we obtain that \(g_{u}=\sqrt{\frac{4}{d|\Omega|}}>0\) a.e. in \(\Omega\). Since \(B\) is also a critical shape satisfying (RG) (recall the remark below the definition of critical shapes), the same applies to \(u_{B}\), and we conclude that also \(g_{u_{B}}=\sqrt{\frac{4}{d|\Omega|}}\) a.e. in \(B\).
In particular, \(g_{u}\geq 0\), hence \(z_{u}<0\), and \(z_{u}^{\#}\) is a negative \(H^{1}_{0}(B)\) function. Moreover, the properties of the Schwarz symmetrisation ensure that \(\int_{\Omega}|\nabla z_{u}|^{2}\geq\int_{B}|\nabla z_{u}^{\#}|^{2}\), \(\int_{\Omega}z_{u}^{2}=\int_{B}(z_{u}^{\#})^{2}\), and \(\int_{\Omega}z_{u}=\int_{B}z_{u}^{\#}\). Therefore, thanks to Proposition 4,
\[\frac{1}{\sqrt{\Gamma(\Omega)}}=-\frac{\int_{\Omega}|z_{u}|^{2}+g_{u}\int_{ \Omega}z_{u}}{\int_{\Omega}|\nabla z_{u}|^{2}}\leq-\frac{\int_{B}|z_{u}^{\#}| ^{2}+g_{u}\int_{B}z_{u}^{\#}}{\int_{B}|\nabla z_{u}^{\#}|^{2}}\leq-\min_{z\in H ^{1}_{0}(B)}\frac{\int_{B}z^{2}+g_{u_{B}}\int_{B}(2z-z_{u}^{\#})}{\int_{B}| \nabla z|^{2}}.\]
Note that the numerator in the above quotients is always nonpositive (from the first equality), which justifies the first inequality. Now, we claim that \(\int_{B}z_{u}^{\#}\leq\int_{B}z_{u_{B}}\). Indeed, if true, this result would lead to
\[\frac{1}{\sqrt{\Gamma(\Omega)}}\leq-\min_{z\in H^{1}_{0}(B)}\frac{\int_{B}z^{ 2}+g_{u_{B}}\int_{B}(2z-z_{u_{B}})}{\int_{B}|\nabla z|^{2}}=\frac{1}{\sqrt{ \Gamma(B)}},\]
the last equality coming once again from Proposition 4 applied to \(B\). This would in turn give the Faber-Krahn inequality \(\Gamma(\Omega)\geq\Gamma(B)\). Note also that if \(\int_{B}z_{u}^{\#}\leq\int_{B}z_{u_{B}}\) is strict, then \(\Gamma(\Omega)\geq\Gamma(B)\) is also strict.
Hence it remains only to prove that \(\int_{B}z_{u}^{\#}\leq\int_{B}z_{u_{B}}\). But thanks to the properties of the Schwarz rearrangement, \(\int_{B}z_{u}^{\#}=\int_{\Omega}z_{u}\). Then, using the expression of \(z_{u}\) combined with the fact that \(\int_{\Omega}\Delta u=0\), we find \(\int_{B}z_{u}^{\#}=\int_{\Omega}u-|\Omega|g_{u}\). In the same way, \(\int_{B}z_{u_{B}}=\int_{B}u_{B}-|B|g_{u_{B}}\). Thus, as \(|\Omega|=|B|\) and \(g_{u}=g_{u_{B}}\), we obtain that \(\int_{B}z_{u}^{\#}\leq\int_{B}z_{u_{B}}\) if and only if \(\int_{\Omega}u\leq\int_{B}u_{B}\), which holds by assumption. Moreover, if one of these inequalities is strict, the other also holds strictly.
Eventually, if \(\Gamma(\Omega)=\Gamma(B)\), all our inequalities become equalities. In particular, \(\int_{\Omega}|\nabla z_{u}|^{2}=\int_{B}|\nabla z_{u}^{\#}|^{2}\), thus we apply [13, Theorem 2.2]. This is possible since, on the one hand, as \(u\) is analytic in \(\Omega\), \(z_{u}\) is also analytic, hence \(|\{z_{u}=t\}|=0\) for all \(\inf z_{u}<t<\sup z_{u}\). On the other hand, proceeding as in [11, Proposition 5], \(u\) can be proved to be bounded, hence, thanks to classical elliptic regularity ([1, Theorem 2.20]), it is actually \(W^{4,p}(\Omega)\), and in particular \(C^{3,\gamma}(\overline{\Omega})\), \(0<\gamma<1\), due to Sobolev embeddings. Eventually, \(z_{u}\) is Lipschitz in \(\mathbb{R}^{d}\). Then, [13, Theorem 2.2] yields that, up to translation, \(z_{u}=z_{u}^{\#}\), and in particular that \(\Omega\) is a ball.
Proof of Theorem 1.: If \(\Omega\) is an optimal shape satisfying (RG), Theorem 3 shows that \(\Gamma(\Omega)\) is simple and that \(\Omega\) is a critical shape. Assume that its \(L^{2}\)-normalized principal eigenfunction \(u\) verifies (M). Theorem 13 then applies and shows that the inequality \(\Gamma(\Omega)\geq\Gamma(B)\) holds. But as \(\Omega\) is optimal, we conclude that \(\Gamma(\Omega)=\Gamma(B)\), hence Theorem 13 implies that \(\Omega=B\) up to a translation. In particular, (M) turns out to be an equality.
Theorem 1 relies on the central hypothesis (M). Unfortunately, the inequality \(\left|\int_{\Omega}u\right|\leq\left|\int_{B}u_{B}\right|\) does not seem easy to check in general. For instance, to estimate the mean value of \(u\) on the optimal domain \(\Omega\), one could try to use the inequality
\[\int_{\Omega}u\leq g_{u}|\Omega|=\sqrt{\frac{4|\Omega|}{d}}, \tag{13}\]
coming from the fact that \(\int_{\Omega}z_{u}\leq 0\) (recall Proposition 4). However, as \(B\) is a critical shape, \(u_{B}\) satisfies (13) as well, hence, for proving (M), it is illusory to try to show the reverse inequality \(\sqrt{\frac{4|\Omega|}{d}}\leq\int_{B}u_{B}\), since it would mean that \(z_{u_{B}}=0\).
The above discussion illustrates a general fact: for showing (M), any argument based only on the fact that \(\Omega\) is a critical shape is doomed to failure, since the same argument applied to \(B\) (which is also a critical shape) would then lead to the reverse inequality. Nevertheless, even if (M) is a restrictive assumption, Theorem 1 has two interesting consequences that we shall explain in Section 6.
## 6 Consequences of Theorem 1
The first immediate corollary of Theorem 1 regards the volume of one of the nodal domains of \(u\).
**Corollary 14**.: _With the hypotheses of Theorem 1, if \(\int_{\Omega}u>0\), writing \(\Omega_{+}:=\{u>0\}\),_
\[\sqrt{|\Omega_{+}|}>\int_{B}u_{B}. \tag{14}\]
Proof.: Assume by contradiction that \(\sqrt{|\Omega_{+}|}\leq\int_{B}u_{B}\). Using that \(\int_{\Omega}u\leq\int_{\Omega_{+}}u\leq\sqrt{|\Omega_{+}|}\sqrt{\int_{\Omega _{+}}u^{2}}\leq\sqrt{|\Omega_{+}|}\), we get that (M) holds. Theorem 1 indicates that \(\Omega=B\) up to a translation. Therefore, all the above inequalities, in particular Hölder's, are equalities. This means that \(u=1\) in \(\Omega_{+}=B\), a contradiction.
This result confirms that it might be interesting to evaluate the mean value of \(u_{B}\). This is possible since \(u_{B}\) can be computed explicitly, as stated in the next result, whose proof is detailed in the appendix.
**Proposition 15**.: _Let \(B\) be the ball \(B(0,R)\). The function \(u_{B}\) is radially symmetric, and \(u_{B}\) or \(-u_{B}\) is given by the formula_
\[u_{B}(r)=\frac{1}{\sqrt{d|B|}}\left[\frac{J_{\nu}(k_{\nu}r)}{J_{\nu}(k_{\nu}R)}-\frac{I_{\nu}(k_{\nu}r)}{I_{\nu}(k_{\nu}R)}\right]\left(\frac{r}{R}\right)^{-\nu}, \tag{15}\]
_where \(\nu:=d/2-1\), \(J_{\nu}\) and \(I_{\nu}\) stand for the Bessel and modified Bessel functions of order \(\nu\), and \(k_{\nu}:=\gamma_{\nu}/R\), \(\gamma_{\nu}\) being the first positive zero of \(f_{\nu}\) defined by_
\[f_{\nu}(r)=\left[\frac{J_{\nu+1}}{J_{\nu}}(r)+\frac{I_{\nu+1}}{I_{\nu}}(r)\right]r^{d-1}.\]
_Moreover,_
\[\int_{B}u_{B}=\frac{\sqrt{d|B|}}{\gamma_{\nu}}\left[\frac{J_{\nu+1}}{J_{\nu}}(\gamma_{\nu})-\frac{I_{\nu+1}}{I_{\nu}}(\gamma_{\nu})\right]=2\frac{\sqrt{d|B|}}{\gamma_{\nu}}\frac{J_{\nu+1}}{J_{\nu}}(\gamma_{\nu}). \tag{16}\]
Note that it is easy to evaluate (16) numerically. Indeed, in Python 3 for instance, the module scipy.special of the package scipy directly provides the functions jv and iv, corresponding respectively to the Bessel functions \(J_{\nu}\) and \(I_{\nu}\). Then it remains to compute \(\gamma_{\nu}\), which can be done by bisection thanks to (19), as long as one knows \(j_{\nu,1}\) and \(j_{\nu,2}\) (where, for \(n\in\mathbb{N}^{*}\), \(j_{\nu,n}\) are the positive zeros of \(J_{\nu}\)). To that end, observe, as explained in Theorem 2.1 and Theorem 2.2 of [13], that the zeros of \(J_{\nu}\) can be approximated by computing the eigenvalues of some matrix.
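As an illustration, the following Python 3 sketch evaluates (16); it reproduces (approximately) the values of Table 1. The helper first_two_bessel_zeros is ours, since scipy's jn_zeros only handles integer orders whereas \(\nu=d/2-1\) may be a half-integer; the bracketing of \(\gamma_{\nu}\) between \(j_{\nu,1}\) and \(j_{\nu,2}\) relies on (19).

```python
import numpy as np
from scipy import special
from scipy.optimize import brentq

def first_two_bessel_zeros(nu, r_max=30.0, step=0.05):
    # Coarse scan for sign changes of J_nu, refined with Brent's method;
    # works for non-integer orders as well.
    zeros, r, prev = [], step, special.jv(nu, step)
    while len(zeros) < 2 and r < r_max:
        nxt = special.jv(nu, r + step)
        if prev * nxt < 0:
            zeros.append(brentq(lambda x: special.jv(nu, x), r, r + step))
        prev, r = nxt, r + step
    return zeros

def mean_value_uB(d, volume=1.0):
    nu = d / 2 - 1
    # g below has the same first positive zero gamma_nu as f_nu; by the
    # interlacing (19), it lies strictly between j_{nu,1} and j_{nu,2}.
    g = lambda r: (special.jv(nu, r) * special.iv(nu + 1, r)
                   + special.jv(nu + 1, r) * special.iv(nu, r))
    j1, j2 = first_two_bessel_zeros(nu)
    gamma = brentq(g, j1 + 1e-9, j2 - 1e-9)
    # Formula (16); the absolute value accounts for the sign ambiguity
    # between u_B and -u_B.
    value = 2 * np.sqrt(d * volume) / gamma \
            * special.jv(nu + 1, gamma) / special.jv(nu, gamma)
    return abs(value)

for d in range(4, 10):
    print(d, round(mean_value_uB(d), 4))
```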
Table 1 gives the value of \(\int_{B}u_{B}\) in the case where \(B\) is the ball of volume 1. We also give the minimum volume allowed for \(\Omega_{+}\) to satisfy (14), that is, \((\int_{B}u_{B})^{2}\).
Let us now discuss another consequence of Theorem 1. We already mentioned that it is not possible to use only the criticality of an optimal shape for proving (M). On the other hand, the optimality of some shape \(\Omega\) means, by definition, that
\[\int_{\Omega}|\Delta u|^{2}\leq\int_{B}|\Delta u_{B}|^{2},\]
where \(u\) (resp. \(u_{B}\)) is an \(L^{2}\)-normalised first eigenfunction on \(\Omega\) (resp. \(B\)). From this inequality, one may wonder whether it is possible to deduce (M). This problem is actually not so far from a maximum principle type property, which classically asserts that if \(v_{1},v_{2}\in H^{1}_{0}(\omega)\) satisfy \(-\Delta v_{1}\leq-\Delta v_{2}\), then \(v_{1}\leq v_{2}\) in \(\omega\). In our situation, it would be desirable to convert these pointwise inequalities into integral ones. Therefore, even if it does not immediately answer our initial concern, it would be interesting to study to which extent the following \(L^{p}\) norm and mean value formulations of the maximum principle hold: for \(v_{1}\in W^{1,p}_{0}(\omega_{1})\cap W^{2,p}(\omega_{1})\) and \(v_{2}\in W^{1,p}_{0}(\omega_{2})\cap W^{2,p}(\omega_{2})\),
\[\int_{\omega_{1}}|\Delta v_{1}|^{p}\leq\int_{\omega_{2}}|\Delta v _{2}|^{p} \Longrightarrow \int_{\omega_{1}}|v_{1}|^{p}\leq\int_{\omega_{2}}|v_{2}|^{p}, \tag{17}\] \[\int_{\omega_{1}}(-\Delta v_{1})^{p}\leq\int_{\omega_{2}}(-\Delta v _{2})^{p} \Longrightarrow \int_{\omega_{1}}v_{1}^{p}\leq\int_{\omega_{2}}v_{2}^{p}. \tag{18}\]
At this time, we were not able to answer the above (quite vague) questions, and could only argue that (18) cannot hold in full generality for \(p=1\), since it would imply that any \(H^{2}_{0}\) function has zero mean value. Anyway, in the remainder, we state an interesting consequence of Theorem 1 using the standard maximum principle combined with Talenti's comparison principle, which we recall below (see [14]).
Table 1: Value of \(\int_{B}u_{B}\) for several dimensions. Here, \(B\) is chosen to be the ball of volume 1.

| \(d\) | \(\int_{B}u_{B}\) | \((\int_{B}u_{B})^{2}\) |
| --- | --- | --- |
| 4 | 0.6056 | 0.3668 |
| 5 | 0.5643 | 0.3185 |
| 6 | 0.5308 | 0.2817 |
| 7 | 0.5028 | 0.2528 |
| 8 | 0.4790 | 0.2294 |
| 9 | 0.4583 | 0.2101 |
**Theorem 16**.: _Let \(\omega\) be an open set and \(\omega^{*}\) its Schwarz symmetrisation. Let \(f\in L^{2}(\omega)\) and let \(u\in H^{2}(\omega)\) be the solution of_
\[\begin{cases}-\Delta u=f&in\quad\omega,\\ u=0&on\quad\partial\omega.\end{cases}\]
_Let \(f^{*},u^{*}\in L^{2}(\omega^{*})\) be the Schwarz symmetrisations of \(f,u\) and let \(v\in H^{2}(\omega^{*})\) solve_
\[\begin{cases}-\Delta v=f^{*}&in\quad\omega^{*},\\ v=0&on\quad\partial\omega^{*}.\end{cases}\]
_Assume that \(u\geq 0\). Then,_
\[v\geq u^{*}\qquad\text{a.e. in }\omega^{*}.\]
**Remark**.:
1. _The hypothesis \(u\geq 0\) is not made precise in [15], but it is mentioned in [11, Theorem 3.1.1]. This comes from the definition of the Schwarz symmetrisation for signed functions, which differs in the two references. Here, in view of Definition 12, we conform to the convention adopted in [11]._
2. _We mention that, as long as \(u\) is assumed nonnegative, the Schwarz symmetrisation might be replaced by the Talenti symmetrisation of \(f\in L^{2}(\omega)\), whose definition is given in [15]._
## Appendix
Proof of Proposition 15.: For readability, we omit the subscript \(B\) in \(u_{B}\). According to [1], in \(B\), the first eigenfunction is radially symmetric and of the form, for all \(r\in[0,R[\),
\[u(r)=\left(AJ_{\nu}(kr)+BI_{\nu}(kr)\right)r^{-\nu},\]
where \(k:=\Gamma(B)^{\frac{1}{4}}\). Then, using the identities \(J_{\nu}^{\prime}(x)=\frac{\nu J_{\nu}(x)}{x}-J_{\nu+1}(x)\) and \(I_{\nu}^{\prime}(x)=\frac{\nu I_{\nu}(x)}{x}+I_{\nu+1}(x)\) we find
\[\partial_{r}u(r)=\left(-AJ_{\nu+1}(kr)+BI_{\nu+1}(kr)\right)kr^{-\nu}.\]
Now, for \(u\) to fulfil the condition \(u(R)=\partial_{r}u(R)=0\) while being non-trivial, one observes that the matrix
\[M=\left(\begin{array}{cc}J_{\nu}(kR)&I_{\nu}(kR)\\ -J_{\nu+1}(kR)&I_{\nu+1}(kR)\end{array}\right)\]
must have a non-trivial kernel. In other words, its determinant needs to vanish, hence
\[J_{\nu}(kR)I_{\nu+1}(kR)+J_{\nu+1}(kR)I_{\nu}(kR)=0,\qquad\text{that is,}\qquad f_{\nu}(kR)=0.\]
Conversely, as soon as \(k\) satisfies this equation, \(u\) will be solution of an eigenvalue problem in \(B\) with Dirichlet boundary conditions. Consequently, \(k\) is necessarily the lowest positive solution of this equation, meaning that \(k=k_{\nu}\).
We also invoke the article [1, equation (2.2)] according to which the positive zeros \(\gamma_{\nu,n}\) of \(f_{\nu}\) and the positive zeros \(j_{\nu,n}\) of \(J_{\nu}\) interlace in the following way
\[j_{\nu,n}<\gamma_{\nu,n}<j_{\nu,n+1}. \tag{19}\]
In particular, \(M\neq 0\) and it has a one-dimensional kernel generated, in virtue of the identity \(u(R)=0\), by the vector \((I_{\nu}(\gamma_{\nu}),-J_{\nu}(\gamma_{\nu}))\). In other words, there exists a real number \(\beta\) such that
\[\left(\begin{array}{c}A\\ B\end{array}\right)=\beta\left(\begin{array}{c}I_{\nu}(\gamma_{\nu})\\ -J_{\nu}(\gamma_{\nu})\end{array}\right)\]
Finding the values of \(A\) and \(B\) is thus equivalent to determining \(\beta\). For that purpose, we use the normalisation of \(u\), i.e.
\[\begin{split} 1=\int_{\Omega}u^{2}=\beta^{2}|\mathbb{S}^{d-1}|& \left[I_{\nu}(\gamma_{\nu})^{2}\int_{0}^{R}J_{\nu}(kr)^{2}r^{d-2\nu-1}\right. \\ &\left.+J_{\nu}(\gamma_{\nu})^{2}\int_{0}^{R}I_{\nu}(kr)^{2}r^{d-2 \nu-1}\right.\\ &\left.-2I_{\nu}(\gamma_{\nu})J_{\nu}(\gamma_{\nu})\int_{0}^{R}I_{ \nu}(kr)J_{\nu}(kr)r^{d-2\nu-1}\right].\end{split} \tag{20}\]
As \(d-2\nu-1=1\), it turns out that we need to compute integrals of products of Bessel functions against \(r\). That is why we use the Gradshteyn and Ryzhik collection [1, section 6.521, formula 1], that is, for all \(\alpha\neq\beta\in\mathbb{C}\) and \(\nu>-1\),
\[\int_{0}^{1}xJ_{\nu}(\alpha x)J_{\nu}(\beta x)=\frac{\beta J_{\nu-1}(\beta)J_ {\nu}(\alpha)-\alpha J_{\nu-1}(\alpha)J_{\nu}(\beta)}{\alpha^{2}-\beta^{2}}= \frac{\alpha J_{\nu+1}(\alpha)J_{\nu}(\beta)-\beta J_{\nu+1}(\beta)J_{\nu}( \alpha)}{\alpha^{2}-\beta^{2}}. \tag{21}\]
We apply this formula with \(\alpha=i\gamma_{\nu}\) and \(\beta=\gamma_{\nu}\), and find
\[\int_{0}^{R}I_{\nu}(k_{\nu}r)J_{\nu}(k_{\nu}r)r^{d-2\nu-1}=\frac{R^{2}}{2 \gamma_{\nu}}[I_{\nu+1}(\gamma_{\nu})J_{\nu}(\gamma_{\nu})+J_{\nu+1}(\gamma_{ \nu})I_{\nu}(\gamma_{\nu})]=\frac{R^{2}}{2\gamma_{\nu}}f_{\nu}(\gamma_{\nu})=0.\]
For the other integrals, we first remark that letting \(\alpha,\beta\in\mathbb{R}\) and \(\alpha\to\beta\) in (21), one obtains
\[\int_{0}^{1}xJ_{\nu}(\beta x)^{2}=\frac{J_{\nu}(\beta)^{2}}{2\beta}\frac{d}{d\beta}\left[\frac{\beta J_{\nu+1}(\beta)}{J_{\nu}(\beta)}\right]=\frac{1}{2}\left[J_{\nu+1}(\beta)^{2}+J_{\nu}(\beta)^{2}-\frac{2\nu}{\beta}J_{\nu+1}(\beta)J_{\nu}(\beta)\right]. \tag{22}\]
Hence, with \(\beta=\gamma_{\nu}\), we find
\[\int_{0}^{R}J_{\nu}(k_{\nu}r)^{2}r^{d-2\nu-1}=\frac{R^{2}}{2}\left[J_{\nu+1}(\gamma_{\nu})^{2}+J_{\nu}(\gamma_{\nu})^{2}-\frac{2\nu}{\gamma_{\nu}}J_{\nu+1}(\gamma_{\nu})J_{\nu}(\gamma_{\nu})\right].\]
But because both extreme members in (22) depend holomorphically on \(\beta\), this formula remains true even when \(\beta\in\mathbb{C}\), thanks to the isolation of zeros; hence, we can apply it to \(\beta=i\gamma_{\nu}\):
\[\int_{0}^{R}I_{\nu}(k_{\nu}r)^{2}r^{d-2\nu-1}=\frac{R^{2}}{2}\left[-I_{\nu+1}(\gamma_{\nu})^{2}+I_{\nu}(\gamma_{\nu})^{2}-\frac{2\nu}{\gamma_{\nu}}I_{\nu+1}(\gamma_{\nu})I_{\nu}(\gamma_{\nu})\right].\]
Eventually, if \(E\) denotes the term between the brackets in (20), then
\[\begin{split} E&=\tfrac{R^{2}}{2}\left[I_{\nu}(\gamma_{\nu})^{2}\left(J_{\nu+1}(\gamma_{\nu})^{2}+J_{\nu}(\gamma_{\nu})^{2}-\tfrac{2\nu}{\gamma_{\nu}}J_{\nu+1}(\gamma_{\nu})J_{\nu}(\gamma_{\nu})\right)\right.\\ &\qquad\left.+J_{\nu}(\gamma_{\nu})^{2}\left(-I_{\nu+1}(\gamma_{\nu})^{2}+I_{\nu}(\gamma_{\nu})^{2}-\tfrac{2\nu}{\gamma_{\nu}}I_{\nu+1}(\gamma_{\nu})I_{\nu}(\gamma_{\nu})\right)\right]\\ &=\tfrac{R^{2}}{2}\left[\left(I_{\nu}(\gamma_{\nu})J_{\nu+1}(\gamma_{\nu})-J_{\nu}(\gamma_{\nu})I_{\nu+1}(\gamma_{\nu})\right)\left(I_{\nu}(\gamma_{\nu})J_{\nu+1}(\gamma_{\nu})+J_{\nu}(\gamma_{\nu})I_{\nu+1}(\gamma_{\nu})\right)\right.\\ &\qquad\left.+2J_{\nu}(\gamma_{\nu})^{2}I_{\nu}(\gamma_{\nu})^{2}-\tfrac{2\nu}{\gamma_{\nu}}I_{\nu}(\gamma_{\nu})J_{\nu}(\gamma_{\nu})\left(J_{\nu+1}(\gamma_{\nu})I_{\nu}(\gamma_{\nu})+I_{\nu+1}(\gamma_{\nu})J_{\nu}(\gamma_{\nu})\right)\right]\\ &=J_{\nu}(\gamma_{\nu})^{2}I_{\nu}(\gamma_{\nu})^{2}R^{2},\end{split}\]

the last equality following from \(f_{\nu}(\gamma_{\nu})=0\) applied twice.
Using \(|\mathbb{S}^{d-1}|\tfrac{R^{d}}{d}=|\Omega|\), we have that \(\beta^{-2}=\tfrac{d}{R^{d}}|\Omega|E\), hence
\[A=\frac{R^{\nu}}{J_{\nu}(\gamma_{\nu})\sqrt{d|\Omega|}},\qquad B=-\frac{R^{ \nu}}{I_{\nu}(\gamma_{\nu})\sqrt{d|\Omega|}}. \tag{23}\]
In particular,
\[u(r)=\frac{1}{\sqrt{d|\Omega|}}\left(\frac{J_{\nu}(k_{\nu}r)}{J_{\nu}(k_{\nu}R)}-\frac{I_{\nu}(k_{\nu}r)}{I_{\nu}(k_{\nu}R)}\right)\left(\frac{r}{R}\right)^{-\nu},\]
which corresponds to (15). After having obtained the expression of \(u\), we would like to compute its integral, that is
\[\int_{B}u=A|\mathbb{S}^{d-1}|\int_{0}^{R}J_{\nu}(kr)r^{d-1-\nu}+B|\mathbb{S}^{d-1}|\int_{0}^{R}I_{\nu}(kr)r^{d-1-\nu}.\]
Observing that \(d-1-\nu=\nu+1\), we advantageously use formulas 5 and 7 from section 6.561 of [10], which read
\[\int_{0}^{1}x^{\nu+1}J_{\nu}(\alpha x)=\alpha^{-1}J_{\nu+1}(\alpha),\qquad\int_ {0}^{1}x^{\nu+1}I_{\nu}(\alpha x)=\alpha^{-1}I_{\nu+1}(\alpha).\]
Applying these formulas to \(\alpha=\gamma_{\nu}\), we get
\[\int_{0}^{R}J_{\nu}(kr)r^{d-1-\nu}=\frac{R^{d-\nu}}{\gamma_{\nu}}J_{\nu+1}( \gamma_{\nu}),\qquad\int_{0}^{R}I_{\nu}(kr)r^{d-1-\nu}=\frac{R^{d-\nu}}{\gamma_ {\nu}}I_{\nu+1}(\gamma_{\nu}).\]
This, combined with (23) gives the desired formula
\[\int_{B}u= \frac{R^{\nu}}{\sqrt{d|B|}}|\mathbb{S}^{d-1}|\frac{R^{d-\nu}}{ \gamma_{\nu}}\left[\frac{J_{\nu+1}}{J_{\nu}}(\gamma_{\nu})-\frac{I_{\nu+1}}{I_{ \nu}}(\gamma_{\nu})\right]\] \[= \frac{\sqrt{d|B|}}{\gamma_{\nu}}\left[\frac{J_{\nu+1}}{J_{\nu}}( \gamma_{\nu})-\frac{I_{\nu+1}}{I_{\nu}}(\gamma_{\nu})\right].\]
Note that the last equality in (16) comes from the fact that \(f_{\nu}(\gamma_{\nu})=0\).
## Acknowledgements
I would like to thank Enea Parini and François Hamel for their valuable support and useful comments during the preparation of this document.
|
2302.10571
|
SurvLIMEpy: A Python package implementing SurvLIME
|
In this paper we present SurvLIMEpy, an open-source Python package that
implements the SurvLIME algorithm. This method allows to compute local feature
importance for machine learning algorithms designed for modelling Survival
Analysis data. Our implementation takes advantage of the parallelisation
paradigm as all computations are performed in a matrix-wise fashion which
speeds up execution time. Additionally, SurvLIMEpy assists the user with
visualization tools to better understand the result of the algorithm. The
package supports a wide variety of survival models, from the Cox Proportional
Hazards Model to deep learning models such as DeepHit or DeepSurv. Two types of
experiments are presented in this paper. First, by means of simulated data, we
study the ability of the algorithm to capture the importance of the features.
Second, we use three open source survival datasets together with a set of
survival algorithms in order to demonstrate how SurvLIMEpy behaves when applied
to different models.
|
Cristian Pachón-García, Carlos Hernández-Pérez, Pedro Delicado, Verónica Vilaplana
|
2023-02-21T09:54:32Z
|
http://arxiv.org/abs/2302.10571v2
|
# SurvLIMEpy: A Python package implementing SurvLIME
###### Abstract
In this paper we present **SurvLIMEpy**, an open-source Python package that implements the SurvLIME algorithm. This method allows to compute local feature importance for machine learning algorithms designed for modelling Survival Analysis data. Our implementation takes advantage of the parallelisation paradigm as all computations are performed in a matrix-wise fashion which speeds up execution time. Additionally, **SurvLIMEpy** assists the user with visualization tools to better understand the result of the algorithm. The package supports a wide variety of survival models, from the Cox Proportional Hazards Model to deep learning models such as DeepHit or DeepSurv. Two types of experiments are presented in this paper. First, by means of simulated data, we study the ability of the algorithm to capture the importance of the features. Second, we use three open source survival datasets together with a set of survival algorithms in order to demonstrate how **SurvLIMEpy** behaves when applied to different models.
**Keywords:** Interpretable Machine Learning; eXplainable Artificial Intelligence; Survival Analysis; Machine Learning; Python.
## 1 Introduction
Survival Analysis, also known as time-to-event analysis, is a field of Statistics that aims to study the time until a certain event of interest occurs. The reference approach for modelling the survival time is the Cox Proportional Hazards Model (Cox, 1972).
A survival study follows up a set of individuals, among whom some will eventually experience the event of interest. Due to the nature of these studies, it is common to find the problem of censorship. An event may not be observed in all individuals due to loss to follow-up, dropping out of the study, or the study finishing without the event occurring. The Cox Proportional Hazards Model takes into account the phenomenon of censorship, since the estimation of the parameters is done through a likelihood function that deals with censorship.
Nowadays, a wide set of machine learning models are able to tackle Survival Analysis problems. Among them, it is worth highlighting Random Survival Forest (Ishwaran et al., 2008), survival regression with accelerated failure time model in XGBoost (Barnwal et al., 2022) or adaptations of deep learning algorithms for Survival Analysis such as DeepHit (Lee et al., 2018) or DeepSurv (Katzman et al., 2018). These models have proven to have good prediction capacity, as reported in Wang et al. (2019); Spooner et al. (2020); Hao et al. (2021).
Despite the continuous advances in the development of machine learning algorithms for healthcare applications, their adoption by medical practitioners and policy makers in public health is still limited. One of the main reasons is the black-box nature of most of the models, in the sense that the reasoning behind their predictions is often hidden from the user.
Interpretable Machine Learning (or, equivalently, eXplainable Artificial Intelligence, XAI for short) is a recent field born out of the need to derive explanations from machine learning models (Barredo Arrieta et al., 2020). Two popular interpretability methods are LIME (Ribeiro et al., 2016) and SHAP (Lundberg and Lee, 2017), which provide explanations locally around a test example. Although they are extensively used (Barr Kumarakulasinghe et al., 2020), these algorithms are not designed to deal with time-to-event data, which invalidates their use in survival applications.
The SurvLIME algorithm (Kovalev et al., 2020), inspired by LIME, was the first method presented in order to interpret black box survival models. This method aims to compute local interpretability by means of providing a ranking among the set of features for a given individual \(\mathbf{x}_{*}\), but unlike the methods mentioned previously, it considers the time space to provide explanations. First, it generates a set of neighbours, then it obtains a set of predictions for the neighbours and, finally, a Cox Proportional Hazards Model (local explainer) is fitted, minimising the distance between the predictions provided by the black box model and the predictions provided by the local explainer. The coefficients of the local model serve as an explanation for the survival machine learning model.
In a recent work, SurvSHAP(t) (Krzyzinski et al., 2023) was presented, an interpretability method inspired by the SHAP algorithm and designed to explain time-to-event machine learning models. In short, the explanation is provided by means of a time-dependent function. The time space is included in the explanation with the goal of detecting possible dependencies between the features and the time. Alongside this method, implementations of the SurvLIME and SurvSHAP(t) algorithms were presented in the R package **survex** [Spytek et al., 2022].
In this work we present an open-source Python package, **SurvLIMEpy**, which implements the SurvLIME algorithm. The package offers some degrees of freedom to the users. For instance, they can choose how to obtain the neighbours of the test individual, the distance metric to be minimised, or whether to carry out a Monte-Carlo simulation. Furthermore, we provide details on how to use it, illustrated with some open source survival datasets as well as a simulation study aiming to analyse the performance of the SurvLIME algorithm. As far as we know, this is the first Python implementation of this method.
The rest of the paper is organised as follows: in Section 2, the most relevant parts of the SurvLIME algorithm are presented. In Section 3, we introduce the package implementation. Additionally, a use case is provided. In Section 4, we present some experiments conducted with both simulated and real datasets. In this section, we use some of the state-of-the-art machine learning and deep learning algorithms for Survival Analysis in order to show how **SurvLIMEpy** is used with those models. Finally, conclusions are presented in Section 5.
## 2 SurvLIME algorithm
In this section we summarise the SurvLIME algorithm, which was presented in Kovalev et al. [2020]. We first introduce some notation. Let \(\mathbf{D}=\{(\mathbf{x}_{j},\tau_{j},\delta_{j})\}\), \(j\in\{1,\ldots,n\}\), be a dataset of triplets that represent individuals, where \(\mathbf{x}_{j}\in\mathbb{R}^{p}\) is a \(p\)-dimensional feature vector, \(\tau_{j}\) is the time to event or lost to follow-up time, and \(\delta_{j}\) is the event indicator (1 means the event occurs and 0 otherwise). Let \(t_{1}<\cdots<t_{m+1}\) be the distinct times from \(\{\tau_{1},\ldots,\tau_{n}\}\). Let
\[\hat{H}\colon\mathbb{R}^{p}\times\mathbb{R}_{>0} \rightarrow\mathbb{R}_{>0}\] \[(\mathbf{x},t) \mapsto\hat{H}(\mathbf{x},t)\]
be the already trained machine learning model that predicts the Cumulative Hazard Function (CHF; see Appendix A for more details) of an individual \(\mathbf{x}\) at time \(t\). In SurvLIME [Kovalev et al., 2020], the authors prove that \(\hat{H}(\mathbf{x},t)\) can be written as
\[\hat{H}(\mathbf{x},t)=\sum_{i=1}^{m+1}\hat{H}_{i}(\mathbf{x})\mathds{1}_{ \Omega_{i}}(t), \tag{1}\]
where \(\Omega_{i}=[t_{i},t_{i+1})\), being \(t_{m+2}=t_{m+1}+\gamma\) (\(\gamma\) a small positive number) and \(\mathds{1}_{\Omega_{i}}(t)\) the indicator function (1 if \(t\in\Omega_{i}\) and 0 otherwise). In the original
paper, the authors did not specify any value for \(\gamma\). In our implementation, we use \(10^{-6}\).
It is important to note that the function \(\hat{H}_{i}(\mathbf{x})\) is constant in \(\Omega_{i}\). Therefore, if \(\Omega=\cup_{i=1}^{m+1}\Omega_{i}\) and \(g\colon\Omega\to\mathbb{R}\) is a monotone function, then
\[g(\hat{H}(\mathbf{x},t))=\sum_{i=1}^{m+1}g\left[\hat{H}_{i}(\mathbf{x})\right] \mathds{1}_{\Omega_{i}}(t). \tag{2}\]
Given the prediction provided by the black-box model \(\hat{H}(\mathbf{x}_{*},t)\) for an individual \(\mathbf{x}_{*}\), SurvLIME finds the importance of each feature by means of approximating \(\hat{H}(\mathbf{x}_{*},t)\) by the Cox Proportional Hazards Model, \(\hat{H}_{cox}(\mathbf{x}_{*},t)=H_{0}(t)\exp(\hat{\boldsymbol{\beta}}^{ \mathrm{T}}\mathbf{x}_{*})\) (see Appendix A for more details).
Applying Expression (1) to the Cox Proportional Hazards Model, a new expression for this model is obtained:
\[\hat{H}_{cox}(\mathbf{x}_{*},t)=\sum_{i=1}^{m+1}\left[\hat{H}_{0}(t_{i})\exp (\hat{\boldsymbol{\beta}}^{\mathrm{T}}\mathbf{x}_{*})\right]\mathds{1}_{ \Omega_{i}}(t). \tag{3}\]
After fixing the individual \(\mathbf{x}_{*}\), both functions, \(\hat{H}(\mathbf{x}_{*},t)\) and \(\hat{H}_{cox}(\mathbf{x}_{*},t)\), only depend on \(t\). Taking the logarithms \(\phi(t)=\ln[\hat{H}(\mathbf{x}_{*},t)]\) and \(\phi_{cox}(t)=\ln[\hat{H}_{cox}(\mathbf{x}_{*},t)]\) and taking into account Expression (2), the following can be derived:
\[\phi(t)=\sum_{i=1}^{m+1}\ln\left[\hat{H}_{i}(\mathbf{x}_{*})\right]\mathds{1 }_{\Omega_{i}}(t), \tag{4}\]
\[\phi_{cox}(t)=\sum_{i=1}^{m+1}\left(\ln\left[\hat{H}_{0}(t_{i})\right]+\hat{ \boldsymbol{\beta}}^{\mathrm{T}}\mathbf{x}_{*}\right)\mathds{1}_{\Omega_{i}}( t). \tag{5}\]
Let us consider \(\alpha(t)=\phi(t)-\phi_{cox}(t)=\sum_{i=1}^{m+1}(\ln[\hat{H}_{i}(\mathbf{x}_{ *})]-\ln[\hat{H}_{0}(t_{i})]-\hat{\boldsymbol{\beta}}^{\mathrm{T}}\mathbf{x} _{*})\mathds{1}_{\Omega_{i}}(t)\). Since \(\alpha(t)\) is a piecewise constant function and \(s(t)=t^{2}\) is a monotone function for \(t\geq 0\), we can use Expression (2) to write \((\phi-\phi_{cox})^{2}\) as a piecewise constant function,
\[s(\alpha(t))=\left(\phi(t)-\phi_{cox}(t)\right)^{2}=\sum_{i=1}^{m+1}\left(\ln \left[\hat{H}_{i}(\mathbf{x}_{*})\right]-\ln\left[\hat{H}_{0}(t_{i})\right]- \hat{\boldsymbol{\beta}}^{\mathrm{T}}\mathbf{x}_{*}\right)^{2}\mathds{1}_{ \Omega_{i}}(t). \tag{6}\]
The next step is to find a vector \(\hat{\mathbf{\beta}}\) that minimises the \(\ell^{2}\) distance between \(\phi\) and \(\phi_{cox}\). Taking into account that both functions are considered in \(\Omega\),
\[\begin{split}\mathrm{d}^{2}(\phi,\phi_{cox})&=\|\phi- \phi_{cox}\|_{2}^{2}\\ &=\int_{\Omega}\left[\phi(t)-\phi_{cox}(t)\right]^{2}\,dt\\ &=\int_{\Omega}\sum_{i=1}^{m+1}\left(\ln\left[\hat{H}_{i}( \mathbf{x}_{*})\right]-\ln\left[\hat{H}_{0}(t_{i})\right]-\hat{\mathbf{\beta}}^{ \mathrm{ T}}\mathbf{x}_{*}\right)^{2}\mathds{1}_{\Omega_{i}}(t)\,dt\\ &=\sum_{i=1}^{m+1}\left(\ln\left[\hat{H}_{i}(\mathbf{x}_{*}) \right]-\ln\left[\hat{H}_{0}(t_{i})\right]-\hat{\mathbf{\beta}}^{\mathrm{ T}}\mathbf{x}_{*}\right)^{2}\Delta t_{i},\end{split} \tag{7}\]
where \(\Delta t_{i}=(t_{i+1}-t_{i})\). We have used Expression (6) and that \(\ln[\hat{H}_{i}(\mathbf{x}_{*})]-\ln[\hat{H}_{0}(t_{i})]-\hat{\mathbf{\beta}}^{ \mathrm{ T}}\mathbf{x}_{*}\) does not depend on \(t\) to derive the previous expression.
Since SurvLIME is inspired by LIME, a set of \(N\) points \(\{\mathbf{e}_{1},\ldots,\mathbf{e}_{N}\}\) are generated in a neighbourhood of \(\mathbf{x}_{*}\), and the objective function is expressed in terms of these points and their corresponding weights, that depend on the distance between the \(N\) points and the individual \(\mathbf{x}_{*}\). Applying Expression (7) for all the neighbours \(\mathbf{e}_{k}\), the following objective is obtained:
\[\min_{\hat{\mathbf{\beta}}}\sum_{k=1}^{N}w_{k}\sum_{i=1}^{m+1}\left(\ln\left[\hat{ H}_{i}(\mathbf{e}_{k})\right]-\ln\left[\hat{H}_{0}(t_{i})\right]-\hat{\mathbf{\beta}}^{ \mathrm{ T}}\mathbf{e}_{k}\right)^{2}\Delta t_{i}. \tag{8}\]
For each point \(\mathbf{e}_{k}\) a weight is computed using a kernel function, \(w_{k}=K(\mathbf{x}_{*},\mathbf{e}_{k})\); the closer \(\mathbf{e}_{k}\) is to \(\mathbf{x}_{*}\), the higher the value of \(w_{k}\) is.
Finally, the authors introduce weights \(u_{ki}=\hat{H}_{i}(\mathbf{e}_{k})/\ln(\hat{H}_{i}(\mathbf{e}_{k}))\) in Expression (8), as the difference between \(\hat{H}(\mathbf{x},t)\) and \(\hat{H}_{cox}(\mathbf{x},t)\) could be significantly different from the distance between their logarithms. Therefore, the goal is to minimise the following expression:
\[\min_{\hat{\mathbf{\beta}}}\sum_{k=1}^{N}w_{k}\sum_{i=1}^{m+1}u_{ki}^{2}\left(\ln \left[\hat{H}_{i}(\mathbf{e}_{k})\right]-\ln\left[\hat{H}_{0}(t_{i})\right]- \hat{\mathbf{\beta}}^{\mathrm{ T}}\mathbf{e}_{k}\right)^{2}\Delta t_{i}. \tag{9}\]
Note that, for fixed \(k\) and \(i\), the expression inside the square in (9) is affine in \(\hat{\boldsymbol{\beta}}\), so its square is convex; multiplying it by the nonnegative constants \(w_{k}\), \(u_{ki}^{2}\) and \(\Delta t_{i}\) preserves convexity. Since a weighted sum of convex functions with nonnegative weights is also convex, the resulting expression is convex. Therefore, there exists a solution for this problem.
Algorithm 1 summarises how to proceed in order to obtain the coefficients \(\hat{\mathbf{\beta}}\) for the local Cox Proportional Hazards Model approximation. In case all the features are standardised, feature \(i\) is more important than feature \(j\) if \(|\hat{\beta}_{i}|>|\hat{\beta}_{j}|\). If they are not standardised, \(\mathbf{x}_{*}\) must be taken into account to perform the comparison: feature \(i\) is more important than feature \(j\) if \(|\hat{\beta}_{i}x_{*i}|>|\hat{\beta}_{j}x_{*j}|\).
```
Input:
  * Training dataset \(\mathbf{D}=\{(\mathbf{x}_{j},\tau_{j},\delta_{j})\}\), \(j\in\{1,\dots,n\}\).
  * Individual of interest \(\mathbf{x}_{*}\).
  * Number of neighbours to generate, \(N\).
  * Black-box model for the Cumulative Hazard Function \(\hat{H}\colon\mathbb{R}^{p}\times\mathbb{R}_{>0}\to\mathbb{R}_{>0}\).
  * Kernel function \(K\colon\mathbb{R}^{p}\times\mathbb{R}^{p}\to\mathbb{R}_{>0}\) to compute the weights according to the distance to \(\mathbf{x}_{*}\).
Output: vector \(\hat{\boldsymbol{\beta}}\) for the local Cox Proportional Hazards Model approximation.
1. Obtain the distinct times \(t_{i}\), \(i\in\{1,\dots,m+1\}\), from \(\mathbf{D}\).
2. Estimate the baseline Cumulative Hazard Function \(\hat{H}_{0}(t)\) using \(\mathbf{D}\) and the Nelson-Aalen estimator.
3. Generate \(N\) neighbours of \(\mathbf{x}_{*}\): \(\{\mathbf{e}_{1},\dots,\mathbf{e}_{N}\}\).
4. For each time step \(t_{i}\) and for each \(\mathbf{e}_{k}\), obtain the prediction \(\hat{H}_{i}(\mathbf{e}_{k})\).
5. Obtain \(\ln(\hat{H}_{i}(\mathbf{e}_{k}))\).
6. For each \(\mathbf{e}_{k}\), obtain the weight \(w_{k}=K(\mathbf{x}_{*},\mathbf{e}_{k})\).
7. For each time step \(t_{i}\) and for each \(\mathbf{e}_{k}\), obtain \(u_{ki}=\hat{H}_{i}(\mathbf{e}_{k})/\ln(\hat{H}_{i}(\mathbf{e}_{k}))\).
8. Solve the convex optimisation problem stated in Expression (9).
```
**Algorithm 1** SurvLIME algorithm.
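To make the ranking rule described above concrete, here is a small illustrative helper; it is hypothetical and not part of any package API.

```python
import numpy as np

def rank_features(beta, x_star, standardised=True):
    # Importance of feature i: |beta_i| when the features are
    # standardised, |beta_i * x_i| otherwise.
    scores = np.abs(beta) if standardised else np.abs(beta * x_star)
    return np.argsort(scores)[::-1]  # feature indices, most important first
```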
## 3 Package implementation
In this section, we introduce **SurvLIMEpy**, an open-source Python package that implements the SurvLIME algorithm. It is stored in the Python Package Index (PyPI)1 and the source code is available at GitHub2. Additionally, we present a detailed explanation of the implementation as well as some additional flexibility provided to the package.
Footnote 1: [https://pypi.org/project/survlimepy/](https://pypi.org/project/survlimepy/)
Footnote 2: [https://github.com/imatge-upc/SurvLIMEpy](https://github.com/imatge-upc/SurvLIMEpy)
Section 3.1 introduces a matrix-wise formulation for Expression (9). Sections 3.2 to 3.4 describe the parts of the package that the user can adjust. Sections 3.5 and 3.6 describe how to use the package and some code examples are given.
### Matrix-wise formulation
In order to apply a parallelism paradigm, and thus reduce the total execution time, the optimisation problem can be formulated matrix-wise. Before developing it, we introduce some notation. Let \(\mathbf{1}_{d}\) be the vector of ones of size \(d\), i.e., \(\mathbf{1}_{d}=(1,\ldots,1)^{\mathrm{ T}}\). Let \(\mathbf{A}=(a_{ij})\) and \(\mathbf{C}=(c_{ij})\) be two matrices of the same size. By \(\oslash\) we denote the element-wise division between \(\mathbf{A}\) and \(\mathbf{C}\). Likewise, \(\odot\) denotes the element-wise product.
The first step is to find the matrix expression for \(\ln[\hat{H}_{0}(t_{i})]\). Let \(\mathbf{v}_{0}\) be the component-wise logarithm of the baseline Cumulative Hazard function evaluated at each distinct time, i.e., \(\mathbf{v}_{0}=(\ln[\hat{H}_{0}(t_{1})],\ldots,\ln[\hat{H}_{0}(t_{m+1})])^{ \mathrm{ T}}\). To produce a matrix, let us consider the product between \(\mathbf{1}_{N}\) and \(\mathbf{v}_{0}^{\mathrm{ T}}\), \(\mathbf{L}_{0}=\mathbf{1}_{N}\mathbf{v}_{0}^{\mathrm{ T}}\). \(\mathbf{L}_{0}\) is a matrix of size \(N\times(m+1)\). Note that all the rows contain exactly the same vector \(\mathbf{v}_{0}\).
After that, a matrix \(\mathbf{E}\) containing \(N\) neighbours is obtained. The size of \(\mathbf{E}\) is \(N\times p\). Each row in \(\mathbf{E}\), denoted by \(\mathbf{e}_{k}\), is a neighbour of \(\mathbf{x}_{*}\). To find the matrix expression for \(\ln[\hat{H}_{i}(\mathbf{e}_{k})]\), let \(\mathbf{V}=(v_{ki})\) be the matrix that contains the values of the Cumulative Hazard Function for the neighbours evaluated at each distinct time, i.e., \(v_{ki}=\hat{H}_{i}(\mathbf{e}_{k})\). Let us consider the component-wise logarithm of \(\mathbf{V}\), \(\mathbf{L}=(\ln[v_{ki}])\). \(\mathbf{V}\) and \(\mathbf{L}\) are of size \(N\times(m+1)\).
Next, we find the matrix-wise expression for \([\hat{H}_{i}(\mathbf{e}_{k})/\ln(\hat{H}_{i}(\mathbf{e}_{k})]^{2}\). Let \(\mathbf{M}\) be the resulting matrix of the element-wise division between \(\mathbf{V}\) and \(\mathbf{L}\), i.e., \(\mathbf{M}=\mathbf{V}\oslash\mathbf{L}\) and let \(\mathbf{M}_{2}=\mathbf{M}\odot\mathbf{M}\), which is of size \(N\times(m+1)\).
The next step is to find the matrix expression for \(\hat{\boldsymbol{\beta}}^{\mathrm{T}}\mathbf{e}_{k}\). Let \(\hat{\boldsymbol{\beta}}\) be the unknown vector (of size \(p\)) we are looking for. Let us consider the product between \(\mathbf{E}\) and \(\hat{\boldsymbol{\beta}}\), \(\mathbf{\tilde{p}}=\mathbf{E}\hat{\boldsymbol{\beta}}\), which is a vector of size \(N\) and whose \(k\)-th component is \(\hat{\boldsymbol{\beta}}^{\mathrm{T}}\mathbf{e}_{k}\). To obtain a matrix of size \(N\times(m+1)\), let us consider the product between \(\mathbf{\tilde{p}}\) and \(\mathbf{1}_{m+1}\), \(\mathbf{\Lambda}=\mathbf{\tilde{p}}\mathbf{1}_{m+1}^{\mathrm{T}}\). All the columns in \(\mathbf{\Lambda}\) contain the same vector \(\mathbf{\tilde{p}}\).
Let us obtain the matrix-wise expression of \((\ln[\hat{H}_{i}(\mathbf{e}_{k})]-\ln[\hat{H}_{0}(t_{i})]-\hat{\boldsymbol{ \beta}}^{\mathrm{ T}}\mathbf{e}_{k})^{2}\). First, let us consider \(\mathbf{\Theta}=\mathbf{L}-\mathbf{L}_{0}-\mathbf{\Lambda}\). Note that the size of the matrix \(\mathbf{\Theta}\) is \(N\times(m+1)\). Second, let us consider the element-wise square of the previous matrix, denoted by \(\mathbf{\Theta}_{2}\), i.e., \(\mathbf{\Theta}_{2}=\mathbf{\Theta}\odot\mathbf{\Theta}\). The component \((k,i)\) of the previous matrix contains the desired expression, where \(k\in\{1,\ldots,N\}\) and \(i\in\{1,\ldots,m+1\}\).
Now, we obtain the matrix expression for \(u_{ki}^{2}(\ln[\hat{H}_{i}(\mathbf{e}_{k})]-\ln[\hat{H}_{0}(t_{i})]-\hat{ \boldsymbol{\beta}}^{\mathrm{ T}}\mathbf{e}_{k})^{2}\). To do that, let \(\mathbf{\Pi}\) be the matrix obtained by the element-wise multiplication between \(\mathbf{M}_{2}\) and \(\mathbf{\Theta}_{2}\), \(\mathbf{\Pi}=\mathbf{M}_{2}\odot\mathbf{\Theta}_{2}\). \(\mathbf{\Pi}\) is of size \(N\times(m+1)\).
Let \(\mathbf{t}\) be the vector of size \(m+2\) containing the distinct times (we apply the same consideration as in Section 2, i.e., \(t_{m+2}=t_{m+1}+\gamma\)). Let \(\boldsymbol{\psi}_{t}\) be the vector of time differences between two consecutive distinct times, i.e., \(\boldsymbol{\psi}_{t}=(t_{2}-t_{1},\ldots,t_{m+2}-t_{m+1})^{\mathrm{ T}}\), which is a vector of size \(m+1\).
To obtain \(\sum_{i=1}^{m+1}u_{ki}^{2}(\ln[\hat{H}_{i}(\mathbf{e}_{k})]-\ln[\hat{H}_{0}(t_ {i})]-\hat{\boldsymbol{\beta}}^{\mathrm{ T}}\mathbf{e}_{k})^{2}\Delta t_{i}\) matrix-wise, let
\(\mathbf{\pi}\) be the resulting vector of multiplying the matrix \(\mathbf{\Pi}\) and the vector \(\boldsymbol{\psi}_{t}\), i.e., \(\mathbf{\pi}=\mathbf{\Pi}\boldsymbol{\psi}_{t}\). The vector \(\mathbf{\pi}\) is of size \(N\) and its \(k\)-th component contains the desired expression.
Finally, let \(\mathbf{w}\) be the vector of weights for the neighbours, which is of size \(N\). This vector is obtained by applying the kernel function over all the neighbours, i.e., \(w_{k}=K(\mathbf{x}_{*},\mathbf{e}_{k})\). Then, Expression (9) can be formulated as
\[\min_{\hat{\mathbf{\beta}}}\mathbf{w}^{\mbox{\tiny T}}\mathbf{\pi}, \tag{10}\]
where the vector \(\mathbf{\pi}\) depends on the vector \(\hat{\mathbf{\beta}}\). Algorithm 2 summarises this matrix-wise implementation.
In order to find a numerical solution for Expression (10), we use the **cvxpy** package (Diamond and Boyd, 2016). This library contains functionalities that allow to perform matrix-wise operations as well as element-wise operations. Moreover, **cvxpy** library allows the user to choose the solver applied to the optimisation algorithm. In our implementation, we use the default option, which is the Operator Splitting Quadratic Program solver, OSQP for short (Stellato et al., 2020).
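To illustrate how (10) translates into code, here is a minimal self-contained sketch using **numpy** and **cvxpy**. The function solve_survlime and its interface are ours, written for exposition only; they do not reproduce the internal code of **SurvLIMEpy**.

```python
import cvxpy as cp
import numpy as np

def solve_survlime(E, V, H0, times, w):
    # E: (N, p) matrix of neighbours; V: (N, m+1) black-box CHF
    # predictions, V[k, i] = H_i(e_k); H0: (m+1,) baseline CHF at the
    # distinct times; times: (m+2,) distinct times, the last one being
    # t_{m+1} + gamma; w: (N,) kernel weights of the neighbours.
    N, p = E.shape
    m1 = V.shape[1]                        # m + 1
    L = np.log(V)                          # ln H_i(e_k)
    C = L - np.log(H0)[np.newaxis, :]      # ln H_i(e_k) - ln H_0(t_i)
    M2 = (V / L) ** 2                      # u_{ki}^2
    psi = np.diff(times)                   # Delta t_i, size m + 1

    beta = cp.Variable(p)
    # Lambda: every column repeats the vector of components beta^T e_k
    Lam = cp.reshape(E @ beta, (N, 1)) @ np.ones((1, m1))
    pi = cp.multiply(M2, cp.square(C - Lam)) @ psi   # vector of size N
    cp.Problem(cp.Minimize(w @ pi)).solve()          # OSQP by default
    return beta.value
```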
### Neighbour generation and kernel function
The neighbour generating process is specified neither in the original LIME paper nor in the SurvLIME publication. As reported in Molnar (2022), this issue requires great care, since the explanations provided by the algorithm may vary depending on how the neighbours are generated.
In our implementation, we use a non-parametric kernel density estimation approach. Let \(\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\) be a \(p\)-dimensional sample drawn from a random variable \(\mathcal{X}\) with density function \(f\). Let \(\hat{\sigma}_{j}\) be the sampling standard deviation of the \(j\)-th component of \(\mathcal{X}\). For a point \(\mathbf{x}\in\mathbb{R}^{p}\), a kernel-type estimator of \(f(\mathbf{x})\) is
\[\hat{f}(\mathbf{x})=\frac{1}{nb^{p}\prod_{j=1}^{p}\hat{\sigma}_{j}}\sum_{i=1} ^{n}\exp\left(-\frac{1}{2b^{2}}\|\mathbf{x}-\mathbf{x}_{i}\|_{s}^{2}\right),\]
where \(\|\mathbf{x}-\mathbf{x}_{i}\|_{s}=\sqrt{\sum_{j=1}^{p}(x_{j}-x_{ij})^{2}/\hat {\sigma}_{j}^{2}}\) is the Euclidean distance between the standardised versions of \(\mathbf{x}\) and \(\mathbf{x}_{i}\), and \(b\) is the bandwidth, a tuning parameter which, by default, we fix as \(b=[4/(n[p+2])]^{1/(p+4)}\), following the Normal reference rule (Silverman, 1986, page 87). Observe that \(\hat{f}(\mathbf{x})\) is a mixture of \(n\) multivariate Normal density functions, each with weight \(1/n\), mean value \(\mathbf{x}_{i}\) and common covariance matrix \(\mathbf{\hat{\Sigma}}=b^{2}\cdot\operatorname{diag}(\hat{\sigma}_{1}^{2},\ldots, \hat{\sigma}_{p}^{2})\). We consider such a Normal distribution centering it at a point of interest \(\mathbf{x}_{*}\): \(\mathcal{N}(\mathbf{x}_{*},\mathbf{\hat{\Sigma}})\).
First, a matrix containing a set of \(N\) neighbours, denoted by \(\mathbf{E}\), is generated, each row \(\mathbf{e}_{k}\) coming from a \(\mathcal{N}(\mathbf{x}_{*},\mathbf{\hat{\Sigma}})\), \(k\in\{1,\ldots,N\}\). Afterwards, the
weight \(w_{k}\) of neighbour \(\mathbf{e}_{k}\) is computed as the value of the density function of the \(\mathcal{N}(\mathbf{x}_{*},\mathbf{\hat{\Sigma}})\) evaluated at \(\mathbf{e}_{k}\).
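A minimal sketch of this generation scheme, assuming a **numpy** feature matrix; the function name and defaults are ours, for illustration only, and do not reproduce the package's internal code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def generate_neighbours(X_train, x_star, N, seed=None):
    # Draw N neighbours from N(x_star, Sigma) with
    # Sigma = b^2 * diag(sd_1^2, ..., sd_p^2), b given by the Normal
    # reference rule; weight each neighbour by the density of that
    # same Normal evaluated at it.
    rng = np.random.default_rng(seed)
    n, p = X_train.shape
    sd = X_train.std(axis=0, ddof=1)
    b = (4.0 / (n * (p + 2))) ** (1.0 / (p + 4))
    cov = (b ** 2) * np.diag(sd ** 2)
    E = rng.multivariate_normal(mean=x_star, cov=cov, size=N)
    w = multivariate_normal.pdf(E, mean=x_star, cov=cov)
    return E, w
```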
### Functional norm
While the original publication uses the \(\ell^{2}\) functional norm to measure the distance between \(\phi\) and \(\phi_{cox}\) in Expression (7), other works such as SurvLIME-Inf (Utkin et al., 2020) use \(\ell^{\infty}\). The authors of SurvLIME-Inf claim that this norm speeds up the execution time when solving the optimisation problem.
In our implementation, the computational gain of using the infinity norm was negligible when solving the problem in the matrix-wise formulation explained in Section 3.1. Therefore, \(\ell^{2}\) is set as the default distance, although the user can choose other norms.
### Supported survival models
Throughout this work, we represent a survival model as a function \(\hat{H}\colon\mathbb{R}^{p}\times\mathbb{R}_{>0}\to\mathbb{R}_{>0}\). However, the packages that implement the different models do not work in the same way, since their implementations employ a function that takes as input a vector of size \(p\) and outputs a vector of size \(m+1\), where \(m+1\) is the number of distinct times (see Section 2 for more details). Therefore, the output is a vector containing the Cumulative Hazard Function evaluated at each distinct time, i.e., \(\hat{H}\colon\mathbb{R}^{p}\to\mathbb{R}_{>0}^{m+1}\).
Our package can manage multiple types of survival models. In addition to the Cox Proportional Hazards Model (Cox, 1972), which is implemented in the **sksurv** library (Pölsterl, 2020), **SurvLIMEpy** also manages other algorithms: Random Survival Forest (Ishwaran et al., 2008), implemented in the **sksurv** library; survival regression with accelerated failure time model in XGBoost (Barnwal et al., 2022), implemented in the **xgbse** library (Vieira et al., 2020); and DeepHit (Lee et al., 2018) and DeepSurv (Katzman et al., 2018), both implemented in the **pycox** library (Kvamme et al., 2019).
The set of times for which the models compute a prediction can differ across models and their implementations. Whereas Cox Proportional Hazards Model, Random Survival Forest and DeepSurv offer a prediction for each distinct time, \(\{t_{1},\ldots,t_{m+1}\}\), the other models work differently: for a given integer \(q+1\), they estimate the \(q+1\) quantiles \(\{\tilde{t}_{1},\ldots,\tilde{t}_{q+1}\}\) and then, they offer a prediction for each \(\tilde{t}_{j}\).
Models of the first kind output a vector of size \(m+1\), \((\hat{H}(t_{1}),\ldots,\hat{H}(t_{m+1}))^{\mathrm{T}}\); models of the second kind output a vector of size \(q+1\), \((\hat{H}(\tilde{t}_{1}),\ldots,\hat{H}(\tilde{t}_{q+1}))^{\mathrm{T}}\). Since SurvLIME requires the output of the model to be a vector of length \(m+1\), we use linear interpolation to fulfil this condition. All of the machine learning packages provide a variable specifying the set of times for which the model provides a prediction, and we use this variable to perform the interpolation.
We choose to ensure the integration of the aforementioned machine learning algorithms with **SurvLIMEpy** as they are the most predominant in the field (Wang et al., 2019; Spooner et al., 2020; Hao et al., 2021). In Sections 3.5 and 3.6 there are more details on how to provide the prediction function to the package. Note that if a new survival package is developed, **SurvLIMEpy** will support it as long as the output provided by the predict function is a vector of length \(q+1\), \(0<q\leq m\).
Usually, the libraries designed to create machine learning algorithms for survival analysis make available two prediction functions, one for the Cumulative Hazard Function (CHF) and another for the Survival Function (SF). For example, in the **sksurv** package these functions are predict_cumulative_hazard_function and predict_survival_function, respectively. **SurvLIMEpy** has been developed to work with both of them. The user should specify which prediction function is being used. By default, the package assumes that the prediction function is for the CHF. In case of working with the SF, a set of transformations is performed in order to work with the CHF (see Appendix A, where the relationship between the CHF and the SF is explained).
### Package structure
The class 'SurvLimeExplainer' is the main object of the package; it is used to compute feature importance.
```
SurvLimeExplainer(
    training_features,
    training_events,
    training_times,
    model_output_times,
    H0,
    kernel_width,
    functional_norm,
    random_state,
)
```
* training_features: Matrix of features of size \(n\times p\), where \(n\) is the number of individuals and \(p\) is the size of the feature space. It can be either a **pandas** data frame or a **numpy** array.
* training_events: Vector of event indicators, of size \(n\). It corresponds to the vector \((\delta_{1},\ldots,\delta_{n})^{\mbox{\tiny T}}\) and it must be a Python list, a **pandas** series or a **numpy** array.
* training_times: Vector of event times, of size \(n\). It corresponds to the vector \((\tau_{1},\ldots,\tau_{n})^{\mbox{\tiny T}}\) and it must be a Python list, a **pandas** series or a **numpy** array.
* model_output_times (optional): Vector of times for which the model provides a prediction, as explained in Section 3.4. By default, the vector of distinct times \((t_{1},\ldots,t_{m+1})^{\mbox{\tiny T}}\), obtained from training_times, is used. If provided, it must be a **numpy** array.
* H0 (optional): Vector of baseline cumulative hazard values, of size \(m+1\), used by the local Cox Proportional Hazards Model. If the user provides it, then a **numpy** array, a Python list or a StepFunction (from the **sksurv** package) must be used. Otherwise, the package uses the non-parametric Nelson–Aalen estimator, computed from the events \(\delta_{i}\) (training_events) and times \(\tau_{i}\) (training_times).
* kernel_width (optional): Bandwidth of the kernel (the \(b\) parameter defined in Section 3.2) used in the neighbours generating process as well as to compute the vector of weights \(\mathbf{w}\). A float must be used. The default value is \([4/(n[p+2])]^{1/(p+4)}\). See Section 3.2 for more details.
* functional_norm (optional): Norm used in order to calculate the distance between the logarithm of the Cox Proportional Hazards Model, \(\phi_{cox}(t)\), and the logarithm of the black box model, \(\phi(t)\). If provided, it must be either a float \(k\geq 1\), in order to use \(\ell^{k}\), or the string "inf", in order to use \(\ell^{\infty}\). The default value is set to 2. See Section 3.3 for more details.
* random_state (optional): Number to be used for the random seed. The user must provide a value if the results obtained must be reproducible every time the code is executed. The default is set to empty (no reproducibility needed).
In order to obtain the coefficients of the local Cox Proportional Hazards Model, the aforementioned class has a specific method:
```
explain_instance(
    data_row,
    predict_fn,
    type_fn,
    num_samples,
    verbose,
)
```
* data_row: Instance to be explained, i.e., \(\mathbf{x}_{*}\). It must be a Python list, a **numpy** array or a **pandas** series. The length of this array must be equal to the number of columns of the training_features matrix, i.e., \(p\).
* predict_fn: Prediction function, i.e., \(\hat{H}\colon\mathbb{R}^{p}\to\mathbb{R}^{q+1}_{>0}\). It must be a callable (i.e., a Python function). See Section 3.4 for more details.
* type_fn (optional): String indicating whether the prediction function, predict_fn, is for the Cumulative Hazard Function or for the Survival Function. The default value is set to "cumulative". The other option is "survival".
* num_samples (optional): Number of neighbours \(N\) to be generated. The default value is set to 1000. See Section 3.2 for more details.
* verbose (optional): Boolean indicating whether to show the **cvxpy** messages. Default is set to false.
In addition to the main functions, the package provides three additional functionalities. The first one, plot_weights(), visualises the SurvLIME coefficients. This function returns a bar plot of the computed values and has two optional input parameters. The first one, with_colour, is a boolean indicating whether to use a red colour palette for the features that increase the Cumulative Hazard Function and a blue palette for those that decrease it. If it is set to false, grey is used for all the bars. The default value is true. The other input parameter is figure_path. If the user provides a value, it must be a path where the plot is stored as a .png file.
The second functionality performs a Monte-Carlo simulation. When using the explain_instance() method, the optimisation problem is solved once: a single set of neighbours is generated and, therefore, a single vector of coefficients is obtained. For a given individual \(\mathbf{x}_{*}\), the method montecarlo_explanation() obtains a set of coefficient vectors \(\{\boldsymbol{\hat{\beta}}_{1},\ldots,\boldsymbol{\hat{\beta}}_{b}\}\), each corresponding to a different random set of neighbours. In order to use it, the number of simulations, \(b\), must be provided. Once all the simulations are performed, the mean value, \(\boldsymbol{\bar{\beta}}=1/b\sum_{j=1}^{b}\boldsymbol{\hat{\beta}}_{j}\), is calculated to obtain a single vector of feature importance for the individual \(\mathbf{x}_{*}\).
This method also accepts a matrix \(\mathbf{X}_{*}\) (of size \(h\times p\), where \(h\) is the number of individuals to be explained) as input, instead of a single individual \(\mathbf{x}_{*}\). In that case, a matrix \(\mathbf{B}\) (of size \(h\times p\)) is obtained: row \(i\) of \(\mathbf{B}\) is the vector containing the feature importance of individual \(i\) of \(\mathbf{X}_{*}\). The function montecarlo_explanation() is part of the 'SurvLimeExplainer' class.
```
montecarlo_explanation(
    data,
    predict_fn,
    type_fn,
    num_samples,
    num_repetitions,
    verbose,
)
```

Note that all the input parameters are the same as the input parameters of explain_instance() except for two of them:
* data: Instances to be explained, i.e., \(\mathbf{X}_{*}\). It must be a **pandas** data frame, a **pandas** series, a **numpy** array or a Python list.
* num_repetitions (optional): Integer indicating the number of simulations, \(b\). The default value is set to 10.
Finally, plot_montecarlo_weights() is the last functionality we have developed; it creates a boxen plot from the values obtained by the montecarlo_explanation() method. plot_montecarlo_weights() has two optional input parameters, with_colour and figure_path, which behave in the same way as the input parameters of plot_weights().
### Code example
The following code fragment shows how to use the package to compute the feature importance vector for a single individual. In order to run it, let us suppose we have an already trained machine learning model, denoted by model, which has a method that predicts the Cumulative Hazard Function, model.predict_cumulative_hazard_function, and an attribute containing the times for which that method provides a prediction, model.event_times_ (we adopt the notation of the **sksurv** package). The individual to be explained is denoted by individual, the dataset containing the features is denoted by features, the vector containing the event indicators is denoted by events and the vector containing the times is denoted by times.
```
from survlimepy import SurvLimeExplainer

explainer = SurvLimeExplainer(
    training_features=features,
    training_events=events,
    training_times=times,
    model_output_times=model.event_times_,
)

explanation = explainer.explain_instance(
    data_row=individual,
    predict_fn=model.predict_cumulative_hazard_function,
    num_samples=1000,
)

explainer.plot_weights()
```
The last line displays the importance of each feature. The result is shown in Figure 1. The computed coefficients are displayed in descending order, with a red colour palette for the features that increase the Cumulative Hazard Function and a blue palette for those that decrease it. The remaining input parameters in 'SurvLimeExplainer' as well as in function explain_instance() use their corresponding default values.
The next code block exemplifies how to use montecarlo_explanation() to obtain a set of SurvLIME values as well as the plot_montecarlo_weights() method to display them. We make use of the same notation as before, i.e., model.predict_cumulative_hazard_function, model.event_times_, features, events and times.
Instead of explaining a single individual, we explain a set of \(h\) individuals. Let X_ind be a **numpy** array of size \(h\times p\). For each individual, we perform 100 repetitions and, for each repetition, 1000 neighbours are generated. The code needed to obtain the results is very similar to the previous one. The last line of the code example displays Figure 2. Note that the variable mc_explanation is a **numpy** array of size \(h\times p\), where row \(i\) contains the feature importance for individual \(i\) in X_ind.
```
from survlimepy import SurvLimeExplainer

explainer = SurvLimeExplainer(
    training_features=features,
    training_events=events,
    training_times=times,
    model_output_times=model.event_times_,
)

mc_explanation = explainer.montecarlo_explanation(
    data=X_ind,
    predict_fn=model.predict_cumulative_hazard_function,
    num_repetitions=100,
    num_samples=1000,
)

explainer.plot_montecarlo_weights()
```
## 4 Experiments
In this section, we present the experiments performed to test the implementation of our package **SurvLIMEpy**. In order to ensure reproducibility, we have created a separate repository in which we share the code used throughout this section.
Figure 1: SurvLIME values obtained with explainer.plot_weights() method. The input parameter with_colour is set to true.
We conduct two types of experiments. The first uses simulated data, as in the original SurvLIME paper. Given that its authors describe in detail how their data was generated, we are able to follow the same procedure. Since the data is simulated, we can compare the results of the SurvLIME algorithm with the data generating process and measure how much the coefficients provided by the algorithm deviate from the real (i.e. simulated) coefficients.
The second set of experiments uses real survival datasets. In this part, we use machine learning as well as deep learning algorithms. Our goal is to show how **SurvLIMEpy** can be used with state-of-the-art machine learning models. For those experiments, unlike the case of simulated data, there are no ground-truth coefficients to compare with, so only qualitative insights are provided.
### Simulated data
First, two sets of data are generated randomly and uniformly in the \(p\)-dimensional sphere, where \(p=5\). Each set is configured as follows:
* Set 1: Center, \(c_{1}=(0,0,0,0,0)\), radius, \(r_{1}=8\), number of individuals, \(n_{1}=1000\).
* Set 2: Center, \(c_{2}=(4,-8,2,4,2)\), radius, \(r_{2}=8\), number of individuals, \(n_{2}=1000\).
Figure 2: SurvLIME values obtained by means of using the method explainer.plot_montecarlo_weights(). The input parameter with_colour is set to true.
Using these parameters, two datasets represented by the matrices \(\mathbf{X}_{r}\) of size \(n_{r}\times p\) are generated (\(r\in\{1,2\}\)). A row from these datasets represents an individual and a column represents a feature. Therefore, \(x_{ij}\) represents the value of feature \(j\) for the individual \(i\).
The Weibull distribution is used to generate time data (Bender et al., 2005). This distribution respects the proportional hazards assumption, as the Cox Proportional Hazards Model does. The Weibull distribution is determined by two parameters: the scale, \(\lambda\), and the shape, \(\nu\). Given the set of data \(r\), a vector of times to event (of size \(n_{r}\)) is generated as
\[\boldsymbol{\tau}_{r}=\left(\frac{-\ln(\mathbf{u}_{r})}{\lambda_{r}\exp( \mathbf{X}_{r}\boldsymbol{\beta}_{r})}\right)^{1/\nu_{r}}, \tag{11}\]
where \(\mathbf{u}_{r}\) is a vector of \(n_{r}\) independent and identically distributed uniform random variables on the interval \((0,1)\). Both functions, the logarithm and the exponential, are applied component-wise. As done in Kovalev et al. (2020), all times greater than 2000 are capped at 2000. Each set \(r\) has the following parameters:
* Set 1: \(\lambda_{1}=10^{-5}\), \(\nu_{1}=2\), \(\boldsymbol{\beta}_{1}^{\mathrm{ T}}=(10^{-6},0.1,-0.15,10^{-6},10^{-6})\).
* Set 2: \(\lambda_{2}=10^{-5}\), \(\nu_{2}=2\), \(\boldsymbol{\beta}_{2}^{\mathrm{ T}}=(10^{-6},-0.15,10^{-6},10^{-6},-0.1)\).
Note that for the first set, the second and the third features are the most important ones. On the other hand, for the second set, the second and the fifth features are the most relevant.
In order to generate the event indicator, a Bernoulli distribution, with a probability of success equal to 0.9, is used. For each set a vector (of size \(n_{r}\)) of independent and identically distributed random variables is obtained. Let \(\boldsymbol{\delta}_{r}\) be the vector of such realisations. The random survival data of each set \(r\) is represented by a triplet \(\mathbf{D}_{r}=(\mathbf{X}_{r},\boldsymbol{\tau}_{r},\boldsymbol{\delta}_{r})\).
Even though the authors of the original SurvLIME paper simulated data this way, it is worth mentioning that this is not the standard procedure in Survival Analysis. The usual way to generate data consists of using two different time distributions, \(\boldsymbol{\tau}_{0}\) and \(\boldsymbol{\tau}_{1}\): \(\boldsymbol{\tau}_{0}\) is the censoring time and \(\boldsymbol{\tau}_{1}\) is the time-to-event. Then, the vector of observed times \(\boldsymbol{\tau}=(\tau_{i})\) is obtained as \(\tau_{i}=\min(\tau_{0i},\tau_{1i})\). In order to generate the event indicator vector \(\boldsymbol{\delta}=(\delta_{i})\), both vectors \(\boldsymbol{\tau}_{0}\) and \(\boldsymbol{\tau}_{1}\) are taken into account: \(\delta_{i}=\mathds{1}_{\{\tau_{1i}\leq\tau_{0i}\}}\). In this way, \(\mathsf{P}(\delta_{i}=1)=\mathsf{P}(\tau_{1i}\leq\tau_{0i})\). Nonetheless, we proceed in the same way as in the original paper so that the results can be compared.
**SurvLIMEpy** allows the user to create a random survival dataset according to the criteria described previously. The class 'RandomSurvivalData' manages this part.
```
RandomSurvivalData(
    center,
    radius,
    coefficients,
    prob_event,
    lambda_weibull,
    v_weibull,
    time_cap,
    random_seed,
)
```
* center: The center of the set. It must be a Python list of length \(p\).
* radius: The radius of the set. It must be a float.
* coefficients: The \(\boldsymbol{\beta}_{r}\) vector that is involved in Expression (11). It must be a Python list of length \(p\).
* prob_event: The probability for the Bernoulli distribution. It must be a float in \((0,1)\).
* lambda_weibull: The \(\lambda_{r}\) parameter involved in Expression (11). It must be a positive float.
* v_weibull: The \(\nu_{r}\) parameter involved in Expression (11). It must be a positive float.
* time_cap (optional): If the time obtained is greater than time_cap, then time_cap is used. It must be a positive float.
* random_seed (optional): Number to be used for the random seed. The user must provide a value if the results obtained must be reproducible every time the code is executed. The default is set to empty (no reproducibility needed).
This class contains the method random_survival_data(num_points) that returns the dataset. The input parameter, num_points, is an integer indicating the number of individuals, \(n_{r}\), to generate. The output of this function is a tuple of three objects: (1) \(\mathbf{X}_{r}\) the matrix containing the features (of size \(n_{r}\times p\)); (2) \(\boldsymbol{\tau}_{r}\) the vector of times to event (of size \(n_{r}\)); (3) \(\boldsymbol{\delta}_{r}\) the vector of event indicators (of size \(n_{r}\)).
After obtaining both datasets, they are split randomly into two parts, a training dataset, \(\mathbf{D}_{r}^{train}\) and a test dataset, \(\mathbf{D}_{r}^{test}\). The training dataset consists of 900 individuals, whereas the test dataset consists of 100 individuals.
For each training dataset, a Cox Proportional Hazards Model is fitted. Let \(\hat{H}_{r}(\mathbf{x},t)\), \(r\in\{1,2\}\), be the resulting models. The next step is to use **SurvLIMEpy** to obtain the importance of each feature. The test datasets, unused until now, are employed to rank the relevance of each feature. For a given test individual from set \(r\), the setup for **SurvLIMEpy** is:
* Training dataset, \(\mathbf{D}=\mathbf{D}_{r}^{train}\).
* Number of neighbours, \(N_{r}=1000\).
* Black-box model for the Cumulative Hazard Function: \(\hat{H}_{r}(\mathbf{x},t)\).
* Kernel function, \(K(\cdot,\cdot)=\) Gaussian Radial Basis function.
Figure 3 shows the results obtained using the **SurvLIMEpy** package to compute the coefficients. In green, the vector of real coefficients, \(\boldsymbol{\beta}_{r}\), is depicted. In blue, the estimated parameters according to the Cox Proportional Hazards Model, \(\boldsymbol{\hat{\beta}}_{r}^{c}\). In orange, the coefficients obtained by **SurvLIMEpy**, \(\boldsymbol{\hat{\beta}}_{r}^{s}\), \(r\in\{1,2\}\). The individual to be explained is the center of the set. Note that the results we have obtained are similar to the ones reported in the original SurvLIME paper.
Given that the real coefficients, \(\boldsymbol{\beta}_{r}\), are known, the \(\ell^{2}\) distance between \(\boldsymbol{\beta}_{r}\) and \(\boldsymbol{\hat{\beta}}_{r}^{s}\) can be computed. In order to study the variance of the SurvLIME algorithm, the previous experiment is repeated 100 times, i.e., a Monte-Carlo simulation is performed. Throughout all the simulations, the individual to be explained is the same: the center of the set.
Thus, a set of 100 distances is obtained, \(\{d_{1},\ldots,d_{100}\}\). From this set, the mean, the minimum and the maximum distance can be calculated. Let \(\boldsymbol{\hat{\beta}}_{mean}^{s}\), \(\boldsymbol{\hat{\beta}}_{min}^{s}\) and \(\boldsymbol{\hat{\beta}}_{max}^{s}\) be the SurvLIME coefficients related to those distances. Performing such a Monte-Carlo simulation for all the individuals in the test datasets, \(\mathbf{D}_{1}^{test}\) and \(\mathbf{D}_{2}^{test}\), yields three different samples of SurvLIME coefficients: \(\{\boldsymbol{\hat{\beta}}_{mean,1}^{s},\ldots,\boldsymbol{\hat{\beta}}_{mean,100}^{s}\}\), \(\{\boldsymbol{\hat{\beta}}_{min,1}^{s},\ldots,\boldsymbol{\hat{\beta}}_{min,100}^{s}\}\) and \(\{\boldsymbol{\hat{\beta}}_{max,1}^{s},\ldots,\boldsymbol{\hat{\beta}}_{max,100}^{s}\}\).
Figure 4 shows the boxen plots for the three previous sets of coefficients. The left plots depict the boxen plot for the mean coefficient; the middle plots are for the minimum coefficient; the right ones correspond to the maximum coefficient. The results show that the coefficients of SurvLIME were close to the real coefficients for both sets of data.
Figure 3: Real coefficients for parameters (green), estimated coefficients by CoxPH (blue) and SurvLIME coefficients (orange). Results for set 1 (left). Results for set 2 (right). The individual to be explained is the center of the set.
Furthermore, the mean values of the computed coefficients behave similarly to the best approximations and they show a low variance.
In the worst-case scenario, SurvLIME does not behave as well as in the other two scenarios: the variance of the SurvLIME coefficients is much higher, especially for the second set of data. However, the bias is comparable to that of the other two scenarios.
### Real data
Now, we test our implementation on three open-access datasets. Each dataset is presented together with a bivariate analysis. For categorical features, the percentage of individuals that experienced the event is computed for each category. Continuous features are categorised according to their quartiles, and the resulting categorical features are described as before.
The first dataset is the UDCA dataset (Lindor et al., 1994). It contains individuals with primary biliary cirrhosis (PBC) that were randomised for treatment with ursodeoxycholic acid (UDCA). A total of 9.46% of the individuals experienced the event. The features of this dataset are:
* trt (categorical): Treatment received. 0 is for placebo and 1 is for UDCA.
* stage (categorical): Stage of disease. 0 is for better and 1 is for worse.
* bili (continuous): Bilirubin value at entry.
* riskscore (continuous): The Mayo PBC risk score at entry.
Figure 4: Boxen plot for the mean (left) minimum (middle) and maximum (right) distance. Results are shown for individuals of the first set (top) and the second set (bottom).
Note that the UDCA dataset contains an individual whose riskscore is missing. We drop this individual from the dataset. The bivariate descriptive analysis is displayed in Table 1.
The second dataset is the LUNG dataset (Loprinzi et al., 1994). It contains individuals with advanced lung cancer from the North Central Cancer Treatment Group. A total of 70.33% of the individuals experienced the event. The features of this dataset are:
* inst (categorical): Institution code. The institutions are coded with numbers between 1 and 33.
* sex (categorical): Gender. 1 is for male and 2 is for female.
* ph.ecog (categorical): ECOG performance score as rated by the physician. The categories are:
* 0: Asymptomatic.
* 1: Symptomatic but completely ambulatory.
* 2: In bed \(<\)50% of the day.
* 3: In bed \(>\) 50% of the day but not bedbound.
* age (continuous): Age of the individual.
* ph.karno (continuous): Karnofsky performance score rated by physician.
* pat.karno (continuous): Karnofsky performance score as rated by the individual.
| Feature | Category | Percentage |
| --- | --- | --- |
| trt | 0 | 11.90 |
| trt | 1 | 7.06 |
| bili | \(\leq 0.6\) | 2.17 |
| bili | \((0.6,1]\) | 2.56 |
| bili | \((1,1.9]\) | 14.30 |
| bili | \(>1.9\) | 19.00 |
| riskscore | \(\leq 4.3\) | 0.00 |
| riskscore | \((4.3,5]\) | 0.00 |
| riskscore | \((5,5.8]\) | 9.52 |
| riskscore | \(>5.8\) | 30.80 |

Table 1: Percentage of individuals that have experienced the event according to each category for the features in the UDCA dataset.
* meal.cal (continuous): Calories consumed at meals.
* wt.loss (continuous): Weight loss in last six months.
We drop some information from the LUNG dataset. First, we do not use the feature inst, because it merely identifies the institution and provides no further information. Second, we remove the meal.cal feature, since it contains a total of 20.6% missing values. Third, 18 individuals have at least one feature with missing information; we drop those individuals from the dataset. Finally, with regard to the feature ph.ecog, a single individual falls in category 3, and we drop this individual as well. After this preprocessing, we are left with 209 individuals.
As for the UDCA dataset, a bivariate descriptive analysis is performed on the LUNG dataset. Table 2 contains the results. The features dropped from the dataset are not included in that table.
The last dataset is the Veteran dataset (Kalbfleisch and Prentice, 2002), which consists of individuals with advanced inoperable lung cancer. The individuals were part of a randomised trial of two treatment regimens. The event of interest for the three datasets is the individual's death.
| Feature | Category | Percentage |
| --- | --- | --- |
| sex | 1 | 79.80 |
| sex | 2 | 56.50 |
| ph.ecog | 0 | 56.70 |
| ph.ecog | 1 | 71.70 |
| ph.ecog | 2 | 86.00 |
| age | \(\leq 56\) | 64.80 |
| age | \((56,63]\) | 65.40 |
| age | \((63,69]\) | 70.60 |
| age | \(>69\) | 80.80 |
| ph.karno | \(\leq 80\) | 77.00 |
| ph.karno | \((80,90]\) | 62.70 |
| ph.karno | \(>90\) | 62.10 |
| pat.karno | \(\leq 70\) | 80.00 |
| pat.karno | \((70,80]\) | 75.00 |
| pat.karno | \((80,90]\) | 62.70 |
| pat.karno | \(>90\) | 56.20 |
| wt.loss | \(\leq 0\) | 68.90 |
| wt.loss | \((0,6]\) | 59.10 |
| wt.loss | \((6,15]\) | 79.20 |
| wt.loss | \(>15\) | 72.50 |

Table 2: Percentage of individuals that have experienced the event according to each category for all the features in the LUNG dataset.
A total of 93.43% of the individuals experienced the event. The features of this dataset are:
* trt (categorical): Treatment received. 1 is for standard and 2 is for test.
* prior (categorical): It indicates if the patient has received another therapy before the current one. 0 means no and 10 means yes.
* celltype (categorical): Histological type of the tumor. The categories are: squamous, smallcell, adeno and large.
* karno (continuous): Karnofsky performance score.
* age (continuous): Age of the individual.
* diagtime (continuous): Months from diagnosis to randomisation.
Note that the Veteran dataset does not contain any missing value. The results of the bivariate descriptive analysis for the Veteran dataset are displayed in Table 3.
| Feature | Category | Percentage |
| --- | --- | --- |
| trt | 1 | 92.80 |
| trt | 2 | 94.10 |
| prior | 0 | 93.80 |
| prior | 10 | 92.50 |
| celltype | squamous | 88.60 |
| celltype | smallcell | 93.80 |
| celltype | adeno | 96.30 |
| celltype | large | 96.30 |
| karno | \(\leq 40\) | 97.40 |
| karno | \((40,60]\) | 95.10 |
| karno | \((60,75]\) | 92.00 |
| karno | \(>75\) | 87.90 |
| age | \(\leq 51\) | 94.30 |
| age | \((51,62]\) | 87.20 |
| age | \((62,66]\) | 100.00 |
| age | \(>66\) | 93.30 |
| diagtime | \(\leq 3\) | 90.50 |
| diagtime | \((3,5]\) | 97.00 |
| diagtime | \((5,11]\) | 90.00 |
| diagtime | \(>11\) | 96.90 |

Table 3: Percentage of individuals that have experienced the event according to each category for all the features in the Veteran dataset.
Table 4 shows a brief summary of each dataset: \(p\) corresponds to the number of features, while \(p^{*}\) is the number of features after pre-processing (dropping and doing one-hot-encoding), \(n\) denotes the number of individuals of the dataset, and \(n_{full}\) is the number of individuals once the missing values are dropped.
We model the event of interest by means of machine learning algorithms. Given a dataset \(\mathbf{D}\), it is divided randomly into two sets: a training dataset, \(\mathbf{D}^{train}\), and a test dataset, \(\mathbf{D}^{test}\), using 90% of individuals for training and 10% for testing.
Once the data is split, we preprocess \(\mathbf{D}^{train}\). We apply one-hot-encoding to categorical features: if a categorical feature has \(k\) categories, then we create \(k-1\) binary features, the category without a binary feature being the reference category. After that, the original feature is deleted from the dataset and the \(k-1\) new features are treated as continuous ones. Continuous features are also preprocessed. Given \(\mathbf{\tilde{x}}_{j}\), we first estimate the mean, \(\hat{\mu}^{j}_{train}\), and the standard deviation, \(\hat{\sigma}^{j}_{train}\). Then, the standardisation performed is \((\mathbf{\tilde{x}}_{j}-\hat{\mu}^{j}_{train})/\hat{\sigma}^{j}_{train}\), and this new feature is used instead of \(\mathbf{\tilde{x}}_{j}\).
The same preprocessing is applied to \(\mathbf{D}^{test}\). Note that the parameters involved in the preprocessing (for both categorical and continuous features) are taken from \(\mathbf{D}^{train}\), i.e., nothing is estimated on the test set. Let \(\mathbf{\tilde{D}}^{train}\) and \(\mathbf{\tilde{D}}^{test}\) be the datasets obtained after preprocessing.
Afterwards, a model is trained on \(\mathbf{\tilde{D}}^{train}\), and \(\mathbf{\tilde{D}}^{test}\) is used to obtain the c-index value, a goodness-of-fit measure for survival models (see Appendix A for more details about the c-index and Survival Analysis).
In this section, we use five distinct machine learning algorithms: the Cox Proportional Hazards Model (CoxPH) and Random Survival Forest (RSF) (both from the **sksurv** package), eXtreme Gradient Boosted Survival Trees (XGB) (from the **xgbse** package), as well as continuous and time-discrete deep learning models, DeepSurv and DeepHit (both from the **pycox** package).
| Dataset | Acronym | \(p\) | \(p^{*}\) | \(n\) | \(n_{full}\) |
| --- | --- | --- | --- | --- | --- |
| Trial of Ursodeoxycholic Acid | UDCA | 4 | 4 | 170 | 169 |
| NCCTG Lung Cancer | LUNG | 8 | 7 | 228 | 209 |
| Veterans' Administration Lung Cancer Study | Veter | 6 | 8 | 137 | 137 |

Table 4: Summary of the open access datasets used, where \(p\) is the number of features of the corresponding dataset, \(p^{*}\) is the number of features after pre-processing (dropping and one-hot-encoding), \(n\) denotes the total number of individuals in the dataset, and \(n_{full}\) is the number of individuals after dropping missing values.
We have performed a hyperparameter tuning strategy for each model and dataset.
Having trained a model, **SurvLIMEpy** is applied to obtain feature importance. For a given individual \(i\) of \(\mathbf{\tilde{D}}^{test}\), the SurvLIME algorithm is run 100 times, which produces a set of 100 coefficient vectors: \(\{\boldsymbol{\hat{\beta}}_{i,1}^{s},\ldots,\boldsymbol{\hat{\beta}}_{i,100}^{s}\}\). Then, the mean value across all the simulations is calculated, \(\boldsymbol{\bar{\beta}}_{i}^{s}=(1/100)\sum_{j=1}^{100}\boldsymbol{\hat{\beta}}_{i,j}^{s}\). That vector, \(\boldsymbol{\bar{\beta}}_{i}^{s}\), is used as the feature importance of individual \(i\). This process is applied to all the individuals in the test dataset. Therefore, a set of coefficients \(\{\boldsymbol{\bar{\beta}}_{1}^{s},\ldots,\boldsymbol{\bar{\beta}}_{n_{t}}^{s}\}\) is obtained, where \(n_{t}\) is the total number of individuals in the test dataset. This set of coefficients is used in the analysis that follows. Note that \(n_{t}\) is equal to 17 for UDCA, 21 for LUNG, and 14 for Veter.
Table 5 shows the value of the c-index for the different models. It can be seen that, for all the datasets, the c-index of the deep learning models (i.e., DeepSurv and DeepHit) is 0.5 or close to this value, which is what one would obtain if a random model were making the decisions. An explanation for such a value lies in the number of individuals: the sample size of the datasets is small relative to the number of parameters of those models. Figures 5 to 7 depict the feature importance for each model and dataset. The number of points used to obtain each of the boxen plots depicted in these figures is equal to the number of individuals in \(\mathbf{\tilde{D}}^{test}\). For each figure, the set of SurvLIME coefficients used to produce it is \(\{\boldsymbol{\bar{\beta}}_{1}^{s},\ldots,\boldsymbol{\bar{\beta}}_{n_{t}}^{s}\}\).
As the value of the c-index is so low for DeepSurv and DeepHit, we do not show the feature importance for those models in this section. However, in Section 4.3 we use simulated data in order to train deep learning models with an acceptable c-index and show their feature importance.
Figure 5: Feature importance for the UDCA dataset. The input parameter with_colour is set to false.
Figure 6: Feature importance for the LUNG dataset. The input parameter with_colour is set to false.
Figure 7: Feature importance for the Veteran dataset. The input parameter with_colour is set to false.
According to Table 1, the higher the value of bili, the higher the risk of experiencing the event. However, according to Figure 5, the higher the value of bili, the lower the risk of experiencing the event. A possible explanation for this anomaly could be that the bili feature correlates with the riskscore feature: the Pearson correlation coefficient between them is equal to 0.69. The Cox Proportional Hazards Model is very sensitive to this phenomenon.
Out of all the models, the Cox Proportional Hazards Model is the only one whose coefficients can be directly compared with SurvLIME's coefficients. Table 6 contains both sets of coefficients: the middle column shows the coefficients of the Cox Proportional Hazards Model and the right column shows the median values of the SurvLIME coefficients when explaining that model. Note that the medians are taken over the set \(\{\bar{\boldsymbol{\beta}}_{1}^{s},\ldots,\bar{\boldsymbol{\beta}}_{n_{t}}^{s}\}\); therefore, they are median values of mean values, since each vector \(\bar{\boldsymbol{\beta}}_{j}^{s}\) is the mean vector across all the simulations. It can be seen that both sets of coefficients in Table 6 are close.
With regard to the LUNG dataset, the feature importance is depicted in Figure 6. For the Cox Proportional Hazards Model, the most important feature is ph.ecog. According to the model, the category that increases the CHF the most is 2 (ph.ecog_2), followed by category 1 (ph.ecog_1) and then by category 0 (the reference category). This is concordant with the values displayed in Table 2.
On the other hand, for the other two models, the most important feature is age: the older an individual is, the higher the value of the CHF. The results shown in Table 2 point in the same direction: the older an individual is, the higher the probability of experiencing the event.
Table 7 contains the coefficients of the Cox Proportional Hazards Model and the median values of the SurvLIME coefficients when it explains that model. The median values are calculated in the same way as for the UDCA dataset. Note that both sets of coefficients are close.
Finally, Figure 7 shows the feature importance for each model. The three models consider the karno feature to be the most important. According to the models, the higher the value of this feature, the lower the CHF. This is aligned with what is shown in Table 3.
| Feature | Cox | SurvLIME |
| --- | --- | --- |
| riskscore | 2.4397 | 1.6110 |
| stage | -0.0264 | -0.0392 |
| trt | -0.6480 | -0.3937 |
| bili | -1.7014 | -1.0954 |

Table 6: Coefficients of the Cox Proportional Hazards Model (middle column) and median values of the SurvLIME coefficients (right column) for the UDCA dataset.
Table 8 contains the coefficients of the Cox Proportional Hazards Model and the median values of the SurvLIME coefficients when it explains this model. As for the UDCA and LUNG datasets, both sets of coefficients are close.
To conclude this section, we have seen that our implementation recovers the coefficients when the machine learning model is the Cox Proportional Hazards Model.
### Simulated data and deep learning models
As shown in Table 5, DeepSurv and DeepHit did not perform better than a random model on any of the presented datasets. To show that our implementation of the SurvLIME algorithm is able to obtain feature importance for deep learning models, we make use of simulated data. Concretely, the data generating process is the same as the one used for set 1 in Section 4.1.
In order to train the deep learning models, we follow the same procedure as in Section 4.2: 90% of the individuals are used to train the models and 10% are used to obtain the c-index as well as to obtain feature importance.
| Feature | Cox | SurvLIME |
| --- | --- | --- |
| ph.ecog_2 | 0.6678 | 0.6117 |
| ph.ecog_1 | 0.4419 | 0.4049 |
| age | 0.1551 | 0.1422 |
| ph.karno | 0.3206 | 0.2939 |
| pat.karno | -0.1805 | -0.1654 |
| wt.loss | -0.1491 | -0.1367 |
| sex | -0.2991 | -0.2742 |

Table 7: Coefficients of the Cox Proportional Hazards Model (middle column) and median values of the SurvLIME coefficients (right column) for the LUNG dataset.
| Feature | Cox | SurvLIME |
| --- | --- | --- |
| trt | 0.0979 | 0.0569 |
| prior | -0.0107 | -0.0138 |
| diagtime | -0.0166 | -0.0088 |
| age | -0.0454 | -0.0253 |
| celltype_squamous | -0.5197 | -0.3690 |
| celltype_smallcell | -0.0557 | -0.0461 |
| celltype_large | -0.3110 | -0.2278 |
| karno | -0.7381 | -0.5251 |

Table 8: Coefficients of the Cox Proportional Hazards Model (middle column) and median values of the SurvLIME coefficients (right column) for the Veteran dataset.
Table 9 shows that both models have an acceptable predictive capacity on the simulated data. Using the same Monte-Carlo strategy, 100 different simulations are computed for each of the 100 test individuals. The 100 mean values, \(\{\bar{\boldsymbol{\beta}}_{1}^{s},\ldots,\bar{\boldsymbol{\beta}}_{100}^{s}\}\), computed across all the simulations are shown in Figure 8. It can be seen that the only features which deviate significantly from 0 are features two and three. This is aligned with the true coefficients, as shown in Table 10. In order to produce this table, we use the median values of the SurvLIME coefficients, i.e., the median across the set \(\{\bar{\boldsymbol{\beta}}_{1}^{s},\ldots,\bar{\boldsymbol{\beta}}_{100}^{s}\}\). We omit the SurvLIME coefficients for DeepHit since the values we obtained are very similar to those of DeepSurv.
## 5 Conclusions
In this paper, **SurvLIMEpy** has been introduced in the form of a Python library. To the best of our knowledge, this is the first module that tackles the problem of model explainability for time-to-event data in the Python programming language.
We have successfully demonstrated the validity of our implementation of the SurvLIME algorithm through a series of experiments with simulated and real datasets. Furthermore, we also grant flexibility to the algorithm by allowing users to adjust some of its internal parameters.
Finally, a future research line would take into account how the feature importance evolves over time and incorporate this into **SurvLIMEpy**. Special care must be taken, as the computational cost would increase significantly.
## Acknowledgments
This research was supported by the Spanish Research Agency (AEI) under projects PID2020-116294GB-I00 and PID2020-116907RB-I00 of the call MCIN/AEI/10.13039/501100011033, the project 718/C/2019 funded by Fundació La Marató de TV3, and the grant 2020 FI SDUR 306 funded by AGAUR.
|
2308.05641
|
The White Dwarf Mass-Orbital Period Relation Under Wind Mass Loss
|
Helium white dwarfs (HeWDs) are thought to form from low-mass red giant stars
experiencing binary interaction. Because the helium core mass of a red giant
star is closely related to the stellar radius, there exists a well-known relation
between the orbital period ($P_{\rm orb}$) and the mass ($M_{\rm WD}$) of the
HeWDs, which is almost independent of the type of the companion star.
Traditional derivation of the $M_{\rm WD}$-$P_{\rm orb}$ relation generally
neglected the effect of wind mass loss from the red giants, while observations
show that wind mass loss from red giants in binary systems is systematically
higher than that from isolated stars. In this work, we calculate binary
evolution with tidally enhanced stellar wind (TEW) and find that it causes
significant scatter in the traditional $M_{\rm WD}$-$P_{\rm orb}$ relation.
The TEW can prevent the red giants from overflowing their Roche lobes and slow
down the growth of the helium core, leaving a lower-mass HeWD for a given orbital
period. This scenario may account for some of the HeWD binaries that deviate
from the traditional $M_{\rm WD}$-$P_{\rm orb}$ relation. However, we point out
that observations of more HeWD binaries in wide orbits are needed to test the
TEW model and to constrain the enhanced wind factor.
|
Shi-Jie Gao, Xiang-Dong Li
|
2023-08-10T15:33:14Z
|
http://arxiv.org/abs/2308.05641v1
|
# The White Dwarf Mass-Orbital Period Relation Under Wind Mass Loss
###### Abstract
Helium white dwarfs (HeWDs) are thought to form from low-mass red giant stars experiencing binary interaction. Because the helium core mass of a red giant star is closely related to the stellar radius, there exists a well-known relation between the orbital period (\(P_{\rm orb}\)) and the mass (\(M_{\rm WD}\)) of the HeWDs, which is almost independent of the type of the companion star. Traditional derivations of the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation generally neglected the effect of wind mass loss from the red giants, while observations show that wind mass loss from red giants in binary systems is systematically higher than that from isolated stars. In this work, we calculate binary evolution with tidally enhanced stellar wind (TEW) and find that it causes significant scatter in the traditional \(M_{\rm WD}\)-\(P_{\rm orb}\) relation. The TEW can prevent the red giants from overflowing their Roche lobes and slow down the growth of the helium core, leaving a lower-mass HeWD for a given orbital period. This scenario may account for some of the HeWD binaries that deviate from the traditional \(M_{\rm WD}\)-\(P_{\rm orb}\) relation. However, we point out that observations of more HeWD binaries in wide orbits are needed to test the TEW model and to constrain the enhanced wind factor.
keywords: binaries: close - stars: evolution - stars: winds - stars: pulsars - stars: white dwarfs
## 1 Introduction
Single low- and intermediate-mass stars generally evolve to be carbon-oxygen white dwarfs (WDs). However, stars with masses \(\lesssim 2.3\) M\({}_{\odot}\) in close binaries can lose their envelopes through Roche-lobe overflow (RLOF) and finally leave helium white dwarfs (HeWDs). During the red giant branch (RGB), the mass of the core and the radius \(R\) of the star are hardly dependent on the mass of the hydrogen-rich envelope\(^{1}\) (Refsdal and Weigert, 1971; Kippenhahn, 1981; Webbink et al., 1983; de Kool et al., 1986; Han, 1998; Kippenhahn et al., 2012). On the other hand, RLOF mass transfer requires that the donor radius \(R\) is always close to its Roche lobe (RL) radius \(R_{\rm L}\), which is determined by the orbital separation and the mass ratio of the components (Eggleton, 1983). Consequently, there is a specific correlation between the orbital periods \(P_{\rm orb}\) and the masses \(M_{\rm WD}\) of the HeWDs in post-mass-transfer binaries (Joss et al., 1987; Rappaport et al., 1995; Tauris and Savonije, 1999). Previous investigations showed that opacity, mixing length, convective overshooting and metallicity can influence the size of RGB stars, and hence the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation (e.g., Rappaport et al., 1995; Tauris and Savonije, 1999; Chen et al., 2013; Istrate et al., 2016), while the mass transfer efficiency, the masses of the HeWD's progenitor and its companion, as well as the angular momentum loss mechanisms have insignificant effects on the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation (e.g., De Vito and Benvenuto, 2010; Lin et al., 2011; Chen et al., 2013; Jia and Li, 2014).
Footnote 1: The degenerate core is highly concentrated and the pressure drops dramatically within a thin burning shell above the core surface. The extended envelope is nearly weightless and has no influence on the burning shell. So the stellar luminosity, effective temperature and stellar radius are mainly dependent on core mass. See also Chapter 33 of Kippenhahn et al. (2012).
Observations of binary millisecond pulsars (MSPs) with HeWD companions are broadly consistent with the \(M_{\rm WD}-P_{\rm orb}\) relation. However, some of the systems seem to have HeWD masses higher than expected at given \(P_{\rm orb}\) (Tauris and Savonije, 1999; Shao and Li, 2012). Moreover, a similar feature exists in other types of HeWD binaries with non-degenerate companions, e.g., main-sequence (MS) stars\(^{2}\) (see section 3), indicating that the inconsistency has nothing to do with the nature of the companion stars, but should be related to the evolution of the HeWD progenitors.
Footnote 2: MS stars in MS+HeWD binaries were the accretors at the first mass transfer phase, during which the progenitor of the HeWD overflowed its RL. After the mass transfer, a rejuvenated MS star and a HeWD were left in the system. So, one can also expect the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation in MS+HeWD binaries.
Compared with mass loss via RLOF, wind mass loss from RGB stars is usually neglected in previous works when deriving the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation. However, there is observational evidence indicating that the wind loss rate of RGB stars is systematically higher in binary systems than in single RGB stars (e.g., Kenyon et al., 1988; Seaquist et al., 1993; Mikołajewska, 2012). For example, Zamanov et al. (2008) showed that symbiotic giants rotate on average more than twice as fast as field red giants and, as a result of the rapid rotation, have on average \(\sim 10\) times larger mass loss rates than normal red giants. There is also independent evidence of the wind focusing towards the orbital plane in the symbiotic star EG And (Shagatova et al., 2021). The bipolarity of the recurrent nova in the symbiotic star RS Ophiuchi may result from the enhanced wind in the equatorial disc (O'Brien et al., 2006). Some planetary nebulae, for example, the Southern Crab (Corradi et al., 2001), show
an asymmetric morphology which may result from the enhanced wind in the equatorial disc. Observations of GX 1+4, the prototype of symbiotic X-ray binaries, indicate a mass loss rate \(\sim 10^{-6}\) M\({}_{\odot}\) yr\({}^{-1}\) of the giant star (Chakrabarty et al., 1998), significantly higher than predicted by the mass-loss law of Reimers (1975). Hence, it is necessary to explore the influence of wind mass loss on the evolution of RGB binaries and the resulting \(M_{\rm WD}\)-\(P_{\rm orb}\) relation.
This paper is organised as follows. In section 2, we introduce the physical considerations and binary evolution model. We present our results in section 3 and compare with observations in section 4. Finally, we discuss our results and present our conclusions in section 5.
## 2 Methods and calculations
We use the state-of-the-art stellar evolution code MESA\(^{3}\) (version 22.11.1; Paxton et al., 2011, 2013, 2015, 2018, 2019; Jermyn et al., 2023) to evolve binary systems composed of a zero-age MS star (the progenitor of the HeWD, i.e. the donor) with mass \(M_{\rm d}\) and a companion (the accretor) with mass \(M_{\rm a}\). When the donor evolves to be an RGB star, we adopt the stellar wind loss prescription of Reimers (1975), i.e.,
Footnote 3: https://github.com/MESAHub/mesa
\[\dot{M}_{\rm w}=-4\times 10^{-8}\,\eta_{\rm R}\left(\frac{M_{\rm d}}{1\ {\rm M}_{\odot}}\right)^{-1}\left(\frac{L}{10^{3}\,{\rm L}_{\odot}}\right)\left(\frac{R}{10^{2}\,{\rm R}_{\odot}}\right)\ {\rm M}_{\odot}\ {\rm yr}^{-1}, \tag{1}\]
where \(L\) and \(R\) are the luminosity and radius of the donor star, respectively, and \(\eta_{\rm R}\) is a scaling factor, usually taken to be 0.5. The Reimers formula is valid before the donor reaches the asymptotic giant branch (AGB), i.e., while the central helium abundance by mass is still \(\geq 10^{-4}\). Besides, Tout & Eggleton (1988a) suggested that tidal interactions and/or magnetic activity in binaries may enhance the wind mass-loss rate of cool subgiants/giants by a factor
\[\eta_{\rm T}=1+B_{\rm w}\times{\rm min}\left[\left(\frac{R}{R_{\rm L}}\right) ^{6},\ \frac{1}{2^{6}}\right], \tag{2}\]
where \(B_{\rm w}\) is an adjustable parameter, usually taken to be \(\sim 10^{3}\) - \(10^{4}\) to be compatible with observations of RS CVn binaries, barium and CH (methylidyne) stars, and Type Ia supernovae (e.g., Tout & Eggleton, 1988a; Han et al., 1995; Chen et al., 2011; Meng & Han, 2016; Wu et al., 2021). The factor \((R/R_{\rm L})^{6}\) comes from the tidal friction torque formula (Zahn, 1977; Campbell & Papaloizou, 1983). In the tidally enhanced stellar wind (TEW) model, the presence of a companion enhances the mass-loss rate regardless of its nature. The mechanism of the enhancement, although not physically understood, is rooted in tidal friction and the dynamo activity of the donor star, so it has also been adopted in studies of the evolution of interacting binaries containing a neutron star (NS) or black hole accretor (Regos et al., 1998; Smedley et al., 2017). In our calculation, the TEW prescription is applied when the donor is on the RGB, with a central hydrogen abundance by mass \(<1\times 10^{-6}\).
Mass transfer proceeds through both capture of the stellar wind and RLOF. The wind accretion efficiency \(\beta_{\rm w}\) of the accretor is calculated with the Bondi-Hoyle-Lyttleton prescription (Hoyle & Lyttleton, 1939; Bondi & Hoyle, 1944; Bondi, 1952; Edgar, 2004), and the maximum efficiency is limited to 0.5 to avoid the accretion rate of the accretor exceeding the wind mass-loss rate of the donor (Paxton et al., 2015). The excess wind material (a mass fraction \(1-\beta_{\rm w}\)) is assumed to leave the system carrying the specific angular momentum of the donor (Paxton et al., 2015). For mass transfer via RLOF, we adopt the Kolb scheme (Kolb & Ritter, 1990). Different from the classical RLOF scheme in Eggleton's STARS code (e.g., Tauris & Savonije, 1999), which requires \(R\geq R_{\rm L}\), here mass transfer can proceed even when \(R<R_{\rm L}\), because the donor stars usually have an extended atmosphere, and the pressure scale height at the stellar surface is taken into account to obtain the RLOF rate. The Kolb scheme leads to a lighter HeWD than the classical RLOF scheme for a given orbital period (Zhang et al., 2021). We assume that half of the transferred mass via RLOF is accreted by the accretor and the other half is ejected from the binary, probably in the form of disk winds and outflows (e.g., Podsiadlowski et al., 2002), taking away the specific angular momentum of the binary. In the meantime, the accretion rate is limited by the Eddington accretion rate\(^{4}\) (\(\sim 4\times 10^{-8}\) M\({}_{\odot}\) yr\({}^{-1}\)) for a 1.4 M\({}_{\odot}\) NS accretor or the thermal-timescale mass accretion rate (\(\sim 10^{-7}\) M\({}_{\odot}\) yr\({}^{-1}\)) for a 1 M\({}_{\odot}\) MS accretor (Hurley et al., 2002). We find that the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation is insensitive to the value of the mass transfer efficiency.
Footnote 4: The Eddington accretion rate is defined as the accretion rate at which the radiation pressure from the accreted material can be balanced by the inward gravitational force (Frank et al., 2002).
We also take into account angular momentum loss caused by magnetic braking (Rappaport et al., 1983) and gravitational wave radiation (Landau & Lifshitz, 1971). Magnetic braking only works when the donor has a convective envelope and a radiative core simultaneously, and the braking index \(\gamma_{\rm mb}\) is set to 3.0 (Paxton et al., 2015). The competition between mass transfer and magnetic braking results in a bifurcation period (\(\sim 1-2\) d) that separates binaries evolving toward wider orbits from those evolving toward tighter orbits (Pylyser & Savonije, 1988). The \(M_{\rm WD}\)-\(P_{\rm orb}\) correlation is valid for diverging binaries with \(P_{\rm orb}\geq 2\) d, and there is large scatter in the HeWD masses when \(P_{\rm orb}\sim 2\) d (Istrate et al., 2014). For binaries with orbital periods \(\gtrsim 10\) d, mass loss dominates the angular momentum loss, and both magnetic braking and gravitational wave radiation can be ignored.
## 3 Results
We have evolved a grid of binaries consisting of an MS donor with initial mass \(M_{\rm d,ini}\) of 1.2, 1.4, 1.6 and 1.8 M\({}_{\odot}\), and an NS or MS accretor with initial mass \(M_{\rm a,ini}\) of 1.2, 1.5 and 1.8 M\({}_{\odot}\). The lower limit of \(M_{\rm d,ini}\) is chosen to ensure that the MS lifetime is shorter than 12 Gyr. The initial orbital period \(P_{\rm orb,ini}\) in logarithm lies in the range \([-0.5,2.6]\), in steps of 0.1 dex between \(-0.5\) and 1.1, and of 0.2 dex between 1.2 and 2.6. The orbits are assumed to be circular. The metallicity of the donor star is assumed to be \(Z=0.02\). In addition, we adopt the exponential overshooting model for the stellar core with an efficiency factor \(f_{\rm ov}=0.0016\) (Herwig, 2000).
| Models | \(\eta_{\rm R}\) | \(B_{\rm w}\) |
| --- | --- | --- |
| NoWind | 0 | – |
| Wind | 0.5 | 0 |
| TEW3 | 0.5 | \(1\times 10^{3}\) |
| TEW4 | 0.5 | \(1\times 10^{4}\) |
| TEW5 | 0.5 | \(1\times 10^{5}\) |

Table 1: Models in our calculations.
We exclude binaries with RLOF at the beginning of the evolution, with the donor star overflowing the \(L_{2}\) Lagrange point, or with the mass transfer rate via RLOF exceeding \(1\times 10^{-4}\) M\({}_{\odot}\) yr\({}^{-1}\). We only select evolutionary tracks leaving a HeWD remnant. We adopt MESA's default definition of the boundary between the helium-rich core and the hydrogen-rich envelope, namely the outermost location where the mass fraction of \({}^{1}\)H is \(\leq 0.1\) and the mass fraction of \({}^{4}\)He is \(\geq 0.1\). To examine the effects of the TEW, we construct five models of stellar winds: no stellar wind from the donor (_NoWind_), normal wind with \(B_{\rm w}=0\) (_Wind_), and TEW with \(B_{\rm w}=10^{3}\) (_TEW3_), \(10^{4}\) (_TEW4_) and \(10^{5}\) (_TEW5_). The parameters of the five models are summarised in Table 1.
Figure 1 shows the Hertzsprung-Russell diagram of the donor in the different models. Here, the accretors are assumed to be NSs. In the left and right panels, (\(M_{\rm d,ini}\), \(M_{\rm a,ini}\), \(P_{\rm orb,ini}\)) = (1.4 M\({}_{\odot}\), 1.5 M\({}_{\odot}\), 63.10 d) and (1.4 M\({}_{\odot}\), 1.5 M\({}_{\odot}\), 0.79 d), respectively. The gray solid, blue solid, blue dash-dotted, blue dashed and blue dotted lines correspond to Models _NoWind_, _Wind_, _TEW3_, _TEW4_ and _TEW5_, respectively. The red circles and crosses denote the onset and end of RLOF, respectively. The red dashed line depicts the evolution of an isolated 1.4 M\({}_{\odot}\) star (with stellar parameters as in Model _Wind_) for comparison.
Figure 1: Hertzsprung-Russell Diagram of the donor in different models, with (\(M_{\rm d,ini}\), \(M_{\rm a,ini}\), \(P_{\rm orb,ini}\))=(1.4 M\({}_{\odot}\), 1.5 M\({}_{\odot}\), 63.10 d) and (1.4 M\({}_{\odot}\), 1.5 M\({}_{\odot}\), 0.79 d) for the left and right panels, respectively. Here, accretors are assumed to be NSs. The gray solid, blue solid, blue dash-dotted, blue dashed and blue dotted lines correspond to Models _NoWind, Wind, TEW3, TEW4_ and _TEW5_, respectively. The red dashed line in each panel depicts the evolution of an isolated 1.4 M\({}_{\odot}\) star. The red circles and crosses denote the onset and end of RLOF, respectively.
Figure 2: The ratio of the donor radius and its RL radius as a function of the donor mass. The red dashed line indicates \(R=R_{\rm L}\). The red circles and crosses denote the onset and end of RLOF, respectively.
both cases decreases with increasing wind mass loss. Since the core mass of an RGB star increases with increasing luminosity (e.g., Joss et al., 1987; Kippenhahn et al., 2012), a less massive HeWD is left with larger \(B_{\rm w}\).
Figure 2 shows the evolution of \(R/R_{\rm L}\) as a function of the donor mass \(M_{\rm d}\). The red dashed line indicates \(R=R_{\rm L}\), and other line types and symbols have the same meaning as in Figure 1. The dips in the curves are mainly caused by the dredge-up process or hydrogen shell flashes (Istrate et al., 2014, 2016). It is seen that the donor's radius does not always equal its RL radius (see the red dashed line). This is because in the Kolb scheme RLOF can occur even if \(R\) does not exceed \(R_{\rm L}\). In the left panel, in Models _NoWind_, _Wind_ and _TEW3_, RLOF starts when the donor is on the RGB, while in Models _TEW4_ and _TEW5_ there is no RLOF, because the wind mass loss drives the orbital evolution and the donor cannot catch up with the expansion of the RL. In the right panel, RLOF commences when the donor is on the late MS in all cases. It can be seen that the gap between \(R\) and \(R_{\rm L}\) becomes larger with increasing \(B_{\rm w}\), leaving a lower-mass HeWD compared with the case of _NoWind_. This can also be seen in Figure 3, which shows the relation between the donor mass and the helium core mass (\(M_{\rm c}\)). Compared with Model _NoWind_, wind mass loss can significantly limit the core growth. Moreover, TEW
Figure 4: The orbital period as a function of the donor mass. The red solid line indicates the \(M_{\rm WD}-P_{\rm orb}\) relations of Lin et al. (2011). The red circles and crosses denote the onset and end of RLOF, respectively.
Figure 3: The helium core mass as a function of the donor mass. The red dashed line indicates \(M_{\rm d}=M_{\rm c}\). The red circles and crosses denote the onset and end of RLOF, respectively.
can continue stripping the donor mass after the RLOF, resulting in a lower-mass HeWD. For example, when (\(M_{\rm d,ini}\), \(M_{\rm a,ini}\), \(P_{\rm orb,ini}\)) = (1.4 M\({}_{\odot}\), 1.5 M\({}_{\odot}\), 63.10 d) and (1.4 M\({}_{\odot}\), 1.5 M\({}_{\odot}\), 0.79 d), we find the final WD masses to be \(\sim\) 0.27 M\({}_{\odot}\) and \(\sim\) 0.17 M\({}_{\odot}\) in Model _TEW5_, and \(\sim\) 0.4 M\({}_{\odot}\) and \(\sim\) 0.24 M\({}_{\odot}\) in Model _NoWind_, respectively.
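The five wind models differ only in the mass-loss prescription. The sketch below assumes the usual Reimers law with efficiency \(\eta_{\rm R}\) together with the Tout & Eggleton tidal enhancement; since the paper's Equations (1) and (2) defining the wind are not part of this excerpt, the exact form used there may differ in detail:

```python
def mdot_wind(L, R, M, R_L, eta_R=0.5, B_w=0.0):
    """Reimers wind with a Tout & Eggleton (1988)-style tidal enhancement.

    A sketch under stated assumptions, not the paper's own equations.
    L, R, M in solar units; R_L is the Roche-lobe radius of the donor.
    Returns the mass-loss rate in Msun/yr (positive = mass lost).
    """
    reimers = 4.0e-13 * eta_R * L * R / M            # classic Reimers law
    # Enhancement saturates once the donor fills half its Roche lobe:
    enhancement = 1.0 + B_w * min((R / R_L) ** 6, 0.5 ** 6)
    return reimers * enhancement

# e.g. an RGB donor close to Roche-lobe filling, with B_w as in Model TEW4:
print(mdot_wind(L=500.0, R=30.0, M=1.0, R_L=35.0, B_w=1e4))
```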
In Figure 4, we show the ten evolutionary tracks in the donor mass-orbital period (\(M_{\rm d}\)-\(P_{\rm orb}\)) plane. The red solid line in each panel shows the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation calculated by Lin et al. (2011). We see that the final states of the tracks lie to the left of the red solid line, that is, the final HeWDs become less massive at a given orbital period with increasing wind mass loss.
The \(M_{\rm WD}\)-\(P_{\rm orb}\) relations of different models are compared in Figure 5. The dots, pluses, stars, triangles and crosses represent the results in Models _NoWind_, _Wind_, _TEW3_, _TEW4_ and _TEW5_, respectively. In the left panel, the gray, purple, blue and green markers correspond to \(M_{\rm d,ini}\) = 1.2, 1.4, 1.6 and 1.8 M\({}_{\odot}\), respectively. In the right panel, the gray, purple and blue markers correspond to \(M_{\rm a,ini}\) = 1.2, 1.5 and 1.8 M\({}_{\odot}\), respectively. Because of the weak dependence of the Roche-lobe radius on the mass ratio in our cases, the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation is also weakly dependent on the initial accretor mass. The relation obtained by Lin et al. (2011) is shown by the red dashed line. The difference between our results in Model _NoWind_ and Lin et al. (2011) is caused by the different RLOF schemes used (see discussion in Zhang et al. 2021). The red solid line in Figure 5 denotes the best-fit curve for Model _Wind_ in the form5
Footnote 5: The fit was obtained with the least-squares method implemented in SciPy (Virtanen et al. 2020).
\[P_{\rm orb}({\rm d})=\frac{249831.9489\,m^{9.3420}}{\left(0.00128+1.5048\,m^{7.9567}+1.4975\,m^{7.9577}\right)^{0.5}}, \tag{3}\]
where \(m=M_{\rm WD}/{\rm M}_{\odot}\). Note that there is a break between binaries with \(M_{\rm WD}\lesssim 0.21\) M\({}_{\odot}\) and \(\gtrsim 0.21\) M\({}_{\odot}\) in Models _NoWind_ and _Wind_, because the progenitors of HeWDs with mass \(\lesssim 0.21\) M\({}_{\odot}\) do not have a well-developed degenerate helium core at the onset of mass transfer (e.g., Ergma et al. 1998; Nelson et al. 2004; Jia & Li 2014; Istrate et al. 2016; Chen et al. 2017). Although our fit is derived for HeWDs with \(M_{\rm WD}>0.21\) M\({}_{\odot}\), it also provides a good description of the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation for \(M_{\rm WD}<0.21\) M\({}_{\odot}\).
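Equation (3) is straightforward to evaluate numerically; a short sketch (reading the final exponent in the denominator as 0.5):

```python
import numpy as np

def P_orb_fit(m_wd):
    """Best-fit M_WD-P_orb relation for Model Wind, Equation (3).
    m_wd in Msun; returns P_orb in days."""
    m = np.asarray(m_wd, dtype=float)
    return (249831.9489 * m**9.3420
            / (0.00128 + 1.5048 * m**7.9567 + 1.4975 * m**7.9577)**0.5)

for m in (0.20, 0.25, 0.30, 0.40):
    print(f"M_WD = {m:.2f} Msun -> P_orb = {P_orb_fit(m):.1f} d")
```

Note that the evaluated curve reproduces the behavior discussed in the text, e.g. \(P_{\rm orb}\sim 2\) d near the \(M_{\rm WD}\approx 0.21\) M\({}_{\odot}\) break.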
For Models _TEW3_, _TEW4_ and _TEW5_, \(M_{\rm WD}\) and \(P_{\rm orb}\) do not follow a simple relation as for Models _NoWind_ and _Wind_. The deviation becomes larger with increasing wind mass loss, and the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation moves up and to the left of the red line. In these cases the donors do not always fill their RLs, so the descendant HeWD mass depends on both the initial orbital period and the donor mass - \(M_{\rm WD}\) increases with increasing \(M_{\rm d,ini}\). The final orbital periods in Model _TEW5_ are about one order of magnitude longer than those in Model _NoWind_ for a given \(M_{\rm WD}\), and there is no HeWD binary formed with \(P_{\rm orb}\lesssim 2\) d in Model _TEW5_.
## 4 Comparison with observations
Most HeWD+NS binaries are binary MSPs with spin periods \(P_{\rm s}\lesssim 20\) ms. The short spin periods are thought to originate from the accretion of mass and angular momentum by the NSs during their previous low/intermediate-mass X-ray binary evolution (Bhattacharya & van den Heuvel 1991, for a review).
We collect the data of HeWD+NS binaries from the ATNF pulsar catalogue (Manchester et al. 2005)6. We plot the measured masses and orbital periods of HeWD+NS binaries in Figure 6. They generally have a small but non-zero eccentricity, so the observed orbital periods should be reduced by a factor of \((1-e)^{3/2}\) (where \(e\) is the eccentricity) when compared with the calculated \(M_{\rm WD}\)-\(P_{\rm orb}\) relation. The crosses represent the binaries with the WD masses calculated by assuming the NS masses to be 1.35 M\({}_{\odot}\) and the orbital inclination angles to be 60\({}^{\circ}\) (the light blue and light red ones correspond to spin periods \(<20\) ms and \(>20\) ms, respectively). The lower and upper limits of the WD masses are obtained by assuming the orbital inclination angle to be 90\({}^{\circ}\) and 26\({}^{\circ}\), respectively, which together cover the 90% probability range for randomly oriented orbits. The coloured stars indicate the binary pulsars with more accurate mass measurements, obtained, e.g., by measuring the orbital decay caused by gravitational wave radiation (Antoniadis et al. 2013) or the Shapiro delay in pulsar timing (Ng et al. 2020). The parameters and references of these binaries are listed in Table 2.
Footnote 6: [https://www.atnf.csiro.au/people/pulsar/psrcat/](https://www.atnf.csiro.au/people/pulsar/psrcat/)
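The inclination-dependent WD mass limits quoted above follow from inverting the binary mass function; a minimal sketch (the mass-function value below is illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import brentq

def wd_mass(f_mass, m_ns=1.35, incl_deg=60.0):
    """Invert the binary mass function f = (M_WD sin i)^3 / (M_NS + M_WD)^2
    for M_WD (all masses in Msun). A sketch of the standard procedure."""
    sini = np.sin(np.radians(incl_deg))
    g = lambda m_wd: (m_wd * sini) ** 3 / (m_ns + m_wd) ** 2 - f_mass
    return brentq(g, 1e-4, 10.0)   # bracket the unique positive root

f = 0.0030   # hypothetical mass function [Msun], for illustration only
for i in (90.0, 60.0, 26.0):
    print(f"i = {i:4.0f} deg -> M_WD = {wd_mass(f, incl_deg=i):.3f} Msun")
```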
From Figure 6, we find that quite a few HeWD+NS binaries are located to the left of the traditional \(M_{\rm WD}\)-\(P_{\rm orb}\) relation depicted by the red dashed line, that is, the observed WD masses are less than expected at a given orbital period, although the error bars of the WD masses are quite large. Note that small values of the orbital inclination and a large NS mass can increase the inferred HeWD mass for a given observed mass function. However, Tauris (1996) pointed out that there does not seem to be any observational selection effect favouring small inclination angles for HeWD+NS binaries. Shao & Li (2012) accordingly suggested that, to explain the mismatch between the observations and the theoretical \(M_{\rm WD}\)-\(P_{\rm orb}\) relation, an initially massive NS (\(\sim 2\) M\({}_{\odot}\)) may be required for part of the binaries7. Considering the fact that the mass measurements of the recycled pulsars reveal a mean mass of 1.48 M\({}_{\odot}\) and a dispersion of 0.2 M\({}_{\odot}\) (Ozel et al. 2012), it is unclear whether and why the NSs in specific binaries were born more massive than others.
Footnote 7: The reason is that accretion is likely inefficient in wide binaries due to the super-Eddington mass transfer and the thermal and viscous instability in the accretion disc (Lasota 2001; Hameury 2020), so the NSs have accreted little mass during the mass transfer stage.
Figure 6 shows that models with wind mass loss seem to offer a more reasonable explanation for the least massive HeWD binaries (see also Smedley et al. 2017). For example, the masses of HeWDs in PSRs J1713+0747, J0955-6150 and J1918-0642 are significantly lower than expected from the red dashed curve calculated by both Lin et al. (2011) and Model _NoWind_, but consistent with Model _Wind_. Moreover, PSRs J0614-3329 and J2043+1711 can only be accounted for by the TEW models, suggesting that it is necessary to consider the effect of wind mass loss on the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation8.
Footnote 8: There are a few HeWD binaries located to the right of the red solid/dashed curves. The HeWDs in these binaries could be formed from lower-metallicity stars. Stars with the same initial mass but lower metallicities have smaller stellar radii and form WDs with shorter orbital periods (Rappaport et al. 1995; Tauris & Savonije 1999; Jia & Li 2014; Istrate et al. 2016).
An issue related to the TEW models is whether the NS spins can be accelerated to milliseconds with wind accretion. Figure 2 shows that in Models _TEW4_ and _TEW5_, the RGB stars do not always fill their RLs, thus the NSs can only accrete mass and angular momentum from the stellar wind in this situation. The relatively low accretion efficiency in wind-fed X-ray binaries makes it hard for the NSs to be efficiently spun up. Figure 7 shows the initial orbital period vs. the final orbital period relation in various models. Only the binaries with RLOF accreted mass \(\gtrsim 0.1\) M\({}_{\odot}\) (which is necessary for the formation of MSPs under disc accretion, Burderi et al. 2005) are plotted. It is evident that, the more wind mass loss, the shorter the final orbital
periods of MSPs. In Models _TEW4_ and _TEW5_, the longest orbital periods for MSPs are around 200 d and 50 d, respectively. They seem to be in contradiction with the observed MSPs demonstrated in Figure 6 and Table 2, especially for Model _TEW5_.
A potential solution to this problem is the wind-RLOF scheme, where the slowly expanding stellar wind fills the RL and is gravitationally focused to the \(L_{1}\) point (Mohamed & Podsiadlowski, 2012; Abate et al., 2013). Compared with the canonical Bondi-Hoyle-Lyttleton wind accretion (Hoyle & Lyttleton, 1939; Bondi & Hoyle, 1944), wind-RLOF may significantly enhance the mass and angular
\begin{table}
\begin{tabular}{l l l l l l l l}
\hline
Name & Type & \(M_{\rm WD}\) [M\({}_{\odot}\)] & \(P_{\rm orb}\) [d] & \(e\) & \(M_{\rm NS}\) [M\({}_{\odot}\)] & \(P_{\rm s}\) [ms] & References \\
\hline
J2016+1948 & HeWD+NS & \(0.45\pm 0.02\) & 635.04 & 0.00147981 & \(1.0\pm 0.5\) & 64.9404 & Gonzalez et al. (2011) \\
J0337+1715b & HeWD+(HeWD+NS) & \(0.41058\pm 0.00040\)
momentum transfer efficiencies, and possibly spin up the NSs to be MSPs. The conditions and mechanisms of wind-RLOF depend on the properties of dust grains in the cool, outer atmosphere of AGB stars. When the dust grains are accelerated by radiation pressure, collisional momentum transfer between dust and gas also drags the surrounding gas outward. Mohamed & Podsiadlowski (2012) and Abate et al. (2013) suggested that wind-RLOF occurs when the dust-formation radius lies beyond the RL radius of the AGB star. It is still uncertain whether RGB stars also have dust-formation regions in their winds (Boyer et al., 2008, 2010; Origlia et al., 2010; McDonald et al., 2011a,b). Whether and how wind-RLOF can alter the spin evolution of NSs needs further investigation and is beyond the scope of this work.
Another possible mechanism that can affect the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation is evaporation of the progenitor of a HeWD by high-energy irradiation from an MSP (van den Heuvel & van Paradijs, 1988; Ruderman et al., 1989). Jia & Li (2016) showed that this can make the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation more scattered, in particular in binaries with \(P_{\rm orb}\lesssim 20\) d. So, it is difficult to account for the HeWD binaries like PSRs J0614-3329 (\(P_{\rm orb}=53.6\) d) and J1713+0747 (\(P_{\rm orb}=67.8\) d) by donor evaporation. Furthermore, some HeWD+MS binaries, in which high-energy irradiation is obviously lacking, also seem to deviate from the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation. To see this in more detail, we compare the data of various types of HeWD binaries with theoretical predictions in Figure 8. Here, we only show binaries with \(M_{\rm a,ini}<M_{\rm d,ini}\) to ensure that the MS donors (the progenitors of HeWDs) evolve faster than the MS accretors. We collect the data of HeWD+MS binaries from the literature and list them in Table 3. Apart from the canonical HeWD+MS binaries we also include EL CVn-type binaries, R CMa-type binaries and stripped
Figure 6: Comparison between observational data and the results obtained from our models in the \(M_{\rm WD}\)-\(P_{\rm orb}\) plane. Similar to Figure 5, the gray markers show the \(M_{\rm WD}\)-\(P_{\rm orb}\) relations of our models. The light blue and the light red crosses indicate the observed binary pulsars with spin periods \(<20\) ms and \(>20\) ms, respectively. The colored stars show the HeWDs in binary pulsars with more accurate mass measurements (see Table 2).
Figure 7: The relation between the initial orbital period \(P_{\rm orb,ini}\) and the final period \(P_{\rm orb,fin}\) in different models with the NS accreted mass via RLOF \(\geq 0.1\rm M_{\odot}\).
red giant (SRG)+MS binaries9. In addition, three HeWD+subgiant binaries and one SRG+subgiant binary are also listed in Table 3. Although the subgiants have evolved away from the MS, the mass transfer process has not yet occurred, so they are also expected to follow the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation.
Footnote 9: Here, EL CVn-type binaries are eclipsing binaries containing an A/F-type MS dwarf star and a low-mass (\(\leq 0.2\) M\({}_{\odot}\)) HeWD precursor (preWD) (Maxted et al., 2014). They are thought to form from stable mass transfer rather than common envelope evolution, because the latter usually leads to merging of the components if the RGB stars have such low-mass cores (Chen et al., 2017). R CMa-type binaries are the progenitors of EL CVn binaries, and have a short orbital period and a low mass ratio among Algol systems (Varricatt & Ashok, 1999; Budding & Butland, 2011; Lehmann et al., 2013; Lee et al., 2016). The SRG+MS binaries in Table 3, selected from _Gaia_ DR3 (El-Badry & Rix, 2022), are at an earlier evolutionary stage than R CMa-type binaries and will also evolve to be HeWD+MS binaries; their donors are close to filling their RLs.
Figure 8 shows that, while the majority of the sample seems to be compatible with the predictions of Models _NoWind_ and _Wind_ (corresponding to the red dashed and solid lines, respectively), there are quite a few outliers located to the left of the lines, which are more consistent with the TEW models. For example, accounting for TYC 8394-1331-1, WASP 1814+6738, TIC 149160359, KIC 8087799, WASP 1628+10, and several anomalous SRG+HeWD binaries requires models with \(B_{\rm w}=10^{3}-10^{4}\). Moreover, HE 0430-2457 and KIC 8145411 have very long orbital periods (771 d and 456 d, respectively). Their HeWD masses are so small that even Model _TEW5_ cannot explain them. These two peculiar binaries may have formed from triple/multiple interactions (Vos et al., 2018; Masuda et al., 2019; Gosnell et al., 2019; Pandey et al., 2021). However, this scenario cannot explain why both KIC 8145411 and HE 0430-2457 have nearly circular orbits10, nor why low-mass HeWDs do not appear to be very rare among long-period binaries (see Masuda et al., 2019). In addition, Khurana et al. (2023) suggested that KIC 8145411 may have formed through binary-binary strong encounters in star clusters, where the binary orbit was significantly widened, but they also pointed out that the formation rate in this channel is rather low, only \(\sim 4\) Myr\({}^{-1}\).
Footnote 10: The probability of falsely rejecting a circular orbit is \(\mathcal{P}=0.59\) for HE 0430–2457 (Vos et al. 2018). Lucy & Sweeney (1971) argued that if \(\mathcal{P}>0.05\), the orbit is effectively circular.
## 5 Discussion and Conclusions
In this work, we investigate the effects of wind mass loss on the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation. We show that enhanced stellar winds significantly influence the formation paths of HeWDs, because wind mass loss continues stripping the envelope of the RGB progenitors of HeWDs or the pre-HeWDs during and after the RLOF phase. These processes can efficiently slow the growth of the degenerate helium core of the RGB stars. The TEW model may explain why some MSP binaries and HeWD+MS binaries possess less massive HeWD companions than expected.
Figure 8: Similar to Figure 6 but for HeWD+MS binaries. Only binaries with \(M_{\rm a,ini}<M_{\rm d,ini}\) are shown. The dots, crosses, stars and triangles represent the HeWD binaries, EL CVn-type binaries, R CMa-type binaries and other preWD+MS/subgiant binaries, respectively.
Tout & Eggleton (1988a) proposed the idea of the TEW on a heuristic basis to account for the RS CVn system Z Her, which was the only reliable system at the time in which a mass-radius paradox certainly existed. Afterwards, this prescription has been widely invoked to interpret the formation and evolution of Algol-like and related binaries (Tout & Eggleton, 1988b; Hanna, 2012; Zhang & Qian, 2013), Barium stars (Han et al., 1995; Jorissen et al., 1998; Jorissen & van Eck, 2000; Karakas et al., 2000; Bonacic Marinovic et al., 2008; Lu et al., 2020) and Carbon-rich extremely metal-poor stars (Lau et al., 2007), the morphology of off-centre planetary nebulae (Soker et al., 1998), Type Ia supernovae from symbiotic stars (Kenyon et al., 1993; Chen et al., 2011; Meng & Han, 2016; Wu et al., 2021), the horizontal branch morphology of globular clusters (Lei et al., 2013a,b, 2014), and long-period eccentric binaries with a barium star (Bonacic Marinovic et al., 2008; Vos et al., 2015), a HeWD (Siess et al., 2014; Merle et al., 2014) or a subdwarf star companion (Vos et al., 2015). However, the mechanism of the TEW is still under debate. Apart from tidal interactions or dynamo activity as suggested by Tout & Eggleton (1988a), the stellar winds of RGB/AGB stars in binary systems might also be enhanced by stellar pulsation (Eggleton, 2002), rotation (Nieuwenhuijzen & de Jager
\begin{table}
\begin{tabular}{l l l l l l l}
\hline
Name & Type & \(M_{\rm WD}\) [M\({}_{\odot}\)] & \(P_{\rm orb}\) [d] & \(e\) & \(M_{\rm MS}\) [M\({}_{\odot}\)] & References \\
\hline
2MASS J1836-5110 & HeWD+subgiant & \(0.40\pm 0.01\) & \(461.48\pm 0.04\) & \(0.028\pm 0.001\) & \(1.38\pm 0.16\) & Parsons et al. (2023) \\
KIC 8145411 & HeWD+MS & \(0.20\pm 0.01\) & 455.83 & 0.143 & \(1.132\pm 0.08\) & Masuda et al. (2019) \\
TYC 8394–1331–1 & HeWD+subgiant & \(0.24\pm 0.01\) & 51.851 & \(0.02\pm 0.02\) & \(1.31\pm 0.12\) & Parsons et al. (2023) \\
TYC 6992–827–1 & HeWD+subgiant & \(0.28\pm 0.01\) & \(41.45\pm 0.01\) & \(0.013\pm 0.06\) & \(1.31\pm 0.14\) & Parsons et al. (2023) \\
RRLYR-02792 & HeWD+subgiant & \(0.261\pm 0.015\) & 15.24340 & 0.0072 & \(1.67\pm 0.06\) & Pietrzynski et al. (2012) \\
KOI 74 & HeWD+MS & \(0.22\pm 0.03\) & 5.18875 & - & \(2.22^{+0.10}_{-0.14}\) & van Kerkwijk et al. (2010) \\
KOI-3818 & HeWD+MS & \(0.20\pm 0.026\) & 3.8170428 & - & \(2.14\pm 0.12\) & Faigler et al. (2015) \\
KIC 2851474 & HeWD+MS & \(0.210\pm 0.018\) & 2.7682925 & - & \(2.34\pm 0.19\) & Faigler et al. (2015) \\
KOI 1224 & HeWD+MS & \(0.22\pm 0.02\) & 2.69802 & - & \(1.59\pm 0.06\) & Breton et al. (2012) \\
KIC 10727668 & HeWD+MS & \(0.189^{+0.01}_{-0.012}\) & 2.30592 & \(0.018^{+0.002}_{-0.007}\) & \(3.03^{+0.17}_{-0.17}\) & Odesses and Lovekin (2022) \\
KIC 9285857 & HeWD+MS & \(0.191\pm 0.019\) & 1.8119579 & - & \(1.94\pm 0.16\) & Faigler et al. (2015) \\
KIC 9164561 & HeWD+MS & \(0.197\pm 0.05\) & 1.267040 & - & \(2.02\pm 0.06\) & Rappaport et al. (2015) \\
KIC 4169521 & HeWD+MS & \(0.210\pm 0.015\) & 1.172555671 & - & \(1.982\pm 0.092\) & Faigler et al. (2015) \\
\hline
KIC 8113154 & EL CVn-type & \(0.26\pm 0.02\) & 2.5868779 & - & \(1.65\pm 0.01\) & Zhang et al. (2019) \\
KIC 10989032 & EL CVn-type & \(0.24\pm 0.02\) & 2.3050976 & - & \(2.64\pm 0.19\) & Zhang et al. (2017) \\
V1224 Cas & EL CVn-type & \(0.19\pm 0.02\) & 2.27537 & - & \(2.16\pm 0.02\) & Wang et al. (2018) \\
WASP 1314-28 & EL CVn-type & \(0.200\pm 0.02\) & 1.88275 & - & \(1.986\pm 0.017\) & Lee et al. (2020) \\
WASP 1814+48 & EL CVn-type & \(0.172\pm 0.005\) & 1.79942 & - & \(1.659\pm 0.048\) & Lee et al. (2022) \\
KIC 8262223 & EL CVn-type & \(0.21\pm 0.01\) & 1.61301476 & - & \(1.96\pm 0.06\) & Guo et al. (2017) \\
TIC 416264037 & EL CVn-type & \(0.18\pm 0.01\) & 1.15991 & - & \(1.70\pm 0.10\) & Wang et al. (2020b) \\
WASP 1625–04 & EL CVn-type & \(0.187\pm 0.002\) & 1.526323 & - & \(1.745\pm 0.013\) & Lee et al. (2022) \\
TIC 149160359 & EL CVn-type & \(0.16\pm 0.01\) & 1.120738 & - & \(1.75\pm 0.10\) & Wang et al. (2020b) \\
KIC 8087799 & EL CVn-type & \(0.16\pm 0.02\) & 0.9262976 & - & \(2.20\pm 0.20\) & Zhang et al. (2017) \\
EL CVn & EL CVn-type & \(0.176\pm 0.004\) & 0.795627 & - & \(1.43\pm 0.027\) & Wang et al. (2020a) \\
WASP 0843–11 & EL CVn-type & \(0.220\pm 0.008\) & 0.792839 & - & \(1.733\pm 0.031\) & Hong et al. (2021) \\
WASP 1628+10 & EL CVn-type & \(0.135\pm 0.02\) & 0.72 & - & \(1.36\pm 0.05\) & Maxted et al. (2014) \\
J0247–25 & EL CVn-type & \(0.187\pm 0.002\) & 0.6678306 & - & \(1.356\pm 0.007\) & Kim et al. (2021) \\
\hline
KIC 12268220 & R CMa-type & \(0.23\pm 0.05\) & 4.421580 & - & \(1.99^{+0.52}_{-0.0}\) & Cui et al. (2020) \\
KIC 7368103 & R CMa-type & \(0.21\pm 0.02\) & 2.1825147 & - & \(1.77\pm 0.19\) & Wang et al. (2019) \\
KIC 8823397 & R CMa-type & \(0.21\pm 0.02\) & 1.5065038 & - & \(2.09\pm 0.17\) & Wang et al. (2019) \\
KIC 6206751 & R CMa-type & \(0.215\pm 0.006\) & 1.24534 & - & \(1.66\pm 0.04\) & Lee and Park (2018) \\
OO Dra & R CMa-type & \(0.187\pm 0.009\) & 1.23838 & - & \(2.031\pm 0.058\) & Lee et al. (2018) \\
R CMa & R CMa-type &
1988; Zamanov et al. 2008; Bear & Soker 2010) and the effective gravity reduced by the presence of a companion star (Frankowski & Tylenda 2001). Thus, it is premature to confidently estimate the magnitude of \(B_{\rm w}\) (see Equations 1 and 2). It likely varies across different types of binaries and different evolutionary stages. Figures 6 and 8 demonstrate that the sample with parameters accurate enough to test the \(M_{\rm WD}\)-\(P_{\rm orb}\) relation is still small. There are not yet enough HeWD binaries to make a statistically significant judgement or to determine under which conditions the TEW model can work.
In addition, since mass loss via the TEW can proceed without RLOF, it may leave the binary in an eccentric orbit. Soker (2000) proposed that the significant eccentricities \(\sim 0.1-0.4\) found in some tidally strongly interacting binaries (e.g., binaries with RGB/AGB companions) are caused by an enhanced mass loss during periastron passages. Bonacic Marinovic et al. (2008) found that the enhanced mass loss of AGB stars at periastron can result in a net eccentricity growth rate that is comparable to the tidal circularisation. Furthermore, Siess et al. (2014) showed that an enhanced-wind mass loss on the RGB can avoid RLOF, and they found that the eccentricity can be preserved and even increased if the initial separation is large enough. Although there are HeWD binaries like KIC 8145411 with relatively large eccentricities, most wide binaries in Tables 2 and 3 are almost circular. Future observations of more wide HeWD binaries are crucial for testing the applicability of the TEW model.
## Acknowledgements
We are grateful to an anonymous referee for helpful comments. This work was supported by the National Key Research and Development Program of China (2021YFA0718500), the Natural Science Foundation of China under grant Nos. 12041301 and 12121003, and Project U1838201 supported by NSFC and CAS. The computations were performed using the facilities at the High-Performance Computing Center of the Collaborative Innovation Center of Advanced Microstructures (Nanjing University, [https://mpc.nju.edu.cn](https://mpc.nju.edu.cn)). Figures in this work were made using Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020) and JupyterLab ([https://jupyter.org](https://jupyter.org)).
## Data availability
All data underlying this article will be shared on reasonable request to the corresponding author.
|
2306.02482
|
Aerial Swarm Defense using Interception and Herding Strategies
|
This paper presents a multi-mode solution to the problem of defending a
circular protected area (target) from a wide range of attacks by swarms of
risk-taking and/or risk-averse attacking agents (attackers). The proposed
multi-mode solution combines two defense strategies, namely: 1) an interception
strategy for a team of defenders to intercept multiple risk-taking attackers
while ensuring that the defenders do not collide with each other, 2) a herding
strategy to herd a swarm of risk-averse attackers to a safe area. In
particular, we develop mixed integer programs (MIPs) and geometry-inspired
heuristics to distribute and assign and/or reassign the defenders to
interception and herding tasks under different spatiotemporal behaviors by the
attackers such as splitting into smaller swarms to evade defenders easily or
high-speed maneuvers by some risk-taking attackers to maximize damage to the
protected area. We provide theoretical as well as numerical comparisons of the
computational costs of these MIPs and the heuristics, and demonstrate the
overall approach in simulations.
|
Vishnu S. Chipade, Dimitra Panagou
|
2023-06-04T21:15:03Z
|
http://arxiv.org/abs/2306.02482v1
|
# Aerial Swarm Defense using Interception and Herding Strategies
###### Abstract
This paper presents a multi-mode solution to the problem of defending a circular protected area (target) from a wide range of attacks by swarms of _risk-taking_ and/or _risk-averse_ attacking agents (attackers). The proposed multi-mode solution combines two defense strategies, namely: 1) an interception strategy for a team of defenders to intercept multiple _risk-taking_ attackers while ensuring that the defenders do not collide with each other, 2) a herding strategy to herd a swarm of _risk-averse_ attackers to a safe area. In particular, we develop mixed integer programs (MIPs) and geometry-inspired heuristics to distribute and assign and/or reassign the defenders to interception and herding tasks under different spatiotemporal behaviors by the attackers such as splitting into smaller swarms to evade defenders easily or high-speed maneuvers by some risk-taking attackers to maximize damage to the protected area. We provide theoretical as well as numerical comparisons of the computational costs of these MIPs and the heuristics, and demonstrate the overall approach in simulations.
autonomous agents, cooperative robots, task assignment, and multi-robot systems.
## I Introduction
### _Motivation_
Swarm technology has a wide range of applications [1]; however, it may also pose a threat to safety-critical infrastructure such as government facilities, airports, and military bases. The presence of adversarial agents or swarms near such facilities, with the aim of causing physical damage or collecting critical information, can lead to catastrophic consequences. The adversarial agents (attackers) could be either risk-averse (self-interested) or risk-taking. Risk-averse attackers will try to avoid collisions with other static or dynamic agents in order to avoid any damage to themselves. Risk-averse attackers could be more interested in collecting critical information by loitering around the safety-critical area (protected area) than in physically damaging it. On the other hand, risk-taking attackers will place a low priority on their own survival compared to their mission. Such attackers could be interested in physically damaging the protected area. The degree of risk-aversion could vary among the attackers. Furthermore, the attackers may 1) cooperate among themselves, staying together as a swarm or not, or 2) not cooperate among themselves at all.
Prior research has developed various defense strategies against different types of attackers, for example: 1) physical interception strategies [2]-[12] (mostly against risk-taking attackers), and 2) herding strategies [13]-[23] (mostly against risk-averse attackers). With a wide range of potential behaviors by the attackers, a single type of defense approach may not be sufficient, economical, or even desirable. In this paper, we combine interception-based and herding-based defense strategies to provide a multi-mode defense solution against a wide range of adversarial attacks.
### _Related work_
#### I-B1 Multi-player pursuit evasion games
In pursuit-evasion games a team of pursuers aims to capture or intercept a team of evaders, while the evaders aim to evade the pursuers for as long as possible. Various approaches including optimal control techniques [24], area-minimization techniques [4], [25], value function based techniques [26], mean-field approaches and reinforcement-learning techniques [27], [28] exist in the literature to solve pursuit-evasion games. The existing solutions provide useful insights; however, they do not in principle consider an area at risk that is targeted by the attackers. Therefore, pursuit-evasion approaches are less suitable for the class of area-defense problems studied in this paper.
#### I-B2 Multi-agent area (target) defense
The area or target defense problem with a single agent on either team has been studied as a zero-sum differential game using various solution techniques including optimal control [29]-[33] and reachability analysis [34]. However, extending these approaches to multi-agent settings suffers from the curse of dimensionality. To remedy this, researchers have been using a "divide and conquer" approach, i.e., solving the one-on-one problem, or the problem with a small number of agents, for all such combinations of the agents, and scaling this solution up to the original multi-agent problem.
In [3], the authors consider a multiplayer reach-avoid game. The authors solve the reach-avoid game for each pair of defender and attacker operating in a compact domain with obstacles using a Hamilton-Jacobi-Isaacs (HJI) reachability approach. The solution is then used to assign defenders against the attackers using graph-theoretic maximum matching.
In the perimeter defense problem studied in [10] defenders are restricted to move on the perimeter of a protected
area. Local games between small teams of defenders and attackers are solved and then assignments are done using a polynomial time algorithm.
The aforementioned studies provide useful insights into the area or target defense problem; however, they are limited in application due to the use of simple motion models, such as single integrators. In [5], the Target-Attacker-Defender (TAD) game with agents moving under double-integrator dynamics is considered. Due to the increased computational complexity of optimally solving a zero-sum differential game for high-dimensional systems, the authors use an isochrones method to design time-optimal control strategies for the players in the 1-vs-1 TAD game. However, despite bounded acceleration inputs, bounded velocities for the agents are neither ensured nor assumed in [5].
In all of the aforementioned work, the defenders coordinate with each other for the assignment task to intercept the attackers; however, they do not consider collision avoidance among themselves. Furthermore, the aforementioned interception strategies, while useful against _risk-taking_ attackers, may be an extreme measure against _risk-averse_ attackers. In other words, there may be cases where one may prefer to herd the _risk-averse_ attackers to some safe area and take control of these attackers in favor of the defenders, instead of intercepting them.
#### Iii-B3 Swarm herding
Herding has been studied previously in [13, 14, 15]. The approach in [13] uses an \(n\)-wavefront algorithm to herd a flock of birds away from an airport, where the birds on the boundary of the flock are influenced based on the locations of the airport and a safe area.
The herding method in [14] utilizes a circular-arc formation of herders to influence the nonlinear dynamics of the herd based on a potential-field approach, and designs a point-offset controller to guide the herd close to a specified location. In [15], biologically-inspired strategies are developed for confining a group of agents; the authors develop strategies based on the "wall" and "encirclement" methods that dolphins use to capture a school of fish. In addition, they compute regions from which this confinement is possible; however, the results are limited to constant-velocity motion. A similar approach called _herding by caging_ is adopted in [16], where a cage of high potential is formed around the attackers. An RRT approach is used to find a motion plan for the agents; however, the cage is assumed to have already been formed around the agents, while the caging of the agents thereafter is only ensured with constant velocity motion under additional assumptions on the distances between the agents. Forming such a cage could be more challenging in case of self-interested, risk-averse attackers under non-constant velocity motion.
In [17], [18], the authors discuss herding using a switched-system approach; the herder (defender) chases targets (evaders/attackers) sequentially by switching among them so that certain dwell-time conditions are satisfied to guarantee stability of the resulting trajectories. However, the assumption that only one of the targets is influenced by the herder at any time might be limiting and non-practical in real applications. The authors in [19] use approximate dynamic programming to obtain suboptimal control policies for the herder to chase a target agent to a goal location. A game-theoretic formulation is used in [20] to address the herding problem by constructing a virtual barrier similar to [14]. However, the computational complexity due to the discretization of the state and control-action spaces limits its applicability.
Most of the aforementioned approaches for herding are limited due to one or more of the following aspects: 1) simplified motion models, 2) absence of obstacles in the environment, 3) no consideration of inter-agent collisions, 4) assumption of a particular form of potential field to model the repulsive motion of the attackers with respect to the defenders.
We have addressed the above issues in our recent work [21], [22], which develops a method, termed 'StringNet Herding', for defending a protected area from a swarm of attackers in a 2D obstacle environment. In 'StringNet Herding', a closed formation of strings ('StringNet') is formed by the defenders to surround the swarm of attackers. It is assumed that the attackers will stay together within a circular footprint as a swarm and collectively avoid the defenders. It is also assumed that the string between two defenders serves as a barrier through which the attackers cannot escape (e.g., a physical straight-line barrier, or some other mechanism). The StringNet is then controlled to herd the swarm of attackers to a safe area. The control strategy for the defenders in 'StringNet Herding' is a combination of time-optimal control actions and finite-time, state-feedback, bounded control actions, so that the attackers can be herded to a safe area in a timely manner.
In [23], [35], we extended the 'StringNet Herding' approach to scenarios where attackers no longer stay together and may split into smaller swarms in reaction to the defenders' presence. Particularly, we first identify the spatial distributions (clusters/swarms) of the attackers that satisfy certain properties, using the density-based spatial clustering of applications with noise (DBSCAN) algorithm [36]. Then, we developed a mixed-integer quadratically constrained program (MIQCP) to distribute and assign the sub-teams of the defenders to the identified clusters of the attackers, so that the clusters of the attackers are herded to one of the safe areas. Note that we use swarm and cluster interchangeably throughout the paper.
### _Overview of the proposed approach_
As discussed above, a wide range of approaches exists for area-defense scenarios. However, only a specific type of behavior by the attackers is considered in each of the aforementioned works. To address a wide range of behaviors by the attackers, a multi-mode solution is provided in this paper. We first make the following assumption.
**Assumption 1** (Inter-Defender Collision-Aware Interception Strategy (IDCAIS)).: _There exists an interception strategy to intercept multiple attackers in an area-defense game, such that the defenders account for inter-defender
collisions while they intercept the attackers as quickly as possible._
Such an interception strategy is provided in [12] (under review).
The multi-mode defense approach discussed in this paper is summarized in Figure 1. In this multi-mode defense approach, the spatial distributions of the attackers are continuously monitored using the DBSCAN algorithm, which classifies attackers into clusters of at least three agents. The attackers that either belong to clusters of fewer than 3 attackers, or are classified as noise by the DBSCAN algorithm, are called unclustered attackers. At time \(t=0\) s (the right half section in Figure 1), the defending team employs the IDCAIS against the unclustered attackers; under this interception strategy, some of the defenders are assigned to intercept the unclustered attackers in minimum time using collision-aware defender-to-attacker assignment (CADAA) [12] (discussed later); these defenders are called intercepting defenders. The remaining, unassigned defenders, called herding defenders, are distributed into sub-teams and assigned to herd the identified clusters of the attackers to one of the safe areas using the 'StringNet Herding' approach [23], as long as the attackers stay together and avoid the defenders. If the attackers further split into new smaller clusters and/or individual attackers (unclustered attackers) at some time \(t>0\) (shown in the left half section in Figure 1), then the defenders are also further distributed into smaller sub-teams and assigned, using an optimal assignment algorithm, to herd the newly formed attackers' clusters and to intercept the newly identified unclustered attackers that separated from the original cluster of the attackers.
### _Summary of our contributions_
We develop a multi-mode defense strategy against a wide range of swarm attacks using the IDCAIS and the 'StringNet Herding' [22] approach. Compared to the prior literature and our own work, the contributions of this paper are:
1. a centralized, iterative algorithm to assign the defenders to the attackers' clusters identified at \(t=0\) so that the defenders gather on the shortest paths of the attackers' swarms to the protected area before the attackers reach it;
2. a decentralized algorithm using mixed integer quadratically constrained quadratic programs (MIQCQPs) to assign the defenders to intercept the unclustered attackers, and to herd the attackers' newly-formed swarms in the case a swarm of attackers splits into smaller swarms at any future time \(t>0\);
3. heuristics to solve the MIQCQP approximately but quickly to find the assignment in real time;
4. theoretical as well as numerical comparison of the computational cost of the assignment algorithms.
### _Organization_
The rest of the paper is structured as follows. Section II provides the mathematical modeling, the assumptions made, and a statement of the problem studied. The strategy and the assignment algorithms of the multi-mode defense approach are discussed in Section III; more specifically, Section III presents the optimal assignment algorithms at \(t=0\) and \(t>0\), sub-optimal but computationally cheaper alternatives and heuristics that solve these assignment problems in real time, as well as a comparison of their performance. Simulation results for various scenarios demonstrating the proposed multi-mode framework are provided in Section IV. The paper is concluded in Section V.
## II Modeling and Problem Statement
_Notation_: We use \(\left\|\cdot\right\|\) to denote the Euclidean norm of its argument. \(\left|\cdot\right|\) denotes the absolute value of a scalar argument or the cardinality of a set argument. A ball of radius \(\rho\) centered at the origin is defined as \(\mathcal{B}_{\rho}=\{\mathbf{r}\in\mathbb{R}^{2}|\left\|\mathbf{r}\right\|\leq\rho\}\) and that centered at \(\mathbf{r}_{c}\) is defined as \(\mathcal{B}_{\rho}(\mathbf{r}_{c})=\{\mathbf{r}\in\mathbb{R}^{2}|\left\|\mathbf{r}-\mathbf{r}_{c}\right\|\leq\rho\}\). \(A\backslash B\) denotes all the elements of the set \(A\) that are not in the set \(B\). Some of the most commonly used variables in the paper are described in Table I.
We consider \(N_{a}\) attackers denoted as \(\mathcal{A}_{i}\), \(i\in I_{a}=\{1,2,...,N_{a}\}\), and \(N_{d}\) defenders denoted as \(\mathcal{D}_{j}\), \(j\in I_{d}=\{1,2,...,N_{d}\}\), operating in a 2D environment \(\mathcal{W}\subseteq\mathbb{R}^{2}\) that contains a protected area \(\mathcal{P}\subset\mathcal{W}\), defined as \(\mathcal{P}=\{\mathbf{r}\in\mathbb{R}^{2}\ |\ \left\|\mathbf{r}\right\|\leq\rho_{p}\}\), and \(N_{s}\) safe areas \(\mathcal{S}_{m}\subset\mathcal{W}\), defined as \(\mathcal{S}_{m}=\{\mathbf{r}\in\mathbb{R}^{2}\ |\ \left\|\mathbf{r}-\mathbf{r}_{sm} \right\|\leq\rho_{sm}\}\), for all \(m\in I_{s}=\{1,2,...,N_{s}\}\), where \(\rho_{p}\) and \(\rho_{sm}\) are the radii of the protected area and \(m^{th}\) safe area, respectively, and \(\mathbf{r}_{sm}\) is the center of \(m^{th}\) safe area. Visual depiction of the above elements is shown in Figure 2. The number of defenders is no less than that of attackers, i.e., \(N_{d}\geq N_{a}\). The agents \(\mathcal{A}_{i}\) and \(\mathcal{D}_{j}\) are modeled as discs of radii \(\rho_{a}\) and \(\rho_{d}\), where \(\rho_{d}\leq\rho_{a}\), respectively. Let \(\mathbf{r}_{ai}=[x_{ai}\ y_{ai}]^{T}\) and \(\mathbf{r}_{dj}=[x_{dj}\ y_{dj}]^{T}\) be the position vectors of \(\mathcal{A}_{i}\) and \(\mathcal{D}_{j}\), respectively; \(\mathbf{v}_{ai}=[v_{x_{ai}}\ v_{y_{ai}}]^{T}\), \(\mathbf{v}_{dj}=[v_{x_{dj}}\ v_{y_{dj}}]^{T}\) be the velocity vectors, respectively, and \(\mathbf{u}_{ai}=[u_{x_{ai}}\ u_{y_{ai}}]^{T}\), \(\mathbf{u}_{dj}=[u_{x_{dj}}\ u_{y_{dj}}]^{T}\) be the accelerations, which serve also as the control inputs, respectively, all resolved in a global inertial frame \(\mathcal{F}_{gi}(\mathbf{\hat{i},\hat{j}})\) (see Fig.2). The agents move under
Figure 1: Overview of the Multi-mode Defense Approach
double integrator (DI) dynamics with linear drag (damped double integrator), similar to isotropic rocket [37]:
\[\dot{\mathbf{x}}_{\star}=\begin{bmatrix}\dot{\mathbf{r}}_{\star}\\ \dot{\mathbf{v}}_{\star}\end{bmatrix}=\begin{bmatrix}\mathbf{0}_{2}&\mathbf{I} _{2}\\ \mathbf{0}_{2}&-C_{D}\mathbf{I}_{2}\end{bmatrix}\mathbf{x}_{\star}+\begin{bmatrix} \mathbf{0}_{2}\\ \mathbf{I}_{2}\end{bmatrix}\mathbf{u}_{\star} \tag{1}\]
where \(\star\in\{ai|i\in I_{a}\}\cup\{dj|j\in I_{d}\}\), \(C_{D}>0\) is the known, constant drag coefficient. The accelerations \(\mathbf{u}_{ai}\) and \(\mathbf{u}_{dj}\) are bounded by \(\bar{u}_{a}\), \(\bar{u}_{d}\) as given in (2) such that \(\bar{u}_{a}<\bar{u}_{d}\).
\[\|\mathbf{u}_{ai}\|\leq\bar{u}_{a},\quad\|\mathbf{u}_{dj}\|\leq\bar{u}_{d}, \tag{2}\]
By incorporating the drag term, the damped double integrator (1) inherently imposes a speed bound on each agent under a limited acceleration control, i.e., \(\|\mathbf{v}_{ai}\|<\bar{v}_{a}=\frac{\bar{u}_{a}}{C_{D}}\) and \(\|\mathbf{v}_{dj}\|<\bar{v}_{d}=\frac{\bar{u}_{d}}{C_{D}}\), and does not require an explicit constraint on the velocity of the agents while designing bounded controllers, as was needed in the earlier literature. So we have \(\mathbf{x}_{ai}\in\mathcal{X}_{a}\), for all \(i\in I_{a}\), where \(\mathcal{X}_{a}=\mathbb{R}^{2}\times\mathcal{B}_{\bar{v}_{a}}\), and \(\mathbf{x}_{dj}\in\mathcal{X}_{d}\), for all \(j\in I_{d}\), where \(\mathcal{X}_{d}=\mathbb{R}^{2}\times\mathcal{B}_{\bar{v}_{d}}\). We make the following assumption:
**Assumption 2**.: _All the defenders know the position \(\mathbf{r}_{ai}\) and velocity \(\mathbf{v}_{ai}\) of the attacker \(\mathcal{A}_{i}\) that lies inside a circular sensing zone \(\mathcal{Z}_{d}=\{\mathbf{r}\in\mathbb{R}^{2}|~{}\|\mathbf{r}\|\leq\varrho_{d}\}\) for all \(i\in I_{a}\), where \(\varrho_{d}>0\) is the radius of the defenders' sensing zone. Every attacker \(\mathcal{A}_{i}\) has a similar local sensing zone \(\mathcal{Z}_{ai}=\{\mathbf{r}\in\mathbb{R}^{2}~{}|~{}\|\mathbf{r}-\mathbf{r}_ {ai}\|\leq\varrho_{ai}\}\), where \(\varrho_{ai}>0\) is the radius of \(\mathcal{A}_{i}\)'s sensing zone (Fig. 2)._
For Assumption 2 to hold, a system of sensors such as radars, lidars, cameras, etc., that are spatially distributed around the protected area can be used. The data from these sensors are assumed to be processed by a central computer and communicated to all the defenders.
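Returning to the dynamics (1), a quick numerical check of the implicit speed bound: integrating under a constant, maximal-thrust input, the speed saturates below \(\bar{u}/C_{D}\) (a minimal sketch; the values of \(C_{D}\), \(\bar{u}\) and the step size are illustrative):

```python
import numpy as np

C_D, u_bar, dt = 0.5, 2.0, 0.01
x, v = np.zeros(2), np.zeros(2)
for _ in range(5000):                       # 50 s of constant full thrust
    u = np.array([u_bar, 0.0])              # ||u|| = u_bar
    v += (u - C_D * v) * dt                 # eq. (1): v_dot = u - C_D * v
    x += v * dt
print(np.linalg.norm(v), "<", u_bar / C_D)  # speed stays below u_bar / C_D = 4.0
```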
Each defender is capable of connecting to two other defenders via string barriers. String barriers are realized as impenetrable and extendable line barriers (e.g., a spring-loaded pulley and a rope, or another similar mechanism [38]) that prevent attackers from passing through them. The extendable string barrier allows free relative motion of the two defenders connected by the string. The string barrier can have a maximum length of \(\bar{R}_{sb}\). If the string barrier were a physical one, then it could be established between two defenders \(\mathcal{D}_{j}\) and \(\mathcal{D}_{j^{\prime}}\) only when they are close to each other and have almost the same velocity, i.e., \(\|\mathbf{r}_{dj}-\mathbf{r}_{dj^{\prime}}\|\leq\epsilon_{1}<\bar{R}_{sb}\) and \(\|\mathbf{v}_{dj}-\mathbf{v}_{dj^{\prime}}\|\leq\epsilon_{2}\), where \(\epsilon_{1}\) and \(\epsilon_{2}\) are small numbers that depend on the physical size of the defenders as well as the mechanism and their capability to physically connect at a given distance. Each defender \(\mathcal{D}_{j}\)
\begin{table}
\begin{tabular}{l l}
\(\mathcal{A}_{i}\) & denotes the \(i^{th}\) attacker \\
\(\mathcal{A}_{c_{k}}(t)\) & denotes the group of attackers indexed by \(A_{c_{k}}(t)\) \\
\(A_{c_{k}}(t)\) & set of indices of the attackers in the \(k^{th}\) cluster \\
\(A_{uc}(t)\) & set of indices of the unclustered attackers at time \(t\) \\
\(A_{c}^{(k)}(t)\) & set of indices of the clusters of attackers separated from the \(k^{th}\) cluster of attackers at time \(t\) \\
\(A_{uc}^{(k)}(t)\) & set of indices of the unclustered attackers separated from the \(k^{th}\) cluster of attackers at time \(t\) \\
\(\mathcal{A}_{k}(t_{sc})\) & data structure storing information of the attackers in the \(k^{th}\) cluster of attackers after it splits at \(t=t_{sc}\) \\
\(\mathcal{A}_{k}(t_{sc}).f\) & denotes the data field \(f\) of the data structure \\
\(\mathcal{D}_{j}\) & denotes the \(j^{th}\) defender \\
\(\mathcal{D}_{c_{k}}(t)\) & denotes the group of defenders indexed by \(D_{c_{k}}(t)\) \\
\(\mathcal{D}_{c_{k}}^{c}(t)\), \(\mathcal{D}_{c_{k}}^{t}(t)\) & groups of central and terminal defenders on the Open-StringNet \(\mathcal{G}_{\rm sn}^{\rm op}(D_{c_{k}}(t))\), resp. \\
\(\mathcal{D}_{c_{k}}^{l}(t)\), \(\mathcal{D}_{c_{k}}^{r}(t)\) & groups of terminal defenders on the left and right ends of the Open-StringNet \(\mathcal{G}_{\rm sn}^{\rm op}(D_{c_{k}}(t))\), resp. \\
\(D_{c_{k}}(t)\) & set of indices of the defenders assigned to the \(k^{th}\) cluster of attackers at time \(t\) \\
\(\mathcal{D}_{k}(t_{sc})\) & data structure storing information of the defenders indexed by \(D_{k}(t_{sc}^{-})\) \\
\(\mathcal{D}_{k}(t_{sc}).f\) & denotes the data field \(f\) of the data structure \\
\(\mathcal{G}_{\rm sn}^{\rm cl}(I_{d})\) & Closed-StringNet formed by the defenders with indices in the set \(I_{d}\) \\
\(\mathcal{G}_{\rm sn}^{\rm op}(I_{d})\) & Open-StringNet formed by the defenders with indices in the set \(I_{d}\) \\
\(I_{a}\), \(I_{ac}(t)\) & the sets \(\{1,2,...,N_{a}\}\), \(\{1,2,...,N_{ac}(t)\}\), resp. \\
\(I_{d}\), \(I_{d_{c_{k}}}(t)\) & the sets \(\{1,2,...,N_{d}\}\), \(\{1,2,...,\mathcal{A}_{d}(|A_{c_{k}}(t)|)\}\), resp. \\
\(N_{a}\), \(N_{ac}(t)\) & number of attackers and of attackers’ clusters, resp. \\
\(N_{d}\) & number of defenders \\
\(\mathcal{A}_{d}(\cdot)\) & defender-to-attacker resource allocation function \\
\(\mathbf{r}_{ai}\), \(\mathbf{r}_{dj}\) & positions of the \(i^{th}\) attacker and \(j^{th}\) defender, resp. \\
\(\mathbf{r}_{sm}\) & center of the \(m^{th}\) safe area \\
\(t_{sc}\) & time at which an attackers’ split event happens \\
\(t_{sc}^{-}\) & time instant just before an attackers’ split event \\
\(\mathbf{u}_{ai}\), \(\mathbf{u}_{dj}\) & accelerations of the \(i^{th}\) attacker and \(j^{th}\) defender, resp. \\
\(\mathbf{v}_{ai}\), \(\mathbf{v}_{dj}\) & velocities of the \(i^{th}\) attacker and \(j^{th}\) defender, resp. \\
\(\beta_{c}(t)\) & set of mappings of the defenders’ assignments to the clusters of attackers at time \(t\) \\
\(\beta_{c_{k}}(t)\) & mapping that assigns defenders to the \(k^{th}\) cluster of attackers at time \(t\) \\
\(\beta_{uc}(t)\) & mapping that assigns defenders to the unclustered attackers at time \(t\) \\
\(\rho_{d}^{int}\) & interception radius of a defender \\
\(\delta_{jk}^{cl}(t)\) & decision variable deciding if \(\mathcal{D}_{j}\) is assigned to herd the attackers’ swarm \(\mathcal{A}_{c_{k}}\) at time \(t\) \\
\(\delta_{ji}^{int}(t)\) & decision variable deciding if \(\mathcal{D}_{j}\) is assigned to intercept the attacker \(\mathcal{A}_{i}\) at time \(t\) \\
\end{tabular}
\end{table}
Table I: Table of notation
Figure 2: Schematic of a scenario showing multiple attackers (red filled circles with white arrows), some as risk-averse swarms while some individual risk-taking attackers, trying to reach the protected area \(\mathcal{P}\) and defenders (blue filled circles with white arrows) spread around \(\mathcal{P}\).
is endowed with an interception/capture radius \(\rho_{d}^{int}\), i.e., the defender \(\mathcal{D}_{j}\) is able to physically damage an attacker \(\mathcal{A}_{i}\) when \(\|\mathbf{r}_{dj}(t)-\mathbf{r}_{ai}(t)\|<\rho_{d}^{int}\) for some \(t>0\).
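For illustration, the two geometric conditions just introduced, string-connection feasibility and capture, reduce to simple predicates (a sketch; the values of \(\epsilon_{1}\), \(\epsilon_{2}\) and \(\rho_{d}^{int}\) below are placeholders, not the paper's):

```python
import numpy as np

def can_connect(r1, v1, r2, v2, eps1=0.5, eps2=0.1):
    """String-barrier feasibility between two defenders: close positions
    and nearly matched velocities (eps1 < R_sb_bar assumed)."""
    return (np.linalg.norm(r1 - r2) <= eps1 and
            np.linalg.norm(v1 - v2) <= eps2)

def captured(r_d, r_a, rho_int=0.2):
    """Defender D_j intercepts attacker A_i when within the capture radius."""
    return np.linalg.norm(r_d - r_a) < rho_int

print(can_connect(np.zeros(2), np.zeros(2), np.array([0.4, 0.0]), np.zeros(2)))
```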
The goal of the attackers is to send as many attackers as possible to the protected area \(\mathcal{P}\). The defenders aim to either intercept these attackers or herd them away to one of the safe areas in \(\mathcal{S}=\{\mathcal{S}_{1},\mathcal{S}_{2},...,\mathcal{S}_{N_{s}}\}\) in order to defend the protected area \(\mathcal{P}\). Formally, we consider the following problem.
**Problem 1** (Swarm Defense).: _Design a defense strategy for a team of defenders to defend a protected area from a wide range of adversarial attacks by attackers, where attackers could possibly stay together as swarms or stay alone during the attack._
Next, we discuss the multi-mode defense strategy that addresses Problem 1.
## III Multi-mode Defense Strategy
The attackers may show a wide range of behaviors, such as: some or all attackers staying close together; some or all attackers avoiding the defenders while attacking the protected area; some attackers not intending to damage the protected area but only interested in reaching its neighborhood, perhaps to collect some key information; and some attackers interested only in physically damaging the protected area at any cost.
In this section, we provide a multi-mode algorithm that combines the 'StringNet Herding' approach developed in [21]-[23] and the IDCAIS to defend against the wide range of behaviors by the attackers discussed earlier. In the following, we first revisit some key definitions related to 'StringNet Herding'.
**Definition 1** (Closed-StringNet).: _The Closed-StringNet \(\mathcal{G}_{\mathrm{sn}}^{\mathrm{cl}}(I_{d})=(\mathcal{V}_{\mathrm{sn}}^{\mathrm{cl}}(I_{d}),\mathcal{E}_{\mathrm{sn}}^{\mathrm{cl}}(I_{d}))\) is a cycle graph consisting of: 1) a subset of defenders as the vertices, \(\mathcal{V}_{\mathrm{sn}}^{\mathrm{cl}}(I_{d})=\{\mathcal{D}_{j}\mid j\in I_{d}\}\), 2) a set of edges, \(\mathcal{E}_{\mathrm{sn}}^{\mathrm{cl}}(I_{d})=\{(\mathcal{D}_{j},\mathcal{D}_{j^{\prime}})\in\mathcal{V}_{\mathrm{sn}}^{\mathrm{cl}}(I_{d})\times\mathcal{V}_{\mathrm{sn}}^{\mathrm{cl}}(I_{d})\,|\,\mathcal{D}_{j}\leftrightarrow\mathcal{D}_{j^{\prime}}\}\), where the operator \(\leftrightarrow\) denotes an impenetrable line barrier between the defenders._
**Definition 2** (Open-StringNet).: _The Open-StringNet \(\mathcal{G}_{\mathrm{sn}}^{\mathrm{op}}(I_{d})=(\mathcal{V}_{\mathrm{sn}}^{ \mathrm{op}}(I_{d}),\mathcal{E}_{\mathrm{sn}}^{\mathrm{op}}(I_{d}))\) is a path graph consisting of: 1) a set of vertices, \(\mathcal{V}_{\mathrm{sn}}^{\mathrm{op}}(I_{d})\) and 2) a set of edges, \(\mathcal{E}_{\mathrm{sn}}^{\mathrm{op}}(I_{d})\), similar to that in Definition 1._
The 'StringNet Herding' approach consists of four phases: 1) gathering, 2) seeking, 3) enclosing, and 4) herding. In the gathering phase, the defenders establish an Open-StringNet on the time-optimal path of the attackers' swarm. Then, in the seeking phase, the defenders move to get close to the attackers' swarm in case the attackers have not traveled along the time-optimal trajectory expected by the defenders; throughout this phase, the defenders maintain the Open-StringNet formation. Next, during the enclosing phase, once the defenders are sufficiently close to the attackers, they enclose the attackers by establishing a Closed-StringNet around the swarm. Finally, in the herding phase, the Closed-StringNet is moved to the nearest safe area, taking the enclosed attackers along with it.
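The phase sequence can be encoded as a simple ordered state machine (a sketch; the actual transition tests of [22] are more involved and are only paraphrased in the comments):

```python
from enum import Enum, auto

class StringNetPhase(Enum):
    """Phases of 'StringNet Herding', in the order they are executed."""
    GATHERING = auto()   # form Open-StringNet on the swarm's time-optimal path
    SEEKING   = auto()   # approach the swarm while keeping the formation
    ENCLOSING = auto()   # close the net: Open-StringNet -> Closed-StringNet
    HERDING   = auto()   # move the Closed-StringNet to the nearest safe area

def next_phase(p: StringNetPhase) -> StringNetPhase:
    """Advance to the next phase; HERDING is terminal."""
    order = list(StringNetPhase)
    return order[min(order.index(p) + 1, len(order) - 1)]

print([p.name for p in StringNetPhase])
```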
Next, we describe how the defenders are assigned to either intercept the unclustered (more likely risk-taking) attackers or herd the clustered (more likely risk-averse) attackers during different temporal and spatial events.
### _Optimal assignment at \(t=0\)_
We first identify spatial distributions (clusters) of the attackers that are detected in the annular region between the circles \(\left\|\mathbf{r}\right\|=\varrho_{d}\) and \(\left\|\mathbf{r}\right\|=\varrho_{d}^{game}\) (see Fig. 2). For the cluster identification, we use the DBSCAN algorithm [36] with parameters \(\varepsilon_{nb}=\frac{\bar{\rho}_{ac}(m_{pts}-1)}{N_{a}-1}\) and \(m_{pts}=3\), where \(\bar{\rho}_{ac}=\frac{\bar{R}_{sb}}{2}\cot(\frac{\pi}{N_{d}})\) is the radius of the largest circle inscribed in the largest Closed-StringNet formation that can be formed by the \(N_{d}\) defenders. This choice of parameters for the DBSCAN algorithm ensures that the identified clusters have at least 3 attackers in them and have sizes for which sub-teams of the defenders can be found to herd them. This is because at least 3 defenders are needed to form a Closed-StringNet, so if \(N_{d}=N_{a}\) we may not have enough defenders to enclose all swarms of the attackers with fewer than 3 attackers in them. Hence, all swarms of the attackers with fewer than 3 attackers are termed singular swarms, and the member attackers of these singular swarms are identified as noise by the DBSCAN algorithm and classified as unclustered attackers. For more details on how the parameters of the DBSCAN are chosen, refer to [23]. Let \(\mathcal{A}_{c}(0)=\{\mathcal{A}_{c_{1}}(0),\mathcal{A}_{c_{2}}(0),\ldots,\mathcal{A}_{c_{N_{ac}}(0)}(0)\}\) be the set of \(N_{ac}(0)\) swarms of the attackers at \(t=0\) identified using the DBSCAN algorithm. Here \(\mathcal{A}_{c_{k}}(0)=\{\mathcal{A}_{i}|i\in A_{c_{k}}(0)\}\), for \(k\in I_{ac}(0)=\{1,2,3,...,N_{ac}(0)\}\), where \(A_{c_{k}}(0)\subseteq I_{a}\) is the set of indices of the attackers that belong to the \(k^{th}\) cluster of the attackers at \(t=0\). Let \(\mathcal{A}_{uc}(0)=\{\mathcal{A}_{i}|i\in A_{uc}(0)\}\) denote the set of unclustered attackers, where \(A_{uc}(0)\subseteq I_{a}\) is the set of indices of the attackers that are not clustered by the DBSCAN algorithm, i.e., the attackers that are treated as noise by the DBSCAN algorithm. The defenders aim to intercept the unclustered attackers, assuming that these attackers are risk-taking, while they attempt to herd the clustered attackers in the hope that the clustered attackers will stay together and try to avoid the defenders. For this, we need to assign some individual defenders to intercept the unclustered attackers and some sub-teams of the defenders to herd the identified clusters of the attackers. Since the unclustered attackers are likely to be risk-taking and hence pose more risk to the protected area, the best defenders are assigned first to intercept these unclustered attackers, and then the remaining defenders are assigned to herd the clustered attackers.
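A sketch of this clustering step using scikit-learn's DBSCAN with the parameter choices above (scikit-learn's `min_samples` plays the role of \(m_{pts}\); the team sizes and attacker positions below are illustrative, not from the paper's simulations):

```python
import numpy as np
from sklearn.cluster import DBSCAN

N_d, N_a, R_sb_bar = 12, 8, 10.0
rho_ac_bar = 0.5 * R_sb_bar / np.tan(np.pi / N_d)   # (R_sb/2) * cot(pi/N_d)
m_pts = 3
eps_nb = rho_ac_bar * (m_pts - 1) / (N_a - 1)

rng = np.random.default_rng(1)
r_a = np.vstack([rng.normal([30.0, 0.0], 1.0, (5, 2)),    # a tight swarm of 5
                 rng.normal([0.0, 30.0], 8.0, (3, 2))])   # 3 scattered attackers
labels = DBSCAN(eps=eps_nb, min_samples=m_pts).fit_predict(r_a)
clusters = [np.where(labels == k)[0] for k in set(labels) if k != -1]
unclustered = np.where(labels == -1)[0]   # singular swarms / noise -> intercept
print("clusters:", clusters, "unclustered:", unclustered)
```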
We first use collision-aware defender-to-attacker assignment (CADAA) to assign defenders to intercept the identified unclustered attackers \(\mathcal{A}_{uc}(0)\), such that these attackers are intercepted as quickly as possible and the possible collisions among the defenders are minimized. Let \(\delta_{ji}^{int}(0)\) be the binary decision variable at time \(t=0\) that takes
value 1 if the defender \(\mathcal{D}_{j}\) is assigned to intercept attacker \(\mathcal{A}_{i}\) and 0 otherwise. Let \(C_{d}^{int}(\mathbf{X}_{dj}^{ai})\) be the cost incurred by the defender \(\mathcal{D}_{j}\) to capture the attacker \(\mathcal{A}_{i}\) and is given by:
\[C_{d}^{int}(\mathbf{X}_{dj}^{ai})=\begin{cases}t_{d}^{int}(\mathbf{x}_{dj}, \mathbf{x}_{ai}),&\text{if }\mathbf{x}_{ai}\in\mathcal{R}_{d}(\mathbf{x}_{dj});\\ c_{l},&\text{otherwise};\end{cases} \tag{3}\]
where \(\mathbf{X}_{dj}^{ai}=[\mathbf{x}_{dj}^{T},\mathbf{x}_{ai}^{T}]^{T}\), \(t_{d}^{int}(\mathbf{x}_{dj},\mathbf{x}_{ai})\) is the minimum time required by the defender \(\mathcal{D}_{j}\) to capture the attacker \(\mathcal{A}_{i}\) that is moving towards the protected area \(\mathcal{P}\) under a time-optimal control action as defined in [12], \(c_{l}\) (\(>>1\)) is a very large number, and \(\mathcal{R}_{d}(\mathbf{x}_{dj})=\{\mathbf{x}_{a}\in\tilde{\mathcal{X}}_{a}|t_{d}^{int}(\mathbf{x}_{dj},\mathbf{x}_{a})-t_{a}^{int}(\mathbf{x}_{a},\mathbf{r}_{p})\leq 0\}\) is the winning region of the defender \(\mathcal{D}_{j}\) starting at \(\mathbf{x}_{dj}\), where \(\tilde{\mathcal{X}}_{a}=(\mathbb{R}^{2}\backslash\mathcal{P})\times\mathcal{B}_{\bar{v}_{a}}\) and \(t_{a}^{int}(\mathbf{x}_{a},\mathbf{r}_{p})\) is the time that an attacker starting at \(\mathbf{x}_{a}\) requires to reach the protected area at \(\mathbf{r}_{p}\). Let \(C_{d}^{col}(\mathbf{X}_{dj}^{ai},\mathbf{X}_{dj^{\prime}}^{ai^{\prime}})\) be the cost associated with a collision that may occur between two defenders that are assigned the interception task, defined as:
\[C_{d}^{col}(\mathbf{X}_{dj}^{ai},\mathbf{X}_{dj^{\prime}}^{ai^{\prime}})= \begin{cases}\frac{1}{t_{d}^{col}(\mathbf{X}_{dj}^{ai};\mathbf{X}_{dj^{\prime}} ^{ai^{\prime}})},&\text{if }\mathcal{D}_{j}\ \&\mathcal{D}_{j^{\prime}}\ \text{ collide};\\ 0,&\text{otherwise}.\end{cases} \tag{4}\]
where \(t_{d}^{col}(\mathbf{X}_{dj}^{ai},\mathbf{X}_{dj^{\prime}}^{ai^{\prime}})\) is the time of collision between \(\mathcal{D}_{j}\) and \(\mathcal{D}_{j^{\prime}}\) on their time-optimal trajectories.
We find the optimal \(\delta_{ji}^{int*}(0)\) by solving the following CADAA problem at \(t=0\):
\[\operatorname*{arg\,min}_{\boldsymbol{\delta}^{int}(0)} \sum_{i\in A_{uc}(0)}\sum_{j\in I_{d}}\Bigl{(}(1-w)C_{d}^{int}(\mathbf{X}_{dj}^{ai})\delta_{ji}^{int}(0)+\] \[w\sum_{i^{\prime}\in A_{uc}(0)}\sum_{j^{\prime}\in I_{d}}C_{d}^{col}(\mathbf{X}_{dj}^{ai},\mathbf{X}_{dj^{\prime}}^{ai^{\prime}})\delta_{ji}^{int}(0)\delta_{j^{\prime}i^{\prime}}^{int}(0)\Bigr{)}\] (5a) Subject to \[\sum_{i\in A_{uc}(0)}\delta_{ji}^{int}(0)=1,\quad\forall j\in I_{d}; \tag{5b}\] \[\sum_{j\in I_{d}}\delta_{ji}^{int}(0)=1,\quad\forall i\in A_{uc}(0);\] (5c) \[\delta_{ji}^{int}(0)\in\{0,1\},\quad\forall j\in I_{d},\;i\in A_{uc}(0); \tag{5d}\]
where \(\boldsymbol{\delta}^{int}(0)=[\delta_{ji}^{int}(0)|i\in A_{uc}(0),j\in I_{d}]^{T}\in\{0,1\}^{N_{d}|A_{uc}(0)|}\) is the binary decision vector and \(w\in(0,1)\) is a user-specified weight on the collision cost that is used to adjust the relative importance of collisions among the defenders and the time to intercept the attackers at the assignment stage.
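Because the collision term makes (5) quadratic in the decision variables, the problem is not a plain linear assignment. For small instances it can be solved exactly by enumeration; the following sketch assumes hypothetical inputs `T_int` (interception times, with the penalty \(c_{l}\) already substituted for attackers outside the winning region) and `C_col` (pairwise collision costs), neither of which is specified numerically in the text.

```
from itertools import permutations

def cadaa_bruteforce(T_int, C_col, w):
    """Exhaustive search over one-to-one assignments for the cost in (5)."""
    n = T_int.shape[0]                      # assumes N_d == |A_uc(0)|
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(n)):     # perm[j] = attacker given to defender j
        cost = (1 - w) * sum(T_int[j, perm[j]] for j in range(n))
        cost += w * sum(C_col[j, perm[j], jp, perm[jp]]
                        for j in range(n) for jp in range(n) if jp != j)
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm, best_cost
```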
A mapping \(\beta_{uc}(0,\cdot):\{i\in A_{uc}(0)\}\rightarrow\{j\in I_{d}\}\), which gives the index of the defender assigned to intercept a given unclustered attacker \(\mathcal{A}_{i}\) is then defined as:
\[\beta_{uc}(t,i)=\operatorname*{arg\,max}_{j}\delta_{ji}^{int*}(0),\quad\forall t \geq 0. \tag{6}\]
Let \(\mathcal{D}_{uc}(0)=\{\mathcal{D}_{\beta_{uc}(t,i)}|i\in A_{uc}(0)\}\) denote the set of defenders that are assigned to the unclustered attackers \(\mathcal{A}_{uc}(0)\) and \(D_{uc}(0)=\{\beta_{uc}(t,i)|i\in A_{uc}(0)\}\) be the set of indices of the defenders in \(\mathcal{D}_{uc}(0)\). Let \(\mathcal{D}_{c}(0)=\{\mathcal{D}_{j}|j\in D_{c}(0)\}\) denote the set of all the other, unassigned defenders, where \(D_{c}(0)=I_{d}\backslash D_{uc}(0)\). These unassigned defenders \(\mathcal{D}_{c}(0)\) are then employed to herd the identified clusters of the attackers.
Next, we describe a centralized approach to find a time-optimal, collision free motion plan for the defenders in \(\mathcal{D}_{c}(0)\) to gather on the shortest paths of the attackers' swarms.
#### III-B1 Centralized Approach
In this approach, the two problems of i) choosing the best gathering formations and ii) assigning the defenders in \(\mathcal{D}_{c}(0)\) to the goal locations on these gathering formations are solved simultaneously. We provide a bisection-method-based iterative scheme, detailed in Algorithm 1, to solve these two problems together. Let \(\mathscr{R}_{d}:\mathbb{Z}_{>0}\rightarrow\mathbb{Z}_{>0}\) be the defender-to-attacker resource allocation function that outputs the number of defenders that can be assigned to a given number \(N_{a}\) of attackers. We make the following assumption about the defender-to-attacker resource allocation function.
**Assumption 3**.: _The defender-to-attacker resource allocation function is a strictly monotonically increasing function, i.e., \(\mathscr{R}_{d}(N_{a})<\mathscr{R}_{d}(N_{a}+1)\), such that \(\mathscr{R}_{d}(N_{a})\geq N_{a}\)._
Assumption 3 ensures that there is an adequate number of defenders to go after each attacker in the event that the attackers in the swarm disintegrate into singular swarms.
Consider a line formation \(\mathscr{F}_{dc_{k}}^{line}\) characterized by positions \(\mathbf{p}_{k}^{line}(\mathbf{r}_{df_{k}},\phi_{k})=\{\mathbf{p}_{k,1}^{line},\mathbf{p}_{k,2}^{line},...,\mathbf{p}_{k,\mathscr{R}_{d}(|A_{c_{k}}|)}^{line}\}\) where
\[\mathbf{p}_{k,l}^{line}(\mathbf{r}_{df_{k}},\phi_{k})\triangleq\mathbf{r}_{df_{k} }+\hat{R}_{l}\mathbf{\Theta}(\phi_{k}+\frac{\pi}{2}), \tag{7}\]
for all \(l\in I_{dc_{k}}(0)=\{1,2,...,\mathscr{R}_{d}(|A_{c_{k}}(0)|)\}\), where \(\boldsymbol{\Theta}(\theta)=[\cos(\theta),\ \sin(\theta)]^{T}\) is the unit vector making an angle \(\theta\) with the \(x\)-axis, and \(\hat{R}_{l}=\hat{R}_{d}^{g}\Bigl{(}\frac{\mathscr{R}_{d}(|A_{c_{k}}|)-2l+1}{2}\Bigr{)}\), where \(\hat{R}_{d}^{g}(\leq\bar{R}_{sb})\) is the user-defined separation between the defenders at the gathering formation.
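For concreteness, the goal positions in (7) can be computed as in the following short sketch, where `r_df`, `phi_k`, `n_def`, and `R_hat` stand for \(\mathbf{r}_{df_{k}}\), \(\phi_{k}\), \(\mathscr{R}_{d}(|A_{c_{k}}|)\), and the separation \(\hat{R}_{d}^{g}\), respectively; the names are illustrative.

```
import numpy as np

def line_formation(r_df, phi_k, n_def, R_hat):
    """Goal positions on a line through r_df, perpendicular to heading phi_k."""
    unit = np.array([np.cos(phi_k + np.pi / 2), np.sin(phi_k + np.pi / 2)])
    offsets = np.array([R_hat * (n_def - 2 * l + 1) / 2.0
                        for l in range(1, n_def + 1)])
    return r_df + np.outer(offsets, unit)   # shape (n_def, 2)
```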
Corresponding to each attackers' cluster \(\mathcal{A}_{c_{k}}\), the desired gathering formation \(\mathscr{F}_{dc_{k}}^{g}\) for the defenders to gather at is chosen to be a line formation1 \(\mathscr{F}_{dc_{k}}^{line}\) centered at \(\mathbf{r}_{df_{k}}\) with orientation \(\phi_{k}\), characterized by the positions \(\boldsymbol{\xi}_{c_{k}}^{g}=\{\boldsymbol{\xi}_{c_{k},1}^{g},\boldsymbol{\xi}_{c_{k},2}^{g},...,\boldsymbol{\xi}_{c_{k},\mathscr{R}_{d}(|A_{c_{k}}|)}^{g}\}=\mathbf{p}_{k}^{line}(\mathbf{r}_{df_{k}},\phi_{k})\), as obtained in Algorithm 1. These positions are static, i.e., \(\dot{\boldsymbol{\xi}}_{c_{k},l}^{g}=\ddot{\boldsymbol{\xi}}_{c_{k},l}^{g}=\mathbf{0}\) for all \(l\in I_{dc_{k}}\). The gathering centers \(\mathbf{r}_{df_{k}}\), for all \(k\in I_{ac}(0)\), are chosen to lie outside the protected area \(\mathcal{P}\). Algorithm 1 also outputs the Defender-to-Attacker-Swarm Assignment (DASA), \(\beta\), which is defined formally as:
Footnote 1: This is a better choice compared to the semicircular formation chosen in [22], because the semicircular formation, for a given length constraint on the string barrier (\(\bar{R}_{sb}\)), creates a smaller blockage to the attackers than the line formation does.
respectively, in order to successfully herd the swarm \(\mathcal{A}_{c_{k}}(t)\) to the closest safe area._
The set of defenders assigned to gather on the path of the cluster \(\mathcal{A}_{c_{k}}(0)\) is denoted by \(\mathcal{D}_{c_{k}}(0)=\{\mathcal{D}_{j}|j\in D_{c_{k}}(0)\}\), where \(D_{c_{k}}(0)\) is the set of indices defined as \(D_{c_{k}}(0)=\{\beta_{c_{k}}(0,1),\beta_{c_{k}}(0,2),...,\beta_{c_{k}}(0,\mathscr{R}_{d}(|A_{c_{k}}(0)|))\}\) for all \(k\in I_{ac}(0)\). Each of these sub-teams \(\mathcal{D}_{c_{k}}(0)\) of the defenders is tasked to achieve the Open-StringNet formation \(\mathcal{G}_{sn}^{op}(D_{c_{k}}(0))\) on the shortest path of the oncoming attacking swarm. Assuming \(N_{d}=N_{a}\), we choose \(\mathscr{R}_{d}(|A_{c_{k}}|)=|A_{c_{k}}|\), i.e., the number of defenders assigned to a swarm \(\mathcal{A}_{c_{k}}\) is equal to the number of attackers in \(\mathcal{A}_{c_{k}}\).
```
Input:\(\mathbf{r}_{d}(0)\), \(\mathbf{x}_{a}(0)\), \(D_{c}(0)\), \(\{A_{c_{k}}(0)|k\in I_{ac}(0)\}\)
1 for \(k=1:N_{ac}(0)\) do
2   CoM of \(\mathcal{A}_{c_{k}}(0)\): \(\mathbf{x}_{ac_{k}}(0)=\sum_{i\in A_{c_{k}}(0)}\frac{\mathbf{x}_{ai}(0)}{|\mathcal{A}_{c_{k}}(0)|}\); \(\mathbf{P}_{ac_{k}}\)=timeOptimalTraj\((\mathbf{x}_{ac_{k}}(0))\);
3   \(\gamma_{ac_{k}}^{<}=0\); \(\gamma_{ac_{k}}^{>}=\Gamma_{ac_{k}}-\rho_{pa}\);
4 while \(\Sigma_{T_{lead}}>\epsilon_{tol}\) do
5   \(\Sigma_{T_{lead}}=0\); \(\boldsymbol{\xi}^{g}=[\;]\);
6   for \(k=1:N_{ac}(0)\) do
7     \(\gamma_{ac_{k}}=\frac{\gamma_{ac_{k}}^{<}+\gamma_{ac_{k}}^{>}}{2}\);
8     \(\mathbf{r}_{df_{k}}(0)=\mathscr{P}_{ac_{k}}(\gamma_{ac_{k}})\);
9     \(\boldsymbol{\xi}^{g}_{c_{k}}=\mathbf{p}_{k}^{line}(\mathbf{r}_{df_{k}}(0),\vartheta_{ac_{k}}(\gamma_{ac_{k}})-\pi)\);
10    \(\boldsymbol{\xi}^{g}\leftarrow\{\boldsymbol{\xi}^{g},\boldsymbol{\xi}^{g}_{c_{k}}\}\);
11  \([\beta_{c}(0),\mathcal{T}]\)=assignDtoGMILP\((\mathbf{r}_{dc}(0),\,\boldsymbol{\xi}^{g})\);
12  for \(k=1:N_{ac}(0)\) do
13    \(\Sigma_{T_{lead}}=\Sigma_{T_{lead}}+|\frac{\gamma_{ac_{k}}}{\bar{v}_{a}}-\mathcal{T}_{k}-\Delta T_{dc_{k}}^{g}|\);
14    if \(\frac{\gamma_{ac_{k}}}{\bar{v}_{a}}-\mathcal{T}_{k}-\Delta T_{dc_{k}}^{g}<0\) then
15      \(\gamma_{ac_{k}}^{<}=\gamma_{ac_{k}}\);
16    else
17      \(\gamma_{ac_{k}}^{>}=\gamma_{ac_{k}}\);
return \(\boldsymbol{\xi}^{g}\), \(\beta_{c}(0)\), \(\{\mathbf{r}_{df_{1}}(0),\mathbf{r}_{df_{2}}(0),\ldots,\mathbf{r}_{df_{N_{ac}(0)}}(0)\}\)
```
**Algorithm 1** Gathering formations for the defenders
In Algorithm 1, the function timeOptimalTraj\((\mathbf{x}_{ac_{k}}(0))\) finds the time-optimal trajectory \(\mathbf{P}_{ac_{k}}\) for an agent starting at \(\mathbf{x}_{ac_{k}}(0)\) to reach the protected area. The trajectory \(\mathbf{P}_{ac_{k}}\) is associated with mappings \(\mathscr{P}_{ac_{k}}:[0,\Gamma_{ac_{k}}]\rightarrow\mathbb{R}^{2}\) and \(\vartheta_{ac_{k}}:[0,\Gamma_{ac_{k}}]\rightarrow[0,2\pi]\). Here \(\mathscr{P}_{ac_{k}}(\gamma_{ac_{k}})\) gives the Cartesian coordinates, and \(\vartheta_{ac_{k}}(\gamma_{ac_{k}})\) the direction of the tangent to the path, at the location reached after traveling a distance \(\gamma_{ac_{k}}\) along the path from the initial position. \(\mathbf{r}_{d}(0)=\{\mathbf{r}_{dj}(0)|j\in I_{d}\}\) is the set of initial positions of the defenders and \(\mathbf{x}_{a}(0)=\{\mathbf{x}_{ai}(0)|i\in I_{a}\}\) is the set of initial states of the attackers. Each defender is assumed to have zero initial velocity2. The function assignDtoGMILP assigns each defender \(\mathcal{D}_{j}\) in \(\mathcal{D}_{c}(0)\), initially located at \(\mathbf{r}_{dj}(0)\), to one of the gathering locations in \(\boldsymbol{\xi}^{g}=\{\boldsymbol{\xi}^{g}_{c_{1}},\boldsymbol{\xi}^{g}_{c_{2}},...,\boldsymbol{\xi}^{g}_{c_{N_{ac}(0)}}\}\) by solving the following mixed integer linear program (MILP):
Footnote 2: This is not a conservative assumption because if a defender has non-zero speed, one can apply acceleration opposite to its velocity to make the speed zero and assume the initial position for that defender to be the position at which this speed will become zero.
\[\operatorname*{arg\,min}_{\boldsymbol{\delta}}\sum_{k=1}^{N_{ac}(0)}\sum_{l=1}^{|I_{dc_{k}}(0)|}\sum_{j\in D_{c}(0)}\left\|\mathbf{r}_{dj}(0)-\boldsymbol{\xi}^{g}_{c_{k},l}\right\|\delta^{c_{k}}_{jl}\] (8a) Subject to \[\sum_{k\in I_{ac}(0)}\sum_{l\in I_{dc_{k}}(0)}\delta^{c_{k}}_{jl}=1,\quad\forall j\in D_{c}(0); \tag{8b}\] \[\sum_{j\in D_{c}(0)}\delta^{c_{k}}_{jl}=1,\quad\forall l\in I_{dc_{k}}(0),\,\forall k\in I_{ac}(0);\] (8c) \[\delta^{c_{k}}_{jl}\in\{0,1\},\quad\forall j\in D_{c}(0),\,\forall l\in I_{dc_{k}}(0),\,\forall k\in I_{ac}(0); \tag{8d}\]
where the distance between an initial position \(\mathbf{r}_{dj}(0)\) and \(\boldsymbol{\xi}^{g}_{c_{k},l}\) is used as the metric for solving the assignment problem, the constraints (8b) ensure that each defender is assigned to a single goal location, the constraints (8c) ensure that each goal location is assigned a unique defender, and the last constraints (8d) force the decision variables \(\delta^{c_{k}}_{jl}\) to be binary. The decision variable \(\delta^{c_{k}}_{jl}\) is \(1\) if the defender \(\mathcal{D}_{j}\) is assigned to go to the goal location \(\boldsymbol{\xi}^{g}_{c_{k},l}\) and \(0\) otherwise; and \(\boldsymbol{\delta}\in\{0,1\}^{N_{\delta}(0)}\) is the binary decision vector defined as \(\boldsymbol{\delta}=[\delta^{c_{k}}_{jl}|\forall j\in D_{c}(0),\,\forall l\in I_{dc_{k}}(0),\,\forall k\in I_{ac}(0)]^{T}\), where \(N_{\delta}(0)=(N_{d}-|A_{uc}(0)|)\sum_{k\in I_{ac}(0)}\mathscr{R}_{d}(|A_{c_{k}}(0)|)\). The function assignDtoGMILP also outputs \(\mathcal{T}=\{\mathcal{T}_{1},\mathcal{T}_{2},...,\mathcal{T}_{N_{ac}}\}\), where \(\mathcal{T}_{k}\), for all \(k\in I_{ac}(0)\), is the time required by the sub-team \(\mathcal{D}_{c_{k}}(0)\) to gather at its desired gathering formation. The parameter \(\epsilon_{tol}>0\) is a user-defined small number used as the convergence tolerance.
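Since (8) is a pure linear assignment problem, its constraint matrix is totally unimodular and the integrality constraints can be dropped; in practice it can also be solved directly with the Hungarian method instead of a general MILP solver, as in this minimal sketch (illustrative names, with `goal_pos` stacking all gathering positions in a fixed order):

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_defenders_to_goals(defender_pos, goal_pos):
    """Solve (8) as a linear assignment over Euclidean distances."""
    cost = np.linalg.norm(defender_pos[:, None, :] - goal_pos[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)     # Hungarian method
    return dict(zip(rows, cols)), cost[rows, cols].sum()
```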
The idea in Algorithm 1 is to find the gathering formations that are as far from the protected area as possible and each subteam \(\mathcal{D}_{c_{k}}(0)\) of the defenders is able to reach their assigned gathering formation at least \(\Delta T_{dc_{k}}^{g}\) s before the center of mass (CoM) of \(\mathcal{A}_{c_{k}}\), that follows its time-optimal trajectory towards the protected area, reaches the center of the gathering formation. Here \(\Delta T_{dc_{k}}^{g}\), for all \(k\in I_{ac}(0)\) is a user-defined time that accounts for the size of the swarm \(\mathcal{A}_{c_{k}}\) and the time required to get connected by strings once arrived at the desired formation.
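The bisection over the gathering location can be sketched for a single cluster as follows, assuming hypothetical helpers `point_on_path(gamma)` (the point \(\mathscr{P}_{ac_{k}}(\gamma)\)) and `gather_time(center)` (the gathering time \(\mathcal{T}_{k}\) returned by the assignment step):

```
def pick_gathering_point(Gamma, rho_pa, v_a_max, dT_g,
                         point_on_path, gather_time, tol=1e-2):
    """Bisection for the farthest gathering point the defenders can reach in time."""
    lo, hi = 0.0, Gamma - rho_pa
    while hi - lo > tol:
        gamma = 0.5 * (lo + hi)
        slack = gamma / v_a_max - gather_time(point_on_path(gamma)) - dT_g
        if slack < 0:
            lo = gamma   # defenders too slow: move the point along the path
        else:
            hi = gamma   # feasible: try a point farther from the protected area
    return point_on_path(hi)
```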
The Defender-to-Attacker-Swarm Assignment \(\beta_{c_{k}}(0,\cdot)\), for all \(k\in I_{ac}(0)\), is then obtained as:
\[\beta_{c_{k}}(0,l)=\operatorname*{arg\,max}_{j}\delta^{c_{k}*}_{jl} \tag{9}\]
where \(\delta^{c_{k}*}_{jl}\) is the optimal value of \(\delta^{c_{k}}_{jl}\) obtained by solving the MILP (8).
\(\rho_{ac_{k}}(t_{se})=\max_{i\in A_{c_{k}}(t_{se})}\|{\bf r}_{ai}(t_{se})-{\bf r }_{ac_{k}}(t_{se})\|\) exceeds the value \(\bar{\rho}_{ac_{k}}(t_{se})\)._
We also make the following assumption regarding the splitting behavior of the attackers.
**Assumption 4**.: _Once a swarm of attackers splits, its member attackers never rejoin each other, i.e., for all \(i\in I_{a}\), if \(\exists\,t>0\) such that \({\cal A}_{i}\notin{\cal A}_{c_{k}}(t)\) for any \(k\in I_{ac}(t)\) then \({\cal A}_{i}\notin{\cal A}_{c_{k}}(t^{\prime})\) for all \(t\leq t^{\prime}\)._
The splitting behavior of the attackers requires reassigning the defenders that were tasked with herding the swarm that just split to the newly available interception or herding tasks. Next, we describe a mixed-integer quadratically constrained quadratic program (MIQCQP) to solve this assignment problem.
#### IV-B1 Decentralized optimal assignment using MIQCQP
Suppose a swarm of attackers \({\cal A}_{c_{k}}\) splits into smaller swarms at \(t=t_{se}\). The swarms of the attackers newly identified by the DBSCAN algorithm are assigned new indices. Namely, one of the swarms is assigned the index \(k\), i.e., the index of the parent swarm \({\cal A}_{c_{k}}\), and the remaining swarms are assigned integers greater than \(N_{ac}(t_{se}^{-})\) as their indices, where \(t_{se}^{-}\) denotes the instant immediately before \(t=t_{se}\). Let \(A_{c}^{(k)}(t_{se})\) denote the indices of the clusters of the attackers that are newly formed out of the parent cluster \({\cal A}_{c_{k}}(t_{se}^{-})\) when the cluster \({\cal A}_{c_{k}}\) splits at \(t=t_{se}\), as identified by the DBSCAN algorithm. \({\cal A}_{uc}^{(k)}(t_{se})\) is the set of unclustered attackers separated from the original cluster \({\cal A}_{c_{k}}(t_{se}^{-})\) after the original cluster has split. We aim to assign the defenders in \({\cal D}_{c_{k}}(t_{se}^{-})\), which are already connected via the Open-StringNet \({\cal G}_{sn}^{op}(D_{c_{k}}(t_{se}^{-}))\) and were tasked to herd the original cluster \({\cal A}_{c_{k}}(t_{se}^{-})\), to either intercept the unclustered attackers separated from the original cluster or herd the smaller clusters formed by the attackers in the original swarm after splitting. Herding the smaller swarms of the attackers still requires the sub-teams of the defenders to stay connected via Open-StringNets, while the defenders assigned to intercept the unclustered attackers will now disconnect themselves from the rest of the Open-StringNet. In [23], we solved a connectivity constrained generalized assignment problem (C2GAP) to assign connected sub-teams of the defenders to herd the newly formed sub-swarms of the attackers after the original attacking swarm splits. In contrast, the current assignment problem is more complex due to the requirement of assigning some individual defenders, who must disconnect themselves from the rest of the Open-StringNet, to intercept the unclustered attackers.
Let \(\delta_{jk^{\prime}}^{herd}(t_{se})\) be the binary decision variable at time \(t=t_{se}\) that takes value 1 if the defender \({\cal D}_{j}\) is assigned to herd the swarm \({\cal A}_{c_{k^{\prime}}}(t_{se})\) and 0 otherwise. We formulate the MIQCQP in (10) to assign the defenders on the Open-StringNet \({\cal G}_{sn}^{op}(D_{c_{k}}(t_{se}^{-}))\) to herd the newly formed swarms of the attackers, \({\cal A}_{c_{k^{\prime}}}(t_{se})\), for all \(k^{\prime}\in A_{c}^{(k)}(t_{se})\), and to intercept the unclustered attackers \({\cal A}_{uc}^{(k)}(t_{se})\). In (10), \(\boldsymbol{\delta}^{(k)}(t_{se})\in\{0,1\}^{N_{\boldsymbol{\delta}^{(k)}}(t_{se})}\) is the binary decision vector defined as \(\boldsymbol{\delta}^{(k)}(t_{se})=[[\delta_{jk^{\prime}}^{herd}(t_{se})|k^{\prime}\in A_{c}^{(k)}(t_{se}),j\in D_{c_{k}}(t_{se}^{-})],[\delta_{ji}^{int}(t_{se})|i\in A_{uc}^{(k)}(t_{se}),j\in D_{c_{k}}(t_{se}^{-})]]^{T}\), where \(N_{\boldsymbol{\delta}^{(k)}}(t_{se})=|{\cal D}_{c_{k}}(t_{se}^{-})|\left(|A_{c}^{(k)}(t_{se})|+|A_{uc}^{(k)}(t_{se})|\right)\); \(I_{d_{c_{k}}}^{\prime}=\{1,2,...,|{\cal D}_{c_{k}}|-1\}\); and \(\beta_{k}^{-}(l)=\beta_{c_{k}}(t_{se}^{-},l)\).
The optimization cost in (10) is the sum of the distances of the defenders from the centers of the attackers' swarms to which they are assigned, the times required by the defenders to capture the unclustered attackers assigned to them, and the collision costs incurred by the defenders that are assigned the interception task. This ensures that the collective effort needed by all the defenders is minimized when enclosing the swarms of the attackers, and that the unclustered attackers are captured as quickly as possible while minimizing possible collisions among the fast-moving defenders that are assigned the interception task. The constraints (10b) ensure that each of the defenders in \({\cal D}_{c_{k}}(t_{se}^{-})\) is assigned either to exactly one unclustered attacker or to exactly one swarm of the attackers. The capacity constraints (10c) ensure that, for all \(k^{\prime}\in A_{c}^{(k)}(t_{se})\), the swarm \({\cal A}_{c_{k^{\prime}}}(t_{se})\) has exactly \(\mathscr{R}_{d}(|{\cal A}_{c_{k^{\prime}}}(t_{se})|)\) defenders assigned to it. The constraints (10d) ensure that each unclustered attacker in \({\cal A}_{uc}^{(k)}(t_{se})\) has exactly one of the terminal defenders assigned to it. The quadratic constraints (10e) ensure that all the defenders assigned to the swarm \({\cal A}_{c_{k^{\prime}}}(t_{se})\) are connected together with an underlying Open-StringNet for all \(k^{\prime}\in A_{c}^{(k)}\), and the constraint (10f) ensures that all the \(|D_{c_{k}}(t_{se}^{-})|\) defenders are assigned to the attackers' swarms and the unclustered attackers.
The aforementioned MIQCQP (10) is solved by the lead defender in \({\cal D}_{c_{k}}(t_{se}^{-})\), where the lead defender is identified as the one in the middle of the Open-StringNet formation, i.e., the defender \({\cal D}_{\beta_{c_{k}}(t_{se}^{-},l_{m})}\) where \(l_{m}=\lfloor\frac{|{\cal D}_{c_{k}}(t_{se}^{-})|}{2}\rfloor\), for every \(k\) for which \({\cal A}_{c_{k}}\) has split. This helps the defenders find the Defender-to-Attacker-Swarm assignment quickly, without having to consider all the agents in the assignment formulation, i.e., in a decentralized way.
The aforementioned MIQCQP (10) can be solved using the MIP solver Gurobi [39]. After solving (10), one can find the mapping \(\beta_{c_{k^{\prime}}}(t,\cdot)\), for all \(k^{\prime}\in A_{c}^{(k)}(t_{se})\), as follows:
\[\beta_{c_{k^{\prime}}}(t,l)=\beta_{c_{k}}^{-}(l_{0}+l),\ \ \ \ \forall t\in[t_{se}+t_{comp},\ t_{se}^{next}], \tag{11}\]
where \(l_{0}\) is the smallest integer for which \(\delta_{\beta_{c_{k}}^{-}(l_{0}+1)k}(t_{se})=1\); \(t_{comp}\) is the computation time to solve (10); and \(t_{se}^{next}\) is an unknown future time at which a split happens. In other words, the assignment obtained using the states at \(t_{se}\) continues to be a valid assignment until the next split event happens at some unknown time \(t_{se}^{next}\) in the future. The worst-case time complexity of the MIQCQP in (10) is:
\[C_{M}^{comp}(t_{se},k)=O(2^{N_{{\boldsymbol{\delta}}^{(k)}}(t_{se})}) \tag{12}\]
where \(N_{{\boldsymbol{\delta}}^{(k)}}(t_{se})=|{\cal D}_{c_{k}}(t_{se}^{-})|\left(|A_{c}^ {(k)}(t_{se})|+|A_{uc}^{(k)}(t_{se})|\right)\).
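Since (10) is solved with Gurobi, the model can be posed through the gurobipy interface. The following is a hedged, simplified sketch (not the authors' model): it keeps only the herding variables, the one-task-per-defender and capacity constraints, and a linear distance objective, while the interception variables, capture-time costs, and quadratic connectivity constraints of the full MIQCQP are indicated by comments. All names and inputs are placeholders.

```
import gurobipy as gp
from gurobipy import GRB

def solve_split_assignment(dist, n_def, n_clusters, capacity):
    """dist[j][k]: distance of defender j to cluster k; capacity[k]: defenders needed."""
    m = gp.Model("split-assignment")
    x = m.addVars(n_def, n_clusters, vtype=GRB.BINARY, name="herd")
    m.addConstrs((x.sum(j, "*") == 1 for j in range(n_def)))             # one task each
    m.addConstrs((x.sum("*", k) == capacity[k] for k in range(n_clusters)))
    # The full model adds binary interception variables, their capture-time costs,
    # and quadratic (StringNet-connectivity) constraints coupling net neighbors.
    m.setObjective(gp.quicksum(dist[j][k] * x[j, k]
                               for j in range(n_def) for k in range(n_clusters)),
                   GRB.MINIMIZE)
    m.optimize()
    return {j: k for j in range(n_def) for k in range(n_clusters) if x[j, k].X > 0.5}
```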
### _Suboptimal assignment when attackers split at \(t>0\)_
#### IV-C1 Assignment using reduced-size MIQCQP (rs-MIQCQP)
The worst-case complexity of the MIQCQP in (10) can be reduced further under a certain assumption on the behavior of the attackers. Let us first define a conical envelope around the center of a swarm.
**Definition 5** (Conical Envelope).: _A conical envelope \(E_{con}(\mathbf{r}_{0},\psi)\) centered at \(\mathbf{r}_{0}=[x_{0},y_{0}]^{T}\) is defined as \(E_{con}(\mathbf{r}_{0},\psi)=\big{\{}(x,y)\in\mathbb{R}^{2}\,|\,y-y_{0}-m_{1}(x-x_{0})>0\ \wedge\ y-y_{0}-m_{2}(x-x_{0})<0\big{\}}\cup\big{\{}(x,y)\in\mathbb{R}^{2}\,|\,y-y_{0}-m_{1}(x-x_{0})<0\ \wedge\ y-y_{0}-m_{2}(x-x_{0})>0\big{\}}\), where \(m_{1}=\tan\left(\tan^{-1}(\frac{y_{0}-y_{p}}{x_{0}-x_{p}})-\frac{\pi}{2}-\psi\right)\) and \(m_{2}=\tan\left(\tan^{-1}(\frac{y_{0}-y_{p}}{x_{0}-x_{p}})-\frac{\pi}{2}+\psi\right)\)._
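A minimal membership test consistent with the (reconstructed) Definition 5 is sketched below; `r_p` is the protected-area center used to orient the cone's axis, and the names are illustrative.

```
import numpy as np

def in_conical_envelope(r, r0, r_p, psi=np.pi / 4):
    """Check whether point r lies in the double cone centered at r0."""
    axis = np.arctan2(r0[1] - r_p[1], r0[0] - r_p[0])   # direction from r_p to r0
    m1 = np.tan(axis - np.pi / 2 - psi)
    m2 = np.tan(axis - np.pi / 2 + psi)
    g1 = (r[1] - r0[1]) - m1 * (r[0] - r0[0])
    g2 = (r[1] - r0[1]) - m2 * (r[0] - r0[0])
    return (g1 > 0 and g2 < 0) or (g1 < 0 and g2 > 0)   # union of the two half-cones
```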
**Assumption 5**.: _A swarm of the attackers \(\mathcal{A}_{c_{k}}\), for any \(k\), splits at \(t=t_{se}\), such that all the unclustered attackers (swarms with less than 3 attackers) are the farthest from the center of the original swarm \(\mathcal{A}_{c_{k}}(t_{se}^{-})\) and their centers lie within the conical envelope \(E_{con}(\mathbf{r}_{ac_{k}}(t_{se}^{-}),\frac{\pi}{4})\), i.e., \(\forall i\in A_{uc}^{(k)}(t_{se})\), \(\|\mathbf{r}_{ai}(t_{se})-\mathbf{r}_{ac_{k}}(t_{se}^{-})\|>\max_{k^{\prime} \in A_{c}^{(k)}(t_{se})}\big{\|}\mathbf{r}_{ac_{k^{\prime}}}(t_{se})-\mathbf{ r}_{ac_{k}}(t_{se}^{-})\big{\|}\) and \(\mathbf{r}_{ai}(t_{se})\in E_{con}(\mathbf{r}_{ac_{k}}(t_{se}),\frac{\pi}{4})\) (gray shaded region in Fig. 3)._
Assumption 5 implies that the unclustered attackers aim to spread in the direction transverse to the direction toward the protected area, because of the presence of the defenders in front of them, in order to maximize their chances of not getting captured by the defenders and of reaching the protected area. Under Assumption 5, we can assign only the defenders from either end of the Open-StringNet to intercept the unclustered attackers, while assigning the defenders in the central part of the Open-StringNet to herd the newly formed clusters of the attackers.
Let \(\mathcal{D}^{l}_{c_{k}}(t_{se}^{-})=\{\mathcal{D}_{j}|j\in D^{l}_{c_{k}}(t_{se}^{-})\}\) be the group of \(|A_{uc}^{(k)}(t_{se})|\) defenders at the left end of the Open-StringNet \(\mathcal{G}^{op}_{sn}(D_{c_{k}}(t_{se}^{-}))\), where \(D^{l}_{c_{k}}(t_{se}^{-})=\{\beta^{-}_{c_{k}}(1),\beta^{-}_{c_{k}}(2),...,\beta^{-}_{c_{k}}(|A_{uc}^{(k)}(t_{se})|)\}\). Here the left end of the Open-StringNet formation refers to the end approached first when one rotates anti-clockwise standing at the center \(\mathbf{r}_{df_{k}}\) and starting when facing in the direction \(\phi_{k}\) of the formation (see Fig. 3). Similarly, let \(\mathcal{D}^{r}_{c_{k}}(t_{se}^{-})=\{\mathcal{D}_{j}|j\in D^{r}_{c_{k}}(t_{se}^{-})\}\) be the group of \(|A_{uc}^{(k)}(t_{se})|\) defenders at the right end of the Open-StringNet formation \(\mathcal{G}^{op}_{sn}(D_{c_{k}}(t_{se}^{-}))\), where \(D^{r}_{c_{k}}(t_{se}^{-})=\{\beta^{-}_{c_{k}}(|D_{c_{k}}(t_{se}^{-})|-|A_{uc}^{(k)}|+1),\beta^{-}_{c_{k}}(|D_{c_{k}}(t_{se}^{-})|-|A_{uc}^{(k)}|+2),...,\beta^{-}_{c_{k}}(|D_{c_{k}}(t_{se}^{-})|)\}\) (see Fig. 3). Let us call \(\mathcal{D}^{t}_{c_{k}}(t_{se}^{-})=\{\mathcal{D}_{j}|j\in D^{l}_{c_{k}}(t_{se}^{-})\cup D^{r}_{c_{k}}(t_{se}^{-})\}\) the group of terminal defenders of the Open-StringNet \(\mathcal{G}^{op}_{sn}(D_{c_{k}}(t_{se}^{-}))\). We denote by \(\mathcal{D}^{c}_{c_{k}}(t_{se}^{-})=\{\mathcal{D}_{j}|j\in D^{c}_{c_{k}}(t_{se}^{-})\}\) the central defenders, i.e., the group of the defenders excluding the terminal defenders \(\mathcal{D}^{t}_{c_{k}}(t_{se}^{-})\), where \(D^{c}_{c_{k}}(t_{se}^{-})=D_{c_{k}}(t_{se}^{-})\backslash(D^{l}_{c_{k}}(t_{se}^{-})\cup D^{r}_{c_{k}}(t_{se}^{-}))\).
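In code, the terminal and central defenders are just slices of the ordered assignment list; a small sketch (assuming \(2|A_{uc}^{(k)}|\leq|D_{c_{k}}|\), with illustrative names):

```
def split_terminal_central(beta, n_uc):
    """beta[l-1]: index of the l-th defender on the net; n_uc: unclustered attackers."""
    left = list(beta[:n_uc])
    right = list(beta[-n_uc:]) if n_uc > 0 else []
    terminal = left + right
    central = [j for j in beta if j not in terminal]
    return terminal, central
```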
Next, we develop a reduced-size MIQCQP in (13), in which only the terminal defenders \(\mathcal{D}^{t}_{c_{k}}(t_{se}^{-})\) are assigned the interception task. In (13), the length of the decision vector \(\mathbf{\delta}^{(k)}_{rs}(t_{se})=[[\delta^{herd}_{jk^{\prime}}(t_{se})|k^{\prime}\in A_{c}^{(k)}(t_{se}),\ j\in D_{c_{k}}(t_{se}^{-})],[\delta^{int}_{ji}(t_{se})|i\in A_{uc}^{(k)}(t_{se}),\ j\in D^{t}_{c_{k}}(t_{se}^{-})]]^{T}\) is \(N_{\mathbf{\delta}_{rs}^{(k)}}(t_{se})=|\mathcal{D}_{c_{k}}(t_{se}^{-})||A_{c}^{(k)}(t_{se})|+\min(2|A_{uc}^{(k)}(t_{se})|,|\mathcal{D}_{c_{k}}(t_{se}^{-})|)|A_{uc}^{(k)}(t_{se})|\). We have the following result about the computation cost of (13).

Figure 3: Assignment of the defenders after the attackers split using rs-MIQCQP
**Lemma 1**.: _The worst-case computational cost of (13), \(C_{rsM}^{comp}(t_{se},k)\), satisfies:_
\[C_{rsM}^{comp}(t_{se},k)=O(2^{N_{\mathbf{\delta}_{rs}^{(k)}}(t_{se})})\leq C_{M}^ {comp}(t_{se},k). \tag{14}\]
_Furthermore, if the number of unclustered attackers is less than half of the total number of attackers in the original cluster, i.e., \(|A_{uc}^{(k)}(t_{se})|<\frac{|\mathcal{A}_{c_{k}}(t_{se}^{-})|}{2}\), then \(C_{rsM}^{comp}(t_{se},k)<C_{M}^{comp}(t_{se},k)\)._
Figure 3 shows an instance of the assignment of the defenders on the Open-StringNet \(\mathcal{G}_{sn}^{op}(D_{c_{1}}(t_{se}^{-}))\) at some time \(t=t_{se}\), where \(D_{c_{1}}(t_{se}^{-})=\{1,2,3,...,13\}\), to the newly formed clusters \(\mathcal{A}_{c_{1}}(t_{se})=\{\mathcal{A}_{1},\mathcal{A}_{3},\mathcal{A}_{4},\mathcal{A}_{6},\mathcal{A}_{9}\}\), \(\mathcal{A}_{c_{2}}(t_{se})=\{\mathcal{A}_{2},\mathcal{A}_{5},\mathcal{A}_{7},\mathcal{A}_{8},\mathcal{A}_{10}\}\) and the unclustered attackers \(\mathcal{A}_{uc}^{(1)}(t_{se})=\{\mathcal{A}_{11},\mathcal{A}_{12},\mathcal{A}_{13}\}\). After solving the rs-MIQCQP (13), as shown in Fig. 3, the defenders \(\mathcal{D}_{\beta_{1}^{-}(1)}\), \(\mathcal{D}_{\beta_{1}^{-}(2)}\) and \(\mathcal{D}_{\beta_{1}^{-}(13)}\) are assigned to the unclustered attackers \(\mathcal{A}_{12}\), \(\mathcal{A}_{11}\), \(\mathcal{A}_{13}\), respectively, so that these attackers can be intercepted as soon as possible. The connected sub-teams \(\{\mathcal{D}_{\beta_{1}^{-}(8)},\mathcal{D}_{\beta_{1}^{-}(9)},\mathcal{D}_{\beta_{1}^{-}(10)},\mathcal{D}_{\beta_{1}^{-}(11)},\mathcal{D}_{\beta_{1}^{-}(12)}\}\) and \(\{\mathcal{D}_{\beta_{1}^{-}(3)},\mathcal{D}_{\beta_{1}^{-}(4)},\mathcal{D}_{\beta_{1}^{-}(5)},\mathcal{D}_{\beta_{1}^{-}(6)},\mathcal{D}_{\beta_{1}^{-}(7)}\}\) are assigned to the newly formed swarms of the attackers \(\mathcal{A}_{c_{1}}(t_{se})\) and \(\mathcal{A}_{c_{2}}(t_{se})\), respectively.
#### IV-C2 Hierarchical approach to assignment (a heuristic)
Finding the optimal assignment of the defenders to the interception and herding tasks by solving the MIQCQPs (10) and (13) may not be real-time implementable for a large number of agents (\(>100\)). In this subsection, we develop a computationally efficient hierarchical approach to find the defender-to-attacker-swarm assignment under Assumption 5. The idea is to split a large, high-dimensional assignment problem into smaller, low-dimensional assignment problems that can be solved optimally and quickly.
Let \(\mathscr{A}_{k}(t_{se})\) be a data structure that stores information about the attackers in \(\mathcal{A}_{c_{k}}(t_{se}^{-})\) and has the data fields: \(\mathscr{A}_{k}(t_{se}).\textbf{r}_{ac}=[\textbf{r}_{ac_{k^{\prime}}}|k^{\prime}\in A_{c}^{(k)}(t_{se})]\), the centers of the newly formed attackers' swarms after separating from the original swarm \(\mathcal{A}_{c_{k}}(t_{se}^{-})\); \(\mathscr{A}_{k}(t_{se}).\textbf{n}_{ac}=[|\mathcal{A}_{c_{k^{\prime}}}(t_{se})|\,|\,k^{\prime}\in A_{c}^{(k)}(t_{se})]\), the numbers of the attackers in each swarm; \(\mathscr{A}_{k}(t_{se}).N_{ac}=|A_{c}^{(k)}(t_{se})|\), the total number of attackers' clusters formed from \(\mathcal{A}_{c_{k}}(t_{se}^{-})\); \(\mathscr{A}_{k}(t_{se}).\textbf{r}_{uc}=[\textbf{r}_{ai}|i\in A_{uc}^{(k)}(t_{se})]\), the current states of the unclustered attackers in \(\mathcal{A}_{uc}^{(k)}(t_{se})\); \(\mathscr{A}_{k}(t_{se}).N_{uc}\), the total number of unclustered attackers; and \(\mathscr{A}_{k}(t_{se}).N_{a}=|\mathcal{A}_{c_{k}}(t_{se}^{-})|\), the total number of attackers in \(\mathcal{A}_{c_{k}}(t_{se}^{-})\). Similarly, \(\mathscr{D}_{k}(t_{se})\) is a data structure that stores the information of the defenders on the original Open-StringNet \(\mathcal{G}_{sn}^{op}(D_{c_{k}}(t_{se}^{-}))\) with the data fields: \(\mathscr{D}_{k}(t_{se}).\textbf{r}_{d}=[\textbf{r}_{dj}|j\in D_{c_{k}}(t_{se}^{-})]\), the positions of the defenders on \(\mathcal{G}_{sn}^{op}(D_{c_{k}}(t_{se}^{-}))\); and \(\mathscr{D}_{k}(t_{se}).\beta=\beta_{c_{k}}(t_{se}^{-})\), the original assignment mapping of the defenders on the Open-StringNet \(\mathcal{G}_{sn}^{op}(D_{c_{k}}(t_{se}^{-}))\).
Algorithm 2 provides the steps to solve the assignment problem quickly by hierarchically reducing the original big assignment problem into smaller ones.
In Algorithm 2, the function splitUnclustAtt\((\mathscr{A}_{k}(t_{se}),\mathscr{D}_{k}(t_{se}))\) splits the unclustered attackers \(\mathcal{A}_{uc}^{(k)}(t_{se})\) into two groups: the left group \(\mathcal{A}_{uc}^{(k),l}(t_{se})\) and the right group \(\mathcal{A}_{uc}^{(k),r}(t_{se})\). The normal bisector of the line segment joining the positions \(\textbf{r}_{d\beta_{c_{k}}^{-}(1)}(t_{se})\) and \(\textbf{r}_{d\beta_{c_{k}}^{-}(|D_{c_{k}}|)}(t_{se})\) acts as the separating hyperplane for the groups \(\mathcal{A}_{uc}^{(k),l}(t_{se})\) and \(\mathcal{A}_{uc}^{(k),r}(t_{se})\). The unclustered attackers that lie in the half-plane containing the left side of the Open-StringNet, including the normal bisector itself, form the left group \(\mathcal{A}_{uc}^{(k),l}(t_{se})\), and the remaining unclustered attackers form the right group \(\mathcal{A}_{uc}^{(k),r}(t_{se})\) (see Fig. 4). The function splitUnclustAtt also outputs \(\mathcal{D}_{uc}^{(k),l}(t_{se})\), the leftmost \(|\mathcal{A}_{uc}^{(k),l}(t_{se})|\) defenders on the Open-StringNet \(\mathcal{G}_{sn}^{op}(D_{c_{k}}(t_{se}^{-}))\), and \(\mathcal{D}_{uc}^{(k),r}(t_{se})\), the rightmost \(|\mathcal{A}_{uc}^{(k),r}(t_{se})|\) defenders on the Open-StringNet \(\mathcal{G}_{sn}^{op}(D_{c_{k}}(t_{se}^{-}))\) (see Fig. 4). The function CADAA\((\mathcal{A}_{uc}^{(k),l}(t_{se}),\mathcal{D}_{uc}^{(k),l}(t_{se}))\) assigns the defenders in \(\mathcal{D}_{uc}^{(k),l}(t_{se})\) to intercept the attackers \(\mathcal{A}_{uc}^{(k),l}(t_{se})\) by solving the CADAA problem (5). Line 5 in Algorithm 2 removes the defenders in \(\mathcal{D}_{uc}^{(k),l}(t_{se})\) and \(\mathcal{D}_{uc}^{(k),r}(t_{se})\), which are already assigned to intercept the unclustered attackers, from further processing. The function \(\texttt{assignHierarchical}(\mathscr{A}_{k}(t_{se}),\mathscr{D}_{k}(t_{se}))\) then assigns the remaining connected defenders on the Open-StringNet to the clusters of the attackers \(\{\mathcal{A}_{c_{k^{\prime}}}(t_{se})|k^{\prime}\in A_{c}^{(k)}(t_{se})\}\).
```
Input:\(\mathscr{A}_{k}(t_{se})\), \(\mathscr{D}_{k}(t_{se})\)
1 \([\mathcal{A}_{uc}^{(k),l}(t_{se}),\mathcal{D}_{uc}^{(k),l}(t_{se}),\mathcal{A}_{uc}^{(k),r}(t_{se}),\mathcal{D}_{uc}^{(k),r}(t_{se})]=\) splitUnclustAtt\((\mathscr{A}_{k}(t_{se}),\mathscr{D}_{k}(t_{se}))\);
2 \(\beta_{uc}^{(k),l}\)= CADAA\((\mathcal{A}_{uc}^{(k),l}(t_{se}),\mathcal{D}_{uc}^{(k),l}(t_{se}))\);
3 \(\beta_{uc}^{(k),r}\)= CADAA\((\mathcal{A}_{uc}^{(k),r}(t_{se}),\mathcal{D}_{uc}^{(k),r}(t_{se}))\);
4 \(\beta_{uc}(t_{se})\leftarrow\{\beta_{uc}(t_{se}),\;\beta_{uc}^{(k),l}\cup\beta_{uc}^{(k),r}\}\);
5 \(\mathscr{D}_{k}(t_{se}).\mathcal{D}_{c_{k}}\leftarrow(\mathscr{D}_{k}(t_{se}).\mathcal{D}_{c_{k}})\backslash(\mathcal{D}_{uc}^{(k),l}(t_{se})\cup\mathcal{D}_{uc}^{(k),r}(t_{se}))\);
6 \(\beta_{c}(t_{se})\leftarrow\{\beta_{c}(t_{se}),\;\texttt{assignHierarchical}(\mathscr{A}_{k}(t_{se}),\mathscr{D}_{k}(t_{se}))\}\);
return \(\beta_{uc}(t_{se}),\beta_{c}(t_{se})\);
```
**Algorithm 2** Defender-to-Attacker-Swarm Assignment (DASA)
In the function \(\texttt{assignHierarchical}\), the function \(\texttt{splitClustersEqual}\)\((\mathscr{A}_{k}(t_{se}),\mathscr{D}_{k}(t_{se}))\) splits the clusters of the attackers into two groups \(\mathscr{A}_{k}^{l}(t_{se})\) and \(\mathscr{A}_{k}^{r}(t_{se})\) with roughly equal numbers of attackers, and the defenders into two groups \(\mathscr{D}_{k}^{l}(t_{se})\) and \(\mathscr{D}_{k}^{r}(t_{se})\). The split is performed based on the angles \(\psi_{k^{\prime}}\) made by the relative vectors \(\mathbf{r}_{ac_{k^{\prime}}}(t_{se})-\mathbf{r}_{dc_{k}}(t_{se}^{-})\), for all \(k^{\prime}\in A_{c}^{(k)}(t_{se})\), with the vector \(\mathbf{r}_{dj_{1}}(t_{se}^{-})-\mathbf{r}_{dc_{k}}(t_{se}^{-})\), where \(\mathbf{r}_{dc_{k}}(t_{se}^{-})=\frac{\mathbf{r}_{dj_{1}}(t_{se}^{-})+\mathbf{r}_{dj_{t}}(t_{se}^{-})}{2}\) is the center of \(\mathcal{D}_{c_{k}}(t_{se}^{-})\), with \(j_{1}=\beta_{c_{k}}^{-}(1)\) and \(j_{t}=\beta_{c_{k}}^{-}(|D_{c_{k}}(t_{se}^{-})|)\). We first arrange these angles \(\psi_{k^{\prime}}\) in descending order. The first few clusters in the arranged list, containing roughly half the total number of attackers, become the left group \(\mathscr{A}_{k}^{l}(t_{se})\) and the rest become the right group \(\mathscr{A}_{k}^{r}(t_{se})\) (see Fig. 4). Similarly, the left group \(\mathscr{D}_{k}^{l}(t_{se})\) is formed by the first \(\mathscr{A}_{k}^{l}(t_{se}).N_{a}\) defenders as per the assignment \(\beta_{c_{k}}^{-}\), and the remaining defenders form the right group \(\mathscr{D}_{k}^{r}(t_{se})\) (see Fig. 4). We assign the defenders in \(\mathscr{D}_{k}^{l}(t_{se})\) only to the swarms in \(\mathscr{A}_{k}^{l}(t_{se})\) and those in \(\mathscr{D}_{k}^{r}(t_{se})\) only to the swarms in \(\mathscr{A}_{k}^{r}(t_{se})\). By doing so we may not obtain an assignment that minimizes the cost in (10a), but we reduce the computation time significantly and obtain a reasonably good assignment quickly. Within the function \(\texttt{assignHierarchical}\), this process of splitting is done recursively until the number of attackers' swarms is smaller than a pre-specified number \(\underline{N}_{ac}(>2)\). The function \(\texttt{assignMIQCQP}\) then finds the defender-to-attacker-swarm assignment \(\beta_{c}(t_{se})\) by solving (13) after setting \(A_{uc}^{(k)}(t_{se})\) and \(D_{c_{k}}^{t}(t_{se}^{-})\) to empty sets, i.e., with no assignments of the terminal defenders to the unclustered attackers, as this assignment is already performed in the prior steps.
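The angle-based split can be sketched as follows (illustrative names; `cluster_centers` is an \((N_{ac},2)\) array of swarm centers, and `ref_dir` is the vector from the formation center toward its first defender):

```
import numpy as np

def split_clusters_equal(cluster_centers, cluster_sizes, r_dc, ref_dir):
    """Split clusters into two groups of roughly equal attacker counts by angle."""
    rel = cluster_centers - r_dc
    ref = np.arctan2(ref_dir[1], ref_dir[0])
    ang = (np.arctan2(rel[:, 1], rel[:, 0]) - ref) % (2 * np.pi)
    order = np.argsort(-ang)                 # descending angle, as in the text
    total, count, left = cluster_sizes.sum(), 0, []
    for k in order:
        left.append(int(k))
        count += cluster_sizes[k]
        if count >= total / 2:
            break
    right = [int(k) for k in order if int(k) not in left]
    return left, right
```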
We have the following result about the worst-case computational cost of the hierarchical heuristic.
**Lemma 2**.: _For a given assignment problem of assigning \(|D_{c_{k}}(t_{se}^{-})|\) defenders to \(\mathscr{A}_{k}(t_{se}).N_{a}\) (\(=|D_{c_{k}}(t_{se}^{-})|\)) attackers divided into \(\mathscr{A}_{k}(t_{se}).N_{ac}\) clusters and \(\mathscr{A}_{k}(t_{se}).N_{uc}\) unclustered attackers with a given threshold \(\underline{N}_{ac}(>2)\), the worst-case computational cost of the hierarchical heuristic in Algorithm 2 is:_
\[C_{H}^{comp}(t_{se},k)=O\big{(}2^{(\mathscr{A}_{k}(t_{se}).N_{uc})^{2}}+(N_{rsM}^{\prime}-1)2^{3\underline{N}_{ac}^{2}}+2^{\underline{N}_{ac}n_{max}}+2^{3n_{ac,k}^{2}}\big{)} \tag{15}\]
_where \(N_{rsM}^{\prime}=\lfloor\frac{\mathscr{A}_{k}(t_{se}).N_{ac}}{\underline{N}_{ac}}\rfloor\), \(n_{max}=\mathscr{A}_{k}(t_{se}).N_{a}-\mathscr{A}_{k}(t_{se}).N_{uc}-3n_{ac,k}-3\underline{N}_{ac}(N_{rsM}^{\prime}-1)\), and \(n_{ac,k}=\mathscr{A}_{k}(t_{se}).N_{ac}-N_{rsM}^{\prime}\underline{N}_{ac}\)._
Proof: In Algorithm 2, two CADAA problems (mixed-integer quadratic programs) are solved (lines 2 and 3) to assign the defenders to the left and right groups of unclustered attackers. Suppose the numbers of unclustered attackers in the left and right groups are \(N_{uc}^{l}=|\mathcal{A}_{uc}^{(k),l}(t_{se})|\) and \(N_{uc}^{r}=|\mathcal{A}_{uc}^{(k),r}(t_{se})|\), respectively.
Figure 4: Grouping for the hierarchical algorithm

Additionally, several rs-MIQCQPs are solved in Algorithm 2 to assign defenders to the clusters of the attackers. The maximum number of clusters in any rs-MIQCQP solved in Algorithm 2 is \(\underline{N}_{ac}\). Based on the hierarchical breakdown of the original assignment problem, the maximum number of such rs-MIQCQPs is \(N^{\prime}_{rsM}=\lfloor\frac{\mathscr{A}_{k}(t_{se}).N_{ac}}{\underline{N}_{ac}}\rfloor\). Let \(n_{i}\) (\(\geq 3\underline{N}_{ac}\)) denote the number of attackers in the \(\underline{N}_{ac}\) clusters in the \(i^{th}\) rs-MIQCQP for all \(i\in\{1,2,3,...,N^{\prime}_{rsM}\}\). Similarly, let \(n_{0}\) be the number of attackers in the remaining \(n_{ac,k}=\mathscr{A}_{k}(t_{se}).N_{ac}-N^{\prime}_{rsM}\underline{N}_{ac}\) clusters considered in a separate rs-MIQCQP. We also have that an equal number of defenders is to be assigned to these attackers by solving these integer programs. Then, the worst-case computational cost of solving all the integer programs in Algorithm 2 is:
\[C^{comp}=O\big{(}\underbrace{2^{(N^{l}_{uc})^{2}}+2^{(N^{r}_{uc})^{2}}}_{C^{comp}_{uc}}+\underbrace{2^{n_{0}n_{ac,k}}+\sum_{i=1}^{N^{\prime}_{rsM}}2^{n_{i}\underline{N}_{ac}}}_{C^{comp}_{c}}\big{)} \tag{16}\]
where \(N^{l}_{uc}+N^{r}_{uc}=\mathscr{A}_{k}(t_{se}).N_{uc}\), and \(\sum_{i=0}^{N^{\prime}_{rsM}}n_{i}=\mathscr{A}_{k}(t_{se}).N_{a}-\mathscr{A}_{k}(t_{se}).N_{uc}\). Since the assignments to the unclustered and the clustered attackers are made separately, we find the maximum values of \(C^{comp}_{uc}\) and \(C^{comp}_{c}\) separately. The maximum value of \(C^{comp}_{uc}\) occurs when either \(N^{l}_{uc}=\mathscr{A}_{k}(t_{se}).N_{uc}\) and \(N^{r}_{uc}=0\), or \(N^{l}_{uc}=0\) and \(N^{r}_{uc}=\mathscr{A}_{k}(t_{se}).N_{uc}\). We have that \(n_{ac,k}\leq\underline{N}_{ac}\). Then, the maximum value of \(C^{comp}_{c}\) subject to \(\sum_{i=0}^{N^{\prime}_{rsM}}n_{i}=\mathscr{A}_{k}(t_{se}).N_{a}-\mathscr{A}_{k}(t_{se}).N_{uc}\) occurs when all \(n_{i}\), except one \(n_{i}\) for some \(i\in\{1,2,3,...,N^{\prime}_{rsM}\}\), take their smallest values, i.e., when \(n_{0}=3n_{ac,k}\), \(n_{i}=3\underline{N}_{ac}\) for all \(i\in\{2,3,...,N^{\prime}_{rsM}\}\) and \(n_{1}=n_{max}=\mathscr{A}_{k}(t_{se}).N_{a}-\mathscr{A}_{k}(t_{se}).N_{uc}-3n_{ac,k}-3\underline{N}_{ac}(N^{\prime}_{rsM}-1)\). Hence, the worst-case computational cost of the hierarchical heuristic is \(C^{comp}_{H}(t_{se},k)=O\big{(}2^{(\mathscr{A}_{k}(t_{se}).N_{uc})^{2}}+(N^{\prime}_{rsM}-1)2^{3\underline{N}_{ac}^{2}}+2^{\underline{N}_{ac}n_{max}}+2^{3n_{ac,k}^{2}}\big{)}\).
### _Assignment when attackers' swarm does not avoid defenders_
When the attackers in a given swarm \(\mathcal{A}_{c_{k}}(t)\) do not try to avoid the defenders and instead simply aim to reach the protected area, i.e., the attackers are risk-taking, herding will not be an effective way of defense. Mathematically, this intention of the swarm of attackers \(\mathcal{A}_{c_{k}}(t)\) to not avoid the defenders and simply target the protected area is characterized by the following condition:
\[\|\mathbf{r}_{ac_{k}}-\mathbf{r}_{p}\|\leq\|\mathbf{r}_{df_{k}}(0)-\mathbf{r} _{p}\|\ \&\ (\mathbf{r}_{ac_{k}}-\mathbf{r}_{p})^{T}\mathbf{v}_{ac_{k}}<0 \tag{17}\]
This condition implies that the center of mass of the attackers in \(\mathcal{A}_{c_{k}}(t)\) has come closer to the protected area than the gathering center of the corresponding herding defenders in \(\mathcal{D}_{c_{k}}(t)\), and that the attackers' average velocity vector points towards the protected area. In other words, the attackers in \(\mathcal{A}_{c_{k}}(t)\) are not necessarily moving away from the defenders and intend to simply reach the protected area \(\mathcal{P}\), i.e., the attackers are risk-taking. Once the swarm \(\mathcal{A}_{c_{k}}(t)\) satisfies (17), the corresponding defenders \(\mathcal{D}_{c_{k}}(t)\) choose to intercept all the attackers in \(\mathcal{A}_{c_{k}}(t)\). The defenders in \(\mathcal{D}_{c_{k}}\) are assigned to intercept the attackers in \(\mathcal{A}_{c_{k}}\) by using CADAA similar to (5), with \(A_{c_{k}}(t)\) and \(D_{c_{k}}(t)\) in place of \(A_{uc}(0)\) and \(I_{d}\), respectively.
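The risk-taking test in (17) reduces to a distance comparison and a sign check on a dot product; a one-function sketch with illustrative names:

```
import numpy as np

def swarm_is_risk_taking(r_ac, v_ac, r_df, r_p):
    """Condition (17): swarm center closer to P than the gathering center,
    with the mean velocity pointing toward P."""
    closer = np.linalg.norm(r_ac - r_p) <= np.linalg.norm(r_df - r_p)
    closing = np.dot(r_ac - r_p, v_ac) < 0
    return closer and closing
```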
### _Comparison of the assignment algorithms_
In this section, we compare the computational performance of the assignment algorithms. Using the results from Lemmas 1 and 2, we have the following result about the computational costs of the MIQCQP, the rs-MIQCQP and the heuristic in Algorithm 2.
**Theorem 3**.: _Let Assumption 5 hold and \(1<\underline{N}_{ac}<N_{ac}\), then the worst-case computational costs \(C^{comp}_{M}\), \(C^{comp}_{rsM}\) and \(C^{comp}_{H}\) of the MIQCQP, the rs-MIQCQP and the heuristic, respectively, satisfy: \(C^{comp}_{H}(t_{se},k)<C^{comp}_{rsM}(t_{se},k)\leq C^{comp}_{M}(t_{se},k)\)._
Proof: From Lemma 2, we have:
\[\begin{array}{rl} C_{H}^{comp}(t_{se},k)&=O\big{(}2^{(\mathscr{A}_{k}(t_{se}).N_{uc})^{2}}+(N_{rsM}^{\prime}-1)2^{3\underline{N}_{ac}^{2}}+2^{\underline{N}_{ac}n_{max}}+2^{3n_{ac,k}^{2}}\big{)}\\ &\leq O\big{(}2^{(\mathscr{A}_{k}(t_{se}).N_{uc})^{2}+\underline{N}_{ac}(\mathscr{A}_{k}(t_{se}).N_{a}-\mathscr{A}_{k}(t_{se}).N_{uc})}\big{)}\\ &\leq O\big{(}2^{\min(2\mathscr{A}_{k}(t_{se}).N_{uc},|\mathcal{D}_{c_{k}}(t_{se}^{-})|)\,\mathscr{A}_{k}(t_{se}).N_{uc}}\times 2^{|\mathcal{D}_{c_{k}}(t_{se}^{-})|\,\mathscr{A}_{k}(t_{se}).N_{ac}}\big{)}\\ &=O\big{(}2^{N_{\boldsymbol{\delta}_{rs}^{(k)}}(t_{se})}\big{)}=C_{rsM}^{comp}(t_{se},k). \end{array} \tag{18}\]
Combining this with Lemma 1 gives \(C_{H}^{comp}(t_{se},k)<C_{rsM}^{comp}(t_{se},k)\leq C_{M}^{comp}(t_{se},k)\), which completes the proof.
The heuristic is thus more suitable for real-time operation; see Figure 8 for a comparison.
We also compare the resulting cost of the heuristic, \(cost_{H}\), against the optimal cost, \(cost_{rsM}\), obtained by solving the rs-MIQCQP, by calculating the percentage error \(\%E=\frac{100|cost_{rsM}-cost_{H}|}{cost_{rsM}}\). As one can observe in Fig. 9, the percentage error \(\%E\) is below \(4\%\) for all the evaluated cases. This means that the proposed heuristic provides an assignment solution that is very close to the one obtained by the rs-MIQCQP within a fraction of the time taken by the rs-MIQCQP. The heuristic algorithm can be run at around 2-5 Hz for problems with up to 60 attackers and up to 24 individual risk-taking attackers. An analysis providing theoretical guarantees on the cost of the heuristic is left open for future research.
### _Control augmentation for inter-defender collision avoidance_
The intercepting defenders need to avoid collisions with the other intercepting as well as the herding defenders for their own safety. Each intercepting defender \(\mathcal{D}_{j}\), for all \(j\in D_{uc}(t)\), employs an exponential CBF (ECBF) [40, 41] based control augmentation to avoid collisions with the other defenders, such that the time-optimal control action corresponding to its assigned attacker is minimally augmented. This ECBF-based controller treats the Open-StringNets and Closed-StringNets formed by the sub-teams of the herding defenders as big individual agents, with their corresponding formation radii, that the individual intercepting defenders need to avoid.
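For a double-integrator defender avoiding one circular obstacle, the ECBF augmentation has a simple closed form: project the nominal acceleration onto the half-space where \(\ddot{h}+k_{1}\dot{h}+k_{2}h\geq 0\), with \(h=\|\mathbf{p}_{rel}\|^{2}-d_{safe}^{2}\). The sketch below assumes a static obstacle and illustrative gains \(k_{1},k_{2}\) (chosen so that \(s^{2}+k_{1}s+k_{2}\) is Hurwitz); it is not the authors' implementation.

```
import numpy as np

def ecbf_augment(p_rel, v_rel, u_des, d_safe, k1=4.0, k2=4.0):
    """Minimally perturb u_des so that h_ddot + k1*h_dot + k2*h >= 0 holds."""
    h = p_rel @ p_rel - d_safe**2
    h_dot = 2.0 * p_rel @ v_rel
    a = 2.0 * p_rel                                    # coefficient of u in h_ddot
    b = -(2.0 * v_rel @ v_rel + k1 * h_dot + k2 * h)   # require a @ u >= b
    slack = a @ u_des - b
    if slack >= 0:
        return u_des                                   # nominal input already safe
    return u_des + (-slack) * a / (a @ a)              # closest safe input
```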
Figure 5: Computation time for MIQCQP in (10)
Figure 6: Computation time for rs-MIQCQP in (13)
Figure 7: Computation time for the Hierarchical Approach in Algorithm 2
Figure 8: Comparison of computation times of the rs-MIQCQP and the hierarchical heuristic (The line types solid (-), dash (- -), and dash-dot (-.) correspond to the cases with \(N_{uc}=8\), \(N_{uc}=16\), and \(N_{uc}=24\), respectively.)
## IV Simulation Results
In this section, we provide MATLAB simulations to demonstrate the effectiveness of the multi-mode defense strategy in different scenarios, as explained below. Some key parameters used in the simulations are: \(\rho_{a}=\rho_{d}=0.5\,m\), \(C_{D}=1.5\), \(\bar{v}_{a}=6\,m/s\) (\(\bar{u}_{a}=9\,m/s^{2}\)), \(\bar{v}_{d}=12.27\,m/s\) (\(\bar{u}_{d}=18.4\,m/s^{2}\)), \(\varrho_{d}^{int}=5\,m\), \(\rho_{p}=45\,m\). The computer specifications used to run these simulations are the same as those used in Section III-E.
We consider a total of five scenarios (case studies) whose simulation videos are available at ([https://youtu.be/cofhjqudT9U](https://youtu.be/cofhjqudT9U)). In the interest of space, in this section we provide plots of the simulation of Scenario 3. The description of all scenarios, as well as the detailed results of Scenario 3, are given in the following subsections.
#### IV-1 Defenders and Attackers are equal in number
We consider three different scenarios.
* Scenario (1): There are 32 attackers that appear, at \(t=0\), to be divided into swarms \(\mathcal{A}_{c_{1}}(0)=\{\mathcal{A}_{i}|i\in\{1,2,...,20\}\},\mathcal{A}_{c_{2 }}(0)=\{\mathcal{A}_{i}|i\in\{21,22,...,29\}\}\) and unclustered attackers \(\mathcal{A}_{uc}(0)=\{\mathcal{A}_{30},\mathcal{A}_{31},\mathcal{A}_{32}\}\) that are trying to reach the protected area, and 32 defenders that are aiming to prevent the attackers from doing so. In this scenario, after some time, \(\mathcal{A}_{c_{1}}\) splits into 3 smaller swarms and some of the terminal attackers from \(\mathcal{A}_{c_{2}}\) separate into individual risk-taking attackers.
* Scenario (2): There are 20 attackers that are divided into swarms \(\mathcal{A}_{c_{1}}(0)=\{\mathcal{A}_{i}|i\in\{1,2,...,12\}\},\mathcal{A}_{c_{2 }}(0)=\{\mathcal{A}_{i}|i\in\{13,14,...,17\}\}\) and unclustered attackers \(\mathcal{A}_{uc}(0)=\{\mathcal{A}_{18},\mathcal{A}_{19},\mathcal{A}_{20}\}\). In this scenario, some of the attackers from \(\mathcal{A}_{c_{1}}\) separate as individual risk-taking attackers.
* Scenario (3): At t=0, when the attackers are first identified, they are observed to be distributed as: 2 swarms \(\mathcal{A}_{c_{1}}(0)=\{\mathcal{A}_{i}|i\in\{1,2,3,...,10\}\}\), \(\mathcal{A}_{c_{2}}(0)=\{\mathcal{A}_{i}|i\in\{11,12,13,14\}\}\), and unclustered attackers \(\mathcal{A}_{uc}(0)=\{\mathcal{A}_{15},\mathcal{A}_{16}\}\).
In the interest of space, we only discuss Scenario 3 in more detail here. For the purpose of demonstration, the motion of the unclustered attackers is simulated under the time-optimal control to reach the protected area. The problem of finding the defenders' assignment to the attackers and the gathering formations is solved using Algorithm 1. This results in two sub-teams of defenders, \(D_{c_{1}}(0)=\{\mathcal{D}_{12},\mathcal{D}_{10},\mathcal{D}_{16},\mathcal{D}_{14},\mathcal{D}_{8},\mathcal{D}_{7},\mathcal{D}_{9},\mathcal{D}_{13},\mathcal{D}_{1},\mathcal{D}_{2}\}\) and \(D_{c_{2}}(0)=\{\mathcal{D}_{15},\mathcal{D}_{11},\mathcal{D}_{6},\mathcal{D}_{3}\}\), being assigned to gather on the time-optimal paths of \(\mathcal{A}_{c_{1}}(0)\) and \(\mathcal{A}_{c_{2}}(0)\), respectively, and 2 individual defenders, \(\mathcal{D}_{4}\) and \(\mathcal{D}_{5}\), being assigned to intercept the unclustered attackers \(\mathcal{A}_{15}\) and \(\mathcal{A}_{16}\), respectively. Figure 9(a) shows the paths traversed by the players until all the defenders' sub-teams gather at their respective desired formations, during the time interval \([0,77.66]\) sec. As observed, both sub-teams of the defenders are able to successfully gather at the desired formations before the respective attackers' swarms reach there. The paths of the defenders in \(\mathcal{D}_{c_{1}}(0)\) and the attackers in \(\mathcal{A}_{c_{1}}(0)\) during the time interval \([77.66,130.14]\) sec are shown in Figure 9(c). As one can observe, the attackers in \(\mathcal{A}_{c_{1}}(0)\) split at \(t=t_{se}=93.12\) sec into two smaller swarms, \(\mathcal{A}_{c_{1}}(t_{se})=\{\mathcal{A}_{2},\mathcal{A}_{3},\mathcal{A}_{4},\mathcal{A}_{5}\}\) and \(\mathcal{A}_{c_{3}}(t_{se})=\{\mathcal{A}_{6},\mathcal{A}_{7},\mathcal{A}_{8},\mathcal{A}_{9}\}\), and the two outermost attackers, classified as unclustered attackers \(\mathcal{A}_{uc}^{(1)}(t_{se})=\{\mathcal{A}_{1},\mathcal{A}_{10}\}\), separate from the rest of the attackers in an attempt to circumvent the oncoming defenders. After solving the rs-MIQCQP (13), the defenders in \(\mathcal{D}_{c_{1}}(0)\) are also divided into two smaller sub-teams, \(\mathcal{D}_{c_{1}}(t_{se})=\{\mathcal{D}_{10},\mathcal{D}_{16},\mathcal{D}_{14},\mathcal{D}_{8}\}\) and \(\mathcal{D}_{c_{3}}(t_{se})=\{\mathcal{D}_{7},\mathcal{D}_{9},\mathcal{D}_{13},\mathcal{D}_{1}\}\), and two terminal defenders, \(\mathcal{D}_{12}\) and \(\mathcal{D}_{2}\). The sub-teams \(\mathcal{D}_{c_{1}}(t_{se})\) and \(\mathcal{D}_{c_{3}}(t_{se})\) are assigned to herd \(\mathcal{A}_{c_{1}}(t_{se})\) and \(\mathcal{A}_{c_{3}}(t_{se})\), respectively, and the terminal defenders \(\mathcal{D}_{12}\) and \(\mathcal{D}_{2}\) are tasked to intercept the unclustered attackers \(\mathcal{A}_{1}\) and \(\mathcal{A}_{10}\), respectively. By the time \(t=130.14\) sec, the two unclustered attackers are already captured and the two swarms of attackers are completely enclosed by the Closed-StringNets \(\mathcal{G}_{sn}^{cl}(\mathcal{D}_{c_{1}}(t_{se}))\) and \(\mathcal{G}_{sn}^{cl}(\mathcal{D}_{c_{3}}(t_{se}))\). Similarly, as shown in Figure 9(b), the defenders in \(\mathcal{D}_{c_{2}}(0)\) also successfully enclose the attackers in \(\mathcal{A}_{c_{2}}(0)\) at \(t=146.17\) sec. Finally, as observed in Figure 9(d), all the enclosed attackers' swarms are herded to the respective closest safe areas by the Closed-StringNets formed by the defenders' sub-teams.
As mentioned above, simulations for the additional scenarios are provided in the simulation video available at [https://youtu.be/cofhjqudT9U](https://youtu.be/cofhjqudT9U).
#### IV-2 Attackers outnumber the defenders
We also studied the performance of the proposed algorithm in a few scenarios where attackers outnumber the defenders. Particularly, we consider the following two scenarios.
Figure 9: % Error in the costs of the rs-MIQCQP and the hierarchical heuristic

* Scenario (4): There are 16 attackers that are, at \(t=0\), divided into 2 swarms \(\mathcal{A}_{c_{1}}(0)=\{\mathcal{A}_{i}|i\in\{1,2,...,6\}\},\mathcal{A}_{c_{2}}(0)=\{\mathcal{A}_{i}|i\in\{7,8,...,14\}\}\) and unclustered attackers \(\mathcal{A}_{uc}(0)=\{\mathcal{A}_{15},\mathcal{A}_{16}\}\), and there are only 14 defenders. In this scenario, since the defenders are short in number by 2 and there are 2 swarms of attackers, resource allocation assigns 5 defenders (\(\mathcal{D}_{c_{1}}(0)=\{\mathcal{D}_{5},\mathcal{D}_{7},\mathcal{D}_{8},\mathcal{D}_{12},\mathcal{D}_{13}\}\)) to \(\mathcal{A}_{c_{1}}\), which has 6 attackers in it, 7 defenders (\(\mathcal{D}_{c_{2}}(0)=\{\mathcal{D}_{2},\mathcal{D}_{1},\mathcal{D}_{9},\mathcal{D}_{6},\mathcal{D}_{10},\mathcal{D}_{11},\mathcal{D}_{14}\}\)) to \(\mathcal{A}_{c_{2}}\), which has 8 attackers in it, and the remaining two defenders to intercept the unclustered attackers. As time progresses, at around \(t_{se}=93.58\) sec, \(\mathcal{A}_{c_{2}}(t_{se}^{-})\) splits into two smaller swarms \(\mathcal{A}_{c_{2}}(t_{se})=\{\mathcal{A}_{7},\mathcal{A}_{8},\mathcal{A}_{9},\mathcal{A}_{10}\}\) and \(\mathcal{A}_{c_{3}}(t_{se})=\{\mathcal{A}_{11},\mathcal{A}_{12},\mathcal{A}_{13},\mathcal{A}_{14}\}\). Again, since \(\mathcal{D}_{c_{2}}(t_{se}^{-})\) is short by 1 defender, only 3 defenders (\(\mathcal{D}_{c_{2}}(t_{se})=\{\mathcal{D}_{2},\mathcal{D}_{1},\mathcal{D}_{9}\}\)) are assigned to \(\mathcal{A}_{c_{2}}(t_{se})\), and 4 defenders (\(\mathcal{D}_{c_{3}}(t_{se})=\{\mathcal{D}_{6},\mathcal{D}_{10},\mathcal{D}_{11},\mathcal{D}_{14}\}\)) are assigned to \(\mathcal{A}_{c_{3}}(t_{se})\). The trajectories of the players for this scenario are shown in the simulation video ([https://youtu.be/cofhjqudT9U](https://youtu.be/cofhjqudT9U)). As one can observe in the video, the defenders are still able to enclose the attackers' swarms successfully and herd them to the respective safe areas despite the larger number of attackers in the attacking swarms. This is because the attackers did not disperse and stayed in compact formations throughout, which the available defenders were capable of enclosing under the given constraints (\(\bar{R}\)). However, this is a very specific behavior by the attackers that results in outcomes in favor of the defenders.
* Scenario (5): There are 6 attackers, all of them individual attackers, and only 4 defenders. Four attackers (\(\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3},\mathcal{A}_{4}\)) approach the protected area from one side and the other two (\(\mathcal{A}_{5},\mathcal{A}_{6}\)) approach the protected area from the opposite side. Because of the initial states of the defenders, (\(\mathcal{D}_{2},\mathcal{D}_{4},\mathcal{D}_{3},\mathcal{D}_{1}\)) are assigned to the attackers (\(\mathcal{A}_{1},\mathcal{A}_{2},\mathcal{A}_{3},\mathcal{A}_{4}\)) in that order. After the defenders \(\mathcal{D}_{3}\) and \(\mathcal{D}_{1}\) capture their target attackers, they are assigned to \(\mathcal{A}_{6}\) and \(\mathcal{A}_{5}\), respectively. Again, the trajectories of the players are shown in the simulation video ([https://youtu.be/cofhjqudT9U](https://youtu.be/cofhjqudT9U)). As one can observe in the video, despite the reassignment, the attackers \(\mathcal{A}_{5}\) and \(\mathcal{A}_{6}\) are able to reach the protected area. This is because the attackers \(\mathcal{A}_{1}\)-\(\mathcal{A}_{4}\) started moving away from the protected area as they saw the defenders coming towards them. By the time \(\mathcal{D}_{3}\) and \(\mathcal{D}_{1}\) intercepted \(\mathcal{A}_{3}\) and \(\mathcal{A}_{4}\), the defenders had already moved very far from the protected area and hence were not able to come back in time to intercept the remaining two attackers.
These two scenarios show that the success of the defenders when the attackers outnumber the defenders is not necessarily governed by the difference in their numbers but rather by the initial states of the players and how the attackers behave.
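The proportional split performed by the resource allocation in Scenario (4) (5 defenders to the 6-attacker swarm, 7 to the 8-attacker swarm, the remainder to unclustered attackers) behaves like a largest-remainder apportionment. The following is a minimal illustrative sketch of that allocation step only; it is not the rs-MIQCQP or the hierarchical heuristic themselves, and the function name is ours:

```python
import math

def allocate_defenders(num_defenders, swarm_sizes):
    """Largest-remainder apportionment of defenders to attacker swarms.

    Hypothetical sketch: assigns defenders in proportion to swarm size,
    as in Scenario (4) where 12 clustered defenders split 5/7 against
    swarms of 6 and 8 attackers.
    """
    total_attackers = sum(swarm_sizes)
    quotas = [num_defenders * s / total_attackers for s in swarm_sizes]
    counts = [math.floor(q) for q in quotas]
    # Hand out leftover defenders to the swarms with the largest remainders.
    leftovers = num_defenders - sum(counts)
    order = sorted(range(len(quotas)), key=lambda i: quotas[i] - counts[i],
                   reverse=True)
    for i in order[:leftovers]:
        counts[i] += 1
    return counts

print(allocate_defenders(12, [6, 8]))  # -> [5, 7]
```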
## V Conclusions
In this paper, we combine a multi-mode inter-defender collision-aware interception strategy (IDCAIS) with a swarm-herding strategy (StringNet Herding) to provide a multi-mode defense strategy against a wide range of behaviors by the attackers. We provided mixed-integer programs and computationally-efficient heuristics to allocate the interception or herding task to the defenders. Through simulations we showed how the defenders initially attempt to herd the risk-averse swarms of attackers instead of intercepting them, and how the defenders redistribute into sub-teams and reassign either the herding or the interception role to themselves as the attackers split and take on risk-taking or risk-averse roles. The provided heuristics for solving the assignment problems offer a significant reduction in computation time, by at least a factor of 4-5, while staying within 4% of the optimal cost. Future work will focus on considering modeling and measurement uncertainty, as well as extending the formulation to 3D spaces.
|
2301.09570
|
A Novel Power-optimized CMOS sEMG Device with Ultra Low-noise integrated
with ConvNet (VGG16) for Biomedical Applications
|
The needle bio-potential sensors for measuring muscle and brain activity need
invasive surgical targeted muscle reinnervation (TMR) and a demanding process
to maintain, but surface bio-potential sensors lack clear bio-signal reading
(Signal-Interference). In this research, a novel power-optimized complementary
metal-oxide-semiconductor (CMOS) Surface Electromyography (sEMG) device is
developed to improve the efficiency and quality of the captured bio-signal for
two biomedical applications: the early diagnosis of neurological disorders
(Dystonia) and a novel mind-controlled prosthetic leg compatible with human
daily activities. A novel sEMG device composed of a CMOS op-amp and a
PIC16F877A 8-bit CMOS Flash-based microcontroller is utilized to minimize
power consumption and data processing time. The sEMG circuit is implemented
with a developed analog filter along with an infinite impulse response (IIR)
digital filter designed via the Fast Fourier Transform
(FFT), Z-transform, and difference equations. The analysis shows a significant
169.2% improvement in noise reduction in the recorded EMG signal using the
developed digital filter compared to the analog one, according to the
numerical root mean square error (RMSE). Moreover, the digital IIR filter was
tested in two stages: algorithmic and real-world. As a result, the IIR's
algorithmic (MATLAB) and real-world RMSEs were 0.03616 and 0.05224,
respectively, and a notable 20.8% reduction in data processing time was
achieved in EMG signal analysis. VGG, AlexNet, and ResNet ConvNets were
optimized, trained, and tested on 15 public EEG (62-electrode) datasets and 18
subjects' observed EMG data. The results indicate that VGG16-1D scores
highest, at 98.43%. During real testing, the accuracy was 95.8 +/- 4.6% for 16
subjects (6 amputees, 10 with Dystonia). This study demonstrates the potential
of sEMG, paving the way for
biomedical applications.
|
Ahmed Ayman - Mohamed Sabry
|
2023-01-04T01:46:55Z
|
http://arxiv.org/abs/2301.09570v2
|
# Conscious Brain Mind-Controlled Cybernetic Leg
###### Abstract
Lower limb amputations affect about 28.9 million people worldwide, impairing normal human function. We are developing a conscious-brain, mind-controlled cybernetic bionic leg to provide a professional solution to this problem. Existing solutions suffer from restricted knee movement, short service life, limited pressure bearing, and unspecific analog reading of EMG: because the output voltage is measured in nanovolts, the resulting knee movement is unspecific. The functionality of these modern gadgets is still limited due to a lack of neuromuscular control (i.e., for movement creation, control relies on different human neural signals to peripheral muscles). Electromyographic (EMG) or myoelectric signals are neuromuscular control signals that can be recorded from muscles for our engineering goals. We worked on a sophisticated prosthetic knee design with a 100-degree angle of motion. We also used a specific type of coiled spring to absorb abrupt or unexpected motion force. In addition, we amplified the EMG output from nanovolts to millivolts using customized instrumentation amplifiers (operational amplifiers). We used a full-wave rectifier to convert AC to DC; as a consequence of these procedures, the sine-wave output voltage measures in millivolts, and the spring constant indicates the maximum force for every 1 cm. Von Mises stress analysis shows that 3000 N is the maximum load for the design. The edge of a stairwell is detected using the first derivative. The benefit of this system is that the prosthetic limb is activated by the **patient's own EMG impulses**, rather than by sensors linked to the body.
Index Terms--sEMG, Prosthetic limb, Lower limb amputation
## I Introduction
A prosthetic is a term that refers to an artificial device that replaces a physical component or organ; a prosthetic leg replaces a lower limb that has been lost for many reasons. The National Limb Loss Information Center supplied the amputation data: there are roughly 1.7 million persons living with limb loss in the United States [5]. Prosthetic limbs can be controlled by myoelectric (EMG) impulses. This is possible because the neuromuscular system of amputees stays intact following amputation. The remaining impulses are used, and with correct processing, they are enough to control the movement of the leg [6]. Actuators such as motors are used to replace the function of muscles by supplying force for leg movement. A myoelectric prosthetic leg differs from externally powered legs, which depend on external power to operate the limb, in that it is driven through pulses transmitted from the brain.
## II Methods
### sEMG Signal Processing
Electromyography signals are used to determine the electrical activity of muscle fibres during contraction and rest. Two approaches are used to capture these myoelectric signals: invasive and noninvasive. Invasive methods use needle electrodes to record the sEMG signal. However, noninvasive recording is often preferable, since the electrode is positioned right above the skin surface without requiring insertion into the patient's body. Numerous issues such as motion artefacts, electrode misplacement, and noise interpolation all have an effect on the EMG signal. To extract additional information, signal processing techniques, including filtering, rectification, baseline-drift removal, and threshold levelling, are applied to the EMG signals. The block diagram of the EMG signal processing is depicted in Figure 1.
As seen in Fig. 1, the fast Fourier transform (FFT) translates a time-domain signal into many frequency scales. A myoelectric array is used to extract the EMG characteristics. Using the following equation, the statistical parameter of the FFT energy coefficients may be determined:
\[X_{j}=\sum_{i=1}^{N}y_{i}^{2},\qquad j=1,2,3,4\tag{1}\]
where \(y_{i}\) denotes the \(i\)th FFT coefficient in the \(j\)th band. Pre-gelled surface electrodes are used to capture EMG data for limb rotational movement. Two electrodes are placed on the limb's acromial and clavicular portions of the central deltoid muscle (anterior fibers) for efficient electrode contact. Surface electrodes are used to detect EMG signals; however, the collected signal has an amplitude of microvolts. Thus, a preamplifier is required to convert the microvolt EMG signal (\(\mu\)V) to millivolts (mV). The EMG signal is delivered into the preamplifier directly from the electrode.

Fig. 1: Neural network for signal processing. Adapted from Medical and Biological Engineering and Computing
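As a concrete illustration of the band-energy feature in Eq. (1), the sketch below computes FFT energies in four frequency bands; the band edges and sampling rate are assumptions for illustration, not values stated in the paper:

```python
import numpy as np

def fft_band_energies(signal, fs,
                      bands=((20, 60), (60, 150), (150, 300), (300, 500))):
    """Energy of the FFT coefficients in four bands, cf. Eq. (1).

    The four band edges here are illustrative assumptions.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return [float(np.sum(np.abs(spectrum[(freqs >= lo) & (freqs < hi)]) ** 2))
            for lo, hi in bands]

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
emg_like = np.sin(2 * np.pi * 80 * t) + 0.3 * np.random.randn(t.size)
print(fft_band_energies(emg_like, fs))
```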
An instrumentation amplifier (IA), as shown in Fig. 2, is used because it offers a significant degree of gain for extremely low-level signals, often in the face of excessive noise levels. Instrumentation amplifiers are precise, operationally integrated amplifiers with differential input and single-ended or differential output. Their principal characteristics are a high common-mode rejection ratio (CMRR), high open-loop gain, a fixed gain for amplification, a high input impedance, low DC offset, low drift, and low noise.
### High- and Low-Pass Filtration

To remove the high-frequency noise, the preamplifier's output is sent through a low-pass filter; to create an efficient filter, many filter topologies were compared. As shown in Fig. 3, the high-pass filter (HPF) and low-pass filter (LPF) are critical in filtering the pulses that have been amplified. To eliminate motion artefacts and external noise from the collected EMG data, a high-pass filter with a cutoff frequency of 20 Hz is used; a fourth-order Butterworth high-pass filter has been used for its stop-band attenuation.

When examining muscular contraction, it is recommended to select the dominant EMG signals within the frequency range of 20 Hz - 500 Hz, since outside this band the EMG signals are corrupted by noise.
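A minimal sketch of this filtering chain, assuming a 2 kHz sampling rate and SciPy's standard `butter`/`filtfilt` routines (the paper's exact filter coefficients are not given):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2000.0  # assumed sampling rate in Hz (not stated in the paper)

# Fourth-order Butterworth high-pass at 20 Hz removes motion artefacts;
# a low-pass at 500 Hz removes noise above the dominant EMG band.
b_hp, a_hp = butter(4, 20.0, btype="highpass", fs=fs)
b_lp, a_lp = butter(4, 500.0, btype="lowpass", fs=fs)

def band_limit(emg: np.ndarray) -> np.ndarray:
    """Keep the dominant 20-500 Hz EMG band (zero-phase filtering)."""
    return filtfilt(b_lp, a_lp, filtfilt(b_hp, a_hp, emg))

t = np.arange(0, 1, 1 / fs)
noisy = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)  # EMG-like tone + drift
clean = band_limit(noisy)
```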
### Rectification and Amplification

A rectified signal is required. The goal of rectification is to eliminate the signal's negative components: by squaring the total signal, negative amplitudes are converted to positive values. As expected, this step also squares the amplitude value; additionally, if the amplitude is less than one, squaring shifts it away from one toward zero, lowering the value. Amplification is then used to increase the signal's amplitude to a suitable level: the signal is multiplied by a constant value, which increases the amplitude of the whole signal by that amount. The output of this step is a positive signal (devoid of negative components) with an amplitude within the specified range.
### Smoothing

After the first few processing stages, the in-band signal still resembles an EMG signal in terms of contraction and relaxation phases, with the most noticeable difference being the conversion of negative components to positive ones after rectification. As previously stated, the contraction phase of the signal is the most interesting; thus, it must be separated from the remainder of the signal. This is accomplished by sending the signal through a low-pass filter that detects just the signal's envelope. Smoothing produces a signal with blunt peaks precisely during the contraction stages.
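A minimal sketch of the rectify-then-smooth envelope detector described above; the 6 Hz envelope cutoff is an assumption for illustration, as the paper does not state one:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_envelope(emg: np.ndarray, fs: float, cutoff_hz: float = 6.0) -> np.ndarray:
    """Square-law rectification followed by a low-pass envelope detector."""
    rectified = emg ** 2                       # removes negative components
    b, a = butter(2, cutoff_hz, btype="lowpass", fs=fs)
    return filtfilt(b, a, rectified)           # blunt peaks during contractions
```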
### Prosthetic Leg Model
Making a prosthetic limb with a high boosting capacity, flexibility, comfort, and shock absorption for long-term usage requires considerable effort. When fabricating a prosthetic limb, it should be lightweight for ease of control and have a good load-bearing capability; the prosthetic limb is therefore constructed from lightweight but robust materials. The limb may or may not have functioning knee and ankle joints, depending on the site of the amputation. The socket is a very accurate fit, created for achieving engineering goals such as high bearing capacity, light weight, and long-term use.

As you learn to walk with your prosthetic leg, you will need therapy to strengthen your legs, arms, and cardiovascular system. You'll work closely with rehabilitation experts, physical therapists, and occupational therapists to develop a rehabilitation plan that is customised to your unique mobility requirements. Maintaining a healthy leg is a critical component of this routine.
Fig. 4. 3D design of prosthetic limb

Fig. 5. Angle of rotation

Fig. 6. A convolution of a 5x5 image with a 3x3 kernel.

[MISSING_PAGE_POST]
is considered support. Max and min values are gathered and used as reference points for prediction, as they are included in the formula for prediction:

\[x_{i}^{\prime}=\frac{x_{i}-\min(X)}{\max(X)-\min(X)}\tag{3}\]
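Assuming Eq. (3) is the usual min-max rescaling, a one-line sketch:

```python
import numpy as np

def min_max_normalize(x):
    """Min-max normalization of Eq. (3): rescales samples to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

print(min_max_normalize([2.0, 5.0, 11.0]))  # -> [0.  0.333...  1.]
```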
The data are converted from 2D into 1D. The segmented data of the original time series are shown in (A) in the time domain and in (B) in the frequency domain. The signal was recorded from the vastus lateralis (VL) muscle during whole-body vibration (WBV) at 30 Hz. While the sEMG signal in the time domain does not highlight any characteristics specific to WBV, the sEMG signal in the frequency domain shows an abnormally large spike at the vibration frequency and at a few of its harmonics, which is indicative of dystonia. During WBV, simultaneously and for preliminary reasons, an electrophysiological signal was obtained from the patella.

Figure 9 presents such a signal obtained during WBV at 30 Hz. The patellar signal resembles a sinusoidal wave at 30 Hz in the time domain. In the frequency domain, excessive peaks are noted at the vibration frequency. No myoelectrical activity is seen at any other frequency.
A surface electromyography (sEMG) spectrum of the vastus lateralis was recorded while shaking the whole body at 30 Hz. The sEMG signals were processed with the no-filter approach (solid black line), linear interpolation (solid grey line), and band-pass filtering (solid grey lines). As demonstrated in Fig. 8 and Figure 10, a patient with runner's dystonia presents task-specific early activity in the left tibialis anterior during running, with the left leg delayed in motor-neuron activation with respect to the other leg. Filters, particularly high-pass filtration (HPF) and low-pass filtration (LPF), play a key role here.

Fig. 8: sEMG of a patient with runner's dystonia presenting as task-specific early activity in the left tibialis anterior during running.
Figure 7: Outputs of pattern recognition algorithm
Figure 9: Surface electromyography (sEMG) signal of the Vastus Lateralis during whole-body vibration at 30 Hz illustrated in (A) the time domain and (B) the frequency domain. In the frequency domain, excessive spikes are visible at the vibration frequency and its multiple harmonics.
## IV Muscle Re-innervation Patterns
Strong EMG signals were elicited by the re-innervated hamstring muscles, notably during contractions related to ankle motions. When the patient flexed his knees, considerable co-activation of the re-innervated muscles was noticed (Fig. 11).
Each attempted move resulted in a different EMG signal pattern, implying that precise pattern-recognition control is achievable. The classification accuracy of the patient's attempted movements was 96% with a virtual system. Then, to convert the signal from AC to DC, we utilize full-wave rectifiers, which makes it easier to store the data. Low-pass and high-pass filters are crucial in cancelling any noise when the data are collected.
## V Results
To find the factors that may affect the results, the system was tested under various conditions, using 3 EMG channels to collect electrical pulses. This shows how movement and shaking of the body affect the data and introduce noise as they are collected. As shown in the graph, when there is no cable movement there is no noise in the data, but slow or fast cable movements introduce spikes and noise. By using the P300 algorithm and low-pass filtration, the noise and spikes are removed and the signal returns to its original value, as shown in Fig. 12.
### Piezoelectric and Voltage Generator
The piezoelectric effect is the capacity of some materials to create an electric charge when subjected to mechanical stress. When a piezoelectric material is subjected to mechanical stress, there is a shift in the positive and negative charge centres, resulting in an external electrical field; when inverted, an external electrical field stretches or compresses the piezoelectric material. A piezoelectric device is used to recharge the batteries; it was necessary to test it numerous times to see whether it is suitable to be the main voltage generator.

As demonstrated in the graph, as the load increases, so does the voltage. The output was sufficient for the device to be the main voltage generator.
### Stress Analysis of Prosthetic Limbs

For its design, a prosthetic limb needs to have a high bearing capacity, flexibility, comfort, shock absorption, and long-term usability.
## VI Conclusion
This work presented a mind-controlled prosthetic leg driven by the patient's own EMG impulses. The knee design provides a 100-degree angle of motion, customized instrumentation amplifiers raise the EMG output from nanovolts to millivolts, full-wave rectification converts the signal from AC to DC for storage, and a coiled spring absorbs abrupt or unexpected motion forces. Von Mises stress analysis shows the design bears a maximum load of 3000 N, and the piezoelectric generator output proved sufficient to recharge the batteries as the main voltage source.
|
2308.14994
|
ICARUS: An Android-Based Unmanned Aerial Vehicle (UAV) Search and Rescue
Eye in the Sky
|
The purpose of this paper is to develop an unmanned aerial vehicle (UAV)
using a quadcopter with the capability of video surveillance, map coordinates,
a deployable parachute with a medicine kit or a food pack as a payload, a
collision warning system, remotely controlled, integrated with an android
application to assist in search and rescue operations.
Applied research for the development of the functional prototype,
quantitative and descriptive statistics to summarize data by describing the
relationship between variables in a sample or population. The quadcopter
underwent an evaluation using a survey instrument to test its acceptability
using predefined variables to select respondents within Caloocan City and
Quezon City, Philippines.
Demographic profiles and known issues and concerns were answered by 30
respondents. The results were summarized and distributed in Tables 1 and 2.
In terms of demographic profiles, the number of SAR operators within the
specified areas is distributed equally, most are male, single, and within the
age bracket of 31 and above. In issues and concerns, the most common type of
search and rescue was ground search and rescue. Human error is the primary
cause of most injuries in operating units. The prototype was useful and
everyone agreed, in terms of acceptability, drone technology will improve
search and rescue operations.
The innovative way of utilizing Android and drone technology is a new step
towards the improvement of SAR operations in the Philippines.
The LiPo battery must be replaced with a higher capacity and the drone
operator should undergo a training course and secure a permit from the Civil
Aviation Authority of the Philippines (CAAP).
|
Manuel Luis C. Delos Santos, Jerum B. Dasalla, Jomar C. Feliciano, Dustin Red B. Cabatay
|
2023-08-29T02:49:16Z
|
http://arxiv.org/abs/2308.14994v1
|
## Short Paper
## Abstract
_Purpose_ - The purpose of this paper is to develop an unmanned aerial vehicle (UAV) using a quadcopter with the capability of video surveillance, map coordinates, a deployable parachute with a medicine kit or a food pack as a payload, a collision warning system, remotely controlled, integrated with an android application to assist in search and rescue operations.
Method - Applied research for the development of the functional prototype, quantitative and descriptive statistics to summarize data by describing the relationship between variables in a sample or population (Kaur et al., 2018). The quadcopter underwent an evaluation using a survey instrument to test its acceptability using predefined variables to select respondents within Caloocan City and Quezon City, Philippines.
Results - Demographic profiles and known issues and concerns were answered by 30 respondents. The results were summarized and distributed in Tables 1 and 2.
Discussion - In terms of demographic profiles, the number of SAR operators within the specified areas is distributed equally, most are male, single, and within the age bracket of 31 and above. In issues and concerns, the most common type of search and rescue was ground search and rescue. Human error is the primary cause of most injuries in operating units. The prototype was useful and everyone agreed, in terms of acceptability, drone technology will improve search and rescue operations.
Conclusion - The innovative way of utilizing Android and drone technology is a new step towards the improvement of SAR operations in the Philippines.
Recommendations - The LiPo battery must be replaced with a higher capacity and the drone operator should undergo a training course and secure a permit from the Civil Aviation Authority of the Philippines (CAAP).
Social Implication - Some people are scared of drones due to privacy issues and fears that they could be used to spy against them.
Keywords - Icarus, Unmanned Aerial Vehicle, Search and Rescue, Eye in the Sky
## 1 Introduction
In search and rescue operations every second counts. To function as efficiently as possible, it is important to be able to obtain a rapid overview of the situation, the type of view that is often only possible from the sky. An Unmanned Aerial Vehicle (UAV) is a type of aircraft that does not require a human pilot onboard, also known as a drone amongst the general public. Helicopters and airplanes with aerial views are the top choices in aiding search and rescue operations. However, problems occur in most cases where the surveyed area produces blind spots for the camera to capture, resulting in slow and occasionally inaccurate searching. Due to their agility, portability, and aerial access advantages, UAVs have already been deployed for Search and Rescue (SAR) operations for several years. By mounting a high-resolution camera on a UAV, it can provide the needed aerial imagery for these missions much more cost-effectively than the conventional manned aircraft approach.
Drones, locally referred to in the country as Remotely Piloted Aircraft Systems (RPAS) (Civil Aviation Authority of the Philippines, n. d.), are regulated under the Civil Aviation Authority of the Philippines (CAAP) of the Department of Transportation, which crafted a set of rules and regulations that serves as guidance on the registration and operational requirements of drones (Espinola et al., 2019).
Figure 1 compares the salient features of a traditional aircraft used in search and rescue operations against quadcopter-based Icarus in terms of maneuverability, mobility, maintenance, and surveillance capability.
As seen in Figure 2, drones are unmanned, and no additional lives of rescuers in the helicopter are put at risk. With respect to mobility, it can take lesser time to be deployed, fly at low altitudes, and could perform an initial overview of the disaster area. This overview could be better than the one that is done from a helicopter or an aircraft because drones are smaller and can reach places where helicopters cannot.
In addition, drones cost no fuel with the use of Lithium Polymer (LiPo) batteries. Because drones are smaller than other aircraft performing SAR operations, they also use less energy. This makes them emit fewer greenhouse gasses, making them environment friendly.

Figure 2: Illustrates one of the advantages of using Icarus in civilian applications.

Figure 1: Comparative analysis
Time is a crucial factor during any type of search and rescue mission, and to function as efficiently as possible, search and rescue workers must obtain a rapid overview of the situation, which is often only possible from the sky. As said by Lean Alfred Santos (2013), for faster and more effective disaster relief and response there is a solution that "flies".

As described in Figure 3, the diagram maps the common root causes of delays in the delivery of relief goods during calamities. It is a cause-and-effect mapping driven by theoretical and empirical considerations, with some elements more heuristic and some highly quantitative (Kenneth, 2008).
## Literature Review
### Search, Rescue, and Police Work
Mayer et al. (2019) claimed that time is frequently the most important consideration in search and rescue situations because lives are in danger. Unmanned aerial vehicle (UAV) deployment by emergency services has already begun, enabling a faster search over a greater region. In the event of a natural disaster, Mishra et al. (2019) stated that drones can scan the wide affected area and make search and rescue (SAR) faster to save more human lives. Kaplan and Miller (2019) pointed out that drones are situated in an expanded field of police operations, blurring the alleged distinction between military and civilian, or battlefield and home front.
### Construction Engineering and Engineering
Tkac and Mesaros (2019) stated that the most alluring development in building in recent years has been the usage of drones. Drone utilization has increased by approximately 240% in the construction industry, more than in any other commercial sector. Drones have aviation advantages and capabilities that are quite helpful in resolving construction-related issues. According to Nwaogu et al. (2023), the Architecture, Engineering, and Construction (AEC) sector is expected to be the second largest market for commercial drones in 2020 due to the sector's relatively quick adoption of the technology.

Figure 3: The Cause-and-Effect Diagram or Ishikawa Diagram
### Agriculture, Film, and Television Production
Upadhyaya et al. (2022) cited that drones are aircraft that can be remotely piloted by a pilot on the ground to carry out a task or that can be automatically flown by loading a flight program that has already been created. Pattanayak and Shukla (2020) conveyed that the increased quality of television news coverage has increased viewers' need for thorough, real-time coverage of breaking news events. In most production regions, the media industry is undergoing a significant transformation. Unmanned aerial vehicles, sometimes known as drones, are emerging as promising new tools for Electronic News Gathering (ENG) and Electronic Field Production (EFP).
### Medicine and Modern Warfare
Messar et al. (2018) simulated the delivery of a 4.5 kg load of medical supplies, including tourniquets, bandages, analgesics, and blood products, using an unmanned, rotary-wing drone. The simulated victim was placed in a far-off location. Issacharoff and Pildes (2013) explored the utilization of drones in modern warfare which has become more prevalent. What could have originally been a tactical response, is now a key tactic for attacking the opposition. Braun et al. (2019) suggested that medical drones are on the verge of revolutionizing prehospital medicine enabling advanced healthcare delivery to once-inaccessible patients.
### News Broadcasting
Television personality Daniel Razon (2014) wrote about how UNTV, a news broadcasting company in the Philippines, uses drone technology. According to him, drones are famously used for intelligence operations or aerial assaults. UNTV takes advantage of these unmanned aerial vehicles (UAVs) to intensify its news and rescue missions.
### Android-Based Application combined with Drone Technology
Viray (2019) showed that other wireless communication methods, such as WiFi, Bluetooth, GSM, or SMS technologies, can be incorporated into a drone to enable wireless data transfer, in addition to the usage of remote controls as an electronic drone communication method.
## Methodology
The purpose of this paper is to develop an Unmanned Aerial Vehicle (UAV) using a quadcopter with the capability of video surveillance, map coordinates, a deployable parachute with a medicine kit or a food pack as a payload, and a collision warning system,
remotely controlled, and integrated with an android application to assist in search and rescue operations.
Applied research for the design and development of the functional prototype, quantitative and descriptive statistics to summarize data in an organized manner by describing the relationship between variables in a sample or population (Kaur et al., 2018). Upon completion, the quadcopter underwent an evaluation using a survey instrument with closed-ended questionnaires constructed for data-gathering procedures to limit the respondents' answers to a fixed set of responses (Roopa and Rani, 2012). It validated its acceptability using predefined variables, demonstrated purposively to selected respondents who are involved or part of a rescue unit operating locally within Caloocan City and Quezon City, Philippines. They described some issues and concerns based on their lived experiences in the conduct of search and rescue (SAR) operations, including their demographic profiles.
### Hardware and Software Components
Several hardware components were assembled in developing the prototype of Icarus, such as F450 frames (450mm), GoolRC A2212/1000KV 13T motors, 30A SimonK brushless Electronic Speed Controllers (ESC), an Arduino ATmega2560 board, an Arduino Uno board, a FlySky FS-i6 2.4 GHz Radio Transmitter with Receiver, a 3s 2200mAh 11.1V Lithium Polymer (LiPo) battery, a B3AC 2s-3s charger, HC-SR04 ultrasonic sensors, EMAX 1045 propellers, sensor holders, male-to-female jumper wires, an NRF wireless module, breadboards, light-emitting diodes (LEDs), servo motors, a 30 cm x 30 cm plastic parachute, a 4 x 4 x 4 inches lightweight wooden box, and a Pixhawk 4 flight controller. On the software side, Arduino IDE Sketch, Android Studio, and LibrePilot were used on the front-end and coding side. Different analytical tools are also provided, as shown below in Figures 4, 5, 6, 7, 8, 9, and 10, to best illustrate how the entire system works collectively through diagrams, flowcharts, and drawn figures.
Figure 5: System Flowchart of Icarus’ collision-detection warning system
Figure 6: Deployment Diagram of Icarus
Figure 7: Pin diagram of the collision-detection warning system
### Statistical Treatment of Data
The researchers have employed a simple statistical tool to analyze the data gathered such as variables, frequency distribution (f), percentile (%), and ranked data.
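A minimal sketch of this statistical treatment (frequency, percentage, and rank for one survey variable), reproducing the first row group of Table 2; note that ties receive successive ranks in this sketch, whereas the paper assigns tied responses the same rank:

```python
from collections import Counter

def tabulate(responses):
    """Frequency (f), percentage (%), and rank for one survey variable."""
    counts = Counter(responses)
    n = len(responses)
    return [(answer, f, round(100 * f / n), rank + 1)
            for rank, (answer, f) in enumerate(counts.most_common())]

answers = (["Ground SAR"] * 13 + ["Firefight SAR"] * 10 +
           ["Urban SAR"] * 5 + ["Combat SAR"] * 2)
for row in tabulate(answers):
    print(row)   # e.g. ('Ground SAR', 13, 43, 1)
```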
### The Functional Prototype
Figure 11 shows the 19 x 12 inches fully functional quadcopter prototype of Icarus. It is equipped with 4 brushless motors and propellers in a 450mm frame, mounted with a microcontroller board connected to a speed and flight controller, wireless modules, an Android phone for the navigational system, an ultrasonic sensor, and LED indicators for the collision-detection warning system; it is powered by a 3s 2200mAh rechargeable 11.1-volt Lithium Polymer (LiPo) battery and remotely controlled by a 2.4 GHz receiver and transmitter, and the fiberglass body is painted with a red and white cross color to signify its search and rescue flight mission.

Fig. 8. How propellers work

Fig. 10. The collision-detection warning system with LED indicators
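To make the collision-detection warning concrete, here is a minimal logic sketch of how an HC-SR04 echo time maps to distance and an LED warning level. The thresholds are illustrative assumptions, and on the actual prototype this logic runs on the Arduino boards rather than in Python:

```python
# HC-SR04 returns an echo pulse whose duration encodes distance;
# LEDs indicate the warning level.
SPEED_OF_SOUND_CM_PER_US = 0.0343  # at roughly 20 degrees C

def distance_cm(echo_pulse_us: float) -> float:
    """HC-SR04 range: the echo time covers the round trip, so halve it."""
    return echo_pulse_us * SPEED_OF_SOUND_CM_PER_US / 2.0

def warning_level(dist_cm: float) -> str:
    if dist_cm < 50:
        return "RED"      # imminent obstacle (assumed threshold)
    if dist_cm < 150:
        return "YELLOW"   # caution (assumed threshold)
    return "GREEN"        # clear

print(warning_level(distance_cm(2000.0)))  # 2000 us echo -> ~34 cm -> RED
```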
As seen in Figures 12 and 13, ExeCam, an Android application developed by the researchers, has a file size of 3.07 megabytes and runs on the Android 5.1.1 (Lollipop) version.
The application uses the Android phone's camera to serve as the medium of video transmission as depicted in Figure 14.
## Results
Tables 1 and 2 summarize the tabulated results of the demographic profiles of the respondents, issues, and concerns distributed accordingly using the statistical tools from highest to lowest that most of the respondents expressed during the actual conduct of the survey in their respective office locations in Caloocan and Quezon City.
## Discussion
Based on Tables 1 and 2, which are arranged from highest to lowest in terms of frequencies and percentages (with rank 1 being the highest), the researchers analyzed and interpreted the results in terms of the demographic profile of the respondents and the issues and concerns.
Each SAR group/unit was composed equally of 33.3% or 10 members each from "The Caloocan City Disaster Risk and Monitoring Office (CCDRMO)", "New Era Bureau of Fire Protection (New Era
\begin{table}
\begin{tabular}{c l|l|c|c|c} \multicolumn{6}{c}{In terms of Issues and Concerns} \\ \hline & **Variables** & **Responses** & **f** & **\%** & **Rank** \\ \hline
1. & Types of SAR Operation & Ground SAR & 13 & 43 & 1 \\ & & Firefight SAR & 10 & 33 & 2 \\ & & Urban SAR & 5 & 17 & 3 \\ & & Combat SAR & 2 & 7 & 4 \\ \hline
2. & Common Causes of Injuries during SAR Operations & Human Errors & 24 & 80 & 1 \\ & & Location Difficulty & 5 & 17 & 2 \\ & & Harsh Weather & 1 & 3 & 3 \\ \hline
3. & Describing the Prototype & Useful & 15 & 47 & 1 \\ & & Reliable & 6 & 20 & 2 \\ & & Unique & 6 & 20 & 2 \\ & & High quality & 2 & 10 & 3 \\ & & Impractical & 1 & 3 & 4 \\ \hline
4. & Quality of the Prototype & High quality & 29 & 97 & 1 \\ & & Poor quality & 1 & 3 & 2 \\ \hline
5. & The Integration of Android Technology & Useful & 23 & 77 & 1 \\ & & Reliable & 5 & 17 & 2 \\ & & Unique & 2 & 6 & 3 \\ \hline
6. & Number of Years in SAR Operations & 1-2 years & 15 & 50 & 1 \\ & & 3-4 years & 5 & 17 & 2 \\ & & 9-10 years & 3 & 10 & 3 \\ & & 5-6 years & 3 & 10 & 3 \\ & & 13-14 years & 2 & 7 & 4 \\ & & 15-20 years & 1 & 3 & 5 \\ & & 7-8 years & 1 & 3 & 5 \\ \hline
7. & Success Rate of SAR Operations & 90 – 100 \% & 15 & 50 & 1 \\ & & 70 – 89 \% & 9 & 30 & 2 \\ & & 50 – 69 \% & 3 & 10 & 3 \\ & & 30 – 49 \% & 3 & 10 & 3 \\ \hline
8. & Acceptability of Drone Technology in SAR Operations & Agree & 30 & 100 & 1 \\ & & Disagree & 0 & 0 & \\ \hline
\end{tabular}
\end{table}
Table 2: Distribution of Responses, Frequencies, Percentages, and Ranks
BFP)", and "Quezon City Police District 6 (Q CPD 6)". There were 27% "Male" and 3% "Female".
Without being gender-biased, men overpower women this is due to the fact of the nature of the job where mental and physical strength are the primary requirements to carry out the task. 53% were "Single" and 47% with "Married" civil status.
The slight difference between the number of singles and married are not considered a factor in being flexible and available during the conduct of SAR operations.
37% belonged to the "3+years old - above" age group, followed by 30% "25-26 years old", 23% "26-30 years' old, and 10% "18-20 years old".
"Ground SAR" is selected by 43% of respondents. This rescue effort focuses on searching for a lost or distressed person in a given location, followed by "Firefight SAR" with 33% concerned with saving people trapped inside a burning structure, "Urban SAR" with 17%, involves rescue operations where victims are trapped in collapsed structures due to natural and man-made disasters, and "Combat SAR" with 7%, carries out search and rescue missions in war conflict territories or zones.
80% of the common causes of injuries are listed as "Human Errors", including impaired judgment, lack of training/experience, mental block, distraction, arrogance, and overconfidence; 17% as "Location Difficulty", due to unfamiliarity with the terrain; and 3% as "Harsh Weather Conditions", circumstances in which the climate is extreme and unexpected.
Icarus is described as "Useful" by 47% of respondents, "Reliable" by 20%, "Unique" by 20%, and "High Quality" by 10%, while just 3% of respondents find it "Impractical."
To sum up, 97% of respondents affirmatively described the potential of Icarus, against 3% negative responses. An overwhelming 97% of respondents rated the project as "High Quality," whereas 3% rated it as "Poor Quality."
The 3% poor quality rating is manageable and can be improved further. 77% of respondents found Android technology to be "Useful," 17% "Reliable," and 6% "Unique." The 100% positivity rate is quite impressive. 50% of the respondents have "1-2 years" of experience, followed by 17% with "3-4 years," 10% each with "9-10 years" and "5-6 years," 7% with "13-14 years," and 3% each with "15-20 years" and "7-8 years."

It goes to show that SAR operators with 1-2 years of experience are equal in number to those with 3-20 years of experience. 50% of operations had a success rate of "90-100%," 30% "70-89%," and 10% each "50-69%" and "30-49%." The success rate is relatively high, from 50% to 100%.
The overall acceptability rate on the use of drone technology in search and rescue
operations was 100% unanimously "Agreed" by all respondents.
## Conclusions and Recommendations
The data provided by search and rescue operators enabled the researchers to recognize the importance and significance of using Android and drone technology to support search and rescue operations. In terms of the overall survey results, most of the respondents gave positive responses to all the issues and concerns.

Therefore, the researchers concluded that the innovative way of utilizing Android and drone technology is a new step toward the improvement of search and rescue operations in the Philippines.

The LiPo battery must be replaced with one of higher capacity to extend flight time, and the drone operator should undergo a proper drone training course and secure a permit from the Civil Aviation Authority of the Philippines (CAAP) to ensure public safety.
## Social Implications
Some people are scared of drones. They said that their right to privacy might be in danger, and there are fears that drones could be used to spy on them. Since drones were deployed as a killing machine in the Russian-Ukrainian conflict that began on February 24, 2022, any SAR operations could be affected by the public's negative perception of drones.
## Acknowledgement
Always grateful to the men and women of Caloocan City Disaster Risk and Monitoring Office, New Era Bureau of Fire Protection, Quezon City Police District Station 6, and most especially to Ma. Lyn M. Feliciano, April Joy E. Capilitan, and Samira B. Gumbahale for extending much-needed assistance and providing moral support. Lastly, the Asian Institute of Computer Studies and the Philippine State College of Aeronautics for making the collaboration possible.
## Declarations
### Conflict of Interest
All authors declared that they have no conflicts of interest.
### Informed Consent
All participants were appropriately informed and voluntarily agreed to the terms with full consent before taking part in the conduct of the experiment/survey.
## Ethics Approval
The AICS Research Ethics Committee duly approved this study after it conformed to the local and internationally accepted ethical guidelines.
|
2303.07324
|
The algebraic structure of the non-commutative nonlinear Schrodinger and
modified Korteweg-de Vries hierarchy
|
We prove that each member of the non-commutative nonlinear Schrodinger and
modified Korteweg--de Vries hierarchy is a Fredholm Grassmannian flow, and for
the given linear dispersion relation and corresponding equivalencing group of
Fredholm transformations, is unique in the class of odd-polynomial partial
differential fields. Thus each member is linearisable and integrable in the
sense that time-evolving solutions can be generated by solving a linear
Fredholm Marchenko equation, with the scattering data solving the corresponding
linear dispersion equation. At each order, each member matches the
corresponding non-commutative Lax hierarchy field which thus represent
odd-polynomial partial differential fields. We also show that the cubic form
for the non-commutative sine--Gordon equation corresponds to the first negative
order case in the hierarchy, and establish the rest of the negative order
non-commutative hierarchy. To achieve this, we construct an abstract
combinatorial algebra, the Poppe skew-algebra, that underlies the hierarchy.
This algebra is the non-commutative polynomial algebra over the real line
generated by compositions, endowed with the Poppe product -- the product rule
for Hankel operators pioneered by Ch. Poppe for classical integrable systems.
Establishing the hierarchy members at non-negative orders, involves proving the
existence of a `Poppe polynomial' expansion for basic compositions in terms of
`linear signature expansions' representing the derivatives of the underlying
non-commutative field. The problem boils down to solving a linear algebraic
equation for the polynomial expansion coefficients, at each order.
|
Gordon Blower, Simon J. A. Malham
|
2023-03-13T17:44:12Z
|
http://arxiv.org/abs/2303.07324v1
|
The algebraic structure of the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchy
###### Abstract
We prove that each member of the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchy is a Fredholm Grassmannian flow, and for the given linear dispersion relation and corresponding equivalencing group of Fredholm transformations, is unique in the class of odd-polynomial partial differential fields. Thus each member is linearisable and integrable in the sense that time-evolving solutions can be generated by solving a linear Fredholm Marchenko equation, with the scattering data solving the corresponding linear dispersion equation. At each order, each member matches the corresponding non-commutative Lax hierarchy field which thus represent odd-polynomial partial differential fields. We also show that the cubic form for the non-commutative sine-Gordon equation corresponds to the first negative order case in the hierarchy, and establish the rest of the negative order non-commutative hierarchy. To achieve this, we construct an abstract combinatorial algebra, the Poppe skew-algebra, that underlies the hierarchy. This algebra is the non-commutative polynomial algebra over the real line generated by compositions, endowed with the Poppe product--the product rule for Hankel operators pioneered by Ch. Poppe for classical integrable systems. Establishing the hierarchy members at non-negative orders, involves proving the existence of a 'Poppe polynomial' expansion for basic compositions in terms of 'linear signature expansions' representing the derivatives of the underlying non-commutative field. The problem boils down to solving a linear algebraic equation for the polynomial expansion coefficients, at each order.
Keywords:Non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchies sine-Gordon equation
## 1 Introduction
For the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchy, we prove that each member is a Fredholm Grassmannian flow which, for
the given linear dispersion relation and corresponding equivalencing group of Fredholm transformations, is unique in the class of odd-polynomial partial differential fields. That each member represents a Fredholm Grassmannian flow means they are linearisable in the sense that solutions can be generated by solving a linear Fredholm Marchenko equation whose scattering data is the solution to the corresponding linear dispersion relation. We also show that each member of the Lax hierarchy generates an odd-polynomial partial differential field, and so by uniqueness, at each non-negative order, the Fredholm Grassmannian flow and Lax hierarchy member are one and the same. We also show that the cubic form for the non-commutative sine-Gordon equation corresponds to the first negative order case in the Lax hierarchy, and establish the rest of the negative order non-commutative hierarchy. Our approach is inspired by the pioneering work of Ch. Poppe. In a sequence of papers, Poppe [86, 87, 88], Poppe and Sattinger [89], and Bauhardt and Poppe [9] developed the fundamental product rule for additive Hankel operators and semi-additive operators, in order to establish the integrability and specific solution forms for classical integrable systems. These included for example, the scalar sine-Gordon equation and Kadomtsev-Petviashvili hierarchy. Poppe's approach has recently been substantially developed and extended. In particular, Doikou, Malham and Stylianidis [30] streamlined and extended Poppe's approach to the non-commutative Korteweg-de Vries and nonlinear Schrodinger equations, and Malham [68] extended the approach to the quartic-order, quintic-degree non-commutative nonlinear Schrodinger equation. Also see Doikou, Malham, Stylianidis and Wiese [31, 32]. Further, Malham [69] developed a simpler form of the Poppe algebra constructed herein to prove that each member of the non-commutative potential Korteweg-de Vries hierarchy is unique in the class of polynomial partial differential fields and represents a Fredholm Grassmannian flow. Blower and Newsham [15] developed a systems perspective to Poppe's approach, constructing tau-functions and families of solutions to the Kadomtsev-Petviashvili equation, while Blower and Doust [14] extend this approach to the sinh-Gordon equation.
Let us now briefly outline our approach herein. Consider a non-commutative non-linear dispersive partial differential equation for \(g=g(x;t)\) of order \(n\) of the form,
\[\partial_{t}g=-(\mathrm{i}\mathcal{I})^{n-1}\pi_{n}\big{(}g,\partial g, \partial^{2}g,\ldots,\partial^{n-2}g,\partial^{n}g\big{)},\]
where \(\partial=\partial_{x}\) and \(x\in\mathbb{R}\). Here \(\pi_{n}=\pi_{n}(\cdot)\) is a polynomial of its arguments--a polynomial partial differential field. It is linear in \(\partial^{n}g\). The diagonal matrix \(\mathcal{I}\) simply has top left block '\(-\mathrm{id}\)' and bottom right block '\(\mathrm{id}\)'. As we see presently, \(g=\llbracket G\rrbracket\) is the square-integrable kernel of a Hilbert-Schmidt operator \(G\). Herein we use the notation \(\llbracket G\rrbracket\) to denote the kernel of an operator \(G\). Suppose \(P\) is a Hilbert-Schmidt Hankel operator on the negative real axis satisfying the linear dispersive equation,
\[\partial_{t}P=-(\mathrm{i}\mathcal{I})^{n-1}\partial^{n}P.\]
Note if there are any linear terms in \(\pi_{n}\), we should augment this equation for \(P\) with such linear terms on the right. In particular we suppose the square-integrable kernel of \(P\) has the form \(p=p(y+z+x;t)\) for \(y,z\in(-\infty,0]\), while \(x\in\mathbb{R}\) represents an additive parameter. The matrix-valued kernel \(p\) satisfies the same linear dispersive partial differential equation as that above for \(P\); it represents the scattering data. The Marchenko equation at the operator level, here has the Fredholm form,
\[2\mathrm{i}P=G\,(\mathrm{id}+P^{2}),\]
for the unknown operator \(G\). Provided \(U\coloneqq(\mathrm{id}+P^{2})^{-1}\) exists as a Fredholm operator, then the solution Hilbert-Schmidt operator \(G=2\mathrm{i}PU\) to this Marchenko equation parametrises a Fredholm Grassmannian flow of subspaces spanned by linear dispersive solutions \(p\). See, for example, Doikou _et al._[31]. Poppe's insight was to recognise the crucial role the Hankel properties of \(P\) played in making the connection between the solution \(G=2\mathrm{i}PU\) to the Marchenko equation, and that its kernel \([\![G]\!]\) satisfies a specific nonlinear dispersive partial differential equation of the form shown above. The connection is made via the Poppe kernel product rule,
\[[\![F\partial_{x}(HH^{\prime})F^{\prime}]\!](y,z;x,t)=[\![FH]\!](y,0;x,t)[\![H^{\prime}F^{\prime}]\!](0,z;x,t),\]
where \(H\) and \(H^{\prime}\) are Hankel operators as described above, and \(F\) and \(F^{\prime}\) are any two Hilbert-Schmidt operators. This rule indicates at the fundamental level, that there is a connection between the matrix products of kernels of operators of the form \(G=2\mathrm{i}PU\) and/or their derivatives (on the right), and kernels of monomials involving operator compositions of similar objects, but with one order higher derivative (on the left). Using that \(\mathrm{id}+P^{2}=(\mathrm{id}-\mathrm{i}P)(\mathrm{id}+\mathrm{i}P)\) and setting \(V\coloneqq(\mathrm{id}-\mathrm{i}P)^{-1}\), we see that,
\[G=2V(\mathrm{i}P)V^{\dagger}\equiv V-V^{\dagger}.\]
Further, we observe that \(\partial V=V\partial(\mathrm{i}P)V\), and if we use sub-indices to denote partial derivatives '\(\partial\)', we find \(G_{1}=V(\mathrm{i}P)_{1}V-V^{\dagger}(\mathrm{i}P)_{1}^{\dagger}V^{\dagger}\). In particular, if for any Hilbert-Schmidt operator \(F\) we set \([F]\coloneqq[\![F-F^{\dagger}]\!]\), then we observe that the kernels in both these cases are given by,
\[[\![G]\!]=[V]\qquad\text{and}\qquad[\![G]\!]_{1}=[V(\mathrm{i}P)_{1}V].\]
It is now easy to imagine that the \(n\)th partial derivative of \([\![G]\!]\) has the form,
\[[\![G]\!]_{n}=\sum\chi\big{(}a_{1}\cdots a_{k}\big{)}\,\big{[}V(\mathrm{i}P)_{a_{1}}V\cdots V(\mathrm{i}P)_{a_{k}}V\big{]},\]
where the sum is over all compositions \(a_{1}a_{2}\cdots a_{k}\) of \(n\). Naturally \(\partial_{t}[\![G]\!]\) is given by \([V\partial_{t}(\mathrm{i}P)V]\) where \(\partial_{t}(\mathrm{i}P)\) can be expressed in terms of \(\partial^{n}(\mathrm{i}P)\) using the linear dispersion equation for \(P\). Hence our goal is to express \(\partial_{t}[\![G]\!]\) in terms of a polynomial \(\pi_{n}=\pi_{n}\big{(}[\![G]\!],[\![G]\!]_{1},\ldots,[\![G]\!]_{n-2},[\![G]\!]_{n}\big{)}\), linear in \([\![G]\!]_{n}\). The monomials in the polynomial \(\pi_{n}\) consist of factors of the form \([\![G]\!]\), \([\![G]\!]_{1}\),..., \([\![G]\!]_{n-2}\), each expressible as a linear combination of basis elements \([V(\mathrm{i}P)_{a_{1}}V\cdots V(\mathrm{i}P)_{a_{k}}V]\) parameterised by compositions as shown above, with the product involved being the Poppe kernel product. If we extend the basis elements to include basis elements of the form \([V(\mathrm{i}P)_{a_{1}}V\cdots V(\mathrm{i}P)_{a_{k}}V]\) where any of the \(V\) factors shown may be replaced by \(V^{\dagger}\), then the operator partial fractions formulae \(V=\mathrm{id}+(\mathrm{i}P)V=\mathrm{id}+V(\mathrm{i}P)\) imply that the Poppe product generates a closed algebra on such basis elements (see Lemma 6). The playing field is thus set. It is the algebra of such basis elements equipped with the Poppe product. In fact we use an abstract version of this algebra by stripping the basis elements of their \(P\) and \(V\) labels and focusing on the compositions \(a_{1}a_{2}\cdots a_{k}\) and a binary encoding, \(\mathbf{0}\) and \(\mathbf{0}^{\dagger}\), of the intervening \(V\) or \(V^{\dagger}\) factors. The Poppe product essentially only acts on these components and thus transports our _playing field_ to the closed algebra of basis elements \([\![\mathbf{0}a_{1}\mathbf{0}a_{2}\mathbf{0}\cdots\mathbf{0}a_{k}\mathbf{0}]\!]\), where any of the \(\mathbf{0}\)'s may be replaced by \(\mathbf{0}^{\dagger}\). The \([\![G]\!]_{n}=[V]_{n}\) are linear combinations of such abstract basis elements (as shown above) and we label such specific linear combinations by \([\![\mathbf{n}]\!]\). The quantity \(\partial_{t}[\![G]\!]=\partial_{t}[V]\) can ultimately be expressed in terms of \([\![\mathbf{0}n\mathbf{0}]\!]\) or \([\![\mathbf{0}n\mathbf{0}^{\dagger}]\!]\), respectively depending on whether
\(n\) is odd or even. The _game_ is to determine if \([\mathbf{0}n\mathbf{0}]\) or \([\mathbf{0}n\mathbf{0}^{\dagger}]\) can be expressed in terms of a polynomial \(\pi_{n}=\pi_{n}([\mathbf{0}],[\mathbf{1}],\ldots,[\mathbf{n-2}],[\mathbf{n}])\), linear in \([\mathbf{n}]\). We express \(\pi_{n}\) as a linear combination of monomials with factors chosen from \([\mathbf{0}]\), \([\mathbf{1}]\),..., \([\mathbf{n-2}]\) and a linear term \([\mathbf{n}]\). The monomials present in \(\pi_{n}\) are restricted to those that can generate \([\mathbf{0}n\mathbf{0}]\) or \([\mathbf{0}n\mathbf{0}^{\dagger}]\) or basis elements involving compositions \(a_{1}a_{2}\cdots a_{k}\) of \(n\) via the Poppe product. An unknown complex coefficient is associated with each monomial in the linear combination. We use the linear expansions for \([\mathbf{0}]\), \([\mathbf{1}]\),..., \([\mathbf{n-2}]\) and linear term \([\mathbf{n}]\) in terms of the basis elements, and compute the Poppe products of all the expansion basis elements in each of the factors of the monomial. The result is a large linear combination of basis elements, each with a factor which is a linear combination of the unknown coefficients. We equate this to \([\mathbf{0}n\mathbf{0}]\) or \([\mathbf{0}n\mathbf{0}^{\dagger}]\), depending on whether \(n\) is odd or even, and equate all the coefficients of the basis elements present. This generates a large linear algebraic system of equations for the unknown coefficients. Though over-determined, it can be solved for a unique set of coefficients, see Theorem 1.
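For concreteness, the compositions of \(n\) indexing these expansions can be enumerated recursively; the following is a small illustrative sketch (there are \(2^{n-1}\) compositions of \(n\)):

```python
def compositions(n):
    """All compositions (ordered tuples of positive integers) summing to n."""
    if n == 0:
        return [()]
    return [(a,) + rest for a in range(1, n + 1) for rest in compositions(n - a)]

print(compositions(3))
# [(1, 1, 1), (1, 2), (2, 1), (3,)] -- the 2**(n-1) = 4 compositions of n = 3
```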
Our main result thus establishes that for each non-negative integer \(n\), there exists a unique polynomial \(\pi_{n}=\pi_{n}([\mathbf{0}],[\mathbf{1}],\ldots,[\mathbf{n-2}],[\mathbf{n}])\) such that \([\mathbf{0}n\mathbf{0}]=\pi_{n}\) when \(n\) is odd, or \([\mathbf{0}n\mathbf{0}^{\dagger}]=\pi_{n}\) when \(n\) is even. The non-commutative Lax hierarchy in our context can be iteratively generated, to obtain the next equation in the hierarchy from the previous one, by applying a specific operator to \(\pi_{n}\). Given the existence of a match up to \(\pi_{n}\), we show that applying this specific iterative operator to \(\pi_{n}\) generates \([\mathbf{0}(n+1)\mathbf{0}^{\dagger}]\) when \(n\) is odd, and generates \([\mathbf{0}(n+1)\mathbf{0}]\) when \(n\) is even. Since, from our main Theorem 1, we know there is a unique polynomial expansion for \([\mathbf{0}n\mathbf{0}]\) when \(n\) is odd or \([\mathbf{0}n\mathbf{0}^{\dagger}]\) when \(n\) is even at any order, there is a unique polynomial expansion for \([\mathbf{0}(n+1)\mathbf{0}^{\dagger}]\) or \([\mathbf{0}(n+1)\mathbf{0}]\). The uniqueness property means that the polynomials \(\pi_{n}\) we establish at each order must match the Lax hierarchy members. We also investigate pursuing the specific operator in the opposite direction to generate non-commutative Lax hierarchy members at all negative orders--see Tracy and Widom [100] for the scalar case. In particular we show that the first negative order case \(n=-1\) corresponds to the cubic form of the non-commutative sine-Gordon equation.
The solution to any of the non-commutative hierarchy equations is generated by solving the linear dispersion equation for the matrix kernel \(p\) or equivalently the Hilbert-Schmidt Hankel operator \(P\), and then solving the linear Fredholm Marchenko equation \(2\mathrm{i}P=G\left(\mathrm{id}+P^{2}\right)\) for \(G\). The hierarchy member solution is \([\![G]\!]\). Each member of the hierarchy is thus linearisable as the solution is generated via solving a linear dispersion equation and a linear Fredholm equation. However this procedure also identifies the graph of \(G\) as a Fredholm Grassmannian flow, represented in a specific coordinate patch parametrised by Hilbert-Schmidt operators. Such flows are explored in detail in Doikou _et al._[31]. We can think of the Fredholm Grassmannian as all collections of graphs of compatible linear Hilbert-Schmidt maps. Indeed briefly, suppose we set \(\mathbb{V}\coloneqq L^{2}((-\infty,0];\mathbb{C}^{m})\) for some \(m\in\mathbb{N}\), for example. Consider the pair of operators,
\[\begin{pmatrix}\mathrm{id}-Q\\ 2\mathrm{i}P\end{pmatrix},\]
both on \(\mathbb{V}\). Here we suppose \(\mathrm{id}-Q\) is a Fredholm operator on \(\mathbb{V}\), with \(Q\) a Hilbert-Schmidt operator, and \(P\) is a Hilbert-Schmidt operator on \(\mathbb{V}\). Assuming that the regularised determinant \(\det_{2}(\mathrm{id}-Q)\neq 0\), this pair of operators defines a Fredholm Grassmannian flow in a given coordinate patch as follows. We can think of this pair
of operators as spanning a subspace of \(\mathbb{H}\coloneqq\mathbb{V}\times\mathbb{V}\) that is isomorphic to \(\mathbb{V}\). The transformation \((\mathrm{id}-Q)^{-1}\) of this subspace generates,
\[\begin{pmatrix}\mathrm{id}\\ G\end{pmatrix},\]
where \(G=2\mathrm{i}P(\mathrm{id}-Q)^{-1}\). We can think of the Hilbert-Schmidt operator \(G\) as parametrising all such subspaces of \(\mathbb{H}\), that can be projected onto the canonical subspace represented by the pair of operators \((\mathrm{id},O)\). This is one coordinate patch of the Fredholm Grassmannian of such subspaces of \(\mathbb{H}\). Note that if we set \(Q=-P^{2}\), then \(G\) represents the solution to the Marchenko equation. Recall, in our application we assume \(P\) satisfies the dispersion equation \(\partial_{t}P=-(\mathrm{i}\mathcal{I})^{n-1}\partial^{n}P\). Further we suppose that \(P\) is a Hankel operator. This property is a natural far-field symmetry for the dispersive field in the sense that it is a natural symmetry arising as the result of the scattering, of an incident wave from one far field, into the opposite far field. See for example the construction of the Marchenko equation in Drazin and Johnson [33] or Appendix B in Doikou _et al._[31]. That the Marchenko equation solution \(G\) parameterises a class of subspaces of \(\mathbb{H}\) characterised by solutions of a dispersive field \(P\), generates the following perspective. We can think of the Fredholm Grassmannian, in the coordinate patch represented by the particular pair \((\mathrm{id},G)\), as parametrising the time-evolving _envelope_ of dispersive field solutions, i.e. the time-evolving subspace represented by the pair \((\mathrm{id},G)\). In principle we could consider \(\llbracket G\rrbracket=\llbracket G\rrbracket(y,z;x,t)\) or in particular \(\llbracket G\rrbracket(0,0;x,t)\) as an observable.
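For orientation, a finite-dimensional analogue of this coordinate-patch projection can be helpful. The following Python sketch is ours and purely illustrative: matrices stand in for the Hilbert-Schmidt operators, and the dimension, scaling and random data are arbitrary assumptions.

```python
# Finite-dimensional analogue (our illustration) of the coordinate patch:
# (id - Q)^{-1} carries the frame spanning the pair (id - Q, 2iP) onto the
# canonical frame (id, G) with G = 2iP(id - Q)^{-1}. With Q = -P^2, this G
# also solves the Marchenko equation 2iP = G(id + P^2).
import numpy as np

rng = np.random.default_rng(1)
n = 5
P = 0.1*(rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n)))
Q = -P @ P                                  # the choice Q = -P^2
assert abs(np.linalg.det(np.eye(n) - Q)) > 1e-12

frame = np.vstack([np.eye(n) - Q, 2j*P])    # spans a subspace of H = V x V
G = 2j*P @ np.linalg.inv(np.eye(n) - Q)
assert np.allclose(frame @ np.linalg.inv(np.eye(n) - Q),
                   np.vstack([np.eye(n), G]))
assert np.allclose(G @ (np.eye(n) + P @ P), 2j*P)   # Marchenko equation
```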
The Marchenko equation, and its role in inverse scattering and linearisation, has been fundamental in classical integrable systems from the very early stages. See for example Dyson [39], Miura [74], Zakharov and Shabat [107; 108], Ablowitz _et al._[4], Fokas and Ablowitz [43], Mumford [77], Poppe [86; 87; 88], Poppe and Sattinger [89], Bauhardt and Poppe [9] and Nijhoff _et al._[78; 79]. There has also been a resurgence of interest in such linearisation approaches, see for example Fokas and Pelloni [44], McKean [72], Fu [46] and Fu and Nijhoff [47]. It was Sato [93; 94] and Segal and Wilson [96] who pioneered the connection between Fredholm Grassmannians and integrable systems. Recently there has also been a resurgence in this direction as well, see for example, Mulase [76], Dupre _et al._[36; 37; 38], Kasman [63; 64], Hamanaka and Toda [59], Cafasso [18], Cafasso and Wu [19] and Arthamonov _et al._[7] (also see Beck _et al._[10; 11; 12] and Doikou _et al._[31; 32]). Related to this is the well-studied connection between the Korteweg-de Vries hierarchy, the intersection theory of Deligne-Mumford moduli space and the string equation in two-dimensional gravity; see for example Witten [105; 106] and Cafasso and Wu [19]. Some of the earliest work on non-commutative integrable systems includes Fordy and Kulish [45], Nijhoff _et al._[79], Ablowitz _et al._[3], Ercolani and McKean [41] and Aden and Carl [6]. Again there has been more recent interest in such systems and their solutions, such as Treves [102; 103], Hamanaka and Toda [59], Degasperis and Lombardo [24], Dimakis and Muller-Hoissen [28], Carillo and Schiebold [20; 21; 22] whose results are particularly relevant to those herein, Sooman [98], Pelinovsky and Stepanyants [83], Buryak and Rossi [17], Doikou _et al._[30], Stylianidis [99], Adamopoulou and Papamikos [5], Malham [68], Gurses and Pekcan [58] and Ma [67]. The role of Hankel operators in integrable systems, first explored by Poppe, has recently re-emerged as an active and fruitful research direction. In particular, relevant to our results herein are Blower and Newsham [15], Blower and Doust [14], Grudsky and Rybkin [52; 53; 54], Grellier and Gerard [51] and
Gerard and Pushnitski [49]. The combinatorial algebraic approach we consider herein was introduced in Malham [69] for the simpler non-commutative potential Korteweg-de Vries equation; also see Doikou _et al._[31]. Dimakis and Muller-Hoissen [26; 27] consider integrable systems in the context of bidifferential graded algebras, while in Dimakis and Muller-Hoissen [25], they consider connections to shuffle and Rota-Baxter algebras. See Reutenauer [91], Malham and Wiese [70] and Ebrahimi-Fard _et al._[40] for more details on shuffle algebras and references for Rota-Baxter algebras.
To summarise, our achievements herein are as follows. In terms of algebras, we:
1. Introduce and develop new abstract non-commutative algebras. These are the algebra of non-negative integer monomial forms \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) described above, equipped with a quasi-Leibniz type product based on the Poppe product, and its skew-form subalgebra \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\). They are instrumental to the results (ii)-(iv) just below.
For the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchy, we:
2. Provide a constructive proof that at each non-negative order, there exists a unique hierarchy member in the class of odd-polynomial partial differential fields. The proof simultaneously establishes that the solution flow of each member is a Fredholm Grassmannian, and therefore linearisable in the sense outlined in detail above;
3. Give a simple proof of the non-commutative Lax hierarchy in this context. In addition, we immediately establish that at each non-negative order, the unique hierarchy member in (ii), and the corresponding Lax hierarchy member, coincide;
4. Establish that the first negative order non-commutative Lax hierarchy member is the cubic form of the non-commutative sine-Gordon equation, and further, demonstrate how to generate the rest of the negative order non-commutative hierarchy.
Our paper is organised as follows. In Section 2 we introduce the Poppe product for Hankel operators and motivate the solution form we propose for the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchy, based on the associated Marchenko equation. We introduce the Poppe kernel monomial algebra in Section 3 with the Poppe product, and in particular its isomorphic abstract form as well as the skew-Poppe subalgebra we use for the proofs of our main results. We present a sequence of simple examples in Section 4 illustrating the use of the abstract Poppe algebra to generate the order zero through to order four members of the non-commutative hierarchy. In Section 5 we establish the non-commutative Lax hierarchy using the Poppe algebra. Herein, we also generate the cubic form of the non-commutative sine-Gordon equation as the first negative order case, and indicate how to generate the rest of the negative order cases. We begin Section 6 with the illuminating example of the quintic non-commutative modified Korteweg-de Vries equation, before stating, and then proving, our main results. Finally, in Section 7 we present some further conclusions and applications.
## 2 Hankel operators, the Poppe product and the Marchenko equation
In this section we introduce the concepts and results that underlie our formulation. Herein, we: introduce the necessary Hilbert-Schmidt and Hankel operators we use and define the Poppe product; motivate the solution ansatz we use throughout; formulate the base linear dispersion equation; elucidate the well-posedness results for the
Marchenko equation we require and establish the connection between the Poppe product and finite rank operators.
### Hankel operators and the Poppe product
To begin, let us fix some notation. Let \(\mathbb{V}\) be the Hilbert space of square-integrable, complex matrix-valued functions on \((-\infty,0]\), i.e. \(\mathbb{V}\coloneqq L^{2}((-\infty,0];\mathbb{C}^{m})\) for some \(m\in\mathbb{N}\). Further, we denote by \(\mathfrak{J}_{2}(\mathbb{V})\) the space of Hilbert-Schmidt operators on \(\mathbb{V}\), i.e. bounded operators whose sum of the squares of their singular values is finite. For any given operator \(F=F(x,t)\in\mathfrak{J}_{2}(\mathbb{V})\) there exists a unique square-integrable kernel \(f=f(y,z;x,t)\) with \(f\in L^{2}((-\infty,0]^{\times 2};\mathbb{C}^{m\times m})\) such that for any \(\phi\in\mathbb{V}\) we have
\[(F\phi)(y;x,t)=\int_{-\infty}^{0}f(y,z;x,t)\phi(z)\,\mathrm{d}z.\]
Conversely, any such function \(f\in L^{2}((-\infty,0]^{\times 2};\mathbb{C}^{m\times m})\) defines an operator \(F=F(x,t)\) in \(\mathfrak{J}_{2}(\mathbb{V})\) with (for each \(x,t\)):
\[\|F\|_{\mathfrak{J}_{2}(\mathbb{V})}=\|f\|_{L^{2}((-\infty,0]^{\times 2}; \mathbb{C}^{m\times m})}.\]
See for example Simon [97, p. 23].
Definition 1 (Kernel bracket): For any Hilbert-Schmidt operator \(F=F(x,t)\), which depends on the parameters \(x\in\mathbb{R}\) and \(t\geqslant 0\), we use the _kernel bracket_ notation \([\![F]\!]\) to refer to the kernel \(f=f(y,z;x,t)\) of \(F\):
\[[\![F]\!](y,z;x,t)\coloneqq f(y,z;x,t).\]
In general, since \(f\) is square-integrable, it only exists almost everywhere on \((-\infty,0]^{\times 2}\). However below, the operators we consider have continuous kernels and so \(f\) makes sense pointwise. In such cases, we can in particular set \(y=z=0\), for which we use the notation \([\![F]\!]_{0,0}(x,t)\coloneqq f(0,0;x,t)\).
Recall that the _trace_ of any trace-class operator \(F\) on \((-\infty,0]\) is given by,
\[\operatorname{tr}F\coloneqq\int_{-\infty}^{0}f(z,z)\,\mathrm{d}z.\]
By a Hankel operator, which may depend on a parameter \(x\), we mean the following.
Definition 2 (Hankel operator with parameters): We say a Hilbert-Schmidt operator \(H\in\mathfrak{J}_{2}(\mathbb{V})\) with corresponding square-integrable kernel \(h\) is _Hankel_ or _additive_ with parameter \(x\in\mathbb{R}\) if its action, for any square-integrable function \(\phi\in\mathbb{V}\), is given by (here \(y\in(-\infty,0]\)),
\[(H\phi)(y;x)\coloneqq\int_{-\infty}^{0}h(y+z+x)\phi(z)\,\mathrm{d}z.\]
Poppe [86; 87] recognised the fundamental role played by such Hankel operators in classical integrable systems. The kernel of the derivative with respect to the additive parameter \(x\) of the operator product of an arbitrary pair of Hankel operators can be expressed as the matrix product of their respective kernels as follows. See Poppe [86; 87], as well as Doikou _et al._[30] and Malham [69]. We include a proof for completeness.
Lemma 1 (Poppe product): _Assume \(H\) and \(H^{\prime}\) are Hankel Hilbert-Schmidt operators with parameter \(x\) and \(F\) and \(F^{\prime}\) are Hilbert-Schmidt operators. Further assume the kernels of \(F\) and \(F^{\prime}\) are continuous, whilst the kernels of \(H\) and \(H^{\prime}\) are continuously differentiable. Then the following Poppe product rule holds,_
\[\big{[}\!\big{[}F\partial_{x}(HH^{\prime})F^{\prime}\big{]}\!\big{]}(y,z;x)= \big{[}\![FH]\!\big{]}(y,0;x)\big{[}\![H^{\prime}F^{\prime}]\!\big{]}(0,z;x).\]
Proof: We use the fundamental theorem of calculus and Hankel properties of \(H\) and \(H^{\prime}\). Let \(f\), \(h\), \(h^{\prime}\) and \(f^{\prime}\) denote the integral kernels of \(F\), \(H\), \(H^{\prime}\) and \(F^{\prime}\) respectively. By direct computation \(\big{[}\![F\partial_{x}(HH^{\prime})F^{\prime}]\!\big{]}(y,z;x)\) equals
\[\int_{\mathbb{R}^{3}_{-}}f(y,\xi_{1};x)\partial_{x}\big{(}h(\xi_ {1}+\xi_{2}+x)h^{\prime}(\xi_{2}+\xi_{3}+x)\big{)}f^{\prime}(\xi_{3},z;x)\, \mathrm{d}\xi_{3}\,\mathrm{d}\xi_{2}\,\mathrm{d}\xi_{1}\] \[=\int_{\mathbb{R}^{3}_{-}}f(y,\xi_{1};x)\partial_{\xi_{2}}\big{(} h(\xi_{1}+\xi_{2}+x)h^{\prime}(\xi_{2}+\xi_{3}+x)\big{)}f^{\prime}(\xi_{3},z;x)\, \mathrm{d}\xi_{3}\,\mathrm{d}\xi_{2}\,\mathrm{d}\xi_{1}\] \[=\int_{\mathbb{R}^{2}_{-}}f(y,\xi_{1};x)h(\xi_{1}+x)h^{\prime}( \xi_{3}+x)f^{\prime}(\xi_{3},z;x)\,\mathrm{d}\xi_{3}\,\mathrm{d}\xi_{1}\] \[=\int_{\mathbb{R}_{-}}f(y,\xi_{1};x)h(\xi_{1}+x)\,\mathrm{d}\xi_{ 1}\cdot\int_{\mathbb{R}_{-}}h^{\prime}(\xi_{3}+x)f^{\prime}(\xi_{3},z;x)\, \mathrm{d}\xi_{3}\] \[=\big{(}\![FH]\!\big{]}(y,0;x)\big{)}\big{(}\![H^{\prime}F^{ \prime}]\!\big{]}(0,z;x)\big{)},\]
which corresponds to the result stated.
Remark 1: We implicitly interpret iterated kernel products as written in the form \([\![\,\cdot\,]\!](y,0;x)\,[\![\,\cdot\,]\!](0,0;x)\cdots[\![\,\cdot\,]\!](0,0;x)\,[\![\,\cdot\,]\!](0,z;x)\).
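Before proceeding, we record a direct check of the Poppe product rule. The following sympy sketch is ours, with the illustrative scalar choices \(h(u)=\mathrm{e}^{u}\) and \(h^{\prime}(u)=\mathrm{e}^{2u}\), and with the outer operators \(F\) and \(F^{\prime}\) formally dropped.

```python
# Our symbolic check of Lemma 1 for h(u) = e^u, h'(u) = e^{2u}, F = F' = id:
# d/dx of the kernel of HH' should equal h(y+x) h'(z+x).
import sympy as sp

x, y, z, xi = sp.symbols('x y z xi', real=True)
h = sp.exp(y + xi + x)                 # h(y + xi + x)
hp = sp.exp(2*(xi + z + x))            # h'(xi + z + x)

kernel_HHp = sp.integrate(h*hp, (xi, -sp.oo, 0))   # kernel of HH'
lhs = sp.diff(kernel_HHp, x)                       # [[d/dx(HH')]](y,z;x)
rhs = sp.exp(y + x)*sp.exp(2*(z + x))              # [[H]](y,0;x)[[H']](0,z;x)
assert sp.simplify(lhs - rhs) == 0
```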
### Solution ansatz motivation
Let us formally motivate the solution form we use for the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchy we study herein. This comes from the sine-Gordon equation. With \(x\in\mathbb{R}\) and \(t\geqslant 0\), we assume the sine-Gordon equation has the form, \(\partial_{t}\partial u=\sin u\), where \(u=u(x,t)\) and \(\partial\coloneqq\partial_{x}\). In the scalar case, when \(u\in\mathbb{R}\), it is well-known that there exists a solution of the form,
\[u=2\mathrm{i}\,\mathrm{tr}\,\log\!\bigg{(}\frac{\mathrm{id}-\mathrm{i}P}{ \mathrm{id}+\mathrm{i}P}\bigg{)},\]
or the equivalent form \(u=4\arctan P\). Here \(P=P(x,t)\) is a Hankel Hilbert-Schmidt operator with an integral kernel \(p=p(x,t)\) which satisfies the linearised form of the sine-Gordon equation (see for example Poppe [86, Cor. 3.2]), \(\partial_{t}\partial p=p\). For scalar
valued kernels, we have the following. Suppose that \(H=H(x)\) is a Hankel operator dependent on the parameter \(x\in\mathbb{R}\). Then for any \(n\in\mathbb{N}\), we have,
\[\partial\operatorname{tr}H^{n}=\frac{n}{2}\llbracket H^{n}\rrbracket_{0,0}.\]
Further, suppose that \(\Theta=\Theta(H)\) is a power series function of the Hankel operator \(H\), with scalar-valued coefficients \(c_{n}\), of the form \(\Theta(H)=\sum_{n\geqslant 1}c_{n}\,H^{n}\). Then we have,
\[\partial\operatorname{tr}\Theta(H)=\tfrac{1}{2}\llbracket H\,D\Theta(H) \rrbracket_{0,0},\]
where \(D\Theta=D\Theta(H)\) is the series \(D\Theta(H)=\sum_{n\geqslant 1}nc_{n}\,H^{n-1}\).
Remark 2: This is equivalent to the result embodied in equation (3.26) in Poppe [86]. We give a proof in Proposition 2 in Section 2.5 below. Also see Blower and Doust [14].
Example 1 (Logarithm of the Cayley transform): The solution ansatz for the scalar sine-Gordon equation above involves the logarithm of the Cayley transform, i.e. the form,
\[\Theta(P)=\log\biggl{(}\frac{\operatorname{id}-\operatorname{i}P}{ \operatorname{id}+\operatorname{i}P}\biggr{)}.\]
By direct computation we observe that,
\[D\Theta(P)=-\frac{\operatorname{i}\cdot\operatorname{id}}{\operatorname{id}- \operatorname{i}P}-\frac{\operatorname{i}\cdot\operatorname{id}}{ \operatorname{id}+\operatorname{i}P}\quad\Rightarrow\quad P\,D\Theta(P)=- \frac{2\operatorname{i}P}{(\operatorname{id}-\operatorname{i}P)( \operatorname{id}+\operatorname{i}P)}.\]
Then, using the trace and kernel bracket result above, we have, \(\partial\operatorname{tr}\Theta(P)=-\llbracket(\operatorname{id}-\operatorname{i}P)^{-1}(\operatorname{i}P)(\operatorname{id}+\operatorname{i}P)^{-1}\rrbracket_{0,0}\), though the order of the factors shown on the right is not important.
Recall the solution form to the scalar sine-Gordon equation given above, \(u=2\operatorname{i}\operatorname{tr}\Theta(P)\), where \(\Theta=\Theta(P)\) is the logarithm of the Cayley transform given in Example 1. Let \(\partial^{-1}\coloneqq\partial_{x}^{-1}\) denote the primitive operator, \(\bigl{(}\partial^{-1}\phi\bigr{)}(x)\coloneqq\int_{-\infty}^{x}\phi(\xi) \operatorname{d}\xi\). Then, given the final result in Example 1, we can express the solution to the scalar sine-Gordon equation in the form,
\[u=-2\operatorname{i}\,\partial^{-1}\llbracket(\operatorname{id}-\operatorname{i}P)^{-1}(\operatorname{i}P)(\operatorname{id}+\operatorname{i}P)^{-1}\rrbracket_{0,0}.\]
This form of the solution for the scalar sine-Gordon equation motivates the solution form we seek for the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchy, which we utilise in the following sections. The sine-Gordon equation is just a special case, the order '\(-1\)' case, in that hierarchy.
Example 2 (Non-commutative sine-Gordon cubic-form equation): If \(u\) satisfies the scalar sine-Gordon equation \(\partial_{t}\partial u=\sin u\), and \(u=-2\operatorname{i}\partial^{-1}g\), then \(g=g(x,t)\) satisfies the following sine-Gordon cubic-form equation,
\[\partial_{t}\partial g=g+g\,\partial^{-1}(\partial_{t}g^{2})+\partial^{-1}( \partial_{t}g^{2})\,g.\]
To see this, define the operator \(\Gamma\) by, \(\bigl{(}\Gamma\phi\bigr{)}(x)\coloneqq\int_{-\infty}^{x}\gamma(\xi)\phi(\xi) \operatorname{d}\xi\), where \(\gamma=-2\operatorname{i}g\). Using that for any \(n\in\mathbb{N}\) we have, \(\bigl{(}\Gamma\circ 1\bigr{)}^{n}\equiv n!\,\bigl{(}\Gamma^{n}\circ 1\bigr{)}\), then we observe that in fact, \(\sin u=(\operatorname{id}+\Gamma^{2})^{-1}\circ\Gamma\circ 1\). In other words \(\gamma\) satisfies the integral equation \((\operatorname{id}+\Gamma^{2})\circ\partial_{t}\gamma=\Gamma\circ 1\) or equivalently satisfies \(\partial_{t}\gamma+\partial^{-1}\bigl{(}\gamma\partial^{-1}(\gamma\partial_{t} \gamma)\bigr{)}=\partial^{-1}\gamma\). Noting that \(g=\gamma/(-2\operatorname{i})\) and symmetrically splitting the nonlinear term, generates the sine-Gordon cubic-form equation above. The cubic-form equation above is often interpreted to be the non-commutative sine-Gordon equation in, for example, Schiebold [95, Prop. 6.2].
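The combinatorial identity \(\bigl{(}\Gamma\circ 1\bigr{)}^{n}\equiv n!\,\bigl{(}\Gamma^{n}\circ 1\bigr{)}\) invoked above can also be checked symbolically; the following sketch is ours, for the sample choice \(\gamma(x)=\mathrm{e}^{x}\) and small \(n\).

```python
# Our sympy spot-check of (Gamma o 1)^n = n! (Gamma^n o 1) for gamma = e^x.
import sympy as sp

x, xi = sp.symbols('x xi', real=True)

def Gamma(phi):
    # (Gamma phi)(x) = int_{-oo}^x gamma(xi) phi(xi) dxi, with gamma = e^x
    return sp.integrate(sp.exp(xi)*phi.subs(x, xi), (xi, -sp.oo, x))

G1 = Gamma(sp.Integer(1))      # Gamma o 1 (here e^x)
Gn = sp.Integer(1)
for n in range(1, 4):
    Gn = Gamma(Gn)             # Gamma^n o 1
    assert sp.simplify(G1**n - sp.factorial(n)*Gn) == 0
```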
### The linear dispersion equation
Consider the following coupled linear system of equations for the Hilbert-Schmidt operators \(P_{\alpha}\), \(P_{\beta}\), \(G_{\alpha}\) and \(G_{\beta}\),
\[\partial_{t}P_{\alpha} =\mu_{n}\partial^{n}P_{\alpha},\] \[\mathrm{i}P_{\alpha} =G_{\alpha}(\mathrm{id}+P_{\beta}P_{\alpha}),\] and \[\partial_{t}P_{\beta} =(-1)^{n-1}\mu_{n}\partial^{n}P_{\beta},\] \[\mathrm{i}P_{\beta} =G_{\beta}(\mathrm{id}+P_{\alpha}P_{\beta}).\]
for some order \(n\in\mathbb{Z}\), where the parameter \(\mu_{n}\in\mathbb{C}\). In order for the partial differential equations shown for \(P_{\alpha}\) and \(P_{\beta}\) to be dispersive, we necessarily require that \(\mu_{n}\) is pure imaginary when \(n\) is even and real when \(n\) is odd. We suppose that the matrix-valued kernel of \(P_{\beta}\) has the same shape as the transpose of the matrix-valued kernel of \(P_{\alpha}\). The matrix-valued kernels of \(G_{\alpha}\) and \(G_{\beta}\) naturally match those of \(P_{\alpha}\) and \(P_{\beta}\), respectively. If we set,
\[P\coloneqq\begin{pmatrix}O&P_{\beta}\\ P_{\alpha}&O\end{pmatrix},\qquad G\coloneqq\begin{pmatrix}O&G_{\beta}\\ G_{\alpha}&O\end{pmatrix},\qquad\text{and}\qquad\mathcal{I}\coloneqq\begin{pmatrix} -\mathrm{id}&O\\ O&\mathrm{id}\end{pmatrix},\]
then the system of linear equations above can be expressed in the form,
\[\partial_{t}P =-\mu_{n}(\mathrm{i}\mathcal{I})^{n-1}\partial^{n}P,\] \[\mathrm{i}P =G(\mathrm{id}+P^{2}),\]
where now the parameter \(\mu_{n}\in\mathbb{R}\). This form is given, e.g., in Schiebold [95, pp. 679-80]. Now consider the following second order cubic nonlinear equation for the kernel \(\llbracket G\rrbracket\),
\[\partial_{t}\llbracket G\rrbracket(y,z;x,t)=-\mu_{2}\mathrm{i}\mathcal{I}\big{(}\partial_{x}^{2}\llbracket G\rrbracket(y,z;x,t)-2\,\llbracket G\rrbracket(y,0;x,t)\llbracket G\rrbracket(0,0;x,t)\llbracket G\rrbracket(0,z;x,t)\big{)}.\]
Written in terms of the kernels \(\llbracket G_{\alpha}\rrbracket\) and \(\llbracket G_{\beta}\rrbracket\) with \(y=z=0\), we observe,
\[\mathrm{i}\partial_{t}\llbracket G_{\alpha}\rrbracket =\mu_{2}\partial_{x}^{2}\llbracket G_{\alpha}\rrbracket-2\mu_{2}\,\llbracket G_{\alpha}\rrbracket\,\llbracket G_{\beta}\rrbracket\,\llbracket G_{\alpha}\rrbracket,\] \[\mathrm{i}\partial_{t}\llbracket G_{\beta}\rrbracket =-\mu_{2}\partial_{x}^{2}\llbracket G_{\beta}\rrbracket+2\mu_{2}\,\llbracket G_{\beta}\rrbracket\,\llbracket G_{\alpha}\rrbracket\,\llbracket G_{\beta}\rrbracket.\]
There are several different consistent choices we can make for \(P_{\alpha}\) and \(P_{\beta}\), as follows. For example, suppose we set \(P_{\beta}=P_{\alpha}^{\dagger}\), the adjoint operator to \(P_{\alpha}\) with respect to the \(L^{2}\) inner product. Then if \(G=\mathrm{i}PU\) with \(U\coloneqq(\mathrm{id}+P^{2})^{-1}\), as defined above, at the block level it transpires \(G_{\beta}=G_{\alpha}^{\dagger}\). In this case the kernel \(\llbracket G_{\beta}\rrbracket(0,0;x,t)\) is the complex conjugate transpose of the kernel \(\llbracket G_{\alpha}\rrbracket(0,0;x,t)\). And thus, assuming the kernel \(\llbracket G\rrbracket\) generated from \(G=\mathrm{i}PU\) satisfies the equation above, the equation for the block \(\llbracket G_{\alpha}\rrbracket(0,0;x,t)\) collapses to the non-commutative nonlinear Schrodinger equation. Further note, for the choice \(P_{\beta}=P_{\alpha}^{\dagger}\), the operator \(P\) is Hermitian with respect to the \(L^{2}\) inner product, i.e. \(P^{\dagger}=P\).
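Explicitly, writing \(g\coloneqq\llbracket G_{\alpha}\rrbracket(0,0;x,t)\), so that \(\llbracket G_{\beta}\rrbracket(0,0;x,t)=g^{\dagger}\), the first equation of the pair above reads,

\[\mathrm{i}\partial_{t}g=\mu_{2}\partial_{x}^{2}g-2\mu_{2}\,g\,g^{\dagger}g,\]

which is the non-commutative nonlinear Schrodinger equation.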
Remark 3 (Reverse and shifted space-time nonlocal equations): The system of linear equations for \(P\) above allows us to incorporate, and thus deduce corresponding results, for the nonlocal versions of the non-commutative nonlinear Schrodinger hierarchy. These include the reverse time, reverse space-time and space-time shifted nonlocal versions of these equations outlined in Ablowitz and Musslimani [1; 2], Fokas [42], Grabovski, Mohammed and Susanto [50] and Gurses and Pekcan [55; 56; 57; 58]. This fact is outlined in detail in Example 4 and Remark 17 in Doikou _et al_. [31].
### The Marchenko equation
Consider an operator \(P\in\mathfrak{J}_{N}(\mathbb{V})\), where \(N=1\) or \(N=2\). For the moment \(P\) is not necessarily a Hankel operator, and \(\mathbb{V}\) is an arbitrary separable Hilbert space. The space \(\mathfrak{J}_{1}(\mathbb{V})\) denotes the set of trace-class (nuclear) operators. Crucial to the Poppe algebra we introduce in Section 3 are both the Marchenko equation,
\[P=G\,(\mathrm{id}-Q),\]
for the operator \(G\), and the Poppe product in Lemma 1. In our application, we set \(Q\coloneqq-P^{2}\). The following abstract result is proved in Doikou _et al._[31, Lemma 1].
Lemma 2 (Existence and Uniqueness; Doikou _et al._[31]): _Assume \(Q_{0}\in\mathfrak{J}_{2}\) and for some \(T>0\) we know that \(Q\in C^{\infty}\big{(}[0,T];\mathfrak{J}_{2}\big{)}\) with \(Q(0)=Q_{0}\) and \(P\in C^{\infty}\big{(}[0,T];\mathfrak{J}_{N}\big{)}\), where \(N\) is \(1\) or \(2\). Further assume, \(\mathrm{det}_{2}(\mathrm{id}-Q_{0})\neq 0\). Then there exists a \(T^{\prime}>0\) with \(T^{\prime}\leqslant T\) such that for \(t\in[0,T^{\prime}]\) we have \(\mathrm{det}_{2}(\mathrm{id}-Q(t))\neq 0\) and there exists a unique solution \(G\in C^{\infty}\big{(}[0,T^{\prime}];\mathfrak{J}_{N}\big{)}\) to the linear equation \(P=G(\mathrm{id}-Q)\)._
Now suppose \(\mathbb{V}\coloneqq L^{2}((-\infty,0];\mathbb{C}^{m})\) for some \(m\in\mathbb{N}\). For any function \(w\geqslant 0\), we denote the weighted \(L^{2}\)-norm of any complex matrix-valued function \(f\) on \((-\infty,0]\) by,
\[\|f\|_{L^{2}_{w}}^{2}\coloneqq\int_{-\infty}^{0}\mathrm{tr}\left(f^{\dagger}( x)f(x)\right)w(x)\,\mathrm{d}x.\]
Doikou _et al._[31, Lemma 3] also establish that, if \(p(\cdot;t)\in L^{2}_{w}((-\infty,0])\) with \(w\colon y\mapsto(1-y)^{2}\), then the Hankel operator \(P=P(t)\) generated by \(p\) is such that \(P(t)\in\mathfrak{J}_{2}(\mathbb{V})\). Hence we _assume_ the solutions \(p=p(y;t)\) to the linear dispersive system \(\partial_{t}p=-\mu_{n}(\mathrm{i}\mathcal{I})^{n-1}\partial_{y}^{n}p\) lie in \(L^{2}_{w}((-\infty,0])\). We then take \(P=P(x,t)\) to be the Hankel operator with kernel \(p=p(y+z+x;t)\), where \((y,z)\in(-\infty,0]^{\times 2}\), with parameter \(x\in\mathbb{R}\). Statements for \(p=p(\cdot;t)\) on \((-\infty,0]\) translate, for each \(x\in\mathbb{R}\), to statements for \(p=p(\cdot+x;t)\) on \((-\infty,x]\). This is important, as we wish to include natural solutions \(p=p(y;t)\) to the linear dispersion equation that are unbounded as \(y\to\infty\). Examples of such solutions are exponential-form solutions that generate soliton solutions to the corresponding non-commutative integrable nonlinear partial differential equation. Explicitly, the Marchenko equation we consider herein takes the form,
\[p(y+z+x;t)=g(y,z;x,t)-\int_{-\infty}^{0}g(y,\xi;x,t)q(\xi,z;x,t)\,\mathrm{d}\xi,\]
where \(q\) is the kernel of \(Q\coloneqq-P^{2}\). With this in hand, we have the following result, adapted from Doikou _et al._[31, Lemma 6].
Lemma 3 (Existence and Uniqueness: Marchenko equation): _Assume the smooth initial data \(p_{0}=p_{0}(\cdot)\) for \(p=p(\cdot;t)\) is such that \(\mathrm{det}_{2}(\mathrm{id}-Q_{0})\neq 0\), where \(Q_{0}\coloneqq-P_{0}^{2}\) and \(P_{0}\) is the Hankel operator generated by \(p_{0}\). Further assume there exists a \(T>0\) such that there is a solution,_
\[p\in C^{\infty}\big{(}[0,T];L^{2}_{w}((-\infty,0];\mathbb{C}^{m\times m})\cap C ^{\infty}((-\infty,0];\mathbb{C}^{m\times m})\big{)},\]
_to the linear dispersion equation \(\partial_{t}p=-\mu_{n}(\mathrm{i}\mathcal{I})^{n-1}\partial_{y}^{n}p\), where \(w\colon y\mapsto(1-y)^{2}\). Then there exists a \(T^{\prime}>0\) with \(T^{\prime}\leqslant T\) such that for \(t\in[0,T^{\prime}]\) we know: (i) The Hankel operator \(P=P(x,t)\) with parameter \(x\in\mathbb{R}\) generated by \(p\) is Hilbert-Schmidt valued on \(\mathbb{V}\); (ii) The determinant \(\mathrm{det}_{2}(\mathrm{id}-Q(x,t))\neq 0\) where \(Q(x,t)\coloneqq-P^{2}(x,t)\), and hence (iii) There is a unique Hilbert-Schmidt valued solution \(G=G(x,t)\) with \(G\in C^{\infty}([0,T^{\prime}];\mathfrak{J}_{2}(\mathbb{V}))\) to the linear Fredholm equation \(P=G(\mathrm{id}-Q)\)._
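To illustrate Lemma 3 concretely, the following numerical sketch is ours: for the sample rank-one Hankel kernel \(p(u)=\mathrm{e}^{\lambda u}\) at a frozen time, a short hand computation (using \(P^{2}=sP\) with \(s=\mathrm{e}^{\lambda x}/2\lambda\)) gives \(g(0,0;x)=\mathrm{e}^{\lambda x}/(1+\mathrm{e}^{2\lambda x}/4\lambda^{2})\), and we recover this by discretising and solving the linear Fredholm equation directly. The truncation, grid and tolerance below are illustrative assumptions.

```python
# Our numerical solve of the Marchenko equation P = G(id - Q), Q = -P^2,
# for the rank-one Hankel kernel p(u) = e^{lam*u} at frozen time.
import numpy as np

lam, x = 1.0, 0.3
L, N = 40.0, 2000                      # truncate (-inf, 0] to [-L, 0]
y = np.linspace(-L, 0.0, N)
h = y[1] - y[0]
w = np.full(N, h); w[0] = w[-1] = h/2  # trapezoid quadrature weights

Pk = np.exp(lam*(y[:, None] + y[None, :] + x))  # Hankel kernel p(y+z+x)
Qk = -(Pk*w[None, :]) @ Pk                      # kernel of Q = -P^2
M = np.eye(N) - w[:, None]*Qk                   # discretised id - Q
G = np.linalg.solve(M.T, Pk.T).T                # solves G(id - Q) = P

exact = np.exp(lam*x)/(1.0 + np.exp(2*lam*x)/(4*lam**2))
assert abs(G[-1, -1] - exact) < 1e-3            # G[-1,-1] ~ g(0,0;x)
```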
### Trace formulae and finite-rank operators
We now establish that at the core of the Poppe product there lies, in fact, a finite-rank operator. For this section only, for convenience, we assume the domain of support of the functions under consideration is \([0,\infty)\) as opposed to \((-\infty,0]\). A reflection transformation translates between the two. For a bounded integral operator \(K\colon L^{2}(0,\infty)\to L^{2}(0,\infty)\) with a continuous kernel \(k=k(y,z)\), we write \(\llbracket K\rrbracket_{0,0}=k(0,0)\) for the kernel bracket.
Definition 3 (Shift operator): We define the shift operator \(S_{\eta}\colon L^{2}(0,\infty)\to L^{2}(0,\infty)\) by \(S_{\eta}f(x)=f(x-\eta)\operatorname{ind}_{(0,\infty)}(x-\eta)\), where \(\operatorname{ind}_{(0,\infty)}\) is the indicator function on \((0,\infty)\).
It is well-known that \(S_{\eta}\) is a linear isometry and \((S_{\eta})_{\eta>0}\) is a strongly continuous contraction semigroup. Further, the adjoint \(S_{\eta}^{\dagger}\) is a linear contraction, and \((S_{\eta}^{\dagger})_{\eta>0}\) is a strongly continuous contraction semigroup on \(L^{2}\).
Proposition 1: _Set \(\sigma_{\eta}(K)\coloneqq S_{\eta}^{\dagger}KS_{\eta}\) for \(K\in\mathfrak{J}(L^{2}(0,\infty))\). Then we have:_
1. \(\sigma_{\eta}(K)\in\mathfrak{J}(L^{2}(0,\infty))\) _for all_ \(K\in\mathfrak{J}(L^{2}(0,\infty))\) _with_ \(\|\sigma_{\eta}(K)\|\leqslant\|K\|\)_,_ \(K\mapsto\sigma_{\eta}(K)\) _is linear and_ \(\sigma_{\eta+\xi}(K)=\sigma_{\eta}(\sigma_{\xi}(K))\)_;_
2. \(K=K^{\dagger}\) _implies_ \((\sigma_{\eta}(K))^{\dagger}=\sigma_{\eta}(K)\) _and_ \(K\geqslant 0\) _implies_ \(\sigma_{\eta}(K)\geqslant 0\)_;_
3. \((\sigma_{\eta})_{\eta>0}\) _gives a strongly continuous contraction semigroup on the von Neumann-Schatten ideal_ \(\mathfrak{J}_{p}\) _for_ \(1\leqslant p<\infty\) _and on the space of compact operators. Also_ \(\sigma_{\eta}(K)\to K\) _as_ \(\eta\to 0\) _for such_ \(K\)_;_
4. _Let_ \(\delta\) _be the generator of the semigroup in (iii), so_ \(\sigma_{\eta}=\exp(\eta\,\delta)\)_. Then for continuously differentiable kernels_ \(k=k(y,z)\) _we have,_ \(\delta k(y,z)=(\partial_{y}+\partial_{z})k(y,z)\)_;_
5. _Suppose that_ \(K\) _has a continuous kernel_ \(k=k(y,z)\)_, and that_ \(K\) _is self-adjoint, non-negative and trace class. Then,_ \(\det(\operatorname{id}+\sigma_{\eta}(K))=\det(\operatorname{id}+K\mathrm{Pr}_ {(\eta,\infty)})\) _and,_ \[\llbracket\sigma_{\eta}(K)\rrbracket_{0,0}=k(\eta,\eta)=-\frac{ \operatorname{d}}{\operatorname{d}\eta}\mathrm{tr}\,\sigma_{\eta}(K);\]
6. _Poppe's bracket operation satisfies,_ \(\mathrm{tr}\,\delta K=-\llbracket K\rrbracket_{0,0}\)_._
_Proof_ (i) Follows since \((S_{\eta})\) is a contraction semigroup, while (ii) is straightforward. (iii) The Schatten class gives an operator ideal, so we have \(\|S_{\eta}^{\dagger}KS_{\eta}\|_{\mathfrak{J}_{p}}\leqslant\|K\|_{\mathfrak{ J}_{p}}\) since \(\|S_{\eta}\|_{\mathfrak{J}}=1\). In view of this, we only need to check continuity in the relevant norm. For the Hilbert-Schmidt operators \(\mathfrak{J}_{2}\), we let \(\sigma_{\eta}(K)(y,z)\) be the kernel of \(\sigma_{\eta}(K)\) as an integral operator. Then we have \(\sigma_{\eta}(K)(y,z)=K(y+\eta,z+\eta)\), so by the Hilbert-Schmidt theorem,
\[\|\sigma_{\eta}(K)-K\|_{\mathfrak{J}_{2}}^{2}=\int_{0}^{\infty}\int_{0}^{ \infty}\left\|k(y+\eta,z+\eta)-k(y,z)\right\|^{2}\operatorname{d}\!y\mathrm{d }z,\]
which converges to \(0\) as \(\eta\to 0^{+}\). For the trace class operators \(\mathfrak{J}_{1}\), we observe that the space of trace class operators may be identified with the projective tensor product \(\mathfrak{J}_{1}=L^{2}\hat{\otimes}L^{2}\), so we have a nuclear expansion,
\[k(y,z)=\sum_{j=1}^{\infty}f_{j}(y)g_{j}(z),\]
where \(\sum_{j=1}^{\infty}\|f_{j}\|_{L^{2}}\|g_{j}\|_{L^{2}}=\|K\|_{\mathfrak{J}_{1}}\). Then we have
\[\sigma_{\eta}(K)(y,z)-k(y,z)=\sum_{j=1}^{\infty}\bigl{(}f_{j}(y+\eta)-f_{j}(y)\bigr{)}g_{j}(z+\eta)+\sum_{j=1}^{\infty}f_{j}(y)\bigl{(}g_{j}(z+\eta)-g_{j}(z)\bigr{)},\]
and so,
\[\|\sigma_{\eta}(K)-K\|_{\mathfrak{J}_{1}}\,\leqslant\sum_{j=1}^{\infty}\|S_{\eta}^ {\dagger}(f_{j})-f_{j}\|_{L^{2}}\|g_{j}\|_{L^{2}}+\sum_{j=1}^{\infty}\|f_{j}\| _{L^{2}}\|S_{\eta}(g_{j})-g_{j}\|_{L^{2}},\]
where the right-hand side converges to \(0\) as \(\eta\to 0^{+}\) by dominated convergence.
For \(1<p<\infty\), we observe that the finite-rank operators give a dense linear subspace of \(\mathfrak{J}_{p}\), so we can argue as with the trace class operators. Likewise, the finite-rank operators give a dense linear subspace of the space of compact operators. Hence \((\sigma_{\eta})_{\eta>0}\) gives a strongly continuous contraction semigroup on these spaces. Now \(S_{\eta}^{\dagger}f\to 0\) as \(\eta\to\infty\) for all \(f\in L^{2}(0,\infty)\), so for all finite rank operators \(F\), we have \(\sigma_{\eta}(F)\to 0\) as \(\eta\to\infty\). Then for \(K\in\mathfrak{J}_{p}\) and \(\varepsilon>0\) there exists a finite rank \(F\) such that \(\|K-F\|_{\mathfrak{J}_{p}}<\varepsilon\), so \(\|\sigma_{\eta}(K)\|_{\mathfrak{J}_{p}}\leqslant\|K-F\|_{\mathfrak{J}_{p}}+\|\sigma_{\eta}(F)\|_{\mathfrak{J}_{p}}\) is less than \(2\varepsilon\) for all sufficiently large \(\eta\). (Note, we do not assert that \((\sigma_{\eta})_{\eta>0}\) is strongly continuous on \(\mathfrak{J}\) itself.)
(iv) From the definition of generator, we have,
\[\delta k(y,z)=\left.\frac{\mathrm{d}}{\mathrm{d}\eta}\right|_{\eta=0}\sigma_{ \eta}(K)(y,z)=\left.\frac{\mathrm{d}}{\mathrm{d}\eta}\right|_{\eta=0}k(y+\eta,z+\eta)=\big{(}\partial_{y}+\partial_{z}\big{)}k(y,z).\]
(v) We have, \(\det(\mathrm{id}+\sigma_{\eta}(K))=\det(\mathrm{id}+S_{\eta}^{\dagger}KS_{ \eta})=\det(\mathrm{id}+KS_{\eta}S_{\eta}^{\dagger})=\det(\mathrm{id}+K \mathrm{Pr}_{(\eta,\infty)})\). By Mercer's formula we have, \(\operatorname{tr}\sigma_{\eta}(K)=\int_{0}^{\infty}K(y+\eta,y+\eta)\,\mathrm{ d}y\), and we can differentiate this formula using the fundamental theorem of calculus.
(vi) The result (v) may be formulated in terms of the generator without explicit mention of the semigroup. Let \(\mathcal{D}^{1}\coloneqq\{\phi\in L^{2}((0,\infty);\mathbb{C})\colon\phi^{ \prime}\in L^{2}((0,\infty);\mathbb{C})\}\), and recall from Hille and Phillips [60, p. 535] that \(\mathcal{D}^{1}\) is the domain of the generator of \((S_{\eta}^{\dagger})_{\eta>0}\). By Plancherel's theorem, we have \(\mathcal{D}^{1}\subset L^{\infty}\), so \(\mathcal{D}^{1}\) is an algebra under pointwise multiplication of functions; hence there is a map \(\mu\colon\mathcal{D}^{1}\otimes\mathcal{D}^{1}\to\mathcal{D}^{1}\) given by \(\phi(y)\psi(z)\mapsto\phi(y)\psi(y)\). There is also a natural inclusion \(\mathcal{D}^{1}\otimes\mathcal{D}^{1}\to L^{2}\otimes L^{2}=\mathfrak{J}_{1}(L ^{2})\), and the trace satisfies \(\operatorname{tr}\left(K\right)=\int_{0}^{\infty}\mu(K)(x)\,\mathrm{d}x\). We have \(\delta\colon\mathcal{D}^{1}\otimes\mathcal{D}^{1}\to L^{2}\otimes L^{2}\colon \phi\otimes\psi\mapsto\phi^{\prime}\otimes\psi+\phi\otimes\psi^{\prime}\); hence for \(k=\sum_{j=1}^{\infty}\phi_{j}(y)\psi_{j}(z)\) we have,
\[\operatorname{tr}\delta K=\sum_{j=1}^{\infty}\int_{0}^{\infty}\big{(}\phi_{j}^{\prime}(y)\psi_{j}(y)+\phi_{j}(y)\psi_{j}^{\prime}(y)\big{)}\,\mathrm{d}y=-\sum_{j=1}^{\infty}\phi_{j}(0)\psi_{j}(0),\]
so we have the required expression for Poppe's bracket operation.
Suppose that \(\phi\in L^{2}((0,\infty);\mathbb{M}_{m\times m}(\mathbb{C}))\). Then we introduce \(\phi_{(x)}(\eta)\coloneqq\phi(2x+\eta)\) and the Hankel operator \(\Gamma_{\phi_{(x)}}:L^{2}((0,\infty);\mathbb{C}^{m\times 1})\to L^{2}((0, \infty);\mathbb{C}^{m\times 1})\) by,
\[\big{(}\Gamma_{\phi_{(x)}}f\big{)}(y)=\int_{0}^{\infty}\phi(y+z+2x)f(z)\,\mathrm{d}z\]
for \(f\in L^{2}((0,\infty);\mathbb{C}^{m\times 1})\). Suppose \(\int_{0}^{\infty}y\|\phi(y)\|^{2}\,\mathrm{d}y<\infty\) and \(\int_{0}^{\infty}y\|\psi(y)\|^{2}\,\mathrm{d}y<\infty\). Then \(\Gamma_{\phi}\) is a Hilbert-Schmidt operator, and \(\Gamma_{\phi}\Gamma_{\psi}\) is trace class with,
\[\operatorname{tr}\big{(}\Gamma_{\phi}\Gamma_{\psi}\big{)}=\int_{0}^{\infty}\int_{0}^{\infty}\phi(y+z)\psi(y+z)\,\mathrm{d}y\,\mathrm{d}z=\int_{0}^{\infty}y\phi(y)\psi(y)\,\mathrm{d}y.\]
A bounded linear operator \(\Gamma\) on \(L^{2}(0,\infty)\) is Hankel if and only if \(S_{\eta}^{\dagger}\Gamma=\Gamma S_{\eta}\) for all \(\eta>0\). This may be interpreted as \(\partial_{x}\Gamma=-\Gamma\partial_{y}\) when we consider operators on \(C_{c}^{\infty}(0,\infty)\). In the context of Hankel products, this leads to the following.
Proposition 2 (Bracket identities for Hankel operators): _We have the following:_
1. _Let_ \(\Gamma_{\phi}\) _be the Hankel operator with kernel_ \(\phi(y+z)\)_. Then_ \(\sigma_{\eta}(\Gamma_{\phi})\) _has kernel_ \(\phi(y+z+2\eta)\)_, so_ \(\sigma_{\eta}(\Gamma_{\phi})=\Gamma_{\phi_{(\eta)}}\)_;_
2. _Let_ \(\phi,\psi\in\mathbb{M}_{m\times m}(C_{c}^{\infty}(0,\infty))\) _be functions as above. Then_ \(\sigma_{2\eta}(\Gamma_{\phi}\Gamma_{\psi})=\Gamma_{\phi_{(\eta)}}\Gamma_{\psi_ {(\eta)}}\)_, and,_ \[\partial_{\eta}\big{(}\Gamma_{\phi_{(\eta)}}\Gamma_{\psi_{(\eta)}}\big{)},\] _is a bounded linear operator of finite rank with rank less than or equal to_ \(m^{2}\)_;_
3. _Suppose_ \(m=1\)_, and_ \(\phi,\psi\in C_{c}^{\infty}(0,\infty)\)_. Then_ \(\delta(\Gamma_{\phi}\Gamma_{\psi})\) _has rank one;_
4. _Conversely, suppose that_ \(K\) _is in the domain of_ \(\delta\) _and_ \(\delta(K)\) _has finite rank. Then_ \(K=\Gamma_{\Phi}^{\top}\Gamma_{\Psi}\)_;_
5. _Let_ \(\Gamma_{x}=\Gamma_{\phi_{(x)}}\) _and_ \(\Gamma_{x}^{\prime}=\partial_{x}\Gamma_{x}\)_. Let_ \(\Theta\) _be holomorphic on an open neighbourhood of the spectrum of_ \(\Gamma_{x}\) _for all real_ \(x\)_. Then we have,_ \[[\![\Gamma_{x}\Theta^{\prime}(\Gamma_{x})]\!]_{0,0}=-\partial_{x}\mathrm{tr} \,\big{(}\Theta(\Gamma_{x})\big{)}.\]
Proof: Item by item we observe the following. (i) This is straightforward, and explains the notation. (ii) The Hankel operators \(\Gamma_{\phi_{(\eta)}}\) and \(\Gamma_{\psi_{(\eta)}}\) are Hilbert-Schmidt, so their product is trace class. Then we differentiate the kernel and obtain,
\[\partial_{\eta}\int_{0}^{\infty}\phi(y+\xi+2\eta)\psi(\xi+z+2\eta)\,\mathrm{d} \xi=-2\phi(y+2\eta)\psi(z+2\eta),\]
which gives an element of the vector space \(\mathbb{M}_{m\times m}(\mathbb{C})\) of dimension \(m^{2}\). (iii) We have for \(m=1\), \(\delta(\Gamma_{\phi}\Gamma_{\psi})(y,z)=-\phi(y)\psi(z)\). (iv) By hypothesis, there exist \(\phi_{j},\psi_{j}\in L^{2}(0,\infty)\) for \(j=1,\ldots,n\) such that \(\delta K(y,z)=-\sum_{j=1}^{n}\phi_{j}(y)\psi_{j}(z)\). Then we introduce the vector functions \(\Phi=(\phi_{1},\ldots,\phi_{n})^{\mathrm{T}}\) and \(\Psi=(\psi_{1},\ldots,\psi_{n})^{\mathrm{T}}\) so that \(\Gamma_{\Phi}^{\top}\Gamma_{\Psi}=\sum_{j=1}^{n}\Gamma_{\phi_{j}}\Gamma_{\psi_{j}}\); and also \(\delta(\Gamma_{\Phi}^{\top}\Gamma_{\Psi})=-\sum_{j=1}^{n}\phi_{j}(y)\psi_{j}(z)\). We consider \(W=K-\Gamma_{\Phi}^{\top}\Gamma_{\Psi}\) which belongs to the domain of \(\delta\) with \(\delta(W)=0\), hence, \(\sigma_{\eta}(W)=W+\int_{0}^{\eta}\sigma_{\xi}(\delta W)\,\mathrm{d}\xi=W\), where \(\sigma_{\eta}(W)\to 0\) as \(\eta\to\infty\); thus \(W=0\) and \(K=\Gamma_{\Phi}^{\top}\Gamma_{\Psi}\) is a Hankel product. (v) For even powers we have,
\[2\partial_{x}\mathrm{tr}\,\Gamma_{x}^{2k}=2\sum_{j=0}^{k-1}\mathrm{tr}\,\big{(} \Gamma_{x}^{2j}\partial_{x}(\Gamma_{x}^{2})\Gamma_{x}^{2(k-j-1)}\big{)}=2k \,\mathrm{tr}\,\big{(}\Gamma_{x}^{2k-2}\partial_{x}\Gamma_{x}^{2}\big{)},\]
where the final operator has finite rank. For odd powers, since,
\[2\partial_{x}\mathrm{tr}\,\big{(}\Gamma_{x}^{2k+1}\big{)}= \mathrm{tr}\,\big{(}\Gamma_{x}^{\prime}\Gamma_{x}^{2k}+\Gamma_{x}\Gamma_{x}^{\prime}\Gamma_{x}^{2k-1}+\cdots+\Gamma_{x}^{2k}\Gamma_{x}^{\prime}\big{)}\] \[+\mathrm{tr}\,\big{(}\Gamma_{x}^{\prime}\Gamma_{x}^{2k}+\Gamma_{x}\Gamma_{x}^{\prime}\Gamma_{x}^{2k-1}+\cdots+\Gamma_{x}^{2k}\Gamma_{x}^{\prime}\big{)},\]
we then move the terms in the second list one step to the left, except for the first, which we move to the end. Thus we obtain,
\[\mathrm{tr}\,\big{(}(\Gamma_{x}^{\prime}\Gamma_{x}+\Gamma_{x} \Gamma_{x}^{\prime})\Gamma_{x}^{2k-1}+\Gamma_{x}(\Gamma_{x}^{\prime}\Gamma_{x} +\Gamma_{x}\Gamma_{x}^{\prime})\Gamma_{x}^{2k-2}+\cdots+\Gamma_{x}^{2k-1}( \Gamma_{x}^{\prime}\Gamma_{x}+\Gamma_{x}\Gamma_{x}^{\prime})\big{)}\] \[=(2k+1)\,\mathrm{tr}\,\big{(}\Gamma_{x}^{2k-1}\partial_{x}\Gamma _{x}^{2}\big{)},\]
where again the final operator has finite rank.
Suppose that the spectrum of \(\Gamma_{x}\) is contained in \(D(0,r)\) for some \(r>0\). Then for all \(s\) such that \(|s|>r\), we have a convergent power series \((s-\gamma)^{-1}=\sum_{j=0}^{\infty}\gamma^{j}/s^{j+1}\) for all \(\gamma\) in the spectrum of \(\Gamma_{x}\) and,
\[2\,\partial_{x}\mathrm{tr}\left((s\cdot\mathrm{id}-\Gamma_{x})^{-1}\right)= \sum_{j=1}^{\infty}\frac{2}{s^{j+1}}\partial_{x}\mathrm{tr}\left(\Gamma_{x}^{ j}\right).\]
Then for the first term in the series, we have,
\[2\,\mathrm{tr}\left(\Gamma_{x}^{\prime}\right)=2\,\partial_{x}\int_{0}^{\infty }\phi(2y+2x)\,\mathrm{d}y=-2\phi(2x)=-2[\![\Gamma_{x}]\!]_{0,0},\]
and for the remaining terms in the series, \(\mathrm{tr}\left(\Gamma_{x}^{j-2}\partial_{x}\Gamma_{x}^{2}\right)\) equals,
\[-2\int_{0}^{\infty}\cdots\int_{0}^{\infty}\phi_{(x)}(y_{1})\phi_{(x)}(y_{1}+y_{2})\ldots\phi_{(x)}(y_{j-2}+y_{j-1})\phi_{(x)}(y_{j-1})\,\mathrm{d}y_{1}\ldots\mathrm{d}y_{j-1},\]
which equals \(-2[\![\Gamma_{x}^{j}]\!]_{0,0}\). That the result \(-\partial_{x}\mathrm{tr}\left((s\cdot\mathrm{id}-\Gamma_{x})^{-1}\right)=[\![\Gamma_{x}(s\cdot\mathrm{id}-\Gamma_{x})^{-2}]\!]_{0,0}\) holds for all \(s\) such that \(|s|>r\) follows when we multiply through by \(-1/2\) and sum over \(j\). By analytic continuation, we have the same identity for all \(s\) in the unbounded component of the complement of the spectrum of \(\Gamma_{x}\) in the complex plane.
Now let \(\Theta\) be holomorphic on an open neighbourhood of the spectrum of \(\Gamma_{x}\) for all \(x>0\). Note that \(\|\Gamma_{x}\|\to 0\) as \(x\to\infty\), so this uniformity is a mild restriction. Then there exists a contour \(C\) that winds round the spectrum of \(\Gamma_{x}\) once in the positive sense, so by Cauchy's integral formula \(\Theta(\Gamma_{x})=(2\pi i)^{-1}\int_{C}(\zeta\cdot\mathrm{id}-\Gamma_{x})^{-1}\Theta(\zeta)\,\mathrm{d}\zeta\), hence,
\[-\partial_{x}\mathrm{tr}\left(\Theta(\Gamma_{x})\right) = -\frac{1}{2\pi i}\int_{C}\partial_{x}\mathrm{tr}\left((\zeta\cdot\mathrm{id}-\Gamma_{x})^{-1}\right)\!\Theta(\zeta)\,\mathrm{d}\zeta\] \[= \frac{1}{2\pi i}\int_{C}[\![\Gamma_{x}(\zeta\cdot\mathrm{id}-\Gamma_{x})^{-2}]\!]_{0,0}\Theta(\zeta)\,\mathrm{d}\zeta,\]
which integrates to \([\![\Gamma_{x}\Theta^{\prime}(\Gamma_{x})]\!]_{0,0}\). The proof is complete.
Example 3: For real-valued \(\phi\), and for \(\Theta(\zeta)=\zeta^{2}\), the basic formulae are,
\[\|\Gamma_{x}\|_{\mathfrak{J}_{2}}^{2}=\mathrm{tr}\left(\Gamma_{x}^{2}\right)=\int_{0}^{\infty}\!\!y\phi_{(x)}(y)^{2}\,\mathrm{d}y\quad\text{and}\quad[\![\Gamma_{x}^{2}]\!]_{0,0}=-\tfrac{1}{2}\partial_{x}\mathrm{tr}\left(\Gamma_{x}^{2}\right)=\!\int_{0}^{\infty}\!\!\phi_{(x)}(y)^{2}\,\mathrm{d}y.\]
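Both formulae are readily verified symbolically; the following sympy sketch is ours, for the sample choice \(\phi(y)=\mathrm{e}^{-y}\), so that \(\phi_{(x)}(y)=\mathrm{e}^{-(y+2x)}\).

```python
# Our sympy check of the Example 3 formulae for phi(y) = e^{-y}.
import sympy as sp

x, y, z, xi = sp.symbols('x y z xi', positive=True)
phix = lambda u: sp.exp(-(u + 2*x))    # phi_(x)(u) = phi(u + 2x)

kernel_sq = sp.integrate(phix(y + xi)*phix(xi + z), (xi, 0, sp.oo))
tr_sq = sp.integrate(kernel_sq.subs(z, y), (y, 0, sp.oo))   # tr(Gamma_x^2)
assert sp.simplify(tr_sq - sp.integrate(y*phix(y)**2, (y, 0, sp.oo))) == 0

bracket = kernel_sq.subs({y: 0, z: 0})                 # [[Gamma_x^2]]_{0,0}
assert sp.simplify(bracket + sp.Rational(1, 2)*sp.diff(tr_sq, x)) == 0
```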
## 3 Poppe algebra
We prescribe the kernel algebra generated by the quantities \([\![V]\!]\) and \([\![V^{\dagger}]\!]\) and their derivatives, based on the Poppe product in Lemma 1, as well as a subalgebra generated by the quantity \([\![V-V^{\dagger}]\!]\) and its derivatives. We also outline abstract versions of these algebras to aid computations. We nominate the abstract algebra as the _Poppe algebra_ and the corresponding subalgebra the _skew-Poppe algebra_. The Poppe algebra outlined herein represents a generalisation of the Poppe algebra used in Malham [69] to derive the non-commutative Korteweg-de Vries hierarchy. We begin with some preliminary identities. Given a Hankel Hilbert-Schmidt operator \(P=P(x,t)\) on \(\mathbb{V}\), depending on the parameters \(x\in\mathbb{R}\) and \(t\geqslant 0\), we set,
\[V\coloneqq(\mathrm{id}-\mathrm{i}P)^{-1}.\]
Recall that \(P\) is self-adjoint, so that \(P^{\dagger}=P\) and thus \((\mathrm{i}P)^{\dagger}=-\mathrm{i}P\).
Lemma 4 (Operator identities): _Given a Hankel operator \(P=P(x,t)\) which is Hilbert-Schmidt valued, and the definition \(V\coloneqq(\mathrm{id}-\mathrm{i}P)^{-1}\), with \(V^{\dagger}=(\mathrm{id}+\mathrm{i}P)^{-1}\) the adjoint operator to \(V\), we observe that,_
\[V\equiv\mathrm{id}+(\mathrm{i}P)V\equiv\mathrm{id}+V(\mathrm{i}P)\quad\text{ and}\quad V^{\dagger}\equiv\mathrm{id}+(\mathrm{i}P)^{\dagger}V^{\dagger}\equiv \mathrm{id}+V^{\dagger}(\mathrm{i}P)^{\dagger},\]
_and further that, \(V-V^{\dagger}\equiv 2\,V(\mathrm{i}P)V^{\dagger}\)._
Proof: All these identities follow directly from the definitions of \(V\) and \(V^{\dagger}\) and partial fraction identities.
Definition 4 (Fredholm Grassmannian flow): Given a Hankel Hilbert-Schmidt operator \(P=P(x,t)\) on \(\mathbb{V}\), depending on the parameters \(x\in\mathbb{R}\) and \(t\geqslant 0\), we define the operator \(G\) by,
\[G\coloneqq V-V^{\dagger}.\]
Remark 4: Using Lemma 4, we can write \(V-V^{\dagger}=2\,V(\mathrm{i}P)V^{\dagger}=2\,(\mathrm{i}P)U\), where \(U\coloneqq\big{(}\mathrm{id}+P^{2}\big{)}^{-1}\). This is possible because \(V\) and \(V^{\dagger}\) can be expressed as power series in \(P\) with scalar coefficients. Hence the order of the operators in \(V(\mathrm{i}P)V^{\dagger}\) does not matter. Thus, as outlined in detail in Doikou _et al._ [31, Sec. 2], the flow of \(G\) represents a Fredholm Grassmannian flow.
It is now helpful to define the _signature character_, given previously in Malham [69] and Doikou _et al._ [31]. Let \(\mathbb{N}^{*}\) denote the free monoid of words on \(\mathbb{N}\), i.e. the set of all possible words of the form \(a_{1}a_{2}\cdots a_{k}\) we can construct from letters \(a_{1},a_{2},\ldots,a_{k}\in\mathbb{N}\).
Definition 5 (Signature character): Suppose \(a_{1}a_{2}\cdots a_{n}\in\mathbb{N}^{*}\). The _signature character_\(\chi\colon\mathbb{N}^{*}\to\mathbb{Q}\) of any such word is given by the product of Leibniz coefficients,
\[\chi\colon a_{1}a_{2}\cdots a_{n}\mapsto\prod_{k=1}^{n}\binom{a_{k}+\cdots+a_ {n}}{a_{k}}\,.\]
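The signature character is straightforward to compute. The following Python helpers are ours and are reused in the sketches below; the composition generator anticipates the set \(\mathcal{C}(n)\) introduced next.

```python
# Our helpers: the signature character chi of Definition 5, and the
# compositions C(n) over which the signature expansions below are summed.
from math import comb

def chi(word):
    """chi(a_1 ... a_n) = prod_k binom(a_k + ... + a_n, a_k)."""
    out = 1
    for k in range(len(word)):
        out *= comb(sum(word[k:]), word[k])
    return out

def compositions(n):
    """Yield all compositions of n as tuples of positive integers."""
    if n == 0:
        yield ()
        return
    for a in range(1, n + 1):
        for tail in compositions(n - a):
            yield (a,) + tail

assert chi((2,)) == 1 and chi((1, 1)) == 2 and chi((1, 1, 1)) == 6
```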
Let \(\mathcal{C}(n)\) denote the set of all compositions of \(n\in\mathbb{N}\). The following result is equivalent to that in Malham [69, Lemma 2] and Doikou _et al._ [31, Lemma 8], where detailed proofs can be found. For any integer \(k\), we set \((\mathrm{i}P)_{k}\coloneqq\partial^{k}(\mathrm{i}P)\), \(V_{k}\coloneqq\partial^{k}V\) and \(V_{k}^{\dagger}\coloneqq\partial^{k}V^{\dagger}\). For example, if \(k=2\), then \(V_{2}=\partial^{2}V\), while if \(k=-1\), then \(V_{-1}=\partial^{-1}V\).
Lemma 5 (Kernel signature expansion): _Given a Hankel operator \(P=P(x,t)\) and that \(V\coloneqq(\mathrm{id}-\mathrm{i}P)^{-1}\) with \(V^{\dagger}=(\mathrm{id}+\mathrm{i}P)^{-1}\), we observe that \(\partial V\equiv V(\mathrm{i}P)_{1}V\) and \(\partial V^{\dagger}\equiv V^{\dagger}(\mathrm{i}P)_{1}^{\dagger}V^{\dagger}\). With the sum over all compositions \(a_{1}\cdots a_{k}\in\mathcal{C}(n)\), we have,_
\[V_{n}=\sum\chi\big{(}a_{1}\cdots a_{k}\big{)}\,V(\mathrm{i}P)_{a_{1}}V\cdots V(\mathrm{i}P)_{a_{k}}V,\]
_with the corresponding generalisation for \(V_{n}^{\dagger}\). In particular we have,_
\[V_{n}-V_{n}^{\dagger}=\sum\chi\big{(}a_{1}\cdots a_{k}\big{)}\,\big{(}V(\mathrm{i}P)_{a_{1}}V\cdots V(\mathrm{i}P)_{a_{k}}V-V^{\dagger}(\mathrm{i}P)_{a_{1}}^{\dagger}V^{\dagger}\cdots V^{\dagger}(\mathrm{i}P)_{a_{k}}^{\dagger}V^{\dagger}\big{)}.\]
We now construct the algebra generated by \(\llbracket V-V^{\dagger}\rrbracket\) and its derivatives. For any Hilbert-Schmidt operator \(W\), we set,
\[[W]\coloneqq\llbracket W-W^{\dagger}\rrbracket\qquad\text{and}\qquad\{W\}\coloneqq\llbracket W+W^{\dagger}\rrbracket.\]
In other words the bracket '\([\,\cdot\,]\)' generates the kernel of the difference between its operator argument and the corresponding adjoint. It is the kernel of the skew-symmetric part of its operator argument. It is not a commutator. Thus if \(v=v(y,z;x,t)\) is the matrix-valued kernel corresponding to \(V\) which depends on the parameters \(x\) and \(t\), then \([V]=v(y,z;x,t)-v^{\dagger}(z,y;x,t)\), where now \(v^{\dagger}\) is the complex conjugate transpose of the matrix \(v\). Analogously, \(\{\,\cdot\,\}\) generates the kernel of the symmetric part of its operator argument. From Lemma 5, we observe that,
\[[V_{n}]=\sum\chi\big{(}a_{1}\cdots a_{k}\big{)}\,\big{[}V(\mathrm{i}P)_{a_{1}}V\cdots V(\mathrm{i}P)_{a_{k}}V\big{]}.\]
The kernel monomial algebra is generated by the monomials \(\big{[}V(\mathrm{i}P)_{a_{1}}V\cdots V(\mathrm{i}P)_{a_{k}}V\big{]}\), including monomials of this form with one or more of the \(V\)'s shown being replaced by \(V^{\dagger}\). The Poppe product from Lemma 1 generates closed-form identities for products of such monomials. In particular, we have the following.
Lemma 6 (Poppe kernel product identities): _For arbitrary Hilbert-Schmidt operators \(F\) and \(F^{\prime}\) and a Hankel Hilbert-Schmidt operator \(P\) with parameter \(x\) and a smooth kernel, we have the following,_
\[\llbracket F(\mathrm{i}P)_{a}V\rrbracket\llbracket V(\mathrm{i}P)_{b}F^{\prime}\rrbracket =\llbracket F(\mathrm{i}P)_{a+1}V(\mathrm{i}P)_{b}F^{\prime}\rrbracket+\llbracket F(\mathrm{i}P)_{a}V(\mathrm{i}P)_{b+1}F^{\prime}\rrbracket\] \[\quad+2\,\llbracket F(\mathrm{i}P)_{a}V(\mathrm{i}P)_{1}V(\mathrm{i}P)_{b}F^{\prime}\rrbracket,\] \[\llbracket F(\mathrm{i}P)_{a}V^{\dagger}\rrbracket\llbracket V^{\dagger}(\mathrm{i}P)_{b}F^{\prime}\rrbracket =\llbracket F(\mathrm{i}P)_{a+1}V^{\dagger}(\mathrm{i}P)_{b}F^{\prime}\rrbracket+\llbracket F(\mathrm{i}P)_{a}V^{\dagger}(\mathrm{i}P)_{b+1}F^{\prime}\rrbracket\] \[\quad+2\,\llbracket F(\mathrm{i}P)_{a}V^{\dagger}(\mathrm{i}P)_{1}^{\dagger}V^{\dagger}(\mathrm{i}P)_{b}F^{\prime}\rrbracket,\] \[\llbracket F(\mathrm{i}P)_{a}V^{\dagger}\rrbracket\llbracket V(\mathrm{i}P)_{b}F^{\prime}\rrbracket =\llbracket F(\mathrm{i}P)_{a+1}V(\mathrm{i}P)_{b}F^{\prime}\rrbracket+\llbracket F(\mathrm{i}P)_{a}V^{\dagger}(\mathrm{i}P)_{b+1}F^{\prime}\rrbracket,\] \[\llbracket F(\mathrm{i}P)_{a}V\rrbracket\llbracket V^{\dagger}(\mathrm{i}P)_{b}F^{\prime}\rrbracket =\llbracket F(\mathrm{i}P)_{a+1}V^{\dagger}(\mathrm{i}P)_{b}F^{\prime}\rrbracket+\llbracket F(\mathrm{i}P)_{a}V(\mathrm{i}P)_{b+1}F^{\prime}\rrbracket.\]
Proof: The results stated are established straightforwardly. For example, using the identities in Lemma 4 and the basic Poppe product rule, we observe,
\[\llbracket F(\mathrm{i}P)_{a}V\rrbracket\llbracket V(\mathrm{i}P) _{b}F^{\prime}\rrbracket =\llbracket F(\mathrm{i}P)_{a}+F(\mathrm{i}P)_{a}V(\mathrm{i}P) \rrbracket\llbracket(\mathrm{i}P)_{b}F^{\prime}+(\mathrm{i}P)V(\mathrm{i}P) _{b}F^{\prime}\rrbracket\] \[=\llbracket F(\mathrm{i}P)_{a+1}(\mathrm{i}P)_{b}F^{\prime} \rrbracket+\llbracket F(\mathrm{i}P)_{a}(\mathrm{i}P)_{b+1}F^{\prime}\rrbracket\] \[\quad+\llbracket F(\mathrm{i}P)_{a+1}(\mathrm{i}P)V(\mathrm{i}P )_{b}F^{\prime}\rrbracket+\llbracket F(\mathrm{i}P)_{a}(\mathrm{i}P)_{1}V( \mathrm{i}P)_{b}F^{\prime}\rrbracket\] \[\quad+\llbracket F(\mathrm{i}P)_{a}V(\mathrm{i}P)_{1}(\mathrm{i} P)_{b}F^{\prime}\rrbracket+\llbracket F(\mathrm{i}P)_{a}V(\mathrm{i}P)(\mathrm{i}P)_{b+1}F^{ \prime}\rrbracket\] \[\quad+\llbracket F(\mathrm{i}P)_{a}V(\mathrm{i}P)_{1}(\mathrm{i} P)V(\mathrm{i}P)_{b}F^{\prime}\rrbracket\] \[\quad+\llbracket F(\mathrm{i}P)_{a}V(\mathrm{i}P)(\mathrm{i}P) _{1}V(\mathrm{i}P)_{b}F^{\prime}\rrbracket.\]
Combining terms using the identities in Lemma 4 generates the first result claimed. And so forth.
Remark 5 (Algebra of kernel monomials: abstract encoding): As mentioned, the set of all kernel monomials of the form \(\llbracket V(\mathrm{i}P)_{a_{1}}V(\mathrm{i}P)_{a_{2}}V\cdots V(\mathrm{i}P) _{a_{k}}V\rrbracket\), where any of the \(V\)'s shown may be replaced by \(V^{\dagger}\), with the Poppe kernel product defined in Lemma 6, form a closed algebra of such monomials. We assume here that all the
derivatives of the \(P\) operator exist and are Hilbert-Schmidt valued. At this stage it is useful to consider an abstract encoding of this kernel monomial algebra, equipped with the Poppe kernel product. The abstract algebra is constructed by simply stripping the 'i\(P\)' and '\(V\)' labels from the kernel monomials, and respectively, replacing them by the composition components \(a_{1}a_{2}\cdots a_{k}\), together with a binary encoding of whether an intervening operator is a \(V\) or \(V^{\dagger}\), i.e. we replace,
\[[\![V(\mathrm{i}P)_{a_{1}}V(\mathrm{i}P)_{a_{2}}V\cdots V(\mathrm{i}P)_{a_{k}}V ]\!]\to\mathbf{0}a_{1}\mathbf{0}a_{2}\mathbf{0}\cdots\mathbf{0}a_{k}\mathbf{0},\]
where any of the \(\mathbf{0}\)'s shown, corresponding to the \(V\) operator, may be replaced by \(\mathbf{0}^{\dagger}\) in the corresponding position that a \(V^{\dagger}\) operator is present in the monomial on the left. In essence, the Poppe kernel product defined in Lemma 6 involves operations on these stripped down components only, i.e. operations on the forms \(\mathbf{0}a_{1}\mathbf{0}a_{2}\mathbf{0}\cdots\mathbf{0}a_{k}\mathbf{0}\), where again some \(\mathbf{0}\)'s may be replaced by \(\mathbf{0}^{\dagger}\). Since we mirror the Poppe product in the abstract setting in Definition 6, we know that the kernel monomial algebra and our abstract algebra encoding just below, are isomorphic.
Let us now introduce our abstract encoding for the algebra of operator kernel monomials equipped with the Poppe product in Lemma 1, just mentioned. Given a word \(w=a_{1}a_{2}\cdots a_{k}\) generated using letters \(a_{1}\), \(a_{2}\), \(\ldots\), \(a_{k}\) from \(\mathbb{Z}\), and a word \(\boldsymbol{\varphi}=\boldsymbol{\theta}_{1}\boldsymbol{\theta}_{2}\cdots \boldsymbol{\theta}_{k+1}\) generated using the letters \(\boldsymbol{\theta}_{1}\), \(\boldsymbol{\theta}_{2}\), \(\ldots\), \(\boldsymbol{\theta}_{k+1}\) chosen from the binary set \(\{\mathbf{0},\mathbf{0}^{\dagger}\}\), let \(w\times\boldsymbol{\varphi}\) denote the corresponding word,
\[w\times\boldsymbol{\varphi}=\boldsymbol{\theta}_{1}a_{1}\boldsymbol{\theta}_{2 }a_{2}\boldsymbol{\theta}_{3}\cdots\boldsymbol{\theta}_{k}a_{k}\boldsymbol{ \theta}_{k+1},\]
in the free monoid \((\mathbb{Z}_{\mathbf{0}})^{*}\) where \(\mathbb{Z}_{\mathbf{0}}\coloneqq\mathbb{Z}\cup\{\mathbf{0},\mathbf{0}^{ \dagger}\}\). For such words, there is a single letter from the binary set \(\{\mathbf{0},\mathbf{0}^{\dagger}\}\) sandwiched between each of the letters from \(\mathbb{Z}\), as well as one at each end. Let \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) denote the non-commutative polynomial algebra over \(\mathbb{C}\) generated by words from \((\mathbb{Z}_{\mathbf{0}})^{*}\), endowed with the following Poppe product.
Definition 6 (Poppe product): Consider four words from \((\mathbb{Z}_{\mathbf{0}})^{*}\) of the form \(ua\mathbf{0}\), \(ua\mathbf{0}^{\dagger}\), \(\mathbf{0}bv\) and \(\mathbf{0}^{\dagger}bv\), where \(u\) and \(v\) are any subwords from \((\mathbb{Z}_{\mathbf{0}})^{*}\) and \(a,b\in\mathbb{Z}\). We define the _Poppe product_ from \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\times\mathbb{C}\langle\mathbb{ Z}_{\mathbf{0}}\rangle\) to \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) of these words to be,
\[(ua\mathbf{0})(\mathbf{0}bv) =u(a+1)\mathbf{0}bv+ua\mathbf{0}(b+1)v+2\cdot ua\mathbf{0}\mathbf{1}\mathbf{0}bv,\] \[(ua\mathbf{0}^{\dagger})(\mathbf{0}^{\dagger}bv) =u(a+1)\mathbf{0}^{\dagger}bv+ua\mathbf{0}^{\dagger}(b+1)v+2\cdot ua\mathbf{0}^{\dagger}\mathbf{1}^{\dagger}\mathbf{0}^{\dagger}bv,\] \[(ua\mathbf{0}^{\dagger})(\mathbf{0}bv) =u(a+1)\mathbf{0}bv+ua\mathbf{0}^{\dagger}(b+1)v,\] \[(ua\mathbf{0})(\mathbf{0}^{\dagger}bv) =u(a+1)\mathbf{0}^{\dagger}bv+ua\mathbf{0}(b+1)v.\]
Let \(\nu\) denote the empty word in \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\). Then for any word \(w\times\boldsymbol{\varphi}\in\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) we have \(\nu\left(w\times\boldsymbol{\varphi}\right)=\left(w\times\boldsymbol{\varphi }\right)\nu=w\times\boldsymbol{\varphi}\). Let \(\mathcal{C}\coloneqq\cup_{n\geqslant 0}\mathcal{C}(n)\) denote the set of all compositions.
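The Poppe product is easily mechanised. The following sketch is ours and, for simplicity, is restricted to words all of whose sandwiching letters are the plain \(\mathbf{0}\), i.e. it implements the first product rule only; the daggered rules can be encoded analogously.

```python
# Our sketch of the first Poppe product rule of Definition 6, on words
# encoded as tuples alternating '0' and integers, e.g. ('0', 1, '0').
from collections import Counter

def poppe(left, right):
    """(ua0)(0bv) = u(a+1)0bv + ua0(b+1)v + 2.ua010bv, as a Counter."""
    u, a = left[:-2], left[-2]
    b, v = right[1], right[2:]
    out = Counter()
    out[u + (a + 1, '0', b) + v] += 1
    out[u + (a, '0', b + 1) + v] += 1
    out[u + (a, '0', 1, '0', b) + v] += 2
    return out

# Example: (010)(010) = 02010 + 01020 + 2.0101010.
assert poppe(('0', 1, '0'), ('0', 1, '0')) == Counter({
    ('0', 2, '0', 1, '0'): 1,
    ('0', 1, '0', 2, '0'): 1,
    ('0', 1, '0', 1, '0', 1, '0'): 2})
```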
Definition 7 (Signature expansion): For any \(n\in\mathbb{N}\cup\{0\}\), we define the following linear _signature expansions_\(\boldsymbol{n}\in\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\),
\[\boldsymbol{n}\coloneqq\sum_{a_{1}a_{2}\cdots a_{k}\in\mathcal{C}(n)}\chi(a_{1 }a_{2}\cdots a_{k})\,\cdot\mathbf{0}a_{1}\mathbf{0}a_{2}\mathbf{0}\cdots\mathbf{ 0}a_{k}\mathbf{0},\]
where the sum is over all possible compositions \(a_{1}a_{2}\cdots a_{k}\), with \(k\geqslant 1\) parts, of \(n\).
For example, we note that \(\mathbf{1}=\chi(1)\cdot\mathbf{0}1\mathbf{0}\) and \(\mathbf{2}=\chi(2)\cdot\mathbf{0}2\mathbf{0}+\chi(11)\cdot\mathbf{0}1\mathbf{0}1\mathbf{0}\). Further note, for the case \(n=0\), the signature expansion simply corresponds to the letter \(\mathbf{0}\) from the binary set \(\{\mathbf{0},\mathbf{0}^{\dagger}\}\). Equivalently we can write the relation in Definition 7 for the case \(n=0\) as \(\mathbf{0}=\chi(0)\cdot\mathbf{0}\). Naturally by convention, we take \(\chi(0)=1\). Let us also remark on the following basic identities in \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\), which follow from Lemma 4.
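Using the helpers given after Definition 5, the signature expansions themselves can be generated mechanically; the sketch below is ours, and reproduces the expansion of \(\mathbf{2}\) just stated.

```python
# Our sketch: build the signature expansion of Definition 7 as a Counter,
# reusing chi and compositions from the earlier helper sketch.
from collections import Counter

def signature_expansion(n):
    out = Counter()
    for comp in compositions(n):
        word = ('0',)
        for a in comp:
            word += (a, '0')
        out[word] += chi(comp)
    return out

assert signature_expansion(2) == Counter({('0', 2, '0'): 1,
                                          ('0', 1, '0', 1, '0'): 2})
```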
Lemma 7 (Algebraic identities): _We have the following basic relations in \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\),_
\[\mathbf{0}\equiv\nu+\mathbf{0}0\equiv\nu+0\mathbf{0},\quad\mathbf{0}^{\dagger}\equiv\nu+\mathbf{0}^{\dagger}0^{\dagger}\equiv\nu+0^{\dagger}\mathbf{0}^{\dagger}\quad\text{and}\quad\mathbf{0}-\mathbf{0}^{\dagger}=2\cdot\mathbf{0}0\mathbf{0}^{\dagger}.\]
Definition 8: Given any word \(w\times\boldsymbol{\varphi}\in\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\), say \(w\times\boldsymbol{\varphi}=\boldsymbol{\theta}_{1}a_{1}\boldsymbol{\theta}_{ 2}a_{2}\boldsymbol{\theta}_{3}\cdots\boldsymbol{\theta}_{k}a_{k}\boldsymbol{ \theta}_{k+1}\), the letters \(a_{i}^{\dagger}\) denote the letters '\(-a_{i}\)' from \(\mathbb{Z}\), i.e. \(a_{i}^{\dagger}=-a_{i}\). Further we set,
\[\big{(}\boldsymbol{\theta}_{1}a_{1}\boldsymbol{\theta}_{2}a_{2}\boldsymbol{ \theta}_{3}\cdots\boldsymbol{\theta}_{k}a_{k}\boldsymbol{\theta}_{k+1}\big{)} ^{\dagger}\coloneqq\boldsymbol{\theta}_{1}^{\dagger}a_{1}^{\dagger} \boldsymbol{\theta}_{2}^{\dagger}a_{2}^{\dagger}\boldsymbol{\theta}_{3}^{ \dagger}\cdots\boldsymbol{\theta}_{k}^{\dagger}a_{k}^{\dagger}\boldsymbol{ \theta}_{k+1}^{\dagger},\]
i.e. we replace all the letters \(a_{i}\) in \(w\) by their counterparts \(a_{i}^{\dagger}=-a_{i}\) and all the letters in \(\boldsymbol{\theta}\) by their counterparts. In the latter instance this means we change all the \(\mathbf{0}\)'s to \(\mathbf{0}^{\dagger}\), and vice-versa. Note we _do not_ reverse the order of the terms in \(w\times\boldsymbol{\varphi}\). This means, for example, that we can interpret \((w\times\boldsymbol{\varphi})^{\dagger}=w^{\dagger}\times\boldsymbol{\varphi }^{\dagger}\), and since \(w^{\dagger}=(-1)^{|w|}w\), where \(|w|\) is the length of \(w\), then \((w\times\boldsymbol{\varphi})^{\dagger}\in\mathbb{C}\langle\mathbb{Z}_{ \mathbf{0}}\rangle\).
Consider the following skew-symmetric and symmetric forms on \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\).
Definition 9 (Skew-symmetric and symmetric forms): Given any word \(w\times\boldsymbol{\varphi}\in\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\), we define its skew-symmetric and symmetric forms in \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\), respectively, by
\[[w\times\boldsymbol{\varphi}]\coloneqq w\times\boldsymbol{\varphi}-(w\times \boldsymbol{\varphi})^{\dagger}\qquad\text{and}\qquad\{w\times\boldsymbol{ \varphi}\}\coloneqq w\times\boldsymbol{\varphi}+(w\times\boldsymbol{\varphi})^ {\dagger}.\]
Naturally we have \(\big{[}(w\times\boldsymbol{\varphi})^{\dagger}\big{]}=-[w\times\boldsymbol{ \varphi}]\) and \(\big{\{}(w\times\boldsymbol{\varphi})^{\dagger}\big{\}}=\{w\times \boldsymbol{\varphi}\}\).
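Continuing the illustrative encoding introduced after Definition 6, the dagger of Definition 8 and the forms of Definition 9 can be sketched as follows; the sign \((-1)^{|w|}\) is returned separately so that daggered words remain in the chosen basis. The helper names here are our own, not the paper's.

```python
B0, B0D = "0", "0+"  # markers for bold-0 and bold-0-dagger, as before

def dagger(word):
    """Dagger of Definition 8: flip B0 <-> B0D, keep integer letters,
    and return the overall sign (-1)^(number of integer letters)."""
    flip = {B0: B0D, B0D: B0}
    sign, out = 1, []
    for letter in word:
        if letter in flip:
            out.append(flip[letter])
        else:
            out.append(letter)
            sign = -sign
    return sign, tuple(out)

def skew(word):
    """[w] := w - w^dagger, as a dict of basis words."""
    sign, wd = dagger(word)
    return {word: 1, wd: -sign}

def symm(word):
    """{w} := w + w^dagger, as a dict of basis words."""
    sign, wd = dagger(word)
    return {word: 1, wd: sign}
```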
The following product rules based on the Poppe product in Definition 6, are useful for our computations in all subsequent sections.
Lemma 8 (Skew and symmetric Poppe products): _Consider the elements \([ua\mathbf{0}]\), \(\{ua\mathbf{0}\}\), \([\mathbf{0}bv]\) and \(\{\mathbf{0}bv\}\) from \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\), where \(u\) and \(v\) are any subwords from \((\mathbb{Z}_{\mathbf{0}})^{*}\) and \(a,b\in\mathbb{Z}\). We have the following Poppe products in \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) between these elements,_
\[\{ua\mathbf{0}\}\,[\mathbf{0}bv] =\big[u(a+1)[\mathbf{0}bv]\big]+\big[ua\mathbf{0}[(b+1)v]\big]+2\cdot[ua\mathbf{0}\mathbf{1}\mathbf{0}bv],\]
\[[ua\mathbf{0}]\,\{\mathbf{0}bv\} =\big[u(a+1)\{\mathbf{0}bv\}\big]+\big[ua\mathbf{0}\{(b+1)v\}\big]+2\cdot[ua\mathbf{0}\mathbf{1}\mathbf{0}bv],\]
\[[ua\mathbf{0}]\,[\mathbf{0}bv] =\big\{u(a+1)[\mathbf{0}bv]\big\}+\big\{ua\mathbf{0}[(b+1)v]\big\}+2\cdot\{ua\mathbf{0}\mathbf{1}\mathbf{0}bv\},\]
\[\{ua\mathbf{0}\}\,\{\mathbf{0}bv\} =\big\{u(a+1)\{\mathbf{0}bv\}\big\}+\big\{ua\mathbf{0}\{(b+1)v\}\big\}+2\cdot\{ua\mathbf{0}\mathbf{1}\mathbf{0}bv\}.\]
_These products also hold when \([ua\mathbf{0}]=[\mathbf{0}]\), in which case the term on the right involving '\((a+1)\)' is absent. Likewise, these products also hold when \([\mathbf{0}bv]=[\mathbf{0}]\), in which case the term on the right involving '\((b+1)\)' is absent._
Proof: The results are established straightforwardly using Definition 6 for the abstract Poppe product. Consider for example the first product shown, we observe that,
\[\{ua\mathbf{0}\}\,[\mathbf{0}bv] =\big(ua\mathbf{0}+(ua\mathbf{0})^{\dagger}\big)\big(\mathbf{0}bv-(\mathbf{0}bv)^{\dagger}\big)\]
\[=u(a+1)\mathbf{0}bv+ua\mathbf{0}(b+1)v+2\cdot ua\mathbf{0}\mathbf{1}\mathbf{0}bv\]
\[\quad-u(a+1)(\mathbf{0}bv)^{\dagger}-ua\mathbf{0}\big((b+1)v\big)^{\dagger}\]
\[\quad+\big(u(a+1)\big)^{\dagger}\mathbf{0}bv+(ua\mathbf{0})^{\dagger}(b+1)v\]
\[\quad-\big(u(a+1)\mathbf{0}bv\big)^{\dagger}-\big(ua\mathbf{0}(b+1)v\big)^{\dagger}-2\cdot\big(ua\mathbf{0}\mathbf{1}\mathbf{0}bv\big)^{\dagger},\]
which gives the first product result. The other three cases follow completely analogously. For the case, for example, when \([ua\mathbf{0}]=[\mathbf{0}]\) in the second product, we use that, since \([\mathbf{0}]=2\cdot\mathbf{0}0\mathbf{0}^{\dagger}\) and \(\mathbf{0}0\mathbf{0}^{\dagger}=\mathbf{0}^{\dagger}0\mathbf{0}\), we have \([\mathbf{0}0\mathbf{0}^{\dagger}]=[\mathbf{0}^{\dagger}0\mathbf{0}]=\mathbf{0}0\mathbf{0}^{\dagger}+\mathbf{0}^{\dagger}0\mathbf{0}=[\mathbf{0}]\). Hence we observe, since \([\mathbf{0}]=[\mathbf{0}^{\dagger}0\mathbf{0}]\), we can use the latter form in the corresponding product already established, so using the properties of the skew and symmetric forms in Definition 9 we have,
\[[\mathbf{0}]\,\{\mathbf{0}bv\}=[\mathbf{0}^{\dagger}0\mathbf{0}]\,\{\mathbf{0}bv\}=\big[\mathbf{0}^{\dagger}1\{\mathbf{0}bv\}\big]+\big[\mathbf{0}^{\dagger}0\mathbf{0}\{(b+1)v\}\big]+2\cdot\big[\mathbf{0}^{\dagger}0\mathbf{0}\mathbf{1}\mathbf{0}bv\big],\]
and collecting terms using the identities in Lemma 7, this collapses to \(\big[\mathbf{0}\{(b+1)v\}\big]+2\cdot[\mathbf{0}\mathbf{1}\mathbf{0}bv]\), that is, the second product with the term involving '\((a+1)\)' absent. The remaining degenerate cases follow analogously. \(\sqcap\)\(\sqcup\)
**Remark 9**: **(Homomorphic signature character)** Consider a multi-factor product of signature expansions of the form,
\[\left[\boldsymbol{n}_{1}\right]\left[\boldsymbol{n}_{2}\right]\,\cdots\,\left[ \boldsymbol{n}_{k}\right]=\sum\bigl{(}\chi(w_{1})\chi(w_{2})\cdots\chi(w_{k}) \bigr{)}\cdot\left[w_{1}\times\boldsymbol{\varphi}_{1}\right]\left[w_{2}\times \boldsymbol{\varphi}_{2}\right]\,\cdots\,\left[w_{k}\times\boldsymbol{ \varphi}_{k}\right],\]
where the sum is over all words \(w_{1}\times\boldsymbol{\varphi}_{1}\) with \(w_{1}\in\mathcal{C}(n_{1})\), \(w_{2}\times\boldsymbol{\varphi}_{2}\) with \(w_{2}\in\mathcal{C}(n_{2})\), and so forth. Note, the form \(\left[w_{1}\times\boldsymbol{\varphi}_{1}\right]\left[w_{2}\times\boldsymbol{ \varphi}_{2}\right]\,\cdots\,\left[w_{k}\times\boldsymbol{\varphi}_{k}\right]\) generates many different words in \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\). We observe that it would be convenient to encode \(\chi(w_{1})\chi(w_{2})\cdots\chi(w_{k})\) as \(\chi(w_{1}\otimes w_{2}\otimes\cdots\otimes w_{k})\). Indeed, hereafter, we assume that \(\chi\) acts _homomorphically_ on any such tensor product of compositions so that indeed we have,
\[\chi(w_{1}\otimes w_{2}\otimes\cdots\otimes w_{k})\equiv\chi(w_{1})\chi(w_{2} )\cdots\chi(w_{k}).\]
Let us now outline some simple examples.
Example 4: By definition \(\left[\mathbf{0}\right]\coloneqq\mathbf{0}-\mathbf{0}^{\dagger}\). Using the notation \(\left[\mathbf{0}\right]^{2}=\bigl{(}\chi(0)\cdot\left[\mathbf{0}\right]\bigr{)} \,\bigl{(}\chi(0)\cdot\left[\mathbf{0}\right]\bigr{)}\) and so forth, then using the product rules in Lemma 8 we observe (also see Remark 8),
\[\left[\mathbf{0}\right]^{2} =\chi(0\hat{\otimes}0)\cdot\{\mathbf{0}\mathbf{1}\mathbf{0}\},\] \[\left[\mathbf{0}\right]^{3} =\left[\mathbf{0}\right]\left[\mathbf{0}\right]^{2}\] \[=\bigl{(}\chi(0)\cdot\left[\mathbf{0}\right]\bigr{)}\,\bigl{(} \chi(0\hat{\otimes}0)\{\mathbf{0}\mathbf{1}\mathbf{0}\}\bigr{)}\] \[=\chi(0\otimes 0\hat{\otimes}0)\cdot\bigl{[}\mathbf{0}\{2 \mathbf{0}\}\bigr{]}+\chi(0\hat{\otimes}0\hat{\otimes}0)\cdot\left[\mathbf{0} \mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}\right],\]
where the tensor notation '\(\hat{\otimes}\)' in the argument of \(\chi=\chi(\cdot)\) indicates a tensor product '\(\otimes\)' together with the fact that an extra real factor of '\(2\)' should be included with the \(\chi=\chi(\cdot)\) factor shown. See Remark 10 just below.
**Remark 10**: Hereafter, we also use the tensor notation '\(\hat{\otimes}\)' in the argument of \(\chi=\chi(\cdot)\) to indicate when the skew or symmetric form was generated by the 'quasi' term \(2\cdot\left[ua\mathbf{0}\mathbf{1}\mathbf{0}bv\right]\) in one of the Poppe products in Lemma 8. We illustrated this in Example 4 just above. We observe therein that the result of the product \(\left[\mathbf{0}\right]^{2}\) is '\(2\)' times the symmetric form \(\{\mathbf{0}\mathbf{1}\mathbf{0}\}\). This symmetric form emerges from the 'quasi' term in the Poppe product of \(\chi(0)\cdot\left[\mathbf{0}\right]\) with \(\chi(0)\cdot\left[\mathbf{0}\right]\) and a natural way to record this is the form \(\chi(0\hat{\otimes}0)\cdot\{\mathbf{0}\mathbf{1}\mathbf{0}\}\). The tensor product of the zeros in the argument of \(\chi=\chi(\cdot)\) indicates that the symmetric form is the result of the product of \(\left[\mathbf{0}\right]\) with \(\left[\mathbf{0}\right]\), while the fact that the tensor product is '\(\hat{\otimes}\)' indicates it was the result of the 'quasi' term in the Poppe product, and an extra factor of \(2\) is implied. In this case if we evaluate the signature character we include an extra factor of '\(2\)' in its evaluation. Also consider the product \(\left[\mathbf{0}\right]^{3}\) in Example 4. When we compute the Poppe product \(\big(\chi(0)\cdot\left[\mathbf{0}\right]\big)\,\big(\chi(0\hat{\otimes}0)\{\mathbf{0}\mathbf{1}\mathbf{0}\}\big)\), the first skew form generated, i.e. \(\left[\mathbf{0}\{2\mathbf{0}\}\right]\), has the coefficient \(\chi(0\otimes 0\hat{\otimes}0)\) as we might expect, using the homomorphic properties of \(\chi\). However the second term generated by the product \(\big(\chi(0)\cdot\left[\mathbf{0}\right]\big)\,\big(\chi(0\hat{\otimes}0)\{\mathbf{0}\mathbf{1}\mathbf{0}\}\big)\), which is \(\left[\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}\right]\), has the coefficient \(\chi(0\hat{\otimes}0\hat{\otimes}0)\). This is because this second term is the result of the 'quasi' term \(2\cdot\left[ua\mathbf{0}\mathbf{1}\mathbf{0}bv\right]\) in the Poppe product; here \(ua=\nu\) and \(bv=\mathbf{1}\mathbf{0}\). The factor '\(2\)' is absorbed/encoded by the fact that a '\(\hat{\otimes}\)' tensor (instead of just '\(\otimes\)') is used between the first \(0\) and the \(0\hat{\otimes}0\), the respective \(\chi\)-arguments for \(\left[\mathbf{0}\right]\) and \(\{\mathbf{0}\mathbf{1}\mathbf{0}\}\), in the coefficient \(\chi(0\hat{\otimes}0\hat{\otimes}0)\) for \(\left[\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}\right]\) in Example 4. See Example 5 for further illustrations of this notation.
_Example 5_: Using the Poppe products in Lemma 8 and that \([\mathbf{1}]=\chi(1)\cdot[\mathbf{0}\mathbf{1}\mathbf{0}]\), we have,
\[[\mathbf{1}]\,[\mathbf{0}]^{2} =\big(\chi(1)\cdot[\mathbf{0}\mathbf{1}\mathbf{0}]\big)\,\big(\chi(0\hat{\otimes}0)\cdot\{\mathbf{0}\mathbf{1}\mathbf{0}\}\big)\]
\[=\chi(1\otimes 0\hat{\otimes}0)\cdot\big(\big[\mathbf{0}2\{\mathbf{0}\mathbf{1}\mathbf{0}\}\big]+\big[\mathbf{0}\mathbf{1}\mathbf{0}\{2\mathbf{0}\}\big]\big)+\chi(1\hat{\otimes}0\hat{\otimes}0)\cdot[\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}],\]
\[[\mathbf{0}]^{2}\,[\mathbf{1}] =\big(\chi(0\hat{\otimes}0)\cdot\{\mathbf{0}\mathbf{1}\mathbf{0}\}\big)\,\big(\chi(1)\cdot[\mathbf{0}\mathbf{1}\mathbf{0}]\big)\]
\[=\chi(0\hat{\otimes}0\otimes 1)\cdot\big(\big[\mathbf{0}2[\mathbf{0}\mathbf{1}\mathbf{0}]\big]+\big[\mathbf{0}\mathbf{1}\mathbf{0}[2\mathbf{0}]\big]\big)+\chi(0\hat{\otimes}0\hat{\otimes}1)\cdot[\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}].\]
Definition 10 (Derivation endomorphism): Given any word \(w\times\boldsymbol{\varphi}\in\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) with \(w\times\boldsymbol{\varphi}=\boldsymbol{\theta}_{1}a_{1}\boldsymbol{\theta}_{2} \cdots\boldsymbol{\theta}_{k}a_{k}\boldsymbol{\theta}_{k+1}\), we define the _derivation endomorphism_\(\mathfrak{d}\) on \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) to be the linear expansion,
\[\mathfrak{d}(w\times\boldsymbol{\varphi})\coloneqq \sum_{\ell=1}^{k}\boldsymbol{\theta}_{1}a_{1}\boldsymbol{\theta}_ {2}\cdots\boldsymbol{\theta}_{\ell}(a_{\ell}+1)\boldsymbol{\theta}_{\ell+1} \cdots\boldsymbol{\theta}_{k}a_{k}\boldsymbol{\theta}_{k+1}\] \[+\sum_{\ell=1}^{k+1}\boldsymbol{\theta}_{1}a_{1}\boldsymbol{ \theta}_{2}\cdots\boldsymbol{\theta}_{\ell-1}a_{\ell-1}(\mathfrak{d} \boldsymbol{\theta}_{\ell})a_{\ell}\boldsymbol{\theta}_{\ell+1}\cdots \boldsymbol{\theta}_{k}a_{k}\boldsymbol{\theta}_{k+1},\]
where \(\mathfrak{d}\boldsymbol{\theta}_{\ell}\) equals \(\mathbf{0}\mathbf{1}\mathbf{0}\) or \(\mathbf{0}^{\dagger}1^{\dagger}\mathbf{0}^{\dagger}\), depending respectively on whether \(\boldsymbol{\theta}_{\ell}\) is \(\mathbf{0}\) or \(\mathbf{0}^{\dagger}\).
Remark 11: The action of the derivation endomorphism on \(\mathbf{0}\) and \(\mathbf{0}^{\dagger}\) shown in the definition reflects the signature expansions, either at the kernel or abstract level. In this case here, we know \(\partial V=V(\mathrm{i}P)_{1}V\) and \(\partial V^{\dagger}=V^{\dagger}(\mathrm{i}P)_{1}^{\dagger}V^{\dagger}\) or equivalently \(\mathbf{1}=\chi(1)\cdot\mathbf{0}\mathbf{1}\mathbf{0}\) and \(\mathbf{1}^{\dagger}=\chi(1)\cdot\mathbf{0}^{\dagger}1^{\dagger}\mathbf{0}^{ \dagger}=-\mathbf{0}^{\dagger}1\mathbf{0}^{\dagger}\). Similarly, the action of the derivation endomorphism on any signature expansion, say \(\boldsymbol{n}\), is given by, \(\mathfrak{d}\colon\boldsymbol{n}\mapsto(\boldsymbol{n+1})\), and similarly for \(\boldsymbol{n}^{\dagger}\).
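The derivation endomorphism is easy to iterate computationally, and doing so recovers the signature coefficients \(\chi\) of Definition 7 as multiplicities. The sketch below, using the same tuple encoding and caveats as before, reproduces for \(n=3\) the expansion of \([\mathbf{3}]\) used in Example 9 below, namely \(\chi(3)=1\), \(\chi(21)=\chi(12)=3\) and \(\chi(111)=6\).

```python
from collections import defaultdict

B0, B0D = "0", "0+"  # markers for bold-0 and bold-0-dagger

def derive(word):
    """Derivation endomorphism of Definition 10 applied to a single word."""
    out = defaultdict(int)
    for i, letter in enumerate(word):
        head, tail = word[:i], word[i + 1:]
        if letter == B0:                      # d bold0 = bold0 1 bold0
            out[head + (B0, 1, B0) + tail] += 1
        elif letter == B0D:                   # d bold0+ = bold0+ 1+ bold0+, with 1+ = -1
            out[head + (B0D, 1, B0D) + tail] -= 1
        else:                                 # raise an integer letter by one
            out[head + (letter + 1,) + tail] += 1
    return dict(out)

def derive_comb(comb):
    out = defaultdict(int)
    for w, c in comb.items():
        for w2, c2 in derive(w).items():
            out[w2] += c * c2
    return {w: c for w, c in out.items() if c}

comb = {(B0,): 1}                             # start from bold0
for _ in range(3):                            # apply d three times
    comb = derive_comb(comb)
print(comb)
# coefficients (dict order may vary):
#   ('0',3,'0'): 1,  ('0',2,'0',1,'0'): 3,
#   ('0',1,'0',2,'0'): 3,  ('0',1,'0',1,'0',1,'0'): 6
```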
Now suppose, within \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\), we restrict ourselves to the set of skew-symmetric forms \(\left[w\times\boldsymbol{\varphi}\right]\). Naturally, as a vector space, \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) can be decomposed into the direct sum of the vector subspaces \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) of skew-symmetric forms, and \(\mathbb{C}\{\mathbb{Z}_{\mathbf{0}}\}\) of symmetric forms:
\[\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle=\mathbb{C}[\mathbb{Z}_{ \mathbf{0}}]\bigoplus\mathbb{C}\{\mathbb{Z}_{\mathbf{0}}\}.\]
We observe from the Poppe products in Lemma 8, the product \(\left[w_{1}\times\boldsymbol{\varphi}_{1}\right]\left[w_{2}\times\boldsymbol{ \varphi}_{2}\right]\) does not generate a skew-symmetric form but a symmetric one. However any triple product \(\left[w_{1}\times\boldsymbol{\varphi}_{1}\right]\left[w_{2}\times\boldsymbol {\varphi}_{2}\right]\left[w_{3}\times\boldsymbol{\varphi}_{3}\right]\) does generate a skew-symmetric form. This is true for any Poppe products involving an odd number of skew-symmetric forms. Hence we can define a subalgebra of the Poppe algebra \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) which we denote by \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\subseteq\mathbb{C}\langle\mathbb{Z}_{ \mathbf{0}}\rangle\), which is generated by skew-symmetric forms and triple products of such forms.
Definition 11 (Skew-Poppe algebra): We call \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) the _skew-Poppe algebra_.
Remark 12 (Practical Poppe algebra computations): In practice, in particular in the next two sections, we perform calculations in the "enveloping" algebra \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\), and at the end, show that the result remains closed within the skew-Poppe algebra \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\). However, the skew-Poppe algebra and its triple product structure is crucial to the proof of our main result in Section 6.
## 4 Hierarchy examples
We use the skew-Poppe algebra \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) to establish integrability for examples from the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries hierarchy. This was first considered in an analogous context in Doikou _et al._[31, Sec. 6]. Recall the linear dispersion equation, the 'base' equation, we introduced in Section 2.3. Hereafter we assume Hilbert-Schmidt operators \(P\) and \(G\) satisfy, respectively, the linear dispersive partial differential equation \(\partial_{t}P=-\mu_{n}(\mathrm{i}\mathcal{I})^{n-1}\partial^{n}P\) and the linear Fredholm equation \(\mathrm{i}P=G(\mathrm{id}+P^{2})\). We also assume \(P^{\dagger}=P\). We observe that with \(V\coloneqq(\mathrm{id}-\mathrm{i}P)^{-1}\), then assuming it exists, we have,
\[G=V(\mathrm{i}P)V^{\dagger}.\]
Note we scale this by a factor '2' presently so that it matches the expression in Definition 4. We record the following identities that prove useful below; also see Doikou _et al._[31, Sec. 6]. Also recall the identities in Lemma 4 and Remark 4.
**Lemma 9**: _The block operators \(P\) and \(V\) satisfy the following identities,_
\[P\mathcal{I}=-\mathcal{I}P,\qquad\mathcal{I}V=V^{\dagger}\mathcal{I}\qquad \text{and}\qquad V\mathcal{I}=\mathcal{I}V^{\dagger}.\]
_Proof_ The first identity follows from the block structures assumed for \(P\) and \(\mathcal{I}\). The latter two identities follow using the power series expansion for \(V\coloneqq(\mathrm{id}-\mathrm{i}P)^{-1}\). \(\sqcap\)\(\sqcup\)
We now rescale our definition for \(G\) above by a factor '2', and set,
\[G\coloneqq V-V^{\dagger}.\]
Hereafter, we are thus concerned with the quantity \([V]\coloneqq[\![V-V^{\dagger}]\!]\). Using that \(\partial_{t}V=V\partial_{t}(\mathrm{i}P)V\) and \(\partial_{t}V^{\dagger}=V^{\dagger}\partial_{t}(\mathrm{i}P)^{\dagger}V^{\dagger}\), and that \(\partial_{t}(\mathrm{i}P)=-\mu_{n}(\mathrm{i}\mathcal{I})^{n-1}\partial^{n}( \mathrm{i}P)\) and \(\partial_{t}(\mathrm{i}P)^{\dagger}=-(-1)^{n-1}\mu_{n}\partial^{n}(\mathrm{i}P )^{\dagger}(\mathrm{i}\mathcal{I})^{n-1}\), we observe that for any \(n\in\mathbb{Z}\), we have
\[\partial_{t}[V] =V\partial_{t}(\mathrm{i}P)V-V^{\dagger}\partial_{t}(\mathrm{i}P )^{\dagger}V^{\dagger}\] \[= -\mu_{n}\Big{(}V(\mathrm{i}\mathcal{I})^{n-1}\partial^{n}( \mathrm{i}P)V-(-1)^{n-1}V^{\dagger}\partial^{n}(\mathrm{i}P)^{\dagger}( \mathrm{i}\mathcal{I})^{n-1}V^{\dagger}\Big{)}.\]
For convenience set \(\mathcal{M}_{n}\coloneqq-\mu_{n}(\mathrm{i}\mathcal{I})^{n-1}\). Using the identities in Lemma 9, we have,
\[\mathcal{M}_{n}^{-1}\partial_{t}[V]=\begin{cases}\big{[}V(\mathrm{i}P)_{n}V \big{]},&\text{when $n$ is odd},\\ \big{[}V^{\dagger}(\mathrm{i}P)_{n}V\big{]},&\text{when $n$ is even}.\end{cases}\]
We now establish integrability for some examples from the non-commutative nonlinear Schrodinger hierarchy. We express \(\mathcal{M}_{n}^{-1}\partial_{t}[V]\) in the skew-Poppe algebra as follows.
**Definition 12**: **(Time-derivation endomorphism)** Given \(n\in\mathbb{Z}\), we define the _time-derivation endomorphism_ \(\mathfrak{e}_{n}\colon\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\to\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) by,
\[\mathfrak{e}_{n}\colon[\mathbf{0}]\mapsto\begin{cases}[\mathbf{0}n\mathbf{0}],&\text{when $n$ is odd},\\ [\mathbf{0}n\mathbf{0}^{\dagger}],&\text{when $n$ is even}.\end{cases}\]
The nonlinear fields we seek are expressed in the skew-Poppe algebra as follows.
Definition 13 (Poppe polynomials): For \(n\in\mathbb{N}\cup\{0\}\), let \(\pi_{n}=\pi_{n}\big{(}[\mathbf{0}],[\mathbf{1}],\ldots,[\boldsymbol{n}]\big{)}\) denote a polynomial consisting of a linear combination of odd-degree monomials of signature expansions in the skew-Poppe algebra \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) of the form,
\[\pi_{n}\coloneqq\sum_{k=1(\text{odd})}^{n}\sum_{a_{1}a_{2}\cdots a_{k}}c_{a_{1 }a_{2}\cdots a_{k}}\cdot[\boldsymbol{a}_{1}]\,[\boldsymbol{a}_{2}]\,\cdots \,[\boldsymbol{a}_{k}].\]
The first sum is over odd values of \(k\). The second sum is over all words \(a_{1}a_{2}\cdots a_{k}\) we can construct from the alphabet \(\{0,1,2,\ldots,n\}\) such that \(a_{1}+a_{2}+\cdots+a_{k}=n-(k-1)\). This ensures \(\pi_{n}\) is an odd polynomial. The coefficients \(c_{a_{1}a_{2}\cdots a_{k}}\) are scalar constants.
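For reference, the exponent words \(a_{1}a_{2}\cdots a_{k}\) indexing the monomials of \(\pi_{n}\) are easily enumerated; the sketch below, our own illustration, lists them for \(n=4\) and recovers exactly the eight monomials appearing in \(\pi_{4}\) of Example 10 below.

```python
from itertools import product

def poppe_monomial_words(n):
    """Exponent words a1...ak with k odd, letters in {0,...,n} and
    a1 + ... + ak = n - (k - 1), per Definition 13."""
    words = []
    for k in range(1, n + 2, 2):              # odd k; weight n-(k-1) >= 0
        for w in product(range(n + 1), repeat=k):
            if sum(w) == n - (k - 1):
                words.append(w)
    return words

print(poppe_monomial_words(4))
# [(4,), (0,0,2), (0,1,1), (0,2,0), (1,0,1), (1,1,0), (2,0,0), (0,0,0,0,0)]
```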
Our goal is to show \(\mathfrak{e}_{n}\big{(}[\mathbf{0}]\big{)}\) can be expressed in terms of a Poppe polynomial \(\pi_{n}=\pi_{n}\big{(}[\mathbf{0}],[\mathbf{1}],\ldots,[\boldsymbol{n}]\big{)}\) in the skew-Poppe algebra \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\). Thus for each \(n\in\mathbb{N}\cup\{0\}\), our goal is to determine the coefficients \(c_{a_{1}a_{2}\cdots a_{k}}\) such that,
\[\mathfrak{e}_{n}=\pi_{n}.\]
The examples we explore here correspond to the simple cases \(n=0,1,2,3,4\), as follows.
Example 6 (Linear ordinary differential equation: \(n=0\)): We observe \(\mathfrak{e}_{0}\big([\mathbf{0}]\big)=[\mathbf{0}0\mathbf{0}^{\dagger}]\). Recall that \(a^{\dagger}=-a\) for letters from \(\mathbb{Z}\) in \((\mathbb{Z}_{\mathbf{0}})^{*}\), including \(a=0\). Hence we observe,
\[[\mathbf{0}0\mathbf{0}^{\dagger}]=\mathbf{0}0\mathbf{0}^{\dagger}-\mathbf{0}^{\dagger}0^{\dagger}\mathbf{0}=\mathbf{0}0\mathbf{0}^{\dagger}+\mathbf{0}^{\dagger}0\mathbf{0}=\mathbf{0}-\mathbf{0}^{\dagger}=[\mathbf{0}].\]
In other words \(\mathfrak{e}_{0}\big{(}[\mathbf{0}]\big{)}=[\mathbf{0}]\) which translates to the following linear ordinary differential equation for \(g=[\![G]\!]\), with \(\mathcal{M}_{0}=\mu_{0}\mathrm{i}\mathcal{I}\),
\[\partial_{t}g=\mathcal{M}_{0}\,g.\]
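The sign bookkeeping in this example can be checked directly with the `dagger` sketch given after Definition 9 (again, purely illustrative):

```python
sign, wd = dagger((B0, 0, B0D))   # dagger of the word bold0 0 bold0+
print(sign, wd)                   # -> -1 ('0+', 0, '0')
# hence [bold0 0 bold0+] = bold0 0 bold0+ + bold0+ 0 bold0 = [bold0], via Lemma 7
```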
Example 7 (Linear wave equation: \(n=1\)): We observe \(\mathfrak{e}_{1}\big{(}[\mathbf{0}]\big{)}=[\mathbf{0}\mathbf{1}\mathbf{0}]\). From Definition 7, we have the signature expansion \([\mathbf{1}]=[\mathbf{0}\mathbf{1}\mathbf{0}]\), since \(\chi(1)=1\). From Definition 10 for the derivation endomorphism, we know \(\mathfrak{d}[\mathbf{0}]=[\mathbf{1}]\). Hence we have, \(\mathfrak{e}_{1}\big{(}[\mathbf{0}]\big{)}=\mathfrak{d}[\mathbf{0}]\) in \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\). This translates to the linear wave equation for \(g=[\![G]\!]\), with \(\mathcal{M}_{1}=-\mu_{1}\,\mathrm{id}\),
\[\partial_{t}g=\mathcal{M}_{1}\,\partial g.\]
Example 8 (Nonlinear Schrodinger equation: \(n=2\)): We observe \(\mathfrak{e}_{2}\big([\mathbf{0}]\big)=[\mathbf{0}\mathbf{2}\mathbf{0}^{\dagger}]\). Using the homomorphic properties of \(\chi\), the values for the signature coefficients given in Definition 5 and that each tensor product '\(\hat{\otimes}\)' under \(\chi\) generates a real factor of \(2\), we have \(\chi(0\otimes 0\hat{\otimes}0)=2\) and \(\chi(0\hat{\otimes}0\hat{\otimes}0)=4\). Then from Example 4, we see that we have, \([\mathbf{0}]^{3}=2\cdot\big[\mathbf{0}\{2\mathbf{0}\}\big]+4\cdot[\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}]=2\cdot\big[\mathbf{0}\mathbf{2}\mathbf{0}\big]-2\cdot[\mathbf{0}\mathbf{2}\mathbf{0}^{\dagger}]+4\cdot[\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}]\), using that \(\big[\mathbf{0}\{2\mathbf{0}\}\big]=[\mathbf{0}\mathbf{2}\mathbf{0}]-[\mathbf{0}\mathbf{2}\mathbf{0}^{\dagger}]\), and that \(2^{\dagger}=-2\). The signature expansion for \([\mathbf{2}]=\mathfrak{d}^{2}[\mathbf{0}]\) is given by, \([\mathbf{2}]=\chi(2)\cdot[\mathbf{0}\mathbf{2}\mathbf{0}]+\chi(11)\cdot[\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}]=[\mathbf{0}\mathbf{2}\mathbf{0}]+2\cdot[\mathbf{0}\mathbf{1}\mathbf{0}\mathbf{1}\mathbf{0}]\). Hence we observe, \(\mathfrak{e}_{2}\big([\mathbf{0}]\big)=[\mathbf{2}]-\frac{1}{2}\cdot[\mathbf{0}]^{3}\) in \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\). This translates to the non-commutative nonlinear Schrodinger equation for \(g=[\![G]\!]\), with \(\mathcal{M}_{2}=-\mu_{2}\,(\mathrm{i}\mathcal{I})\),
\[\mathcal{M}_{2}^{-1}\partial_{t}g=\partial^{2}g-\tfrac{1}{2}g^{3}.\]
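As a small cross-check of the cancellation, one can verify \(\mathfrak{e}_{2}([\mathbf{0}])=[\mathbf{2}]-\frac{1}{2}[\mathbf{0}]^{3}\) directly on the expansions quoted above, treating linear combinations as dictionaries over the skew basis words. This is a sketch under an obvious string encoding of our own, not the paper's code:

```python
two   = {"020": 1, "01010": 2}                # [2] = [020] + 2[01010]
cube0 = {"020": 2, "020+": -2, "01010": 4}    # [0]^3 = 2[020] - 2[020+] + 4[01010]
check = {w: two.get(w, 0) - cube0.get(w, 0) / 2
         for w in set(two) | set(cube0)}
print({w: c for w, c in check.items() if c})  # -> {'020+': 1.0}, i.e. [0 2 0-dagger]
```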
Remark 13 (Rescaling): In all examples, rescaling the solution \(g\) to \(2\,g\) recovers the usual corresponding equations in the non-commutative nonlinear Schrodinger hierarchy. This is because we assumed \(G\coloneqq V-V^{\dagger}\) rather than \(G=V(\mathrm{i}P)V^{\dagger}\equiv\frac{1}{2}(V-V^{\dagger})\).
Example 9: _(Modified Korteweg-de Vries equation: \(n=3\))_ We observe \(\mathfrak{e}_{3}\big([\mathbf{0}]\big)=[\mathbf{0}3\mathbf{0}]\). Recall from Example 4 that, \([\mathbf{0}]^{2}=\chi(0\hat{\otimes}0)\cdot\{\mathbf{0}1\mathbf{0}\}\). Note this lies in \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) as opposed to the skew-Poppe algebra \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\). From the results of Example 5, evaluating the signature characters, we know \([\mathbf{1}]\,[\mathbf{0}]^{2}=2\cdot\big(\big[\mathbf{0}2\{\mathbf{0}1\mathbf{0}\}\big]+\big[\mathbf{0}1\mathbf{0}\{2\mathbf{0}\}\big]\big)+4\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}]\) and \([\mathbf{0}]^{2}\,[\mathbf{1}]=2\cdot\big(\big[\mathbf{0}2[\mathbf{0}1\mathbf{0}]\big]+\big[\mathbf{0}1\mathbf{0}[2\mathbf{0}]\big]\big)+4\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}]\). Then using the properties of the skew form from Definition 9, we see that,
\[\big{[}\mathbf{0}2\{\mathbf{0}1\mathbf{0}\}\big{]}+\big{[}\mathbf{0}2[\mathbf{ 0}1\mathbf{0}]\big{]}=2\cdot\big{[}\mathbf{0}2\mathbf{0}1\mathbf{0}\big{]}\ \ \text{and}\ \ \big{[}\mathbf{0}1\mathbf{0}\{\mathbf{2}0\}\big{]}+\big{[}\mathbf{0}1\mathbf{ 0}[2\mathbf{0}]\big{]}=2\cdot\big{[}\mathbf{0}1\mathbf{0}2\mathbf{0}\big{]}.\]
The signature expansion for \([\mathbf{3}]=\mathfrak{d}^{3}[\mathbf{0}]\) is given by,
\[[\mathbf{3}] =\chi(3)\cdot[\mathbf{0}3\mathbf{0}]+\chi(21)\cdot[\mathbf{0}2\mathbf{0}1\mathbf{0}]+\chi(12)\cdot[\mathbf{0}1\mathbf{0}2\mathbf{0}]+\chi(111)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}]\]
\[=[\mathbf{0}3\mathbf{0}]+3\cdot[\mathbf{0}2\mathbf{0}1\mathbf{0}]+3\cdot[\mathbf{0}1\mathbf{0}2\mathbf{0}]+6\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}].\]
Hence we see that, \(\mathfrak{e}_{3}\big{(}[\mathbf{0}]\big{)}=[\mathbf{3}]-\frac{3}{4}\cdot \big{(}[\mathbf{1}]\,[\mathbf{0}]^{2}+[\mathbf{0}]^{2}\,[\mathbf{1}]\big{)}\), in \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\). This translates to the non-commutative modified Korteweg-de Vries equation for \(g=[\![G]\!]\), with \(\mathcal{M}_{3}=\mu_{3}\operatorname{id}\),
\[\mathcal{M}_{3}^{-1}\partial_{t}g=\partial^{3}g-\tfrac{3}{4}\big{(}(\partial g )g^{2}+g^{2}(\partial g)\big{)}.\]
Example 10: _(Fourth order quintic nonlinear Schrodinger equation: \(n=4\))_ In this case we know \(\mathfrak{e}_{4}\big{(}[\mathbf{0}]\big{)}=[\mathbf{0}4\mathbf{0}^{\dagger}]\). For this and higher orders, our procedure needs to be systematic. The Poppe polynomial in this case has the form,
\[\pi_{4}\coloneqq c_{4}\cdot[\mathbf{4}]+c_{200}\cdot[\mathbf{2}]\,[\mathbf{0}]^{2 }+c_{020}[\mathbf{0}]\,[\mathbf{2}]\,[\mathbf{0}]+c_{002}\cdot[\mathbf{0}]^{2 }\,[\mathbf{2}]\] \[\quad+c_{110}\cdot[\mathbf{1}]^{2}\,[\mathbf{0}]+c_{101}\cdot[ \mathbf{1}]\,[\mathbf{0}]\,[\mathbf{1}]+c_{011}\cdot[\mathbf{0}]\,[\mathbf{1} ]^{2}+c_{00000}\cdot[\mathbf{0}]^{5}.\]
The signature expansion for \([\mathbf{4}]\) has the form,
\[[\mathbf{4}] =\chi(4)\cdot[\mathbf{0}4\mathbf{0}]+\chi(31)\cdot[\mathbf{0}3\mathbf{0}1\mathbf{0}]+\chi(22)\cdot[\mathbf{0}2\mathbf{0}2\mathbf{0}]+\chi(13)\cdot[\mathbf{0}1\mathbf{0}3\mathbf{0}]\]
\[\quad+\chi(211)\cdot[\mathbf{0}2\mathbf{0}1\mathbf{0}1\mathbf{0}]+\chi(121)\cdot[\mathbf{0}1\mathbf{0}2\mathbf{0}1\mathbf{0}]+\chi(112)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}2\mathbf{0}]\]
\[\quad+\chi(1111)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}].\]
\begin{table}
\begin{tabular}{|c|c c c c|c|} \hline
basis & \([\mathbf{3}]\) & \([\mathbf{0}]\,[\mathbf{1}]\,[\mathbf{0}]\) & \([\mathbf{1}]\,[\mathbf{0}]^{2}\) & \([\mathbf{0}]^{2}\,[\mathbf{1}]\) & \(B\) \\ \hline
\([\mathbf{0}3\mathbf{0}]\) & \(3\) & \(2\cdot(0\otimes 1\otimes 0)\) & & & \(1\) \\
\([\mathbf{0}3\mathbf{0}^{\dagger}]\) & & \(-2\cdot(0\otimes 1\otimes 0)\) & & & \\ \hline
\([\mathbf{0}2\mathbf{0}1\mathbf{0}]\) & \(21\) & \(2\cdot(0\otimes 1\otimes 0)\) & \(2\cdot(1\otimes 0\otimes 0)\) & \(2\cdot(0\otimes 0\otimes 1)\) & \\
\([\mathbf{0}2\mathbf{0}^{\dagger}1\mathbf{0}^{\dagger}]\) & & \(2\cdot(0\otimes 1\otimes 0)\) & \(-2\cdot(1\otimes 0\otimes 0)\) & \(2\cdot(0\otimes 0\otimes 1)\) & \\ \hline
\([\mathbf{0}1\mathbf{0}2\mathbf{0}]\) & \(12\) & \(2\cdot(0\otimes 1\otimes 0)\) & \(2\cdot(1\otimes 0\otimes 0)\) & \(2\cdot(0\otimes 0\otimes 1)\) & \\
\([\mathbf{0}1\mathbf{0}2\mathbf{0}^{\dagger}]\) & & \(-2\cdot(0\otimes 1\otimes 0)\) & \(-2\cdot(1\otimes 0\otimes 0)\) & \(2\cdot(0\otimes 0\otimes 1)\) & \\ \hline
\([\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}]\) & \(111\) & \(4\cdot(0\otimes 1\otimes 0)\) & \(4\cdot(1\otimes 0\otimes 0)\) & \(4\cdot(0\otimes 0\otimes 1)\) & \\ \hline
\end{tabular}
\end{table}
Table 1: Non-zero signature coefficients appearing in the expansion of the _Poppe polynomial_\(\pi_{3}\) in Example 9. The coefficients are the \(\chi\)-images of the signature entries shown. Each column shows the factor contributions to the real coefficients of the basis elements shown in the very left column, for each of the monomials in \(\pi_{3}\) shown across the top row. The final column represents the coefficient on the right-hand side of the equation \(\pi_{3}=[\mathbf{0}3\mathbf{0}]\).
Using the skew and symmetric Poppe products in Lemma 8 we find, for example, that,
\[[\mathbf{2}]\,[\mathbf{0}]^{2} =\big(\chi(2)\cdot[\mathbf{0}2\mathbf{0}]+\chi(11)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}]\big)\big(\chi(0\hat{\otimes}0)\cdot\{\mathbf{0}1\mathbf{0}\}\big)\]
\[=\chi(2\otimes 0\hat{\otimes}0)\cdot\big(\big[\mathbf{0}3\{\mathbf{0}1\mathbf{0}\}\big]+\big[\mathbf{0}2\mathbf{0}\{2\mathbf{0}\}\big]\big)+\chi(2\hat{\otimes}0\hat{\otimes}0)\cdot[\mathbf{0}2\mathbf{0}1\mathbf{0}1\mathbf{0}]\]
\[\qquad+\chi(11\otimes 0\hat{\otimes}0)\cdot\big(\big[\mathbf{0}1\mathbf{0}2\{\mathbf{0}1\mathbf{0}\}\big]+\big[\mathbf{0}1\mathbf{0}1\mathbf{0}\{2\mathbf{0}\}\big]\big)\]
\[\qquad+\chi(11\hat{\otimes}0\hat{\otimes}0)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}].\]
The other products shown in \(\pi_{4}\) can be similarly expanded. In Table 2 we list all the basis elements and corresponding coefficients generated by all the Poppe products present in \(\pi_{4}\). The values of the coefficients are the \(\chi\)-images of the tensored terms shown. Each row generates a linear algebraic equation for the expansion coefficients \(c_{4}\), \(c_{200}\), \(c_{110}\),...\(c_{00000}\). Note that in Table 2 rows are ordered according to descent order, with a sub-order for the positions of the \(\mathbf{0}^{\dagger}\) letters as indicated. The ordering of the columns is self-evident from the structure present in the table. We discuss this ordering more explicitly in Section 6. Using all the rows shown, we generate an over-determined linear system of algebraic equations, \(AC=B\), where \(B\) is the column vector shown in the right-hand column in Table 2, \(C\) is the vector of coefficients \(c_{4}\), \(c_{020}\), and so forth in the order shown across the top row. The matrix \(A\) consists of the \(\chi\)-images of the entries shown in the table (neglecting the final column). From the augmented matrix \(\left[A\,B\right]\), we observe the first two rows generate a closed system of equations, namely \(c_{4}+2c_{020}=0\) and \(-2c_{020}=1\). This system of equations corresponds to the following smaller augmented matrix subsystem \(\left[A_{0}\,B_{0}\right]\) for \(c_{4}\) and \(c_{020}\), where
\[A_{0}=\begin{pmatrix}1&2\\ 0&-2\end{pmatrix}\qquad\text{and}\qquad B_{0}=\begin{pmatrix}0\\ 1\end{pmatrix}.\]
Hence we deduce \(c_{4}=1\) and \(c_{020}=-\frac{1}{2}\). With these values in hand, we then observe that the next two rows also generate a closed system of equations for \(c_{200}\) and \(c_{011}\) given by \(4c_{4}+2c_{020}+2c_{200}+2c_{011}=0\) and \(2c_{020}-2c_{200}+2c_{011}=0\). This linear system of equations for the two unknowns \(c_{200}\) and \(c_{011}\) is represented by the smaller augmented matrix subsystem \(\left[A_{1}\,B_{1}\right]\), where,
\[A_{1}=\begin{pmatrix}2&2\\ -2&2\end{pmatrix}\qquad\text{and}\qquad B_{1}=\begin{pmatrix}-3\\ 1\end{pmatrix}.\]
Solving the linear system of equations, we deduce that \(c_{200}=-1\) and \(c_{011}=-1/2\). The next four rows in the augmented matrix \(\left[A\,B\right]\), given the coefficients we have already solved for, generate a closed system of equations for \(c_{110}\), \(c_{101}\), \(c_{002}\) and \(c_{00000}\), represented by the smaller augmented matrix \(\left[A_{2}\,B_{2}\right]\), where,
\[A_{2}=\begin{pmatrix}1&1&2&4\\ -1&1&0&-4\\ 1&-1&0&-4\\ -1&-1&2&4\end{pmatrix}\qquad\text{and}\qquad B_{2}=\begin{pmatrix}-5/2\\ -5/2\\ -1/2\\ 3/2\end{pmatrix}.\]
The solution to this system is given by \(c_{110}=-1/2\), \(c_{101}=-3/2\), \(c_{002}=-1\) and \(c_{00000}=3/8\). It is easy to check that the equations represented by the remaining rows in the big augmented matrix \(\left[A\,B\right]\) above are consistent. Thus, we deduce that \(\mathfrak{e}_{4}\big([\mathbf{0}]\big)=\pi_{4}\)
where the coefficients \(c_{4}\), \(c_{020}\) and so forth, are given by the unique values outlined above. The fourth order non-commutative nonlinear Schrodinger equation for \(g=\llbracket G\rrbracket\), with \(\mathcal{M}_{4}=\mu_{4}\mathrm{i}\mathcal{I}\), is given by,
\[\mathcal{M}_{4}^{-1}\partial_{t}g =\partial^{4}g-(\partial^{2}g)g^{2}-\tfrac{1}{2}g(\partial^{2}g)g -g^{2}(\partial^{2}g)\] \[\qquad-\tfrac{1}{2}(\partial g)^{2}g-\tfrac{3}{2}(\partial g)g( \partial g)-\tfrac{1}{2}g(\partial g)^{2}+\tfrac{3}{8}g^{5}.\]
This matches the form given in Malham [68] and Nijhoff _et al._[79, eq. B.4a].
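The block-by-block elimination above is a small exercise in linear algebra; the following sketch, our own, with the subsystems transcribed from the text, confirms the quoted coefficient values numerically:

```python
import numpy as np

A0 = np.array([[1., 2.], [0., -2.]]);  B0v = np.array([0., 1.])
c4, c020 = np.linalg.solve(A0, B0v)            # -> 1.0, -0.5
A1 = np.array([[2., 2.], [-2., 2.]]); B1v = np.array([-3., 1.])
c200, c011 = np.linalg.solve(A1, B1v)          # -> -1.0, -0.5
A2 = np.array([[ 1.,  1., 2.,  4.],
               [-1.,  1., 0., -4.],
               [ 1., -1., 0., -4.],
               [-1., -1., 2.,  4.]])
B2v = np.array([-2.5, -2.5, -0.5, 1.5])
c110, c101, c002, c00000 = np.linalg.solve(A2, B2v)
print(c110, c101, c002, c00000)                # -> -0.5 -1.5 -1.0 0.375
```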
Remark 14 (Basis elements): In Tables 1 and 2 we record the basis elements of the form \([w\times\boldsymbol{\varphi}]\) in the far left column. The composition components \(w\) are compositions of \(n\), i.e. compositions of \(3\) and \(4\), in the respective tables. The \(\mathbb{Z}_{2}^{\star}\)-component \(\boldsymbol{\varphi}\) in the basis element is in principle any possible \((|w|+1)\)-tuples that can be constructed from \(\{\boldsymbol{0},\boldsymbol{0}^{\dagger}\}\cong\mathbb{Z}_{2}\). However, recall \(\left[(w\times\boldsymbol{\varphi})^{\dagger}\right]=-[w\times\boldsymbol{ \varphi}]\) and the Definition 8. Using this property for \(\left[(w\times\boldsymbol{\varphi})^{\dagger}\right]\), for any basis element \([w\times\boldsymbol{\varphi}]\), we can thus always arrange for the first component of \(\boldsymbol{\varphi}\) to be '\(\boldsymbol{0}\)'; as can be observed in the tables. The basis elements are ordered according to descent order for the compositions \(w\) and natural binary ordering for the \(\mathbb{Z}_{2}^{\star}\)-components \(\boldsymbol{\varphi}\). For more details see Definition 15 in Section 6, and the subsequent discussion therein. Note that though the first component of \(\boldsymbol{\varphi}\) can always be arranged to be \(\boldsymbol{0}\), in our computations involving Poppe products, we often utilise the symmetry \(\left[(w\times\boldsymbol{\varphi})^{\dagger}\right]=-[w\times\boldsymbol{ \varphi}]\) in order to use the Poppe products listed in Lemma 8. Thus temporarily, the first component in some factors is sometimes \(\boldsymbol{0}^{\dagger}\). However, we always use the same symmetry again to convert the final answer to the form with \(\boldsymbol{0}\) as the first component in the basis element.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline basis & [4] & \(\mathbf{[0]\,[2]\,[0]\,[0]\,[2]\,[0]\,[0]\,[2]\,[0]\,[0]^{2}\,[0]\,[1]^{2}\,[1]^{2} \,[1]^{2}\,[0]\,[1]\,[0]\,[1]\,[0]\,[1]\,[0]^{2}\,[2]\,[2]\,[0]^{5}\,\)} & \(B\) \\ \hline \(\mathbf{[010]}\) & \(4\) & \(2\cdot(0_{2}\mathbb{z}0)\) & & & & \\ \(\mathbf{[010^{\dagger}]}\) & \(-2\cdot(0_{2}\mathbb{z}0)\) & & & & & \\ \hline \(\mathbf{[03010]}\) & \(31\) & \(0_{2}\mathbb{z}0\) & \(2\cdot(2_{0}\mathbb{z}0_{0}0)\) & \(2\cdot(0_{1}\mathbb{z}0_{1}0)\) & & & \\ \(\mathbf{[030^{\dagger}10^{\dagger}]}\) & \(0_{2}\mathbb{z}0\) & \(-2\cdot(2_{0}\mathbb{z}0_{0}0)\) & \(2\cdot(0_{1}\mathbb{z}0_{1}0)\) & & & \\ \hline \(\mathbf{[02020]}\) & \(22\) & \(0_{1}\mathbb{z}0_{1}0\) & \(2\cdot(2_{0}\mathbb{z}0_{0}0)\) & \(0_{1}\mathbb{z}0_{1}0\) & \(1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[02020^{\dagger}]}\) & \(-0_{1}\mathbb{z}0_{1}0\) & \(-2\cdot(2_{0}\mathbb{z}0_{0}0)\) & \(0_{1}\mathbb{z}0_{1}0\) & \(-1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[020^{\dagger}20]}\) & \(-0_{0}\mathbb{z}0_{1}\mathbb{z}0\) & \(0_{1}\mathbb{z}0_{1}0\) & \(1_{1}\mathbb{z}0\) & \(-1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[020^{\dagger}20]}\) & \(-0_{0}\mathbb{z}0_{1}\mathbb{z}0\) & \(0_{1}\mathbb{z}0_{1}0\) & \(0_{1}\mathbb{z}0_{1}0\) & \(-1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[020^{\dagger}20^{\dagger}]}\) & \(0_{1}\mathbb{z}0_{1}0\) & \(0_{1}\mathbb{z}0_{1}0\) & \(-1_{1}\mathbb{z}0\) & \(-1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \hline \(\mathbf{[01030]}\) & \(13\) & \(0_{2}\mathbb{z}0\) & & \(2\cdot(0_{2}\mathbb{z}0_{0}0)\) & \(2\cdot(0_{2}\mathbb{z}0_{0}0)\) & \\ \(\mathbf{[01030^{\dagger}]}\) & \(-0_{2}\mathbb{z}0\) & & & \(-2\cdot(1_{2}\mathbb{z}0_{0}0)\) & \(2\cdot(0_{2}\mathbb{z}0_{0}0)\) & \\ \hline \(\mathbf{[0201010]}\) & \(21\) & \(0_{1}\mathbb{z}0\) & \(2\cdot(2_{0}\mathbb{z}0_{0}0)\) & \(0_{1}\mathbb{z}0_{1}0\) & \(1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[020^{\dagger}10^{\dagger}10^{\dagger}]}\) & \(-0_{0}\mathbb{z}1\mathbb{z}0\) & & \(-0_{1}\mathbb{z}1\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[0102010]}\) & \(121\) & \(0_{2}\mathbb{z}0\) & \(2\cdot(11_{0}\mathbb{z}0_{0}0)\) & \(0_{1}\mathbb{z}0_{1}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[01020^{\dagger}10^{\dagger}]}\) & \(-2\cdot(11_{0}\mathbb{z}0_{0}0)\) & \(0_{1}\mathbb{z}0_{1}0\) & \(1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[0101020]}\) & \(112\) & \(0_{1}\mathbb{z}11_{0}0\) & \(2\cdot(11_{0}\mathbb{z}0)\) & \(0_{1}\mathbb{z}0_{1}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[0101020^{\dagger}]}\) & \(-0_{1}\mathbb{z}11_{0}0\) & \(-2\cdot(11_{0}\mathbb{z}0_{0}0)\) & \(0_{1}\mathbb{z}11\) & \(-1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \(\mathbf{[010101010]}\) & \(111\) & \(0_{1}\mathbb{z}11_{0}0\) & \(0_{1}\mathbb{z}11\) & \(1_{1}\mathbb{z}0\) & \(1_{1}\mathbb{z}0\) \\ \hline \end{tabular}
\end{table}
Table 2: Non-zero signature coefficients appearing in the expansion of the _Poppe polynomial_ \(\pi_{4}\) in Example 10. The coefficients are the \(\chi\)-images of the signature entries shown. Each column shows the factor contributions to the real coefficients of the basis elements shown in the very left column, for each of the monomials in \(\pi_{4}\) shown across the top row. The final column represents the coefficient on the right-hand side of the equation \(\pi_{4}=[\mathbf{0}4\mathbf{0}^{\dagger}]\).
## 5 Non-commutative Lax hierarchy and the sine-Gordon equation
Herein we establish the non-commutative nonlinear Schrodinger and modified Korteweg-de Vries Lax hierarchy iteratively, order by order. The non-commutative modified Korteweg-de Vries hierarchy can be found for example in Carillo and Schiebold [20, eq. (9)]. Importantly, this iterative hierarchy extends to all negative orders. The first member of negative order, i.e. for which \(n=-1\), corresponds to the non-commutative sine-Gordon cubic-form equation in Example 2; see, for example, Tracy and Widom [100] for the scalar case. Establishing the hierarchy for all orders \(n\in\mathbb{Z}\) is particularly simple in the Poppe algebra \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\). We need to define some natural actions on \(\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) first.
Definition 14 (Adjoint and symmetric algebra products and actions): We define the standard commutation and symmetric products, respectively, \(\mathrm{ad}\colon\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\times \mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\to\mathbb{C}\langle\mathbb{Z} _{\mathbf{0}}\rangle\) and \(\mathrm{sd}\colon\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\times \mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\to\mathbb{C}\langle\mathbb{Z} _{\mathbf{0}}\rangle\). For example, for \([\mathbf{0}]\in\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\subset\mathbb{C}\langle \mathbb{Z}_{\mathbf{0}}\rangle\) and any word \(w\times\boldsymbol{\varphi}\in\mathbb{C}\langle\mathbb{Z}_{\mathbf{0}}\rangle\) we have,
\[\mathrm{ad}_{[\mathbf{0}]}(w\times\boldsymbol{\varphi}) \coloneqq[\mathbf{0}]\left(w\times\boldsymbol{\varphi}\right)- \left(w\times\boldsymbol{\varphi}\right)[\mathbf{0}],\] \[\mathrm{sd}_{[\mathbf{0}]}(w\times\boldsymbol{\varphi}) \coloneqq[\mathbf{0}]\left(w\times\boldsymbol{\varphi}\right)+ \left(w\times\boldsymbol{\varphi}\right)[\mathbf{0}],\]
which is the exclusive form of their action we use below. We also define the following two actions on the skew-Poppe algebra \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\). For \([\mathbf{0}]\in\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) set,
\[A \coloneqq\frac{1}{4}\mathrm{ad}_{[\mathbf{0}]}\mathfrak{d}^{-1} \mathrm{ad}_{[\mathbf{0}]},\] \[S \coloneqq\frac{1}{4}\mathrm{sd}_{[\mathbf{0}]}\mathfrak{d}^{-1} \mathrm{sd}_{[\mathbf{0}]}.\]
That the actions of \(A\) and \(S\) are closed in \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) is established as part of the proof of the following crucial lemma.
Lemma 10 (Natural iteration): _For any \(n\in\mathbb{Z}\) we have:_
\[(\mathfrak{d}-A)[\mathbf{0}n\mathbf{0}^{\dagger}] =[\mathbf{0}(n+1)\mathbf{0}],\] \[(\mathfrak{d}-S)[\mathbf{0}n\mathbf{0}] =[\mathbf{0}(n+1)\mathbf{0}^{\dagger}].\]
Proof: By direct computation using Lemma 8 we have,
\[\mathrm{ad}_{[\mathbf{0}]}\left[\mathbf{0}n\mathbf{0}^{\dagger}\right] =[\mathbf{0}][\mathbf{0}n\mathbf{0}^{\dagger}]-[\mathbf{0}n \mathbf{0}^{\dagger}][\mathbf{0}]\] \[=2\cdot\left(\left\{\mathbf{0}(n+1)\mathbf{0}^{\dagger}\right\}+ \left\{\mathbf{0}1\mathbf{0}n\mathbf{0}^{\dagger}\right\}+\left\{\mathbf{0}n \mathbf{0}^{\dagger}1^{\dagger}\mathbf{0}^{\dagger}\right\}\right)\] \[=2\cdot\mathfrak{d}\left\{\mathbf{0}n\mathbf{0}^{\dagger}\right\}.\]
Hence we observe, using that \(\left\{\mathbf{0}n\mathbf{0}^{\dagger}\right\}=-\left\{\mathbf{0}^{\dagger}n \mathbf{0}\right\}\) we have,
\[A\left[\mathbf{0}n\mathbf{0}^{\dagger}\right] =\tfrac{1}{2}\left([\mathbf{0}]\left\{\mathbf{0}n\mathbf{0}^{ \dagger}\right\}+\left\{\mathbf{0}^{\dagger}n\mathbf{0}\right\}[\mathbf{0}]\right)\] \[=[\mathbf{0}(n+1)\mathbf{0}^{\dagger}]-[\mathbf{0}(n+1)\mathbf{ 0}]+[\mathbf{0}1\mathbf{0}n\mathbf{0}^{\dagger}]+[\mathbf{0}^{\dagger}n \mathbf{0}1\mathbf{0}]\] \[=\mathfrak{d}\left[\mathbf{0}n\mathbf{0}^{\dagger}\right]-[ \mathbf{0}(n+1)\mathbf{0}].\]
This gives the first result. Again, by direct computation, we have,
\[\mathrm{sd}_{[\mathbf{0}]}\left[\mathbf{0}n\mathbf{0}\right]=[\mathbf{0}] \left[\mathbf{0}n\mathbf{0}\right]+[\mathbf{0}n\mathbf{0}]\left[\mathbf{0}\right]\]
\[=2\left(\{\mathbf{0}(n+1)\mathbf{0}\}+\{\mathbf{0}\mathbf{1} \mathbf{0}n\mathbf{0}\}+\{\mathbf{0}n\mathbf{0}\mathbf{1}\mathbf{0}\}\right)\] \[=2\cdot\mathfrak{d}\left\{\mathbf{0}n\mathbf{0}\right\}.\]
Hence we observe,
\[S\left[\mathbf{0}n\mathbf{0}\right] =\tfrac{1}{2}\big{(}[\mathbf{0}]\left\{\mathbf{0}n\mathbf{0} \right\}+\{\mathbf{0}n\mathbf{0}\}\left[\mathbf{0}\right]\big{)}\] \[=[\mathbf{0}(n+1)\mathbf{0}]+[\mathbf{0}1\mathbf{0}n\mathbf{0}] +[\mathbf{0}n\mathbf{0}1\mathbf{0}]-[\mathbf{0}(n+1)\mathbf{0}^{\dagger}]\] \[=\mathfrak{d}\left[\mathbf{0}n\mathbf{0}\right]-[\mathbf{0}(n+1) \mathbf{0}^{\dagger}].\]
This gives the second result.
The following immediate corollary is established straightforwardly by induction, both when \(n\) is positive as well as negative. For the remainder of this section we refer to Poppe polynomials \(\pi_{n}\) for \(n\in\mathbb{Z}\), though in Definition 13, \(n\in\mathbb{N}\cup\{0\}\) for which \(\pi_{n}=\pi_{n}\big([\mathbf{0}],[\mathbf{1}],\ldots,[\boldsymbol{n}]\big)\). The form of \(\pi_{n}\) for the negative integer cases is given presently.
Corollary 1 (Non-commutative Lax hierarchy iteration): _Let \(n\in\mathbb{Z}\) be a given integer, and consider the equation, \(\mathfrak{e}_{n}\big{(}[\mathbf{0}]\big{)}=\pi_{n}\), where \(\pi_{n}\) is a Poppe polynomial and \(\mathfrak{e}_{n}\big{(}[\mathbf{0}]\big{)}\) equals \([\mathbf{0}n\mathbf{0}]\) if \(n\) is odd, and equals \([\mathbf{0}n\mathbf{0}^{\dagger}]\) if \(n\) is even. For \(n=0,1,2,3,4\) such polynomials \(\pi_{n}\) exist, as demonstrated in Examples 6--10. Then we have,_
\[\mathfrak{e}_{n+1}\big{(}[\mathbf{0}]\big{)}=\begin{cases}(\mathfrak{d}-A)\pi _{n},&\text{when $n$ is even,}\\ (\mathfrak{d}-S)\pi_{n},&\text{when $n$ is odd.}\end{cases}\]
Proof: This follows directly from Lemma 10 for \(n\in\mathbb{Z}\) by induction.
Further, we have the following additional immediate corollary.
Corollary 2 (Non-commutative Lax hierarchy): _For any \(n\in\mathbb{Z}\), the \((n+1)\)th order member equation of the non-commutative Lax hierarchy is given by,_
\[\mathfrak{e}_{n+1}\big{(}[\mathbf{0}]\big{)}=\begin{cases}\left(\mathfrak{d} -A\right)\big{(}(\mathfrak{d}-S)(\mathfrak{d}-A)\big{)}^{\frac{n}{2}}\left[ \mathbf{0}\right],&\text{when $n$ is even,}\\ \left((\mathfrak{d}-S)(\mathfrak{d}-A)\right)^{\frac{1}{2}(n+1)}\left[ \mathbf{0}\right],&\text{when $n$ is odd.}\end{cases}\]
The Lax hierarchy stated in Corollary 2, at each odd order \(n\), exactly matches that quoted for the non-commutative modified Korteweg-de Vries hierarchy in Carillo and Schiebold [20, eq. (9)]. Given the main existence and uniqueness result we prove in Section 6, this is expected. The cases \(n=0,1,2,3,4\) in Corollary 2 naturally match the non-commutative equation members given in Examples 6--10. However, Corollary 2 also applies for negative \(n\). Consider the example case of order '\(-1\)'.
Example 11 (Non-commutative sine-Gordon cubic-form equation: order '\(-1\)') Setting \(n=-2\), a case when \(n\) is even, in Corollary 2, generates the equation,
\[\mathfrak{e}_{-1}\big{(}[\mathbf{0}]\big{)}=(\mathfrak{d}-A)\big{(}( \mathfrak{d}-S)(\mathfrak{d}-A)\big{)}^{-1}\left[\mathbf{0}\right]\quad \Leftrightarrow\quad(\mathfrak{d}-S)\,\mathfrak{e}_{-1}\big{(}[\mathbf{0}] \big{)}=[\mathbf{0}].\]
We can express the equation on the right as follows,
\[\mathfrak{d}\mathfrak{e}_{-1}\big{(}[\mathbf{0}]\big{)}=[\mathbf{0}]+\tfrac{1} {4}\Big{(}\big{(}\mathfrak{d}^{-1}\mathfrak{e}_{-1}([\mathbf{0}]^{2})\big{)} \left[\mathbf{0}\right]+[\mathbf{0}]\,\big{(}\mathfrak{d}^{-1}\mathfrak{e}_{-1 }([\mathbf{0}]^{2})\big{)}\Big{)},\]
where we have used that \(\operatorname{sd}_{[\mathbf{0}]}\mathfrak{e}_{-1}([\mathbf{0}])=\mathfrak{e}_{-1}([ \mathbf{0}]^{2})\) from Lemma 11 just below. This relation in \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) translates to the non-commutative sine-Gordon cubic-form equation given in Example 2 for \(g=[\![G]\!]\), with \(\mathcal{M}_{-1}=\mu_{-1}\operatorname{id}\), i.e.
\[\partial\partial_{t}g-\tfrac{1}{4}\big{(}(\partial^{-1}\partial_{t}g^{2})\,g+g \,(\partial^{-1}\partial_{t}g^{2})\big{)}=\mu_{-1}g.\]
Invoking the factor '2' rescaling mentioned in Remark 13 gives the exact match.
Lemma 11: _For any \(n\in\mathbb{Z}\) we have,_
\[\mathfrak{e}_{n}\big{(}[\mathbf{0}]^{2}\big{)}=\begin{cases} \operatorname{sd}_{[\mathbf{0}]}\mathfrak{e}_{n}\big{(}[\mathbf{0}]\big{)},& \text{when $n$ is odd},\\ \operatorname{ad}_{[\mathbf{0}]}\mathfrak{e}_{n}\big{(}[\mathbf{0}]\big{)},& \text{when $n$ is even}.\end{cases}\]
Proof: This result is established at the operator level. Recall \(\mathcal{M}_{n}\coloneqq-\mu_{n}(\operatorname{i}\!\mathcal{I})^{n-1}\) and the properties outlined in Lemma 9. We observe when \(n\) is even, we have,
\[\partial_{t}[V]^{2} =\mathcal{M}_{n}\big{[}V(\operatorname{i}\!P)_{n}V^{\dagger} \big{]}\,[V]+[V]\,\mathcal{M}_{n}\big{[}V(\operatorname{i}\!P)_{n}V^{\dagger} \big{]}\] \[=\mathcal{M}_{n}\big{[}V(\operatorname{i}\!P)_{n}V^{\dagger} \big{]}\,[V]-\mathcal{M}_{n}[V]\,\big{[}V(\operatorname{i}\!P)_{n}V^{\dagger }\big{]},\]
where we have used that \([V]\,\mathcal{M}_{n}=[\![V-V^{\dagger}]\!]\,\mathcal{M}_{n}=\mathcal{M}_{n}\,[ \![V^{\dagger}-V]=-\mathcal{M}_{n}[V]\). When \(n\) is odd, we follow an analogous computation with \([V(\operatorname{i}\!P)_{n}V]\) replacing \([V(\operatorname{i}\!P)_{n}V^{\dagger}]\) just above, and that in this case there is no sign change in the second term on the right as \([V]\,\mathcal{M}_{n}=[\![V-V^{\dagger}]\!]\,\mathcal{M}_{n}=\mathcal{M}_{n}\,[ \![V-V^{\dagger}]\!]=\mathcal{M}_{n}[V]\).
Naturally we can continue to consider further negative order hierarchy member equations. For example, the non-commutative order '\(-2\)' equation can be generated by setting \(n=-3\), a case when \(n\) is odd, in Corollary 2. With \(\mathfrak{e}_{-2}\big{(}[\mathbf{0}]\big{)}=[\mathbf{0}(-2)\mathbf{0}^{ \dagger}]\), this generates the non-commutative equation,
\[\mathfrak{e}_{-2}\big([\mathbf{0}]\big)=\big((\mathfrak{d}-S)(\mathfrak{d}-A)\big)^{-1}\,[\mathbf{0}]\quad\Leftrightarrow\quad(\mathfrak{d}-S)(\mathfrak{d}-A)\,\mathfrak{e}_{-2}\big([\mathbf{0}]\big)=[\mathbf{0}].\]
And so forth. The example integrable equations in Examples 6-10 in Section 4 were of the form \(\partial_{t}g=\pi(g,\partial g,\partial^{2}g,\ldots)\) for \(n=0,1,2,3,4\). Indeed for these examples, we show the equation is unique in this class. In other words, at each of the orders considered, given the base dispersion equation for \(P\) and the form of the Marchenko equation, the right-hand side in the non-commutative nonlinear partial differential equation, the 'nonlinear field', is of the form \(\pi=\pi(g,\partial g,\partial^{2}g,\ldots)\), where \(\pi\) is a polynomial in its arguments. In Section 6, we establish this fact to all orders \(n\geqslant 0\).
## 6 Hierarchy uniqueness
Herein we prove that at each order \(n\geqslant 0\), the Poppe polynomial signature expansion \(\pi_{n}\) such that \(\mathfrak{e}_{n}([\mathbf{0}])=\pi_{n}\), exists, and is unique. This is our main result. Before presenting this result in the general case, we present one further example, the \(n=5\) case. This case acts as a useful reference for our general argument.
Example 12: _(Fifth order quintic modified Korteweg-de Vries equation: \(n=5\))_ In this case \(\mathfrak{e}_{5}\big{(}[\mathbf{0}]\big{)}=[\mathbf{0}5\mathbf{0}]\) and the Poppe polynomial \(\pi_{5}\), in general, has the form,
\[\pi_{5}\coloneqq c_{5}\cdot[\mathbf{5}]+c_{300}\cdot[\mathbf{3}]\,[\mathbf{0}]^{2}+c_{030}\cdot[\mathbf{0}]\,[\mathbf{3}]\,[\mathbf{0}]+c_{003}\cdot[\mathbf{0}]^{2}\,[\mathbf{3}]+c_{210}\cdot[\mathbf{2}]\,[\mathbf{1}]\,[\mathbf{0}]\]
\[\quad+c_{201}\cdot[\mathbf{2}]\,[\mathbf{0}]\,[\mathbf{1}]+c_{120}\cdot[\mathbf{1}]\,[\mathbf{2}]\,[\mathbf{0}]+c_{102}\cdot[\mathbf{1}]\,[\mathbf{0}]\,[\mathbf{2}]+c_{021}\cdot[\mathbf{0}]\,[\mathbf{2}]\,[\mathbf{1}]\]
\[\quad+c_{012}\cdot[\mathbf{0}]\,[\mathbf{1}]\,[\mathbf{2}]+c_{111}\cdot[\mathbf{1}]\,[\mathbf{1}]\,[\mathbf{1}]+c_{10000}\cdot[\mathbf{1}]\,[\mathbf{0}]^{4}+c_{01000}\cdot[\mathbf{0}]\,[\mathbf{1}]\,[\mathbf{0}]^{3}\]
\[\quad+c_{00100}\cdot[\mathbf{0}]^{2}\,[\mathbf{1}]\,[\mathbf{0}]^{2}+c_{00010}\cdot[\mathbf{0}]^{3}\,[\mathbf{1}]\,[\mathbf{0}]+c_{00001}\cdot[\mathbf{0}]^{4}\,[\mathbf{1}].\]
The signature expansion for \(\left[\mathbf{5}\right]\) has the form,
\[[\mathbf{5}]= \chi(5)\cdot[\mathbf{0}5\mathbf{0}]\]
\[\quad+\chi(41)\cdot[\mathbf{0}4\mathbf{0}1\mathbf{0}]+\chi(32)\cdot[\mathbf{0}3\mathbf{0}2\mathbf{0}]+\chi(23)\cdot[\mathbf{0}2\mathbf{0}3\mathbf{0}]+\chi(14)\cdot[\mathbf{0}1\mathbf{0}4\mathbf{0}]\]
\[\quad+\chi(311)\cdot[\mathbf{0}3\mathbf{0}1\mathbf{0}1\mathbf{0}]+\chi(221)\cdot[\mathbf{0}2\mathbf{0}2\mathbf{0}1\mathbf{0}]+\chi(212)\cdot[\mathbf{0}2\mathbf{0}1\mathbf{0}2\mathbf{0}]\]
\[\quad+\chi(131)\cdot[\mathbf{0}1\mathbf{0}3\mathbf{0}1\mathbf{0}]+\chi(122)\cdot[\mathbf{0}1\mathbf{0}2\mathbf{0}2\mathbf{0}]+\chi(113)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}3\mathbf{0}]\]
\[\quad+\chi(2111)\cdot[\mathbf{0}2\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}]+\chi(1211)\cdot[\mathbf{0}1\mathbf{0}2\mathbf{0}1\mathbf{0}1\mathbf{0}]+\chi(1121)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}2\mathbf{0}1\mathbf{0}]\]
\[\quad+\chi(1112)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}2\mathbf{0}]+\chi(11111)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}].\]
Using the skew and symmetric Poppe products in Lemma 8 we find, for example, that,
\[\begin{split}
[\mathbf{1}]\,[\mathbf{2}]\,[\mathbf{0}]&=\big(\chi(1)\cdot[\mathbf{0}1\mathbf{0}]\big)\big(\chi(2)\cdot[\mathbf{0}2\mathbf{0}]+\chi(11)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}]\big)\big(\chi(0)\cdot[\mathbf{0}]\big)\\
&=\big(\chi(1)\cdot[\mathbf{0}1\mathbf{0}]\big)\big(\chi(2\otimes 0)\cdot\{\mathbf{0}3[\mathbf{0}]\}+\chi(2\hat{\otimes}0)\cdot\{\mathbf{0}2\mathbf{0}1\mathbf{0}\}\\
&\qquad\qquad+\chi(11\otimes 0)\cdot\{\mathbf{0}1\mathbf{0}2[\mathbf{0}]\}+\chi(11\hat{\otimes}0)\cdot\{\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}\}\big)\\
&=\chi(1\otimes 2\otimes 0)\cdot\big([\mathbf{0}2\{\mathbf{0}3[\mathbf{0}]\}]+[\mathbf{0}1\mathbf{0}\{4[\mathbf{0}]\}]\big)+\chi(1\hat{\otimes}2\otimes 0)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}3\mathbf{0}]\\
&\quad+\chi(1\otimes 2\hat{\otimes}0)\cdot\big([\mathbf{0}2\{\mathbf{0}2\mathbf{0}1\mathbf{0}\}]+[\mathbf{0}1\mathbf{0}\{3\mathbf{0}1\mathbf{0}\}]\big)+\chi(1\hat{\otimes}2\hat{\otimes}0)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}2\mathbf{0}1\mathbf{0}]\\
&\quad+\chi(1\otimes 11\hat{\otimes}0)\cdot\big([\mathbf{0}2\{\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}\}]+[\mathbf{0}1\mathbf{0}\{2\mathbf{0}1\mathbf{0}1\mathbf{0}\}]\big)\\
&\quad+\chi(1\hat{\otimes}11\hat{\otimes}0)\cdot[\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}1\mathbf{0}].
\end{split}\]
The other products shown in \(\pi_{5}\) can be similarly expanded. In Tables 3 and 4 we list the essential basis elements and corresponding signature coefficients generated by all the Pöppe products present in \(\pi_{5}\). The values of the coefficients are the \(\chi\)-images of the tensored terms shown. Each row generates a linear algebraic equation for the expansion coefficients \(c_{5}\), \(c_{300}\), \(c_{210}\), \(\ldots\), \(c_{00001}\). The ordering of rows and columns in Tables 3 and 4 is self-evident. We discuss this ordering in detail presently. Using all the rows shown, we generate an over-determined linear system of algebraic equations, \(AC=B\), where \(B\) is the column vector shown in the right-hand column in Table 4 and \(C\) is the vector of coefficients \(c_{5}\), \(c_{030}\), \(c_{300}\), \(c_{021}\) and so forth in the order shown. The matrix \(A\) is populated with the \(\chi\)-images of the signature coefficients shown in the tables. We can in fact solve \(AC=B\) for \(C\) systematically, block by block, as follows. The first two rows corresponding to \([\mathbf{0}5\mathbf{0}]\) and \([\mathbf{0}5\mathbf{0}^{\dagger}]\) generate the pair of equations, \(c_{5}+2c_{030}=1\) and \(c_{030}=0\). This system of equations corresponds to the smaller augmented subsystem \([A_{0}\,B_{0}^{\prime}]\) where \(A_{0}\) is the same matrix as in Example 10 for the order \(n=4\) case, and \(B_{0}^{\prime}=(1,0)^{\mathrm{T}}\). We deduce \(c_{5}=1\) and \(c_{030}=0\). The next two rows corresponding to \([\mathbf{0}4\mathbf{0}1\mathbf{0}]\) and \([\mathbf{0}4\mathbf{0}^{\dagger}1\mathbf{0}^{\dagger}]\) generate the pair of equations
\(5c_{5}+2c_{030}+2c_{300}+2c_{021}=0\) and \(2c_{030}-2c_{300}+2c_{021}=0\) for \(c_{300}\) and \(c_{021}\). This pair corresponds to the smaller augmented subsystem \([A_{1}\,B_{1}^{\prime}]\) where \(A_{1}\) is the same matrix as in Example 10, and \(B_{1}^{\prime}=(-5,0)^{\mathrm{T}}\). Hence we deduce \(c_{021}=-5/4\) and \(c_{300}=-5/4\). Then the equations corresponding to the rows \([\mathbf{0}3\mathbf{0}2\mathbf{0}]\), \([\mathbf{0}3\mathbf{0}2\mathbf{0}^{\dagger}]\), \([\mathbf{0}3\mathbf{0}^{\dagger}2\mathbf{0}]\) and \([\mathbf{0}3\mathbf{0}^{\dagger}2\mathbf{0}^{\dagger}]\) generate the following smaller augmented matrix subsystem \([A_{2}\,B_{2}^{\prime}]\) for \(c_{210}\), \(c_{201}\), \(c_{012}\) and \(c_{01000}\), where \(A_{2}\) is the same as the corresponding matrix in Example 10 and \(B_{2}^{\prime}=(-25/4,-5/4,5/4,5/4)^{\mathrm{T}}\). This system of linear equations is easily solved to reveal \(c_{210}=-5/4\), \(c_{201}=-5/2\), \(c_{012}=-5/4\) and \(c_{01000}=0\). As we shall see, it is no coincidence that the coefficient matrices \(A_{0}\), \(A_{1}\) and \(A_{2}\) match those for the analogous blocks of basis elements in Example 10. The next set of rows corresponding to the basis elements \([\mathbf{0}2\mathbf{0}3\mathbf{0}]\), \([\mathbf{0}2\mathbf{0}3\mathbf{0}^{\dagger}]\), \([\mathbf{0}2\mathbf{0}^{\dagger}3\mathbf{0}]\) and \([\mathbf{0}2\mathbf{0}^{\dagger}3\mathbf{0}^{\dagger}]\) generates exactly the same augmented matrix subsystem \([A_{2}\,B_{2}^{\prime}]\) as that discussed just above, but now for the unknown coefficients \(c_{120}\), \(c_{102}\), \(c_{003}\) and \(c_{00010}\). This linear system reveals \(c_{120}=-5/4\), \(c_{102}=-5/2\), \(c_{003}=-5/4\) and \(c_{00010}=0\). We do not deduce any new information from the equations corresponding to the rows \([\mathbf{0}1\mathbf{0}4\mathbf{0}]\) and \([\mathbf{0}1\mathbf{0}4\mathbf{0}^{\dagger}]\), other than that they are consistent. We consider the next block of four rows shown in Tables 3 and 4 corresponding to the rows \([\mathbf{0}2\mathbf{0}2\mathbf{0}1\mathbf{0}]\), \([\mathbf{0}2\mathbf{0}2\mathbf{0}^{\dagger}1\mathbf{0}^{\dagger}]\), \([\mathbf{0}2\mathbf{0}^{\dagger}2\mathbf{0}1\mathbf{0}]\) and \([\mathbf{0}2\mathbf{0}^{\dagger}2\mathbf{0}^{\dagger}1\mathbf{0}^{\dagger}]\). These basis elements generate the following augmented matrix subsystem \([A_{3}^{\prime}\,B_{3}]\) for \(c_{111}\), \(c_{10000}\), \(c_{00100}\) and \(c_{00001}\), where,
\[A_{3}^{\prime}=\begin{pmatrix}1&4&4&4\\ 1&-4&-4&4\\ 1&-4&4&-4\\ 1&4&-4&-4\end{pmatrix}\qquad\text{and}\qquad B_{3}=\begin{pmatrix}5\\ -5\\ -5\\ -5\end{pmatrix}.\]
This system of linear equations is easily solved to reveal \(c_{111}=-5/2\), \(c_{10000}=5/8\), \(c_{00100}=5/8\) and \(c_{00001}=5/8\). We have determined the unique set of coefficients for which \(\mathfrak{e}_{5}\big{(}[\mathbf{0}]\big{)}=\pi_{5}\). In principle we can check that the equations generated by the rows corresponding to the remaining basis elements are consistent. However, we explain in our proof of our main result (Step 8) why this is not necessary. Hence the fifth order non-commutative modified Korteweg-de Vries equation for \(g=[\![G]\!]\), with \(\mathcal{M}_{5}=\mu_{5}\operatorname{id}\), is given by (this matches the form given in Nijhoff _et al._[79, eq. B.5a]),
\[\mathcal{M}_{5}^{-1}\partial_{t}g =\partial^{5}g-\frac{5}{4}\Big{(}\big{(}\partial^{3}g\big{)}g^{2 }+g^{2}\big{(}\partial^{3}g\big{)}+\big{(}\partial^{2}g\big{)}\big{(}\partial g \big{)}g\] \[\quad+2\,\big{(}\partial^{2}g\big{)}g\big{(}\partial g\big{)}+ \big{(}\partial g\big{)}\big{(}\partial^{2}g\big{)}g+2\,\big{(}\partial g \big{)}g\big{(}\partial^{2}g\big{)}\] \[\quad+g\big{(}\partial^{2}g\big{)}\big{(}\partial g\big{)}+g \big{(}\partial g\big{)}\big{(}\partial^{2}g\big{)}+2\,\big{(}\partial g \big{)}\big{(}\partial g\big{)}\big{(}\partial g\big{)}\Big{)}\] \[\quad+\frac{5}{8}\Big{(}\big{(}\partial g\big{)}g^{4}+g^{2}\big{(} \partial g\big{)}g^{2}+g^{4}\big{(}\partial g\big{)}\Big{)}.\]
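As a quick numerical cross-check of the block-by-block solve just described, the final subsystem \([A_{3}^{\prime}\,B_{3}]\) can be handed to a linear solver. The following is a minimal sketch (assuming Python with numpy available; the variable names are ours, chosen for illustration).

```python
import numpy as np

# The block subsystem [A3' B3] displayed above, for the unknown
# coefficients c111, c10000, c00100 and c00001 (in that order).
A3 = np.array([[1,  4,  4,  4],
               [1, -4, -4,  4],
               [1, -4,  4, -4],
               [1,  4, -4, -4]], dtype=float)
B3 = np.array([5.0, -5.0, -5.0, -5.0])

C = np.linalg.solve(A3, B3)
print(C)  # [-2.5  0.625  0.625  0.625], i.e. -5/2, 5/8, 5/8, 5/8
```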
\begin{table}
Table 3: Non-zero signature coefficients appearing in the expansion of the Pöppe polynomial \(\pi_{5}\) in Example 12. Not all the coefficients are shown; the remaining columns are shown in Table 4. The coefficients are the \(\chi\)-images of the signature entries shown. Each column records the factor contributions to the real coefficients of the basis elements listed in the left-most column, for each of the monomials in \(\pi_{5}\) listed across the top row.
\end{table}
\begin{table}
Table 4: The remaining columns of non-zero signature coefficients appearing in the expansion of the Pöppe polynomial \(\pi_{5}\) in Example 12, together with, in the right-hand column, the vector \(B\) of the linear system \(AC=B\).
\end{table}
We now consider the general order \(n\geqslant 0\) case. Our goal is to establish the following.
Theorem 3.1 (Main result: existence and uniqueness): _For every \(n\in\mathbb{N}\cup\{0\}\), there exists a unique Pöppe polynomial \(\pi_{n}=\pi_{n}([\mathbf{0}],[\mathbf{1}],\ldots,[\boldsymbol{n}])\) in \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) such that \(\mathfrak{e}_{n}([\mathbf{0}])=\pi_{n}\)._
We prove this result through a sequence of steps. It requires some preparation: the rest of this section outlines the notation, strategy, ideas and intermediate results we require to carry through the proof in Steps 1--7, before the overall proof is given in Step 8. We have the following immediate corollary of Theorem 3.1.
Corollary 3 (The non-commutative Lax hierarchy is unique): _For any integer \(n\geqslant 0\), the Lax hierarchy generated by the iteration indicated in Corollary 1 is unique. The non-commutative nonlinear equation at order \(n\) in the Lax hierarchy is simply that represented by \(\mathfrak{e}_{n}([\mathbf{0}])=\pi_{n}\) in Theorem 3.1._
Proof: When \(n\) is even, we know from Corollary 1 that \(\mathfrak{e}_{n+1}([\mathbf{0}])=(\partial-A)\pi_{n}\). Now from Theorem 3.1 we know \(\mathfrak{e}_{n+1}([\mathbf{0}])=\pi_{n+1}\) with \(\pi_{n+1}=\pi_{n+1}([\mathbf{0}],[\mathbf{1}],\ldots,[\boldsymbol{n}+1])\) a unique polynomial in its arguments. We thus deduce the result of the corollary when \(n\) is even. An exactly analogous argument follows for the case when \(n\) is odd.
Let us now return to the proof of Theorem 3.1. Roughly, in the proof of Theorem 3.1, we construct a 'table' for the coefficients of \(\pi_{n}\), much like we did for the cases \(n=3,4,5\) in Tables 1--4. Indeed, Tables 3 and 4 act as a useful reference. We proceed systematically, considering basis elements \([w\times\boldsymbol{\varphi}]\) in descending order with respect to the composition \(\mathcal{C}\)-component \(w\) and using a natural binary order for the \(\mathbb{Z}_{2}^{*}\)-component \(\boldsymbol{\varphi}\). Both these orders are implicit in Tables 1--4.
_Step 1: Descent and binary order._ We introduce an order, the descent order, on the set of basis elements. We also define a new representation for the basis elements, the composition-binary representation, that we use hereafter. The descent order for compositions is given in Malham [69]; we present it here for completeness.
Definition 15 (Descent ordering of compositions): A composition \(u\in\mathcal{C}\) precedes another composition \(v\in\mathcal{C}\) if the length of the composition \(u\), i.e. the number of digits it contains, is strictly less than the length of \(v\). If \(u\) and \(v\) have the same length, say \(k\), so \(u=u_{1}u_{2}\cdots u_{k}\) and \(v=v_{1}v_{2}\cdots v_{k}\), then \(u\) precedes \(v\) if for some \(\ell\in\{1,2,\ldots,k\}\) we have \(u_{1}=v_{1}\), \(u_{2}=v_{2}\),..., \(u_{\ell-1}=v_{\ell-1}\) and \(u_{\ell}<v_{\ell}\). Otherwise, \(v\) precedes \(u\). The resulting ordering induced on \(\mathcal{C}\) is the _descent ordering_.
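For concreteness, Definition 15 transcribes directly into a comparator; the following is a minimal sketch (Python; encoding compositions as tuples of positive integers is our illustrative choice, not taken from the text).

```python
def precedes(u, v):
    # Descent ordering of compositions (Definition 15): a shorter composition
    # precedes a longer one; equal-length compositions are compared at the
    # first position where they differ.
    if len(u) != len(v):
        return len(u) < len(v)
    for ui, vi in zip(u, v):
        if ui != vi:
            return ui < vi
    return False  # equal compositions: neither strictly precedes the other

print(precedes((5,), (4, 1)))          # True: shorter compositions come first
print(precedes((2, 1, 2), (2, 2, 1)))  # True: first difference 1 < 2
```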
The \(\mathbb{Z}_{2}^{*}\)-component \(\boldsymbol{\varphi}\) in the basis element \([w\times\boldsymbol{\varphi}]\) is any \((|w|+1)\)-tuple constructed from \(\{\mathbf{0},\mathbf{0}^{\dagger}\}\cong\mathbb{Z}_{2}\). Recall from Remark 14, we can always arrange for the first component of \(\boldsymbol{\varphi}\) to be '\(\mathbf{0}\)'; see Tables 1--4. Thus for any given composition \(w\in\mathcal{C}\), the component \(\boldsymbol{\varphi}\in\mathbb{Z}_{2}^{*}\) in a basis element \([w\times\boldsymbol{\varphi}]\) is one of the following forms,
\[\begin{split}
&\boldsymbol{\phi}_{1}\coloneqq\mathbf{0}\mathbf{0}\cdots\mathbf{0}\mathbf{0}\mathbf{0},\ \boldsymbol{\phi}_{2}\coloneqq\mathbf{0}\mathbf{0}\cdots\mathbf{0}\mathbf{0}\mathbf{0}^{\dagger},\ \boldsymbol{\phi}_{3}\coloneqq\mathbf{0}\mathbf{0}\cdots\mathbf{0}\mathbf{0}^{\dagger}\mathbf{0},\ \boldsymbol{\phi}_{4}\coloneqq\mathbf{0}\mathbf{0}\cdots\mathbf{0}\mathbf{0}^{\dagger}\mathbf{0}^{\dagger},\\
&\boldsymbol{\phi}_{5}\coloneqq\mathbf{0}\mathbf{0}\cdots\mathbf{0}^{\dagger}\mathbf{0}\mathbf{0},\ \boldsymbol{\phi}_{6}\coloneqq\mathbf{0}\mathbf{0}\cdots\mathbf{0}^{\dagger}\mathbf{0}\mathbf{0}^{\dagger},\ \boldsymbol{\phi}_{7}\coloneqq\mathbf{0}\mathbf{0}\cdots\mathbf{0}^{\dagger}\mathbf{0}^{\dagger}\mathbf{0},
\end{split}\]
and so forth, all the way up to \(\boldsymbol{\phi}_{2^{|w|}}\coloneqq\mathbf{0}\mathbf{0}^{\dagger}\cdots\mathbf{0 }^{\dagger}\mathbf{0}^{\dagger}\mathbf{0}^{\dagger}\mathbf{0}^{\dagger}\). This is the _natural binary ordering_ of \(\mathbb{Z}_{2}^{*}\) we refer to just above. In this and the next sections, we mainly use a modified encoding of the basis elements \([w\times\boldsymbol{\varphi}]\), as follows. We replace the free monoid
\(\mathbb{Z}_{2}^{*}\) of all forms \(\boldsymbol{\varphi}\) that can be constructed from \(\{\mathbf{0},\mathbf{0}^{\dagger}\}\cong\mathbb{Z}_{2}\) by the vector space \(\mathbb{R}\langle\mathbb{B}\rangle\) representing the span over all the elements \(\mathbb{B}\coloneqq\{\boldsymbol{\phi}_{i}\}_{i\geqslant 1}\). We thus express any \(\boldsymbol{\varphi}\in\mathbb{Z}_{2}^{*}\) in the form \(\boldsymbol{\varphi}=\beta_{1}\boldsymbol{\phi}_{1}+\beta_{2}\boldsymbol{ \phi}_{2}+\beta_{3}\boldsymbol{\phi}_{3}+\cdots\), where the \(\beta_{i}\) for integer \(i\geqslant 1\), represent the coefficients of the basis element components \(\boldsymbol{\phi}_{i}\). Henceforth we represent any element \(\boldsymbol{\varphi}\in\mathbb{Z}_{2}^{*}\) by a \(2^{|w|}\)-tuple \(\boldsymbol{\beta}\coloneqq(\beta_{1},\beta_{2},\ldots,\beta_{2^{|w|}})\in \mathbb{R}\langle\mathbb{B}\rangle\). Thus we replace,
\[[w\times\boldsymbol{\varphi}]\leadsto[w]\times\boldsymbol{\beta}.\]
Example 13: Some examples matching the old notation with the new are as follows: \([\mathbf{0}a\mathbf{0}]=[a]\times(1,0)\), \([\mathbf{0}a\mathbf{0}^{\dagger}]=[a]\times(0,1)\), and then also,
\[\begin{split}
[\mathbf{0}a_{1}\mathbf{0}a_{2}\mathbf{0}] &= [a_{1}a_{2}]\times(1,0,0,0),\\
[\mathbf{0}a_{1}\mathbf{0}a_{2}\mathbf{0}^{\dagger}] &= [a_{1}a_{2}]\times(0,1,0,0),\\
[\mathbf{0}a_{1}\mathbf{0}^{\dagger}a_{2}\mathbf{0}] &= [a_{1}a_{2}]\times(0,0,1,0),\\
[\mathbf{0}a_{1}\mathbf{0}^{\dagger}a_{2}\mathbf{0}^{\dagger}] &= [a_{1}a_{2}]\times(0,0,0,1),\\
[\mathbf{0}a_{1}\mathbf{0}a_{2}\mathbf{0}a_{3}\mathbf{0}] &= [a_{1}a_{2}a_{3}]\times(1,0,0,0,0,0,0,0),\\
[\mathbf{0}a_{1}\mathbf{0}a_{2}\mathbf{0}a_{3}\mathbf{0}^{\dagger}] &= [a_{1}a_{2}a_{3}]\times(0,1,0,0,0,0,0,0),\\
&\ \ \vdots\\
[\mathbf{0}a_{1}\mathbf{0}^{\dagger}a_{2}\mathbf{0}^{\dagger}a_{3}\mathbf{0}^{\dagger}] &= [a_{1}a_{2}a_{3}]\times(0,0,0,0,0,0,0,1),
\end{split}\]
and so forth. Naturally, as constructed, for linear combinations, we have for example that for any real scalar constants \(\beta_{1}\) and \(\beta_{2}\),
\[\beta_{1}\cdot[\mathbf{0}a_{1}\mathbf{0}a_{2}\mathbf{0}]+\beta_{2}\cdot[ \mathbf{0}a_{1}\mathbf{0}^{\dagger}a_{2}\mathbf{0}]=[a_{1}a_{2}]\times(\beta _{1},0,\beta_{2},0),\]
and so forth. There is a natural basis for elements of \(\mathbb{R}\langle\mathbb{B}\rangle\) of a given length \(2^{n}\). Such a basis is given by the elements of length \(2^{n}\) of the form \(\boldsymbol{\beta}_{i}\coloneqq(0,\ldots,0,1,0,\ldots)\), where the '1' is in the \(i\)th position.
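The position of a given dagger pattern in this natural binary ordering is just binary counting on the free slots; here is a minimal sketch (Python; encoding \(\mathbf{0}\) as 0 and \(\mathbf{0}^{\dagger}\) as 1 is our illustrative convention, not taken from the text).

```python
from itertools import product

def patterns(n):
    # All 2**n dagger patterns for an n-part composition, in the natural
    # binary ordering; the leading slot is always the plain bold-0.
    return [(0,) + bits for bits in product((0, 1), repeat=n)]

def beta_index(pattern):
    # 1-based position of a pattern among phi_1, ..., phi_{2**n}:
    # read the free bits (all but the fixed leading 0) as a binary number.
    bits = pattern[1:]
    return 1 + int("".join(map(str, bits)), 2)

for p in patterns(2):
    print(p, "-> phi_%d" % beta_index(p))
# (0, 0, 0) -> phi_1   (0, 0, 1) -> phi_2
# (0, 1, 0) -> phi_3   (0, 1, 1) -> phi_4
```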
Definition 16 (Composition-binary representation): We call the representation \([w]\times\boldsymbol{\beta}\) where \(w\in\mathcal{C}\) and \(\boldsymbol{\beta}\in\mathbb{R}\langle\mathbb{B}\rangle\) the _composition-binary representation_ of the basis elements. We call \([w]\) the composition component and \(\boldsymbol{\beta}\) the \(\mathbb{R}\langle\mathbb{B}\rangle\)-component.
_Step 2: Triple product action._ All the Pöppe polynomials \(\pi_{n}\) are polynomials in the skew-Pöppe algebra \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) and thus necessarily of odd degree. As such all the monomials therein can be constructed from triple products of signature expansions \([\boldsymbol{n}]\), as we have seen in Examples 8--12. In particular, we characterise the following triple product action, which is established straightforwardly using the Pöppe product rules in Lemma 8.
Lemma 12: _Consider the two signature expansions \([\boldsymbol{a}]\) and \([\boldsymbol{b}]\) and a basis element \([cw\times\boldsymbol{\varphi}]\in\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\), with \(c\in\mathbb{Z}\) the first letter in the composition '\(cw\)'. Let \(\hat{\boldsymbol{\varphi}}\) denote the element of \(\mathbb{Z}_{2}^{*}\) given by \(\boldsymbol{\varphi}\) with its first letter '\(\mathbf{0}\)' removed. Then at leading order the triple product action \([\boldsymbol{a}]\,[\boldsymbol{b}]\,\big{(}[cw\times\boldsymbol{\varphi}] \big{)}\) is given by,_
\[\begin{split}
\chi(a\otimes b\otimes cw)\cdot\Big(&[\mathbf{0}(a+1)\mathbf{0}(b+1)\mathbf{0}c(w\times\hat{\boldsymbol{\varphi}})]+[\mathbf{0}(a+1)\mathbf{0}^{\dagger}(b+1)\mathbf{0}c(w\times\hat{\boldsymbol{\varphi}})]\\
&+[\mathbf{0}(a+1)\mathbf{0}(b+1)\mathbf{0}^{\dagger}c(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]+[\mathbf{0}(a+1)\mathbf{0}^{\dagger}(b+1)\mathbf{0}^{\dagger}c(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]\\
&+[\mathbf{0}(a+1)\mathbf{0}b\mathbf{0}(c+1)(w\times\hat{\boldsymbol{\varphi}})]+[\mathbf{0}(a+1)\mathbf{0}^{\dagger}b\mathbf{0}^{\dagger}(c+1)(w\times\hat{\boldsymbol{\varphi}})]\\
&+[\mathbf{0}(a+1)\mathbf{0}b\mathbf{0}(c+1)(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]+[\mathbf{0}(a+1)\mathbf{0}^{\dagger}b\mathbf{0}^{\dagger}(c+1)(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]\\
&+2\cdot[\mathbf{0}a\mathbf{0}(b+2)\mathbf{0}c(w\times\hat{\boldsymbol{\varphi}})]+2\cdot[\mathbf{0}a\mathbf{0}(b+2)\mathbf{0}^{\dagger}c(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]\\
&+[\mathbf{0}a\mathbf{0}(b+1)\mathbf{0}(c+1)(w\times\hat{\boldsymbol{\varphi}})]+[\mathbf{0}a\mathbf{0}(b+1)\mathbf{0}^{\dagger}(c+1)(w\times\hat{\boldsymbol{\varphi}})]\\
&+[\mathbf{0}a\mathbf{0}(b+1)\mathbf{0}(c+1)(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]+[\mathbf{0}a\mathbf{0}(b+1)\mathbf{0}^{\dagger}(c+1)(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]\Big)+\cdots.
\end{split}\]
_Here by leading order, we mean, we do not retain terms generated by lower (descent) order terms in the signature expansions of \([\boldsymbol{a}]\) and \([\boldsymbol{b}]\), nor do we retain terms generated with a quasi-product term--i.e. generated using any of the final terms with real factor '2' in the Pöppe products in Lemma 8. We use the notation '\(+\cdots\)' to denote these missing terms._
We also define the following auto-tensorial action on \(\mathbb{R}\langle\mathbb{B}\rangle\).
Definition 17 (Tensorial action): Given \(\boldsymbol{\beta}\coloneqq(\beta_{1},\beta_{2},\ldots)\) and \(\boldsymbol{\gamma}\coloneqq(\gamma_{1},\gamma_{2},\ldots)\) in \(\mathbb{R}\langle\mathbb{B}\rangle\) we define the (left) _auto-tensorial action_ of \(\boldsymbol{\beta}\) on \(\boldsymbol{\gamma}\), denoted \(\boldsymbol{\beta}\lhd\boldsymbol{\gamma}\), to be,
\[\boldsymbol{\beta}\lhd\boldsymbol{\gamma}\coloneqq\big{(}\beta_{1}\cdot \boldsymbol{\gamma},\beta_{2}\cdot\boldsymbol{\gamma},\ldots\big{)},\]
where for each \(i=1,2,\ldots\), we note \(\beta_{i}\cdot\boldsymbol{\gamma}=(\beta_{i}\gamma_{1},\beta_{i}\gamma_{2},\ldots)\).
In the new notation, with the tensorial action on \(\mathbb{R}\langle\mathbb{B}\rangle\) just defined, the triple product action on \(\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\) given in Lemma 12 can be expressed more succinctly as follows.
Corollary 4 (Triple product action): _Given two signature expansions \([\boldsymbol{a}]\) and \([\boldsymbol{b}]\) and a generic basis element \([cw]\times\boldsymbol{\beta}\in\mathbb{C}[\mathcal{C}]\times\mathbb{R}\langle \mathbb{B}\rangle\cong\mathbb{C}[\mathbb{Z}_{\mathbf{0}}]\), the triple product action \([\boldsymbol{a}]\,[\boldsymbol{b}]\,\big{(}[cw]\times\boldsymbol{\beta}\big{)}\) on \(\mathbb{C}[\mathcal{C}]\times\mathbb{R}\langle\mathbb{B}\rangle\) is given at leading order by,_
\[[\boldsymbol{a}]\,[\boldsymbol{b}]\,\big{(}[cw]\times\boldsymbol{ \beta}\big{)}\] \[\qquad=\chi(a\otimes b\otimes cw)\cdot\Big{(}[(a+1)(b+1)cw]\times \big{(}(1,0,1,0)\lhd\boldsymbol{\beta}+(-1)^{|w|}\,(0,1,0,1)\lhd\boldsymbol{ \beta}^{\dagger}\big{)}\] \[\qquad\qquad\qquad\qquad\qquad+[(a+1)b(c+1)w]\times\big{(}(1,0,0,1) \lhd(\boldsymbol{\beta}+(-1)^{|w|}\,\boldsymbol{\beta}^{\dagger})\big{)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+[a(b+2)cw]\times\big{(}(2,0,0,0 )\lhd\boldsymbol{\beta}+(-1)^{|w|}\,(0,2,0,0)\lhd\boldsymbol{\beta}^{\dagger} \big{)}\] \[\qquad\qquad\qquad\qquad\qquad\qquad+[a(b+1)(c+1)w]\times\big{(}(1,1, 0,0)\lhd(\boldsymbol{\beta}+(-1)^{|w|}\,\boldsymbol{\beta}^{\dagger})\big{)} \Big{)}+\cdots.\]
_Here, if \(\boldsymbol{\beta}=(\beta_{1},\beta_{2},\beta_{3},\ldots,\beta_{2^{n}})\), then \(\boldsymbol{\beta}^{\dagger}=(\beta_{2^{n}},\beta_{2^{n}-1},\ldots,\beta_{2}, \beta_{1})\)._
Proof: The result of the corollary is just a restatement of the triple product action in Lemma 12. That the adjoint of \(\boldsymbol{\beta}\), denoted \(\boldsymbol{\beta}^{\dagger}\), corresponds to reversing the elements in \(\boldsymbol{\beta}\) is explained as follows. The entries in \(\boldsymbol{\beta}\) correspond to the coefficients of the basis \(\boldsymbol{\phi}_{1}\), \(\boldsymbol{\phi}_{2}\), \(\ldots\), \(\boldsymbol{\phi}_{2^{n}}\), for some \(n\in\mathbb{N}\). The triple product action in Lemma 12 involves the components \(\hat{\boldsymbol{\varphi}}^{\dagger}\) where \(\hat{\boldsymbol{\varphi}}\) corresponds to \(\boldsymbol{\varphi}\) with its first letter '\(\mathbf{0}\)' removed. Let \(\hat{\boldsymbol{\phi}}_{1}\), \(\hat{\boldsymbol{\phi}}_{2}\), \(\ldots\), \(\hat{\boldsymbol{\phi}}_{2^{n}}\) be the same sequence of basis elements, each of which has the first letter '\(\mathbf{0}\)' removed. We observe, \(\big\{\hat{\boldsymbol{\phi}}_{1}^{\dagger},\hat{\boldsymbol{\phi}}_{2}^{\dagger},\ldots,\hat{\boldsymbol{\phi}}_{2^{n}}^{\dagger}\big\}=\big\{\hat{\boldsymbol{\phi}}_{2^{n}},\hat{\boldsymbol{\phi}}_{2^{n}-1},\ldots,\hat{\boldsymbol{\phi}}_{1}\big\}\).
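The set identity closing this proof can be checked mechanically: on the reading that the dagger swaps \(\mathbf{0}\leftrightarrow\mathbf{0}^{\dagger}\) in each free slot, daggering complements the bits of a pattern and hence reverses its position in the binary order. A small sketch (Python, with the same illustrative 0/1 encoding as above):

```python
from itertools import product

def hat_patterns(n):
    # The patterns phi-hat_1, ..., phi-hat_{2**n}: the n free slots in
    # natural binary order, the fixed leading '0' having been removed.
    return list(product((0, 1), repeat=n))

def dagger(pattern):
    # Swap 0 <-> 0-dagger in every slot, i.e. complement each bit.
    return tuple(1 - b for b in pattern)

n = 3
ps = hat_patterns(n)
# Daggering the i-th pattern yields the (2**n + 1 - i)-th pattern:
assert all(dagger(ps[i]) == ps[2**n - 1 - i] for i in range(2**n))
print("daggering reverses the natural binary order for n =", n)
```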
Example 14: Consider computing the triple product \(\left[\boldsymbol{a}\right]\left[\boldsymbol{b}\right]\left[\boldsymbol{c}\right]\) to leading order. In this case, to leading order \(\left[\boldsymbol{c}\right]=\left[\boldsymbol{0}\boldsymbol{c}\boldsymbol{0} \right]+\cdots\) and so in Corollary 4 we have \(w=\nu\), the empty word, with \(\left|w\right|=0\). We observe, \(\left[\boldsymbol{0}\boldsymbol{c}\boldsymbol{0}\right]=\left[c\right]\times \boldsymbol{\gamma}\) with \(\boldsymbol{\gamma}=\left(1,0\right)\in\mathbb{R}\langle\mathbb{B}\rangle\). Hence we have,
\[\left(1,0,1,0\right)\lhd\boldsymbol{\gamma} =\left(1\cdot\left(1,0\right)\text{, }0\cdot\left(1,0\right)\text{, }1\cdot\left(1,0\right)\text{, }0\cdot\left(1,0\right)\right)=\left(1,0,0,0,1,0,0,0\right)\text{,}\] \[\left(0,1,0,1\right)\lhd\boldsymbol{\gamma}^{\dagger} =\left(0\cdot\left(0,1\right)\text{, }1\cdot\left(0,1\right)\text{, }0\cdot\left(0,1\right)\text{, }1\cdot\left(0,1\right)\right)=\left(0,0,0,1,0,0,0,1\right)\text{,}\]
and so forth. Hence we observe from Corollary 4 that at leading order,
\[\begin{split}
[\boldsymbol{a}]\,[\boldsymbol{b}]\,[\boldsymbol{c}] ={}&\chi(a\otimes b\otimes c)\cdot\Big([(a+1)(b+1)c]\times(1,0,0,1,1,0,0,1)\\
&\qquad+[(a+1)b(c+1)]\times(1,1,0,0,0,0,1,1)\\
&\qquad+[a(b+2)c]\times(2,0,0,2,0,0,0,0)\\
&\qquad+[a(b+1)(c+1)]\times(1,1,1,1,0,0,0,0)\Big)+\cdots.
\end{split}\]
We extensively use such computations hereafter.
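The computations in Example 14 are entirely mechanical, so they are easy to script. Here is a minimal sketch (Python; the function names `lhd` and `dag` are ours) of the auto-tensorial action of Definition 17 and the adjoint-as-reversal of Corollary 4, reproducing the tuples above:

```python
def lhd(beta, gamma):
    # Auto-tensorial action of Definition 17: beta <| gamma.
    out = []
    for b in beta:
        out.extend(b * g for g in gamma)
    return tuple(out)

def dag(beta):
    # Adjoint on R<B>-components: reverse the tuple (Corollary 4).
    return tuple(reversed(beta))

gamma = (1, 0)  # the R<B>-component of [0 c 0] in Example 14

print(lhd((1, 0, 1, 0), gamma))       # (1, 0, 0, 0, 1, 0, 0, 0)
print(lhd((0, 1, 0, 1), dag(gamma)))  # (0, 0, 0, 1, 0, 0, 0, 1)

# The component of [(a+1)(b+1)c] in the triple product, with |w| = 0:
vec = tuple(x + y for x, y in zip(lhd((1, 0, 1, 0), gamma),
                                  lhd((0, 1, 0, 1), dag(gamma))))
print(vec)                            # (1, 0, 0, 1, 1, 0, 0, 1)
```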
_Step 3: Generators, a coarse-grain overview._ For a given basis element \(\left[w\right]\times\boldsymbol{\beta}\) and composition \(w\) of \(n\in\mathbb{N}\), it is useful to identify the types of odd-degree monomials of signature expansions that might generate it.
Definition 18: (Monomial generator) Given a basis element \(\left[w\right]\times\boldsymbol{\beta}\) with a composition component \(\left[w\right]\) and an \(\mathbb{R}\langle\mathbb{B}\rangle\)-component \(\boldsymbol{\beta}\), where \(w\) is a composition of \(n\in\mathbb{N}\) and \(\boldsymbol{\beta}\) has length \(2^{n}\), we call any odd-degree monomial of signature expansions of the form \(\left[\boldsymbol{a}_{1}\right]\left[\boldsymbol{a}_{2}\right]\)\(\cdots\)\(\left[\boldsymbol{a}_{2m+1}\right]\) that produces \(\left[w\right]\times\boldsymbol{\beta}\) as one of the terms in its expansion, a _monomial generator_ or just _generator_ of \(\left[w\right]\times\boldsymbol{\beta}\).
At this stage and in this step, it is useful to give a brief, coarse overview of our overall strategy, which we implement in detail in the subsequent steps below. We show in this step how, for any given composition component, we can identify specific collections of generators for the associated basis elements. We call the sets of basis elements and corresponding collections of generators 'coefficient blocks', or simply 'blocks'. We show that such blocks are necessarily square. To start, consider any one-part composition \(w\) of \(n\), so the two corresponding basis elements are \([\mathbf{0}\boldsymbol{n}\mathbf{0}]\leadsto[n]\times(1,0)\) and \([\mathbf{0}\boldsymbol{n}\mathbf{0}^{\dagger}]\leadsto[n]\times(0,1)\). The basis element \([n]\times(1,0)\) is generated by the signature expansion \([\boldsymbol{n}]\), while \([n]\times(0,1)\) is not. On the other hand, both basis elements are generated by \([\mathbf{0}]\,[\boldsymbol{n}-\boldsymbol{2}]\,[\mathbf{0}]\). This exhausts all the possible odd-degree monomials in \(\pi_{n}\) that could generate \([n]\times(1,0)\) and \([n]\times(0,1)\). Thus for a one-part composition, the possible odd-degree generators have the form,
\[\left[\star\right]\text{, }\left[\boldsymbol{0}\right]\left[\star\right] \left(\left[\boldsymbol{0}\right]\right)\text{,}\]
where \(\left[\star\right]\) represents the appropriate generic signature expansion, i.e. in the first instance it is \(\left[\boldsymbol{n}\right]\), while in the second instance, i.e. for \(\left[\mathbf{0}\right]\left[\star\right]\left(\left[\mathbf{0}\right]\right)\), the middle \(\left[\star\right]\) factor is \(\left[\boldsymbol{n}-\boldsymbol{2}\right]\). Note, we allow \(\left[\star\right]=\left[\mathbf{0}\right]\). The only other possibilities are \(\left[\star\right]\left[\mathbf{0}\right]\left[\mathbf{0}\right]\) and \(\left[\mathbf{0}\right]\left[\mathbf{0}\right]\left[\star\right]\). However for \(n\geqslant 3\), we can rule these two possibilities out, as \(\left[\mathbf{0}\right]\left[\mathbf{0}\right]=2\cdot\left\{\mathbf{0}\mathbf{1}\mathbf{0}\right\}\) and any subsequent Pöppe product of this term with \(\left[\star\right]\) would generate a basis element \(\left[w\right]\times\boldsymbol{\beta}\) where the composition \(w\) has two parts. Note, of course, that the triple Pöppe product \(\left[\boldsymbol{a}\right]\left[\boldsymbol{b}\right]\left[\boldsymbol{c}\right]\) is naturally associative.
Now consider any two part composition \(w=a_{1}a_{2}\) of \(n\). We observe that basis elements with such a two-part composition component can in principle be generated
by \(\left[\star\right]\) and \(\left[\mathbf{0}\right]\left[\star\right]\left[\mathbf{0}\right]\), which we have already come across just above. However such basis elements can also be generated by any of the following four generators of the form,
\[\left[\star\right]\left[\star\right]\big(\left[\mathbf{0}\right]\big),\ \left[\star\right]\left[\mathbf{0}\right]\big(\left[\star\right]\big),\ \left[\mathbf{0}\right]\left[\star\right]\big(\left[\star\right]\big),\ \left[\mathbf{0}\right]\left[\star\right]\big(\left[\mathbf{0}\right]\left[\star\right](\left[\mathbf{0}\right])\big).\]
We observe that each possible generator above contains only two '\(\left[\star\right]\)' factors, consistent with the two-part composition component of the basis elements we are aiming to generate. Further, note that the four generators above can be constructed from the previous two generators \(\left[\star\right]\) and \(\left[\mathbf{0}\right]\left[\star\right]\left(\left[\mathbf{0}\right]\right)\) corresponding to one-part compositions, by applying one of the three actions \(\left[\star\right]\left[\star\right]\left(\cdot\right)\), \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) or \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\) to them. For example, the first generator above is constructed by applying the action \(\left[\star\right]\left[\star\right]\left(\cdot\right)\) to \(\left[\star\right]=\left[\mathbf{0}\right]\), where we must set the argument \(\left[\star\right]=\left[\mathbf{0}\right]\) to preserve the two-part composition component of the basis element we wish to generate. The next two generators above are constructed by applying the actions \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) or \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\) to \(\left[\star\right]\). Now consider the final quintic generator above. Applying the action \(\left[\star\right]\left[\star\right]\left(\cdot\right)\) to \(\left[\mathbf{0}\right]\left[\star\right]\left(\left[\mathbf{0}\right]\right)\) would produce a generator with too many '\(\left[\star\right]\)' factors, while in principle, either of the actions \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) or \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\) could be applied to \(\left[\mathbf{0}\right]\left[\star\right]\left(\left[\mathbf{0}\right]\right)\). However, the action of \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) on \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\) is nilpotent; we demonstrate this below in Lemma 16. Hence the action \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) applied to \(\left[\mathbf{0}\right]\left[\star\right]\left(\left[\mathbf{0}\right]\right)\) produces zero. Thus the only viable action is \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\) on \(\left[\mathbf{0}\right]\left[\star\right]\left(\left[\mathbf{0}\right]\right)\), producing the quintic generator shown. From another perspective, for the quintic generator, in the case \(n\geqslant 5\), any other quintic arrangement with two \(\left[\star\right]\)- and three \(\left[\mathbf{0}\right]\)-factors would necessitate a consecutive pair '\(\left[\mathbf{0}\right]\left[\mathbf{0}\right]\)' that would result in generating a basis element whose composition component has more than two parts.
In the case of basis elements with a three-part composition component \(w=a_{1}a_{2}a_{3}\), the possible generators are, in principle, any of the generators we have already seen, as well as, the generators of the form,
\[\begin{split}
&[\star][\star]\big([\star]\big),\ [\star][\star]\big([\mathbf{0}][\star]([\mathbf{0}])\big),\ [\star][\mathbf{0}]\big([\star][\star]([\mathbf{0}])\big),\ [\star][\mathbf{0}]\big([\star][\mathbf{0}]([\star])\big),\\
&[\mathbf{0}][\star]\big([\star][\star]([\mathbf{0}])\big),\ [\mathbf{0}][\star]\big([\star][\mathbf{0}]([\star])\big),\ [\mathbf{0}][\star]\big([\mathbf{0}][\star]([\star])\big),\ [\mathbf{0}][\star]\big([\mathbf{0}][\star]([\mathbf{0}][\star]([\mathbf{0}]))\big).
\end{split}\]
We remark that each possible generator above contains only three '\(\left[\star\right]\)' factors. We see that the first two generators are constructed by applying the action \(\left[\star\right]\left[\star\right]\left(\cdot\right)\) to the generators for basis elements with one-part composition components. The next set of generators are constructed by applying the action \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) to the generators for basis elements with two-part composition components, taking into account the nilpotent action of \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) on \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\). This accounts for the next two generators. Then the final four generators are constructed by applying the action \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\) to all four of the generators for basis elements with two-part composition components.
For the case of basis elements with a four-part composition component \(w=a_{1}a_{2}a_{3}a_{4}\), the possible generators are, in principle, besides any of the generators we have already seen, generators of the form,
\[\begin{split}
&[\star][\star]\big([\star][\star]([\mathbf{0}])\big),\ [\star][\star]\big([\star][\mathbf{0}]([\star])\big),\ [\star][\star]\big([\mathbf{0}][\star]([\star])\big),\ [\star][\star]\big([\mathbf{0}][\star]([\mathbf{0}][\star]([\mathbf{0}]))\big),\\
&[\star][\mathbf{0}]\big([\star][\star]([\star])\big),\ [\star][\mathbf{0}]\big([\star][\star]([\mathbf{0}][\star]([\mathbf{0}]))\big),\ [\star][\mathbf{0}]\big([\star][\mathbf{0}]([\star][\star]([\mathbf{0}]))\big),\ [\star][\mathbf{0}]\big([\star][\mathbf{0}]([\star][\mathbf{0}]([\star]))\big),\\
&[\mathbf{0}][\star]\big([\star][\star]([\star])\big),\ [\mathbf{0}][\star]\big([\star][\star]([\mathbf{0}][\star]([\mathbf{0}]))\big),\ [\mathbf{0}][\star]\big([\star][\mathbf{0}]([\star][\star]([\mathbf{0}]))\big),\ [\mathbf{0}][\star]\big([\star][\mathbf{0}]([\star][\mathbf{0}]([\star]))\big),\\
&[\mathbf{0}][\star]\big([\mathbf{0}][\star]([\star][\star]([\mathbf{0}]))\big),\ [\mathbf{0}][\star]\big([\mathbf{0}][\star]([\star][\mathbf{0}]([\star]))\big),\ [\mathbf{0}][\star]\big([\mathbf{0}][\star]([\mathbf{0}][\star]([\star]))\big),\\
&[\mathbf{0}][\star]\big([\mathbf{0}][\star]([\mathbf{0}][\star]([\mathbf{0}][\star]([\mathbf{0}])))\big).
\end{split}\]
Again, each possible generator above contains only four '\(\left[\star\right]\)' factors. They are constructed by applying the action \(\left[\star\right]\left[\star\right]\left(\cdot\right)\) to the generators for basis elements with two-part composition components, applying the action \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) to the generators for basis elements with three-part composition components, taking into account the nilpotent action of \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) on \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\), and then also applying the action \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\) to all of the generators for basis elements with three-part composition components.
We have seen that, for basis elements with a composition component with \(k=1,2,3\) or \(4\) parts, the number of generators that might produce such a basis element is \(2^{k}\). We have not shown that, corresponding to a given basis element, the generators constructed in the manner indicated are unique at leading order. We demonstrate this below in Steps 7 and 8. Assuming this is the case for the moment, we have the following.
Lemma 13 (Generator block size): _For a given basis element with composition component \(w\), the number of monomial generators that can generate that basis element at leading order is \(2^{\left|w\right|}\)._
Proof: As observed, the result is true for \(\left|w\right|=1,2,3,4\). Assume the result is true for \(\left|w\right|=1,2,\ldots,k\) for some \(k\in\mathbb{N}\). The set of generators for basis elements with composition components of \(k+1\) parts are constructed by: (i) Applying the action \(\left[\star\right]\left[\star\right]\left(\cdot\right)\) to the generators for basis elements with \((k-1)\)-part composition components of which there are \(2^{k-1}\) by assumption; (ii) Applying the action \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) to the generators for basis elements with \(k\)-part composition components, taking into account the nilpotent action of \(\left[\star\right]\left[\mathbf{0}\right]\left(\cdot\right)\) on \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\). Since there are \(2^{k}\) generators corresponding to any basis element with a composition of \(k\)-parts, and half of these start with the factor '\(\left[\mathbf{0}\right]\)\(\left[\star\right]\)', there are \(2^{k-1}\) generators constructed in this way; and then finally (iii) Applying the action \(\left[\mathbf{0}\right]\left[\star\right]\left(\cdot\right)\) to all of the generators for basis elements with \(k\)-part composition components, of which there are \(2^{k}\). Adding these three contributions up, \(2^{k-1}+2^{k-1}+2^{k}=2^{k+1}\), and the result follows by induction.
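The recursion in this proof is easy to mechanise, which also makes the generator lists above reproducible. The following sketch (Python; the string encoding of generators is our illustrative choice) builds the generator sets level by level and confirms the \(2^{k}\) count:

```python
def generators(k):
    # G[j]: the monomial generators for basis elements whose composition
    # component has j parts; '[*]' stands for a generic signature expansion
    # and '[0]' for the expansion of [bold-0]. Base cases as in Step 3.
    G = {0: ["[0]"], 1: ["[*]", "[0][*]([0])"]}
    for j in range(2, k + 1):
        G[j] = (
            # (i) apply [*][*](.) to the generators at level j - 2;
            ["[*][*](%s)" % g for g in G[j - 2]]
            # (ii) apply [*][0](.) at level j - 1, skipping the generators
            # starting '[0][*]' (the nilpotent action of Lemma 16);
            + ["[*][0](%s)" % g for g in G[j - 1] if not g.startswith("[0][*]")]
            # (iii) apply [0][*](.) to everything at level j - 1.
            + ["[0][*](%s)" % g for g in G[j - 1]]
        )
    return G[k]

for k in range(1, 6):
    assert len(generators(k)) == 2 ** k
print(generators(2))
# ['[*][*]([0])', '[*][0]([*])', '[0][*]([*])', '[0][*]([0][*]([0]))']
```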
We can also view this last result from another perspective, as follows. For each \(k\)-part composition, when \(k\) is odd, the set of new generators is characterised as follows. First, we include the degree \(k\) monomial \(\left[\star\right]\left[\star\right]\cdots\left[\star\right]\), of which there is only one choice. We also include the degree \(k+2\) monomials which contain two non-adjacent '\(\left[\mathbf{0}\right]\)' factors; there are \(k+1\) choose \(2\) possible monomials of this form. Then we can also include degree \(k+4\) monomials which contain four non-adjacent '\(\left[\mathbf{0}\right]\)' factors; there are \(k+1\) choose \(4\) possible monomials of this form. And so forth, until we reach the single degree \(2k+1\) monomial of the form \(\left[\mathbf{0}\right]\left[\star\right]\left[\mathbf{0}\right]\left[\star\right]\left[\mathbf{0}\right]\cdots\left[\star\right]\left[\mathbf{0}\right]\). Here we have implicitly used that the number of ways to place \(r\) objects in non-adjacent slots whose total number is \(m\), is given by \(m-r+1\) choose \(r\). In the examples just presented, we considered the number of possible ways of placing \(2\ell\) factors of the form '\(\left[\mathbf{0}\right]\)' in a monomial of degree \(k+2\ell\), for \(\ell=0,1,\ldots,(k+1)/2\). Hence the total number of monomials of each degree outlined is \(k+1\) choose \(2\ell\). Thus with \(k\) odd, the total number of such new odd-degree monomials is given by the sum over \(\ell=0,1,\ldots,(k+1)/2\) of \(k+1\) choose \(2\ell\), i.e. the sum on the left shown in Lemma 14. Suppose now \(k\) is even. The lowest degree monomials that might generate the corresponding basis element are those of degree \(k+1\) with a single factor '\(\left[\mathbf{0}\right]\)'. There are \(k+1\) such monomials. We can also include degree \(k+3\) monomials with three non-adjacent factors '\(\left[\mathbf{0}\right]\)'; there are \(k+1\) choose \(3\)
such possible monomials, and so forth. The final, highest degree monomial, of degree \(2k+1\), has the single form \(\left[\mathbf{0}\right]\left[\star\right]\left[\mathbf{0}\right]\left[\star\right]\left[\mathbf{0}\right]\cdots\left[\star\right]\left[\mathbf{0}\right]\). Thus with \(k\) even, the total number of such new odd-degree monomials is given by the sum over \(\ell=0,1,\ldots,k/2\) of \(k+1\) choose \(2\ell+1\), i.e. the sum on the right shown in Lemma 14. In consequence we have the following important result.
Lemma 14: _The aforementioned sums, in the respective \(k\) is odd and then even cases, are equal to \(2^{k}\). In other words, respectively, when \(k\) is odd and then even, we have,_
\[\sum_{\ell=0}^{(k+1)/2}\binom{k+1}{2\ell}=2^{k}\qquad\text{and}\qquad\sum_{ \ell=0}^{k/2}\binom{k+1}{2\ell+1}=2^{k}.\]
Proof: Suppose \(k\) is odd. Then by direct computation, we observe,
\[\sum_{\ell=0}^{(k+1)/2}\frac{(k+1)!}{(k+1-2\ell)!(2\ell)!} =2+\sum_{\ell=1}^{(k-1)/2}\frac{k!}{(k-2\ell)!(2\ell-1)!}\left( \frac{1}{k-2\ell+1}+\frac{1}{2\ell}\right)\] \[=2+\sum_{\ell=1}^{(k-1)/2}\frac{k!}{(k+1-2\ell)!(2\ell-1)!}+ \sum_{\ell=1}^{(k-1)/2}\frac{k!}{(k-2\ell)!(2\ell)!}\] \[=1+\binom{k}{1}+\binom{k}{2}+\binom{k}{3}+\cdots+\binom{k}{k-1}+1,\]
where we matched up respective pairs from the sums and then used that \(2^{k}=(1+1)^{k}\). This gives the first result. When \(k\) is even, we again use that \(2^{k}=(1+1)^{k}\), and observe,
\[2^{k}=\binom{k}{0}+\binom{k}{1}+\binom{k}{2}+\cdots+\binom{k}{k}=\sum_{\ell=0 }^{k/2}\frac{(k+1)!}{(k-2\ell)!(2\ell+1)!},\]
where we paired up successive terms and parameterised the pairs by \(\ell=0,1,\ldots,k/2\). This gives the second result.
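Both identities are also immediate to confirm numerically; a one-off sketch (Python, using the standard-library math.comb):

```python
from math import comb

for k in range(1, 26, 2):  # k odd
    assert sum(comb(k + 1, 2 * l) for l in range((k + 1) // 2 + 1)) == 2 ** k
for k in range(2, 26, 2):  # k even
    assert sum(comb(k + 1, 2 * l + 1) for l in range(k // 2 + 1)) == 2 ** k
print("both sums equal 2**k for all k tested")
```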
The crucial observation from the result of Lemmas 13 and 14 is the following.
Corollary 5 (Generator-tuple dimension match): _For a given composition component \(w\) of a block set of basis elements \(\left[w\right]\times\boldsymbol{\beta}\) parameterised by the tuples \(\boldsymbol{\beta}\in\mathbb{R}\langle\mathbb{B}\rangle\), the number of new monomial generators equals the dimension of the tuple block, i.e. \(2^{\left|w\right|}\)._
One of our main concerns now is to show that the resulting square block of signature coefficients has full rank. The next three steps address this issue, making the analysis of this section more precise.
_Step 4: The three standard triple actions._ We have seen that the triple product action in Corollary 4, in the full form given therein, as well as in the special forms \(\left[\boldsymbol{a}\right]\left[\mathbf{0}\right]\left(\cdot\right)\) and \(\left[\mathbf{0}\right]\left[\boldsymbol{b}\right]\left(\cdot\right)\), is used to construct the generators corresponding to a given basis element. We call these three actions the standard triple actions.
Definition 19 (Standard triple actions): We call the actions \(\left[\boldsymbol{a}\right]\left[\boldsymbol{b}\right]\left(\cdot\right)\), \(\left[\boldsymbol{a}\right]\left[\mathbf{0}\right]\left(\cdot\right)\) and \(\left[\mathbf{0}\right]\left[\mathbf{b}\right]\left(\cdot\right)\) the three _standard_ triple actions.
The result of the action \(\left[\boldsymbol{a}\right]\left[\boldsymbol{b}\right]\left(\cdot\right)\) is given in Corollary 4. As we use them frequently hereafter, we record the result of the standard actions \(\left[\boldsymbol{a}\right]\left[\boldsymbol{0}\right]\left(\cdot\right)\) and \(\left[\boldsymbol{0}\right]\left[\boldsymbol{b}\right]\left(\cdot\right)\) in the following Corollary. They are just special cases which we call the _special actions_.
Corollary 6 (Special actions): _The two special actions \(\left[\boldsymbol{a}\right]\left[\boldsymbol{0}\right]\left(\cdot\right)\) and \(\left[\boldsymbol{0}\right]\left[\boldsymbol{b}\right]\left(\cdot\right)\) are given at leading order by,_
\[\begin{split}
[\boldsymbol{a}]\,[\boldsymbol{0}]\,\big([cw]\times\boldsymbol{\beta}\big) &= \chi(a\otimes 0\otimes cw)\cdot[(a+1)(c+1)w]\times\big((1,-1)\lhd(\boldsymbol{\beta}+(-1)^{|w|}\boldsymbol{\beta}^{\dagger})\big)+\cdots,\\
[\boldsymbol{0}]\,[\boldsymbol{b}]\,\big([cw]\times\boldsymbol{\beta}\big) &= \chi(0\otimes b\otimes cw)\cdot\Big([(b+2)cw]\times\big((2,0)\lhd\boldsymbol{\beta}+(-1)^{|w|}(0,2)\lhd\boldsymbol{\beta}^{\dagger}\big)\\
&\qquad\qquad+[(b+1)(c+1)w]\times\big((1,1)\lhd(\boldsymbol{\beta}+(-1)^{|w|}\boldsymbol{\beta}^{\dagger})\big)\Big)+\cdots.
\end{split}\]
Proof: By direct computation using the Pöppe product rules in Lemma 8, we observe that \(\left[\boldsymbol{a}\right]\left[\boldsymbol{0}\right]\left([cw\times \boldsymbol{\varphi}]\right)\) equals,
\[\begin{split}
\chi(a\otimes 0\otimes cw)\cdot\big(&[\mathbf{0}(a+1)\mathbf{0}(c+1)(w\times\hat{\boldsymbol{\varphi}})]+[\mathbf{0}(a+1)\mathbf{0}(c+1)(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]\\
&-[\mathbf{0}(a+1)\mathbf{0}^{\dagger}(c+1)(w\times\hat{\boldsymbol{\varphi}})]-[\mathbf{0}(a+1)\mathbf{0}^{\dagger}(c+1)(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]\big)+\cdots,
\end{split}\]
at leading order, giving the first result. Then, by direct computation for the other case, we observe that \(\left[\boldsymbol{0}\right]\left[\boldsymbol{b}\right]\left([cw\times \boldsymbol{\varphi}]\right)\) equals,
\[\begin{split}
\chi(0\otimes b\otimes cw)\cdot\big(&2\cdot[\mathbf{0}(b+2)\mathbf{0}c(w\times\hat{\boldsymbol{\varphi}})]+2\cdot[\mathbf{0}(b+2)\mathbf{0}^{\dagger}c(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]\\
&+[\mathbf{0}(b+1)\mathbf{0}(c+1)(w\times\hat{\boldsymbol{\varphi}})]+[\mathbf{0}(b+1)\mathbf{0}^{\dagger}(c+1)(w\times\hat{\boldsymbol{\varphi}})]\\
&+[\mathbf{0}(b+1)\mathbf{0}(c+1)(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]+[\mathbf{0}(b+1)\mathbf{0}^{\dagger}(c+1)(w\times\hat{\boldsymbol{\varphi}})^{\dagger}]\big)+\cdots,
\end{split}\]
at leading order, giving the second result.
Remark 15: Comparing the results of Corollary 6 with Corollary 4 we emphasise two observations, that there is: (i) A natural contraction of the action forms due to the '\(\left[\boldsymbol{0}\right]\)' factors in the action; (ii) An apparent change of sign in the second term on the right in the first example. We can view both cases as the consequence of substituting \(\left[\nu\times\boldsymbol{0}\right]\) for \(\left[\boldsymbol{b}\right]\) in the first case and then \(\left[\nu\times\boldsymbol{0}\right]\) for \(\left[\boldsymbol{a}\right]\) in the second case. The sign change, perhaps more easily observed from the corresponding result in Lemma 12, is a consequence of the fact that to make the appropriate substitution of \(\left[\nu\times\boldsymbol{0}\right]\) for \(\left[\boldsymbol{b}\right]\), we should convert the two terms with \(\boldsymbol{0}^{\dagger}b\boldsymbol{0}^{\dagger}\) on the right, to \(-\boldsymbol{0}^{\dagger}b^{\dagger}\boldsymbol{0}^{\dagger}\) first.
_Step 5: Generating blocks._ We now show precisely how, given a block of basis elements characterised by a given composition component \(w\) and parameterised by the corresponding \(2^{|w|}\) basis elements \(\boldsymbol{\beta}\) of \(\mathbb{R}\langle\mathbb{B}\rangle\), we can use the three standard actions to enumerate all the monomial generators that produce the basis elements of that block at leading order, and also establish the corresponding signature coefficient associated with each such basis element. Let us examine the three standard actions given in Corollaries 4 and 6 more closely. If we examine the right-hand side of \([\boldsymbol{a}]\,[\boldsymbol{b}]\,([cw]\times\boldsymbol{\beta})\) in Corollary 4, then we observe that in terms of descent order, the first composition term '\([(a+1)(b+1)cw]\)' on the right is highest, and thus we retain that term only. In Corollary 6, at leading order, the action \([\boldsymbol{a}]\,[\boldsymbol{0}]\,([cw]\times\boldsymbol{\beta})\) is unique, while the right-hand side of \([\boldsymbol{0}]\,[\boldsymbol{b}]\,([cw]\times\boldsymbol{\beta})\) contains two terms, the first of which is higher in terms
of descent order, which is the one we retain. Thus at leading order the three standard actions on \([cw]\times\boldsymbol{\beta}\) are:
\[\left[\boldsymbol{a}\right]\left[\boldsymbol{b}\right]\left( \cdot\right) =\chi(cw)\cdot\left[(a+1)(b+1)cw\right]\times\left((1,0,1,0)\lhd \boldsymbol{\beta}+(-1)^{|w|}\left(0,1,0,1\right)\lhd\boldsymbol{\beta}^{ \dagger}\right)+\cdots,\] \[\left[\boldsymbol{a}\right]\left[\boldsymbol{0}\right]\left( \cdot\right) =\chi(cw)\cdot\left[(a+1)(c+1)w\right]\times\left((1,-1)\lhd \boldsymbol{\beta}+(-1)^{|w|}(1,-1)\lhd\boldsymbol{\beta}^{\dagger}\right)+ \cdots,\] \[\left[\boldsymbol{0}\right]\left[\boldsymbol{b}\right]\left( \cdot\right) =\chi(cw)\cdot\left[(b+2)cw\right]\times\left((2,0)\lhd \boldsymbol{\beta}+(-1)^{|w|}(0,2)\lhd\boldsymbol{\beta}^{\dagger}\right)+ \cdots.\]
Here we have used the homomorphic properties of \(\chi\), in particular that \(\chi(a\otimes b\otimes cw)=\chi(a\otimes 0\otimes cw)=\chi(0\otimes b\otimes cw)=\chi(cw)\). Consider the following respective replacements in each of the three actions above: (i) \(a\to a-1\), \(b\to b-1\), \(c\to\nu\); (ii) \(a\to a-1\), \(c\to c-1\); and (iii) \(b\to b-2\). With these three choices, each of the actions generates the same composition \(acw\)--in the first case we relabel \(b\) as \(c\) and in the third case we relabel \(b\) as \(a\). Recall from our coarse-grain overview in Step 3 that to enumerate the generators corresponding to basis elements with composition components with \(k\geqslant 2\) parts, we apply the first action to the generators at level \(k-2\), and the two special actions to the generators at level \(k-1\), taking into account the nilpotent action outlined just below in Lemma 16. We note that, for any sequence \(\hat{u}\in\mathbb{R}\langle\mathbb{B}\rangle\), with \(|\hat{u}|=2^{k-2}\), we have,
\[(1,0,1,0)\lhd\hat{u} =(1,1)\lhd(1,0)\lhd\hat{u}=(1,1)\lhd(\hat{u},0),\] \[(0,1,0,1)\lhd\hat{u}^{\dagger} =(1,1)\lhd(0,1)\lhd\hat{u}^{\dagger}=(1,1)\lhd(0,\hat{u}^{ \dagger}),\]
where \((\hat{u},0)\) and \((0,\hat{u}^{\dagger})\) are of length \(2^{k-1}\). Putting these observations together, we have thus established the following lemma.
Lemma 15 (Actions generating the same composition): _At leading order, with the choices mentioned above, the following three standard actions generate the same composition with the respective \(\mathbb{R}\langle\mathbb{B}\rangle\) components indicated,_
\[\left[\boldsymbol{a-1}\right]\left[\boldsymbol{c-1}\right]\left( \left[w\right]\times\boldsymbol{\beta}\right) =\chi(w)\cdot[acw]\times\left((1,1)\lhd\left((\hat{u},0)-(-1)^{| w|}(\hat{u},0)^{\dagger}\right)\right),\] \[\left[\boldsymbol{a-1}\right]\left[\boldsymbol{0}\right]\left( \left[(c-1)w\right]\times\boldsymbol{\beta}\right) =\chi((c-1)w)\cdot[acw]\times\left((1,-1)\lhd\left((\hat{a},\hat {b})+(-1)^{|w|}(\hat{a},\hat{b})^{\dagger}\right)\right),\] \[\left[\boldsymbol{0}\right]\left[\boldsymbol{a-2}\right]\left( \left[cw\right]\times\boldsymbol{\beta}\right) =\chi(cw)\cdot[acw]\times\left((2,0)\lhd(\hat{a},\hat{b})+(-1)^{| w|}(0,2)\lhd(\hat{a},\hat{b})^{\dagger}\right).\]
_Here, in the first case \(\boldsymbol{\beta}=(\hat{u},0)\in\mathbb{R}\langle\mathbb{B}\rangle\) with \(\hat{u}\) arbitrary, and in the second and third cases \(\boldsymbol{\beta}=(\hat{a},\hat{b})\in\mathbb{R}\langle\mathbb{B}\rangle\) is arbitrary. Each such \(\boldsymbol{\beta}\) is of length \(2^{|acw|-1}\), and \(\hat{a}\) and \(\hat{b}\) have the same length--matching that of \(\hat{u}\)._
Remark 16: Note, in the statement of Lemma 15, that the first action corresponds to the action \(\left[\boldsymbol{a}\right]\left[\boldsymbol{b}\right]\left(\cdot\right)\) applied to \([cw]\times\boldsymbol{\beta}\) in the discussion preceding the lemma. In that discussion, when we set \(c\to\nu\), we equivalently replaced \(cw\) by \(w\). This means that we should effectively consider the length of \(w\) to be one less than would otherwise be the case. This explains why the sign in front of the term with the factor \((-1)^{|w|}\) in the first action case is negative in the statement of the lemma.
Some further clarifications on the statement of Lemma 15 are required. Note that,
\[\left[\boldsymbol{0}\right]\leadsto[\nu]\times(1),\]
where \(\nu\) is the empty composition and (1) is the element of \(\mathbb{R}\langle\mathbb{B}\rangle\) corresponding to compositions of zero parts. The special action \(\left[\boldsymbol{0}\right]\left[\boldsymbol{a-2}\right]\left(\cdot\right)\) in Lemma 15 still applies
when the argument \([cw]\times\boldsymbol{\beta}=[\nu]\times(1)\) and thus when \((\hat{a},\hat{b})=(1)\). The result is that at leading order we have,
\[\left[\boldsymbol{0}\right]\left[\boldsymbol{a-2}\right]\left(\left[\nu\right] \times(1)\right)=\chi(\nu)\cdot[a]\times\left(\left(2,0\right)\lhd(1)-(0,2) \lhd(1)^{\dagger}\right)=[a]\times(2,-2).\]
Here, by convention, we take \(\chi(\nu)\coloneqq 1\). Since we have taken \(cw\to\nu\), we can think of the number of parts of \(w\) to be '\(-1\)', explaining the sign in front of the \(\mathbb{R}\langle\mathbb{B}\rangle\)-element \((0,2)\). This is consistent with just computing \(\left[\boldsymbol{0}\right]\left[\boldsymbol{a-2}\right]\left[\boldsymbol{0}\right]\). Further, the first two actions in Lemma 15 don't make sense when \(cw\to\nu\), though if \(w\to\nu\), the special action \(\left[\boldsymbol{a-1}\right]\left[\boldsymbol{0}\right](\cdot)\) applies with the appropriate adaptations. And of course we can compute \(\left[\boldsymbol{a-1}\right]\left[\boldsymbol{c-1}\right]\left(\left[\nu \right]\times(1)\right)=\left[\boldsymbol{a-1}\right]\left[\boldsymbol{c-1} \right]\left[\boldsymbol{0}\right]\).
Finally, we now also observe the following (aforementioned) nilpotency property.
Lemma 16 (Nilpotent action): _At leading order, if we first apply the action \(\left[\boldsymbol{0}\right]\left[\boldsymbol{b}\right](\cdot)\) to an arbitrary \(\mathbb{R}\langle\mathbb{B}\rangle\) component, then apply the action \(\left[\boldsymbol{a}\right]\left[\boldsymbol{0}\right](\cdot)\) to the result, this generates the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component. In other words at leading order we have,_
\[\left[\boldsymbol{a}\right]\left[\boldsymbol{0}\right]\left(\left[\boldsymbol {0}\right]\left[\boldsymbol{b}\right](\cdot)\right)=0,\]
_where the '\(0\)' on the right-hand side represents the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component._
Proof: We focus on the effect of the actions on the \(\mathbb{R}\langle\mathbb{B}\rangle\) components only. The third (special) action applied to the input \((a,b)\) generates \(2\cdot(a,b,\pm b^{\dagger},\pm a^{\dagger})\). Set \(A,B\in\mathbb{R}\langle\mathbb{B}\rangle\) to be the sub-components \(A\coloneqq(a,b)\) and \(B\coloneqq\pm(b^{\dagger},a^{\dagger})\). With these identifications we note that \(B=\pm A^{\dagger}\). Ignoring the real factor \(2\), apply the second (special) action to the input \((A,B)\). This is (note the sign of the second term of the action changes), \((1,-1)\lhd\left((A,B)\mp(A,B)^{\dagger}\right)\), which equals, \((A\mp B^{\dagger},B\mp A^{\dagger},-A\pm B^{\dagger},-B\pm A^{\dagger})\). Since \(B=\pm A^{\dagger}\), this result is the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component.
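For a quick numerical sanity check of this nilpotency property (illustrative only, and not part of the formal argument), we can model an \(\mathbb{R}\langle\mathbb{B}\rangle\) component as a tuple of reals and the involution \(\dagger\) as tuple reversal, an assumption consistent with the identities \((\hat{u},0)^{\dagger}=(0,\hat{u}^{\dagger})\) and \((a,b)^{\dagger}=(b^{\dagger},a^{\dagger})\) used above. The parity signs at the two successive levels are then chosen opposite, as in the proof.

```python
def dagger(v):
    # The involution on R<B> components, modelled here as tuple reversal.
    return v[::-1]

def third_action(v, eps):
    # [0][b]-type action at parity eps: (2,0) <| v + eps*(0,2) <| dagger(v),
    # i.e. the component 2*(v, eps*dagger(v)).
    return [2 * x for x in v] + [2 * eps * x for x in dagger(v)]

def second_action(v, eps):
    # [a][0]-type action at parity eps: (1,-1) <| (v + eps*dagger(v)).
    w = [x + eps * y for x, y in zip(v, dagger(v))]
    return w + [-x for x in w]

v = [1, 2, 3, 4]  # an arbitrary test component
for eps in (+1, -1):
    out = second_action(third_action(v, eps), -eps)  # the parity flips between levels
    assert all(x == 0 for x in out)
print("second action after third action gives the zero component")
```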
We now explore, through a series of examples, how to construct the generators and coefficient blocks associated with any given composition. In particular we consider the cases of compositions with \(1\), \(2\) and \(3\) parts, before exploring the general case. Compositions containing a '\(1\)' need to be singled out, as explained below.
Example 15 (One-part compositions): We observe that there are two basis elements corresponding to the one-part composition \(w=a\), namely, \([a]\times(1,0)\) and \([a]\times(0,1)\). We assume \(n=a\geqslant 2\). At leading order we know, from the corresponding signature expansion, that \([\boldsymbol{a}]=[a]\times(1,0)+\cdots\). From our discussion succeeding Lemma 15, we know the first two actions do not make sense when \(cw\to\nu\), while the final special action does make sense. As we saw directly, at leading order we have \(\left[\boldsymbol{0}\right]\left[\boldsymbol{a-2}\right]\left[\boldsymbol{0}\right]=[a]\times(2,-2)\). We have thus enumerated the generators corresponding to \([a]\times(1,0)\) and \([a]\times(0,1)\), and the signature coefficient matrix is,
\[A_{0}=\begin{pmatrix}1&2\\ 0&-2\end{pmatrix}.\]
Example 16 (Two-part compositions): Consider the basis elements with a two-part composition component \(a_{1}a_{2}\), i.e. basis elements of the form \([a_{1}a_{2}]\times\boldsymbol{\beta}_{i}\), where the \(\boldsymbol{\beta}_{i}\) are the four basis elements of length \(4\), which are zero apart from a '\(1\)' in the \(i\)th position. For the moment assume neither \(a_{1}\) nor \(a_{2}\) equal \(1\); we consider each of these two special cases separately below. Using Lemma 15, noting that for each of the standard actions
our goal is to obtain the composition component \([a_{1}a_{2}]\) on the right-hand side, we observe the following. For the first action, setting \(w=\nu\), \(a=a_{1}\) and \(c=a_{2}\), we find that at leading order, we get,
\[\left[\boldsymbol{a_{1}-1}\right]\left[\boldsymbol{a_{2}-1}\right]\left[ \boldsymbol{0}\right]=[a_{1}a_{2}]\times\big{(}(1,1)\lhd\big{(}(1,0)-(0,1) \big{)}\big{)}=[a_{1}a_{2}]\times(1,-1,1,-1).\]
The first special action in Lemma 15, with the same identifications gives,
\[\left[\boldsymbol{a_{1}-1}\right]\left[\boldsymbol{0}\right]\big{(}[a_{2}-1] \times(\hat{a},\hat{b})\big{)}=[a_{1}a_{2}]\times\big{(}(1,-1)\lhd\big{(}(\hat {a},\hat{b})+(\hat{b},\hat{a})\big{)}\big{)}.\]
We saw in Example 15, the basis element \([a_{2}-1]\times(\hat{a},\hat{b})\) can be generated both by the corresponding signature expansion \([\boldsymbol{a_{2}-1}]=[a_{2}-1]\times(1,0)+\cdots\), and by the generator \(\left[\boldsymbol{0}\right]\left[\boldsymbol{a_{2}-3}\right]\left[\boldsymbol{0}\right]\). We discount the latter case due to the nilpotent action property. Hence using this expression for \([\boldsymbol{a_{2}-1}]\) and inserting \((\hat{a},\hat{b})=(1,0)\) into the expression above, we deduce,
\[\left[\boldsymbol{a_{1}-1}\right]\left[\boldsymbol{0}\right]\big{(}[ \boldsymbol{a_{2}-1}]\big{)}=[a_{1}a_{2}]\times(1,1,-1,-1),\]
to leading order. Now consider the second special action in Lemma 15. Again with the same identifications for \(a\), \(c\) and \(w\), we observe that to leading order,
\[\left[\boldsymbol{0}\right]\left[\boldsymbol{a_{1}-2}\right]\big{(}[a_{2}] \times(\hat{a},\hat{b})\big{)}=[a_{1}a_{2}]\times\big{(}(2,0)\lhd(\hat{a}, \hat{b})+(0,2)\lhd(\hat{b},\hat{a})\big{)}.\]
We know from Example 15, the basis element \([a_{2}]\times(\hat{a},\hat{b})\) can be generated either by the signature expansion \([\boldsymbol{a_{2}}]=[a_{2}]\times(1,0)+\cdots\), or by the generator \(\left[\boldsymbol{0}\right]\left[\boldsymbol{a_{2}-2}\right]\left[\boldsymbol {0}\right]=[a_{2}]\times(2,-2)+\cdots\). Respectively substituting the expressions \([a_{2}]\times(1,0)\) and \([a_{2}]\times(2,-2)\) for \([a_{2}]\times(\hat{a},\hat{b})\) in the relation just above, we observe that to leading order,
\[\begin{aligned}
\left[\boldsymbol{0}\right]\left[\boldsymbol{a_{1}-2}\right]\big([\boldsymbol{a_{2}}]\big)&=[a_{1}a_{2}]\times\big((2,0)\lhd(1,0)+(0,2)\lhd(0,1)\big)=[a_{1}a_{2}]\times(2,0,0,2),\\
\left[\boldsymbol{0}\right]\left[\boldsymbol{a_{1}-2}\right]\big(\left[\boldsymbol{0}\right]\left[\boldsymbol{a_{2}-2}\right]\left[\boldsymbol{0}\right]\big)&=[a_{1}a_{2}]\times\big((2,0)\lhd(2,-2)+(0,2)\lhd(-2,2)\big)=[a_{1}a_{2}]\times(4,-4,-4,4).\end{aligned}\]
We have thus enumerated the four generators corresponding to the four basis elements \([a_{1}a_{2}]\times(1,0,0,0)\), \([a_{1}a_{2}]\times(0,1,0,0)\), \([a_{1}a_{2}]\times(0,0,1,0)\) and \([a_{1}a_{2}]\times(0,0,0,1)\). They are \([\boldsymbol{a_{1}-1}]\left[\boldsymbol{a_{2}-1}\right]\left[\boldsymbol{0}\right]\), \([\boldsymbol{a_{1}-1}]\left[\boldsymbol{0}\right]\left[\boldsymbol{a_{2}-1}\right]\), \([\boldsymbol{0}]\left[\boldsymbol{a_{1}-2}\right]\left[\boldsymbol{a_{2}}\right]\) and the quintic generator \(\left[\boldsymbol{0}\right]\left[\boldsymbol{a_{1}-2}\right]\left[\boldsymbol {0}\right]\left[\boldsymbol{a_{2}-2}\right]\left[\boldsymbol{0}\right]\). The corresponding signature coefficient matrix is,
\[A_{2}\coloneqq\begin{pmatrix}1&1&2&4\\ -1&1&0&-4\\ 1&-1&0&-4\\ -1&-1&2&4\end{pmatrix}\]
which is the subsystem coefficient matrix \(A_{2}\) in Examples 10 and 12, respectively concerning the quartic and quintic non-commutative nonlinear Schrödinger equations.
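The four columns just derived, and the non-singularity of \(A_{2}\), are easily reproduced mechanically. The following sketch (an illustration only, not part of the argument) implements the concatenation product \(\lhd\) as used in this section, models the involution \(\dagger\) as tuple reversal, rebuilds the four generator columns, and confirms \(\det A_{2}\neq 0\) by exact rational elimination.

```python
from fractions import Fraction

def lhd(x, v):
    # (x_1,...,x_m) <| v: concatenate the scaled copies x_i * v.
    return [xi * vj for xi in x for vj in v]

def dagger(v):
    return v[::-1]

def vadd(u, v):
    return [a + b for a, b in zip(u, v)]

# The four generator columns from Example 16.
col1 = lhd([1, 1], [1, -1])                               # (1,1) <| ((1,0)-(0,1))
col2 = lhd([1, -1], vadd([1, 0], dagger([1, 0])))         # (1,-1) <| ((1,0)+(0,1))
col3 = vadd(lhd([2, 0], [1, 0]), lhd([0, 2], dagger([1, 0])))
col4 = vadd(lhd([2, 0], [2, -2]), lhd([0, 2], dagger([2, -2])))

A2 = list(map(list, zip(col1, col2, col3, col4)))
print(A2)  # [[1, 1, 2, 4], [-1, 1, 0, -4], [1, -1, 0, -4], [-1, -1, 2, 4]]

def det(M):
    # Exact Gaussian elimination over the rationals.
    M = [[Fraction(x) for x in row] for row in M]
    n, d = len(M), Fraction(1)
    for i in range(n):
        p = next(r for r in range(i, n) if M[r][i] != 0)
        if p != i:
            M[i], M[p], d = M[p], M[i], -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [a - f * b for a, b in zip(M[r], M[i])]
    return d

print(det(A2))  # 64, so A_2 has full rank
```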
Let us now consider the case when \(a_{2}=1\). If we substitute this value for \(a_{2}\) into the generators above, we see that the first two generators coincide and are given by \([\boldsymbol{a_{1}-1}]\left[\boldsymbol{0}\right]\left[\boldsymbol{0}\right]= [a_{1}1]\times(1,-1,1,-1)+\cdots\) and \([\boldsymbol{a_{1}-1}]\left[\boldsymbol{0}\right]\left[\boldsymbol{0}\right] =[a_{1}1]\times(1,1,-1,-1)+\cdots\). Since we can add them together under the same coefficient \(c_{(a_{1}-1)00}\), in this case we have the single generator, \([\boldsymbol{a_{1}-1}]\left[\boldsymbol{0}\right]\left[\boldsymbol{0}\right]= [a_{1}1]\times(2,0,0,-2)+\cdots\). The third generator above becomes, \([\boldsymbol{0}]\left[\boldsymbol{a_{1}-2}\right]\left[\boldsymbol{1}\right]= [a_{1}1]\times(2,0,0,2)+\cdots\). The final
quintic generator cannot be a generator in this case if we insist on only including signature expansions corresponding to non-negative integers. There are thus only two independent generators. Hence in this case, the corresponding signature coefficient matrix, ignoring the middle two rows which are zero, is
\[A_{1}\coloneqq\begin{pmatrix}2&2\\ -2&2\end{pmatrix}.\]
See Examples 10 and 12, and the equations for the coefficients \(c_{(n-2)00}\) and \(c_{0(n-3)1}\) in those cases for when \(w=(n-1)1\), as well as the coefficients in Tables 2-4. Note, when \(a_{1}=a_{2}=1\), there is only one generator, \([\mathbf{0}]^{3}\), as we saw in Example 8. We treat the more general case when \(a_{1}=1\) at the end of this step.
Example 17 (Three-part compositions): Consider basis elements with a three-part composition component \(a_{1}a_{2}a_{3}\), i.e. basis elements of the form \([a_{1}a_{2}a_{3}]\times\boldsymbol{\beta}_{i}\), where the \(\boldsymbol{\beta}_{i}\) for \(i=1,\ldots,8\), contain '1' in the \(i\)th position and zeros in the remaining seven positions. For the moment assume that none of \(a_{1}\), \(a_{2}\), \(a_{3}\) equals '1'. Using Lemma 15, the standard actions, setting \(a=a_{1}\), \(c=a_{2}\) and \(w=a_{3}\), give to leading order,
\[[\boldsymbol{a_{1}-1}]\,[\boldsymbol{a_{2}-1}]\,\big{(}[a_{3}] \times(\hat{u},0)\big{)} =[a_{1}a_{2}a_{3}]\times\big{(}(1,1)\lhd(\hat{u},\hat{u}^{\dagger })\big{)},\] \[[\boldsymbol{a_{1}-1}]\,[\mathbf{0}]\,\big{(}[(a_{2}-1)a_{3}] \times(\hat{a},\hat{b})\big{)} =\chi((a_{2}-1)a_{3})\cdot[a_{1}a_{2}a_{3}]\] \[\qquad\qquad\qquad\qquad\times\big{(}(1,-1)\lhd((\hat{a},\hat{b} )-(\hat{a},\hat{b})^{\dagger})\big{)},\] \[[\mathbf{0}]\,[\boldsymbol{a_{1}-2}]\,\big{(}[a_{2}a_{3}]\times( \hat{a},\hat{b})\big{)} =\chi(a_{2}a_{3})\cdot[a_{1}a_{2}a_{3}]\times(2\hat{a},2\hat{b},- 2\hat{b}^{\dagger},-2\hat{a}^{\dagger}).\]
We observe, with these three relations, the task of finding the generators for any basis element with a three-part composition component, becomes the task of finding the generators for the basis element with the one-part composition component '\([a_{3}]\)' in the first case, and then the generators for basis elements with the two-part components '\([(a_{2}-1)a_{3}]\)' and '\([a_{2}a_{3}]\)' in the second and third cases. We can construct the generators in these cases via Examples 15 and 16 just above. In the first case, from Example 15, the two generators for \([a_{3}]\times(1,0)\) and \([a_{3}]\times(0,1)\) are \([\boldsymbol{a}_{3}]=[a_{3}]\times(1,0)+\cdots\) and \([\mathbf{0}]\,[\boldsymbol{a_{3}-2}]\,[\mathbf{0}]=[a_{3}]\times(2,-2)+\cdots\). Hence if we substitute these expressions into the first case above, respectively setting \(\hat{u}=(1,0)\) and then \(\hat{u}=(2,-2)\), we find,
\[\begin{aligned}
[\boldsymbol{a_{1}-1}]\,[\boldsymbol{a_{2}-1}]\,\big([\boldsymbol{a_{3}}]\big)&=[a_{1}a_{2}a_{3}]\times\big((1,1)\lhd(1,0,0,1)\big)=[a_{1}a_{2}a_{3}]\times(1,0,0,1,1,0,0,1),\\
[\boldsymbol{a_{1}-1}]\,[\boldsymbol{a_{2}-1}]\,\big([\boldsymbol{0}]\,[\boldsymbol{a_{3}-2}]\,[\boldsymbol{0}]\big)&=[a_{1}a_{2}a_{3}]\times\big((1,1)\lhd(2,-2,-2,2)\big)=[a_{1}a_{2}a_{3}]\times(2,-2,-2,2,2,-2,-2,2).\end{aligned}\]
For the second case above with composition component '\([(a_{2}-1)a_{3}]\)', we know from Example 16, there are four possible generators. However once we observe the nilpotent action property, we are left with two, namely, \([\boldsymbol{a_{2}-2}]\,[\boldsymbol{a_{3}-1}]\,[\mathbf{0}]=[(a_{2}-1)a_{3}] \times(1,-1,1,-1)+\cdots\) and \([\boldsymbol{a_{2}-2}]\,[\mathbf{0}]\,[\boldsymbol{a_{3}-1}]=[(a_{2}-1)a_{3}] \times(1,1,-1,-1)+\cdots\). Substituting these expressions into the second case above, respectively setting \((\hat{a},\hat{b})=(1,-1,1,-1)\) and then \((\hat{a},\hat{b})=(1,1,-1,-1)\), we find,
\[\begin{aligned}
[\boldsymbol{a_{1}-1}]\,[\boldsymbol{0}]\,\big([\boldsymbol{a_{2}-2}]\,[\boldsymbol{a_{3}-1}]\,[\boldsymbol{0}]\big)&=\chi((a_{2}-1)a_{3})\cdot[a_{1}a_{2}a_{3}]\times\big((1,-1)\lhd\big((1,-1,1,-1)-(1,-1,1,-1)^{\dagger}\big)\big)\\
&=\chi((a_{2}-1)a_{3})\cdot[a_{1}a_{2}a_{3}]\times(2,-2,2,-2,-2,2,-2,2),\\
[\boldsymbol{a_{1}-1}]\,[\boldsymbol{0}]\,\big([\boldsymbol{a_{2}-2}]\,[\boldsymbol{0}]\,[\boldsymbol{a_{3}-1}]\big)&=\chi((a_{2}-1)a_{3})\cdot[a_{1}a_{2}a_{3}]\times\big((1,-1)\lhd\big((1,1,-1,-1)-(1,1,-1,-1)^{\dagger}\big)\big)\\
&=\chi((a_{2}-1)a_{3})\cdot[a_{1}a_{2}a_{3}]\times(2,2,-2,-2,-2,-2,2,2).\end{aligned}\]
For the third case above with composition component '\([a_{2}a_{3}]\)', again, we know from Example 16, there are four possible generators. These are all four of the generators shown in Example 16 once we replace \(a_{1}\) and \(a_{2}\) therein respectively by \(a_{2}\) and \(a_{3}\). If we substitute the corresponding four expressions with the replacements mentioned into the third case above, respectively setting \((\hat{a},\hat{b})=(1,-1,1,-1)\), \((\hat{a},\hat{b})=(1,1,-1,-1)\), \((\hat{a},\hat{b})=(2,0,0,2)\) and then \((\hat{a},\hat{b})=(4,-4,-4,4)\), we find at leading order,
\[[\boldsymbol{0}]\left[\boldsymbol{a_{1}-2}\right]\left([\boldsymbol {a_{2}-1}]\left[\boldsymbol{a_{3}-1}\right][\boldsymbol{0}]\right) =\chi(a_{2}a_{3})\cdot[a_{1}a_{2}a_{3}]\times(2,-2,2,-2,2,-2,2,-2),\] \[[\boldsymbol{0}]\left[\boldsymbol{a_{1}-2}\right]\left([ \boldsymbol{a_{2}-1}]\,[\boldsymbol{0}]\,[\boldsymbol{a_{3}-1}]\right) =\chi(a_{2}a_{3})\cdot[a_{1}a_{2}a_{3}]\times(2,2,-2,-2,2,2,-2,-2),\] \[[\boldsymbol{0}]\left[\boldsymbol{a_{1}-2}\right]\left([ \boldsymbol{0}]\left[\boldsymbol{a_{2}-2}\right]\left[\boldsymbol{a_{3}}\right] \right) =\chi(a_{2}a_{3})\cdot[a_{1}a_{2}a_{3}]\times(4,0,0,4,-4,0,0,-4),\]
and finally,
\[[\boldsymbol{0}]\left[\boldsymbol{a_{1}-2}\right]\left([\boldsymbol{0}]\left[ \boldsymbol{a_{2}-2}\right][\boldsymbol{0}]\left[\boldsymbol{a_{3}-2}\right] [\boldsymbol{0}]\right) =\chi(a_{2}a_{3})\cdot[a_{1}a_{2}a_{3}]\times(8,-8,-8,8,-8,8,8,-8).\]
Hence, for the eight basis elements \([a_{1}a_{2}a_{3}]\times\boldsymbol{\beta}_{i}\), \(i=1,\ldots,8\), with the columns corresponding to the generators above, listed in descent order followed by degree, the corresponding signature coefficient matrix is the full rank matrix,
\[A_{3}\coloneqq\begin{pmatrix}1&2&2&2&2&2&4&8\\ 0&-2&-2&2&-2&2&0&-8\\ 0&-2&2&-2&2&-2&0&-8\\ 1&2&-2&-2&-2&-2&4&8\\ 1&2&-2&-2&2&2&-4&-8\\ 0&-2&2&-2&-2&2&0&8\\ 0&-2&-2&2&2&-2&0&8\\ 1&2&2&2&-2&-2&-4&-8\end{pmatrix},\]
where columns 3 and 4 should involve the factor \(\chi((a_{2}-1)a_{3})\), while columns 5 through to 8 should involve the factor \(\chi(a_{2}a_{3})\). The factors are omitted in \(A_{3}\) for clarity.
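Again, the eight columns of \(A_{3}\) and its full rank can be verified mechanically. In the sketch below (illustrative only, with \(\dagger\) again modelled as tuple reversal) the three action types from the start of this example rebuild the columns; the \(\chi\) factors on columns 3 through 8 are non-zero scalars and are omitted here, exactly as in the displayed matrix.

```python
from fractions import Fraction

def lhd(x, v):
    return [xi * vj for xi in x for vj in v]

def dagger(v):
    return v[::-1]

def first_action(u):
    # (1,1) <| (u, dagger(u)), the standard action for |w| = 1.
    return lhd([1, 1], u + dagger(u))

def second_action(v):
    # (1,-1) <| (v - dagger(v)).
    return lhd([1, -1], [a - b for a, b in zip(v, dagger(v))])

def third_action(v):
    # (2a, 2b, -2b^dagger, -2a^dagger) = 2*(v, -dagger(v)).
    return [2 * x for x in v] + [-2 * x for x in dagger(v)]

cols = [
    first_action([1, 0]),           # from [a_3] x (1,0)
    first_action([2, -2]),          # from [0][a_3-2][0]
    second_action([1, -1, 1, -1]),  # from the [(a_2-1)a_3] generators
    second_action([1, 1, -1, -1]),
    third_action([1, -1, 1, -1]),   # from the [a_2 a_3] generators
    third_action([1, 1, -1, -1]),
    third_action([2, 0, 0, 2]),
    third_action([4, -4, -4, 4]),
]
A3 = list(map(list, zip(*cols)))  # the 8 x 8 matrix displayed above

def rank(M):
    # Exact row reduction over the rationals.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is not None:
            M[r], M[piv] = M[piv], M[r]
            for i in range(r + 1, len(M)):
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
            r += 1
    return r

print(rank(A3))  # 8, confirming A_3 has full rank
```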
Remark 17: Examples 15-17 precisely reflect the analysis we outlined in Step 3.
We can now discern the pattern. Suppose we wish to construct all the generators corresponding to a full set of \(2^{k}\) basis elements associated with a given composition component \([a_{1}a_{2}\cdots a_{k}]\). For the moment we preclude any of \(a_{1}\) through to \(a_{k}\) from being equal to '1'. Using the standard actions in Lemma 15, we find,
\[\begin{aligned}
[\boldsymbol{a_{1}-1}]\,[\boldsymbol{a_{2}-1}]\,\big([a_{3}\cdots a_{k}]\times(\hat{u},0)\big)&=[a_{1}\cdots a_{k}]\times\big((1,1)\lhd\big((\hat{u},0)-(-1)^{k-2}(0,\hat{u}^{\dagger})\big)\big),\\
[\boldsymbol{a_{1}-1}]\,[\boldsymbol{0}]\,\big([(a_{2}-1)a_{3}\cdots a_{k}]\times(\hat{a},\hat{b})\big)&=\chi((a_{2}-1)a_{3}\cdots a_{k})\cdot[a_{1}\cdots a_{k}]\times\big((1,-1)\lhd\big((\hat{a},\hat{b})+(-1)^{k-2}(\hat{a},\hat{b})^{\dagger}\big)\big),\\
[\boldsymbol{0}]\,[\boldsymbol{a_{1}-2}]\,\big([a_{2}\cdots a_{k}]\times(\hat{a},\hat{b})\big)&=\chi(a_{2}\cdots a_{k})\cdot[a_{1}\cdots a_{k}]\times\big((2,0)\lhd(\hat{a},\hat{b})+(-1)^{k-2}(0,2)\lhd(\hat{a},\hat{b})^{\dagger}\big),\end{aligned}\]
to leading order. We then act iteratively to substitute for the generators corresponding to the composition: (i) \([a_{3}\cdots a_{k}]\) with \((k-2)\)-parts; (ii) \([(a_{2}-1)a_{3}\cdots a_{k}]\) with \((k-1)\)-parts, taking into account the nilpotency action property; and (iii) \([a_{2}\cdots a_{k}]\) with \((k-1)\)-parts. We know from Step 3, for a given composition \(a_{1}\cdots a_{k}\), we can construct \(2^{k}\) unique generators in this way. In Step 6 we demonstrate that the corresponding signature coefficient matrix generated in this way has full rank. However, at this stage, we note that we have the following straightforward result.
Lemma 17 (Generator sets unique to compositions): _If we use the procedure above to construct the \(2^{k}\) generators associated with the \(2^{k}\) basis elements with a given composition component \(a_{1}\cdots a_{k}\), then each such set of generators is unique to the given composition \(a_{1}\cdots a_{k}\), i.e. each such set of generators corresponding to a given composition \(a_{1}\cdots a_{k}\), does not appear elsewhere in generator sets for other compositions._
Lastly, apart from the case of the composition '\(a_{1}1\)' in Example 16, including the case '\(11\)', our analysis above has precluded compositions \(a_{1}\cdots a_{k}\) containing '\(1\)' in the composition sequence. We saw at the end of Example 16 that, provided \(a_{1}\neq 1\), the set of generators for the basis elements with composition components \(a_{1}1\) reduces to two generators only; however, both generators only generate the basis elements \([a_{1}1]\times(1,0,0,0)\) and \([a_{1}1]\times(0,0,0,1)\). Consider the case of the composition \(a_{1}a_{2}1\), with both \(a_{1}\neq 1\) and \(a_{2}\neq 1\). Using arguments analogous to those for the case '\(a_{1}1\)' at the end of Example 16, if we examine the eight generators listed in Example 17, we observe that the second and last cannot be generators in the case when \(a_{3}=1\), while the third and fourth generators combine, and the fifth and sixth generators combine, in much the same way as for the case of '\(a_{1}1\)' in Example 16. The latter two combinations correspond to adding the third and fourth, and also the fifth and sixth, columns in the \(8\times 8\)-matrix \(A_{3}\) above. Thus for the case of the composition \(a_{1}a_{2}1\), the resulting coefficient matrix, ignoring the second, third, sixth and seventh rows which are zero, corresponds to the coefficient matrix \(A_{3}^{\prime}\) in Example 12. With these last two examples in hand, we deduce that for any composition of the form \(a_{1}a_{2}\cdots a_{k-1}1\), where we preclude any of \(a_{1}\) through to \(a_{k-1}\) from being '\(1\)', we have a unique set of generators in the sense of Lemma 17, albeit with a signature coefficient matrix of size \(2^{k-1}\times 2^{k-1}\).

Now consider the case when the composition component is \([1a_{2}\cdots a_{k}]\), and assume for the moment that none of \(a_{2}\) through to \(a_{k}\) equals '\(1\)'. Looking at the standard actions in Lemma 15, we observe that for basis elements with such a composition component, the two valid actions are,
\[\begin{aligned}
\left[\boldsymbol{0}\right]\left[\boldsymbol{a_{2}-1}\right]\big([a_{3}\cdots a_{k}]\times(\hat{u},0)\big)&=\chi(a_{3}\cdots a_{k})\cdot[1a_{2}\cdots a_{k}]\times(\star,\star),\\
\left[\boldsymbol{0}\right]\left[\boldsymbol{0}\right]\big([(a_{2}-1)a_{3}\cdots a_{k}]\times(\hat{a},\hat{b})\big)&=\chi((a_{2}-1)a_{3}\cdots a_{k})\cdot[1a_{2}\cdots a_{k}]\times(\star,\star),\end{aligned}\]
where the two expressions \((\star,\star)\) are proxies for the appropriate \(\mathbb{R}\langle\mathbb{B}\rangle\)-components whose exact form is not important at this stage. However, we now observe that if we apply the final action in Lemma 15 respectively for the cases of the compositions \([(a_{2}+1)a_{3}\cdots a_{k}]\) and \([2(a_{2}-1)a_{3}\cdots a_{k}]\), we find,
\[\left[\mathbf{0}\right]\left[\mathbf{a_{2}-1}\right]\big{(}[a_{3 }\cdots a_{k}]\times(\hat{u},0)\big{)} =\chi(a_{3}\cdots a_{k})\cdot[(a_{2}+1)\cdots a_{k}]\times(\star,\star),\] \[\left[\mathbf{0}\right]\left[\mathbf{0}\right]\left([(a_{2}-1)a_{3 }\cdots a_{k}]\times(\hat{a},\hat{b})\right) =\chi((a_{2}-1)a_{3}\cdots a_{k})\cdot[2(a_{2}-1)a_{3}\cdots a_{k }]\times(\star,\star).\]
We observe that the first two respective factors of the generators and their arguments match the two cases corresponding to the composition \([1a_{2}\cdots a_{k}]\). However the latter two cases generate basis elements with the respective composition components
\([(a_{2}+1)a_{3}\cdots a_{k}]\) and \([2(a_{2}-1)a_{3}\cdots a_{k}]\), both of which occur before the composition \([1a_{2}\cdots a_{k}]\) in descent order. Thus these generators will not be new. A similar scenario could occur if one or more letters \(a_{2}\) through to \(a_{k}\) are equal to \(1\). In our main proof below in Step 8, we are able to discount any compositions \(a_{1}\cdots a_{k}\) in which any one of the letters \(a_{1}\) through to \(a_{k-1}\) is equal to '1'.
_Step 6: Full rank blocks._ Our goal in this step is to show that for a given composition of length \(k\), for which in general there are \(2^{k}\) different possible \(\mathbb{R}\langle\mathbb{B}\rangle\) components, there are \(2^{k}\) independent generators, generated by the first action acting on generators at level \(k-2\) and the second and third special actions on generators at level \(k-1\). The following results establish that this is indeed the case.
Lemma 18 (Actions and independence): _We have the following, at leading order:_
_(i) Given an independent set of input \(\mathbb{R}\langle\mathbb{B}\rangle\) components of length \(2^{k-1}\), of the form \((u,0)\) for the first action, or of the form \((a,b)\) for the second and third actions, each individual action produces an independent set of \(\mathbb{R}\langle\mathbb{B}\rangle\) components of length \(2^{k}\);_
_(ii) Given arbitrary non-zero inputs of length \(2^{k-1}\), of the form \((u,0)\) for the first action or of the form \((a,b)\) for the second and third actions, the set of three actions generate independent \(\mathbb{R}\langle\mathbb{B}\rangle\) components of length \(2^{k}\)._
Proof: We focus on the effect of the actions on the \(\mathbb{R}\langle\mathbb{B}\rangle\) components only. In order, consider (i). It is sufficient to prove the result for two independent inputs as the general case follows suit. Consider the first action and suppose \(u\) and \(\hat{u}\) are two non-trivial independent \(\mathbb{R}\langle\mathbb{B}\rangle\) components. Consider an arbitrary linear combination, with scalar coefficients \(\kappa_{1}\) and \(\kappa_{2}\), of the first action applied to the input \((u,0)\) and the first action applied to the input \((\hat{u},0)\). Set the linear combination to zero. This gives, \(\kappa_{1}\cdot(u,\pm u^{\dagger},u,\pm u^{\dagger})+\kappa_{2}\cdot(\hat{u},\pm\hat{u}^{\dagger},\hat{u},\pm\hat{u}^{\dagger})=0\), where the right-hand side represents the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component of the appropriate length. Pairing up, we observe the equation above is equivalent to \(\kappa_{1}\cdot u+\kappa_{2}\cdot\hat{u}=0\), with the other pairings generating the same equation. Since by assumption \(u\) and \(\hat{u}\) are two independent \(\mathbb{R}\langle\mathbb{B}\rangle\) components, the result follows. Now consider the second action. Suppose \((a,b)\) and \((\hat{a},\hat{b})\) are two non-trivial independent \(\mathbb{R}\langle\mathbb{B}\rangle\) components. As above, we construct the arbitrary linear combination, \(\kappa_{1}\cdot(a\pm b^{\dagger},b\pm a^{\dagger},-a\mp b^{\dagger},-b\mp a^{\dagger})+\kappa_{2}\cdot(\hat{a}\pm\hat{b}^{\dagger},\hat{b}\pm\hat{a}^{\dagger},-\hat{a}\mp\hat{b}^{\dagger},-\hat{b}\mp\hat{a}^{\dagger})=0\), for arbitrary scalar coefficients \(\kappa_{1}\) and \(\kappa_{2}\). We assume \(a\neq\pm b^{\dagger}\) and \(\hat{a}\neq\pm\hat{b}^{\dagger}\)--we observe from our proof of Lemma 16 that the second action is trivial if and only if \(a=\pm b^{\dagger}\). Since in the last equation the final two components generate the same equation as the first two, the last equation is equivalent to, \(\kappa_{1}\cdot(a\pm b^{\dagger},b\pm a^{\dagger})+\kappa_{2}\cdot(\hat{a}\pm\hat{b}^{\dagger},\hat{b}\pm\hat{a}^{\dagger})=0\). This reduces to \(\kappa_{1}\cdot(a,b)+\kappa_{2}\cdot(\hat{a},\hat{b})=0\). Hence by our assumption on \((a,b)\) and \((\hat{a},\hat{b})\), the result follows. We now consider the third action. Suppose \((a,b)\) and \((\hat{a},\hat{b})\) are two non-trivial independent \(\mathbb{R}\langle\mathbb{B}\rangle\) components. As above, we construct the linear combination, \(\kappa_{1}\cdot(a,b,\pm b^{\dagger},\pm a^{\dagger})+\kappa_{2}\cdot(\hat{a},\hat{b},\pm\hat{b}^{\dagger},\pm\hat{a}^{\dagger})=0\), for arbitrary scalar coefficients \(\kappa_{1}\) and \(\kappa_{2}\). This last equation is equivalent to \(\kappa_{1}\cdot(a,b)+\kappa_{2}\cdot(\hat{a},\hat{b})=0\)--the final two components generate the same equation as the first two. By our independence assumption on \((a,b)\) and \((\hat{a},\hat{b})\), the result follows.
We now consider (ii). For arbitrary \((u,0)\) and \((a,b)\) in \(\mathbb{R}\langle\mathbb{B}\rangle\), consider the following linear combination of the three actions, set equal to the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component, namely: \(\kappa_{1}\cdot(u,u^{\dagger},u,u^{\dagger})+\kappa_{2}\cdot(a-b^{\dagger},b-a ^{\dagger},b^{\dagger}-a,a^{\dagger}-b)+\kappa_{3}\cdot(a,b,-b^{\dagger},-a^{ \dagger})=0\), where \(\kappa_{1}\), \(\kappa_{2}\) and \(\kappa_{3}\) are arbitrary scalar coefficients. Note we assume \(u\) and \((a,b)\) are non-trivial. If \(\kappa_{1}\neq 0\) and \(\kappa_{2}=\kappa_{3}=0\) then we observe that necessarily \(u\) must be the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component, which contradicts our assumptions. Similarly if \(\kappa_{3}\neq 0\) and
\(\kappa_{1}=\kappa_{2}=0\) then necessarily \((a,b)\) is the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component, which again contradicts our assumptions. If \(\kappa_{1}\neq 0\), \(\kappa_{2}\neq 0\) and \(\kappa_{3}=0\), the first and second components above reveal that necessarily \(\kappa_{1}\cdot u+\kappa_{2}\cdot(a-b^{\dagger})=0\) and \(\kappa_{1}\cdot u^{\dagger}+\kappa_{2}\cdot(b-a^{\dagger})=0\). Taking the adjoint of the second equation and adding the result to the first equation, implies \(u=0\). The third and fourth components generate the same information. Thus we have a contradiction. Analogously, in the cases \(\kappa_{1}\neq 0\), \(\kappa_{3}\neq 0\) and \(\kappa_{2}=0\), as well as \(\kappa_{2}\neq 0\), \(\kappa_{3}\neq 0\) and \(\kappa_{1}=0\), it is straightforward to show that a necessary consequence is that \((a,b)\) is the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component, and we have a contradiction. Now consider the case when all of \(\kappa_{1}\), \(\kappa_{2}\), \(\kappa_{3}\) are non-zero. Pairing up the first component from the linear combination above with the adjoint of the fourth component reveals that necessarily \(a\) is the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component. Pairing the second component and the adjoint of the third component reveals that necessarily \(b\) is the zero \(\mathbb{R}\langle\mathbb{B}\rangle\) component. We thus reach another contradiction. The final case we have not considered is the case \(\kappa_{1}=\kappa_{3}=0\) and \(\kappa_{2}\neq 0\). In this case we necessarily deduce \(a=b^{\dagger}\). As we have seen above, this is precisely the condition we need to rule out for the input when we apply the second action. The proof is complete.
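As a concrete numerical illustration of part (ii) of the proof (a sanity check only, with \(\dagger\) once more modelled as tuple reversal), we can stack the three action outputs for explicit non-trivial inputs, with \(a\neq\pm b^{\dagger}\), and verify that they are linearly independent:

```python
from fractions import Fraction

def dagger(v):
    return v[::-1]

u = [1, 2]            # a non-trivial u, so the input (u,0) has length 4
ab = [3, 4, 5, 6]     # the component (a,b) = ((3,4),(5,6)), with a != ±b^dagger

row1 = u + dagger(u) + u + dagger(u)     # first action: (u, u^dagger, u, u^dagger)
w = [x - y for x, y in zip(ab, dagger(ab))]
row2 = w + [-x for x in w]               # second action: (1,-1) <| ((a,b)-(a,b)^dagger)
row3 = ab + [-x for x in dagger(ab)]     # third action: (a, b, -b^dagger, -a^dagger)

def rank(M):
    # Exact row reduction over the rationals.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is not None:
            M[r], M[piv] = M[piv], M[r]
            for i in range(r + 1, len(M)):
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
            r += 1
    return r

print(rank([row1, row2, row3]))  # 3: the three outputs are independent
```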
Putting the results of this and the previous steps together, we observe the following.
Proposition 3 (Full rank linear system for all compositions): _Suppose we are given a composition \(a_{1}\cdots a_{k}\in\mathcal{C}\) of \(k\)-parts. Assume that none of \(a_{1}\) through to \(a_{k-1}\) is equal to '\(1\)'. Then, associated with the set of \(2^{k}\) basis elements with composition component \(a_{1}\cdots a_{k}\), there is a unique set of \(2^{k}\) generators, and the signature coefficient matrix has full rank. If \(a_{k}=1\), the statement still holds but instead with \(2^{k-1}\) basis elements and \(2^{k-1}\) generators._
_Step 7: Composition and generator counts._ In light of Proposition 3, we are interested in the following counts. Given \(n\in\mathbb{N}\), what are the total numbers of: (i) Generators; (ii) Basis elements with composition components avoiding '\(1\)', i.e. compositions \(a_{1}\cdots a_{k}\) for which none of the letters \(a_{1}\) through to \(a_{k}\) are '\(1\)'; (iii) Basis elements with compositions ending in '\(1\)' with the rest of the composition avoiding '\(1\)', i.e. compositions of the form \(a_{1}\cdots a_{k-1}1\) for which \(a_{1}\cdots a_{k-1}\) avoids '\(1\)'. Such information will help us keep track of the size of the linear system of equations for the unknown coefficients \(\{c_{\star}\}\) we solve as part of the proof of Theorem 1 below in Step 8.
Let us begin with the total number of generators, i.e. item (i). Note that all monomial generators are of odd-degree. For a given \(n\in\mathbb{N}\), there is one generator of degree \(1\), namely \([\boldsymbol{n}]\). We then need to enumerate all the possible degree \(3\) generators. Each Poppe product corresponds to increasing the order of the compositions they generate by \(1\), and there are two Poppe products in any degree \(3\) generator. Hence all the degree \(3\) generators consist of all the possible compositions of \((n-2)\) with \(1\), \(2\) and \(3\) parts and all the possible ways to assort them into three factors which can include "packing factors" of \(0\). So for example, the only \(1\)-part compositions of \((n-2)\) assorted in this way are \([\boldsymbol{n-2}]\,[\boldsymbol{0}]\,[\boldsymbol{0}]\), \([\boldsymbol{0}]\,[\boldsymbol{n-2}]\,[\boldsymbol{0}]\) and \([\boldsymbol{0}]\,[\boldsymbol{0}]\,[\boldsymbol{n-2}]\). The first set of generators of this form associated with the \(2\)-part compositions of \((n-2)\) are \([\boldsymbol{n-3}]\,[\boldsymbol{1}]\,[\boldsymbol{0}]\), \([\boldsymbol{n-3}]\,[\boldsymbol{0}]\,[\boldsymbol{1}]\) and \([\boldsymbol{0}]\,[\boldsymbol{n-3}]\,[\boldsymbol{1}]\), and so forth. These are just the _weak compositions_ of \((n-2)\) into three parts. The next set of generators are those of degree \(5\), and all the generators of this degree would consist of the weak compositions of \((n-4)\) with \(5\) parts, and so forth. The number of weak compositions of \(m\) into \(k\) parts is \(m+k-1\) choose \(k-1\). We can also think of this as the number of ways of distributing \(m\) balls into \(k\) slots, allowing empty slots. From our discussion above, when \(n\) is odd, we see that we are
interested in, for \(k=1,3,5,\ldots,n\), the number of ways of distributing \(m=n-k+1\) balls into \(k\) slots, or in other words,
\[\sum_{k=1(k\text{ odd})}^{n}\binom{n}{k-1}=\sum_{\ell=0}^{\frac{1}{2}(n-1)} \binom{n}{2\ell}=2^{n-1},\]
where we use the substitution \(k=2\ell+1\) for the second sum. That the sum total on the right equals \(2^{n-1}\) follows, with some care, from the corresponding result in Lemma 14. Similarly, when \(n\) is even, we are interested in, for \(k=1,3,5,\ldots,n+1\), the number of ways of distributing \(m=n-k+1\) balls into \(k\) slots, or in other words,
\[1+\sum_{k=1(k\text{ odd})}^{n-1}\binom{n}{k-1}=1+\sum_{\ell=0}^{\frac{1}{2}(n- 2)}\binom{n}{2\ell}=2^{n-1}.\]
Again we used the substitution \(k=2\ell+1\) for the second sum. Note that the initial '1' in the sum corresponds to the case \(k=n+1\), i.e. corresponding to the monomial generator \([\mathbf{0}]^{n+1}\). That the sum total is \(2^{n-1}\) again follows from the first result in Lemma 14. We have thus established the following.
Lemma 19 (Total number of generators): _Given \(n\in\mathbb{N}\), the total number of monomial generators is equal to \(2^{n-1}\)._
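Lemma 19 is easy to confirm numerically; the short script below (an illustrative check, not part of the argument) accumulates the weak-composition counts over odd numbers of factors, the \(k=n+1\) term for \(n\) even corresponding to the monomial generator \([\boldsymbol{0}]^{n+1}\):

```python
from math import comb

def generator_count(n):
    # Degree-k monomial generators correspond to the weak compositions
    # of n - k + 1 into k parts, for odd k; their number is C(n, k-1).
    total, k = 0, 1
    while n - k + 1 >= 0:
        total += comb(n, k - 1)
        k += 2
    return total

for n in range(2, 16):
    assert generator_count(n) == 2 ** (n - 1)
print("total number of generators equals 2^(n-1) for n = 2..15")
```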
The number of compositions of \(n\) with \(k\)-parts is \(n-1\) choose \(k-1\), and accumulating these coefficients from \(k=1\) to \(k=n\), the total number of compositions of \(n\) is \(2^{n-1}\). Of course associated with each composition, there are one or more basis elements. For example for a composition \(a_{1}\cdots a_{k}\) with \(k\)-parts which avoids 1, there are \(2^{k}\) associated basis elements. The number of compositions of \(n\) into \(k\) parts avoiding 1 is given by,
\[\binom{n-k-1}{k-1}\,.\]
To see this we observe the following--see Axenovich and Ueckerdt [8, p. 24] or Beck and Robbins [13]. There is a bijection between: (a) the arrangements of \(n\) balls into \(k\) slots with each slot containing two or more balls; and (b) the arrangements of \(n-k\) balls into \(k\) slots with no empty slots. For the map from (a) to (b), we simply remove one ball from each slot. For the map from (b) to (a) we just add one ball to each slot. The count for (b) is \(n-k-1\) choose \(k-1\), giving the result above. Thus, given that for a given composition with \(k\) parts that avoids '1' there are \(2^{k}\) corresponding basis elements, the total number of basis elements with composition components that avoid '1' is given, with \(\lambda=2\), respectively when \(n\) is odd and then when \(n\) is even, by,
\[p(n;\lambda)\coloneqq\sum_{k=1}^{(n-1)/2}\binom{n-k-1}{k-1}\,\lambda^{k}\qquad \text{and}\qquad p(n;\lambda)\coloneqq\sum_{k=1}^{n/2}\binom{n-k-1}{k-1}\, \lambda^{k}.\]
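The binomial count for compositions avoiding '\(1\)' quoted above can be confirmed by brute-force enumeration for small \(n\); the following sketch (illustrative only) does exactly this:

```python
from math import comb

def compositions(n, k):
    # All compositions of n into k positive parts, by brute force.
    if k == 1:
        return [(n,)]
    return [(p,) + c for p in range(1, n - k + 2) for c in compositions(n - p, k - 1)]

for n in range(4, 14):
    for k in range(1, n // 2 + 1):
        count = sum(1 for c in compositions(n, k) if min(c) >= 2)
        assert count == comb(n - k - 1, k - 1)
print("compositions of n into k parts avoiding '1': C(n-k-1, k-1) confirmed")
```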
By direct enumeration, as well as from Examples 8-12, we know \(p(2;2)=p(3;2)=2\), \(p(4;2)=6\) and \(p(5;2)=10\). In fact, in general, we have the following result.
Lemma 20 (Weighted compositions avoiding '1' count): _Given an integer \(n\geqslant 2\) and a real number \(\lambda>0\), the weighted sum \(p=p(n;\lambda)\) of the total number of basis elements with composition components which avoid '1' satisfies the weighted Fibonacci recurrence,_
\[p(n;\lambda)=p(n-1;\lambda)+\lambda\,p(n-2;\lambda).\]
_In particular, when \(\lambda=2\), \(p(2;2)=p(3;2)=2\) and \(p(n;2)=\frac{2}{3}(2^{n-1}+(-1)^{n})\)._
Proof: By direct computation, for \(n\) odd, we observe, \(p(n-1;\lambda)+\lambda\,p(n-2;\lambda)\) equals,
\[\sum_{k=1}^{(n-1)/2}\binom{n-k-2}{k-1}\,\lambda^{k}+\sum_{k=1}^{( n-3)/2}\binom{n-k-3}{k-1}\,\lambda^{k+1}\] \[\qquad=\binom{n-3}{0}\,\lambda+\sum_{k=2}^{(n-1)/2}\Biggl{(} \binom{n-k-2}{k-1}+\binom{n-k-2}{k-2}\Biggr{)}\lambda^{k},\]
which equals \(p(n;\lambda)\) once we combine the two terms in the coefficient of \(\lambda^{k}\) shown and observe that the coefficient of the \(\lambda\) term is one. Note that in the first step we made the change of variables \(\ell=k+1\) in the second sum, before relabelling \(\ell\) as \(k\). When \(n\) is even, we similarly observe that \(p(n-1;\lambda)+\lambda\,p(n-2;\lambda)\) equals,
\[\sum_{k=1}^{(n-2)/2}\binom{n-k-2}{k-1}\,\lambda^{k}+\sum_{k=1}^{( n-2)/2}\binom{n-k-3}{k-1}\,\lambda^{k+1}\] \[\qquad\qquad=\binom{n-3}{0}\,\lambda+\sum_{k=2}^{(n-2)/2}\Biggl{(} \binom{n-k-2}{k-1}+\binom{n-k-2}{k-2}\Biggr{)}\lambda^{k}+\binom{n/2-2}{n/2-2 }\,\lambda^{n/2},\]
which equals \(p(n;\lambda)\) once we combine the terms in the coefficient of \(\lambda^{k}\), and observe that the coefficients of the \(\lambda\) and \(\lambda^{n/2}\) terms are one. We also used the same change of variables in the first step. The final statement specific to \(\lambda=2\) follows directly by solving the difference equation for \(p=p(n;2)\) with the initial conditions indicated.
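Both the recurrence and the closed form for \(\lambda=2\) can be checked directly from the binomial definition of \(p(n;\lambda)\) above (again, an illustrative confirmation only):

```python
from math import comb

def p(n, lam=2):
    # Weighted count of basis elements whose composition components avoid '1';
    # note n // 2 equals (n-1) // 2 when n is odd, covering both parities.
    return sum(comb(n - k - 1, k - 1) * lam ** k for k in range(1, n // 2 + 1))

assert (p(2), p(3), p(4), p(5)) == (2, 2, 6, 10)
for n in range(4, 25):
    assert p(n) == p(n - 1) + 2 * p(n - 2)             # weighted Fibonacci recurrence
    assert 3 * p(n) == 2 * (2 ** (n - 1) + (-1) ** n)  # closed form in Lemma 20
print("Lemma 20 confirmed for n = 4..24")
```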
The result of Lemma 20 provides an answer to item (ii), stated at the beginning of this step. Item (iii) is now straightforward. The total number of compositions with \(k\)-parts of the form \(a_{1}\cdots a_{k-1}1\) for which \(a_{1}\cdots a_{k-1}\) avoids '1' is simply \(n-k-1\) choose \(k-2\). This is because here we require \(n-1\) balls to fit into \(k-1\) slots with each slot containing two or more balls. Hence the total number of basis elements associated with composition components which end in '1', but avoid '1' elsewhere, when \(n\) is odd so \(n-1\) is even, is given by,
\[\sum_{k=2}^{(n+1)/2}\binom{n-k-1}{k-2}\,\lambda^{k-1}=\sum_{k=1}^{(n-1)/2} \binom{n-k-2}{k-1}\,\lambda^{k},\]
which equals \(p(n-1;\lambda)\). When \(n\) is even and thus \(n-1\) is odd, the same count is,
\[\sum_{k=2}^{n/2}\binom{n-k-1}{k-2}\,\lambda^{k-1}=\sum_{k=1}^{(n-2)/2}\binom{n -k-2}{k-1}\,\lambda^{k},\]
which equals \(p(n-1;\lambda)\). Finally we observe the following.
Lemma 21 (Basis element count): _The total number of basis elements with composition components which avoid '\(1\)', or end in '\(1\)' and avoid '\(1\)' elsewhere, equals \(2^{n-1}\)._
Proof: The count in question is \(p(n;2)+p(n-1;2)\). Using the explicit solution for \(p(n;2)\) given in Lemma 20, the result follows.
Combining the results of Lemmas 19 and 21, we deduce the rather remarkable fact:
For any given \(n\in\mathbb{N}\), the total number of basis elements whose composition component avoids '\(1\)', or ends in '\(1\)' but avoids '\(1\)' elsewhere, exactly equals the total number of generators.
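This count matching is straightforward to confirm by brute force for small \(n\) (an illustrative check, not part of the proof): each composition avoiding '\(1\)' carries \(2^{k}\) basis elements, each composition ending in '\(1\)' and avoiding it elsewhere carries \(2^{k-1}\), and the totals match \(2^{n-1}\):

```python
def compositions(n):
    # All compositions of n into positive parts, by brute force.
    if n == 0:
        return [()]
    return [(p,) + c for p in range(1, n + 1) for c in compositions(n - p)]

for n in range(2, 14):
    basis = 0
    for c in compositions(n):
        if min(c) >= 2:                                   # avoids '1'
            basis += 2 ** len(c)
        elif c[-1] == 1 and min(c[:-1], default=2) >= 2:  # ends in '1', avoids it elsewhere
            basis += 2 ** (len(c) - 1)
    assert basis == 2 ** (n - 1)                          # the total number of generators
print("basis-element count equals generator count 2^(n-1) for n = 2..13")
```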
Naturally this result is important in our proof of Theorem 1 just below. We remark that the inclusion of basis elements whose composition components end in '\(1\)' but avoid '\(1\)' elsewhere, rather than composition components containing '\(1\)' at some other position with avoidance elsewhere, is merely a consequence of the ordering we have imposed, namely the descent order.
_Step 8: Proof of Theorem 1._ In this last step we provide the overall proof of our main theorem. We combine together the knowledge we gained in Steps 1-7. This final stage of the argument, though significantly adapted, is analogous to that outlined for the non-commutative potential Korteweg-de Vries hierarchy in Malham [69].
Proof (of Theorem 1): To complete the proof, we essentially construct a table of signature coefficients for the arbitrary order \(n\) case, much like Tables 2-4 for the \(n=4\) and \(n=5\) cases. Indeed we refer to these example tables to illustrate the general procedure. We already know from Steps 1-7 that we can construct a linear algebraic system of equations of the form \(AC=B\), where the vector \(C\) of length \(2^{n-1}\) lists the unknown coefficients of the generator monomials in the Poppe polynomial \(\pi_{n}=\pi_{n}([\mathbf{0}],[\mathbf{1}],\ldots,[\mathbf{n}])\). The vector \(B\), whose length exceeds \(2^{n-1}\), is a vector of zeros apart from a single non-zero value '\(1\)' in the first position when \(n\) is odd and in the second position when \(n\) is even. This is due to the ordering we impose, which we outline briefly now, and in some more detail just below. The coefficients in \(C\) are ordered according to the descent order and blocks as outlined in Steps 3 and 5. The signature coefficient matrix \(A\) has a lower triangular block form. It has \(2^{n-1}\) columns and its total number of rows exceeds \(2^{n-1}\), though equals the length of \(B\). The columns of \(A\) are parametrised by the blocks of monomial generators mentioned, or equivalently the order of the coefficients in \(C\). The rows of \(A\) are parametrised by the basis elements, characterised by the composition components of the basis elements listed in descent order; within an individual composition \(w\), the basis elements are listed according to the binary order of the \(\mathbb{R}\langle\mathbb{B}\rangle\)-components. The number of such \(\mathbb{R}\langle\mathbb{B}\rangle\)-components is \(2^{|w|-v(w)}\), where \(v(w)\) counts the number of \(1\)'s in the composition \(w\).
Let us outline the forms of \(C\) and \(A\) in some more detail. We can be brief as much of the procedure has already been outlined in Steps 1-7. Corresponding to the pair of basis elements with a one-part composition component, namely \([n]\times(1,0)\) and \([n]\times(0,1)\), the first pair of coefficients in \(C\) are \(c_{n}\) and \(c_{0(n-2)0}\), corresponding to the generators \([\mathbf{n}]\) and \([\mathbf{0}]\,[\mathbf{n-2}]\,[\mathbf{0}]\). The corresponding \(2\times 2\) top-left block in \(A\) is the matrix \(A_{0}\). All the remaining entries in the first two rows of \(A\) to the right of this block are zero. We move onto basis elements whose composition components have \(k=2\) parts. In descent order the first set of basis elements consists of \([(n-1)1]\times(1,0,0,0)\) and \([(n-1)1]\times(0,0,0,1)\). We know from Example 16 in Step 5, these are the only two relevant
basis elements corresponding to compositions ending in '1'. The corresponding coefficients in \(C\) are \(c_{(n-2)00}\) and \(c_{0(n-3)1}\), associated with the generators \([\boldsymbol{n-2}]\,[\boldsymbol{0}]\,[\boldsymbol{0}]\) and \([\boldsymbol{0}]\,[\boldsymbol{n-3}]\,[\boldsymbol{1}]\). The corresponding coefficient matrix is the block \(A_{1}\) taking up rows and columns 3 and 4 in \(A\), with the remaining entries in rows 3 and 4 to the right of this block being zero. This first set of blocks is important once we start addressing the issue of the uniqueness of the solution \(C\) to the linear system, or equivalently the consistency of the overall linear system. Next we consider the blocks of basis elements parametrised by two-part composition components of the form \(a_{1}a_{2}\) in descent order, with neither \(a_{1}\) nor \(a_{2}\) equal to '1'. Associated with any such composition, there are four basis elements \([a_{1}a_{2}]\times\boldsymbol{\beta}_{i}\), \(i=1,2,3,4\), and the corresponding coefficients in \(C\) are \(c_{(a_{1}-1)(a_{2}-1)0}\), \(c_{(a_{1}-1)0(a_{2}-1)}\), \(c_{0(a_{1}-2)a_{2}}\) and \(c_{0(a_{1}-2)0(a_{2}-2)0}\), corresponding to the monomial generators outlined in Example 16. For each block of four basis elements with such a composition component, the corresponding signature coefficient matrix is \(A_{2}\). As we run through the compositions \(a_{1}a_{2}\) which avoid '1' in descent order, the coefficient matrix \(A_{2}\) occupies rows and columns \(4(n-a_{1}-1)+i\) for \(i=1,2,3,4\) in the signature coefficient matrix \(A\). All the entries in \(A\) in these rows to the right of these blocks are zero. We know from Step 5 that the basis elements with the two-part composition component \(1(n-1)\) do not occupy any new columns in \(A\); instead, the two rows corresponding to the two basis elements concerned only contain non-zero entries in the columns parametrised by \(4(n-a_{1}-1)+i\) for \(i=1,2,3,4\) for \(a_{1}\neq 1\).
We move onto blocks of basis elements corresponding to composition components with \(k=3\) parts. We know from our arguments in Step 5 that we can focus on composition components of the form \(a_{1}a_{2}a_{3}\) which avoid '1' or end in '1' and avoid '1' elsewhere. Basis elements with composition components lying in the complement of this set do not generate "new" columns, or equivalently, the non-zero entries in the rows in \(A\) corresponding to these basis elements only occupy columns we have already encountered/parametrised. Let us examine the blocks we generate as we run through the compositions \(a_{1}a_{2}a_{3}\) which avoid '1' or end in '1' and avoid '1' elsewhere. We know from Step 5, that for each of the former compositions we generate the block matrix \(A_{3}\), modified to include the column factors mentioned in Step 5, associated with the 8 corresponding basis elements and the 8 "new" monomial generator columns. For each of the latter compositions we generate a block matrix \(A^{\prime}_{3}\) associated with the 4 corresponding basis elements and the 4 "new" monomial generator columns. The block matrices \(A_{3}\) and \(A^{\prime}_{3}\) are diagonal blocks in \(A\), with all entries in the corresponding rows they occupy to the right of these blocks equal to zero.
We know from our analysis in Step 5 and in particular Lemma 17 and the discussion immediately following this lemma, that we have the following. For every composition \(w\) of \(n\) that avoids '1', or ends in '1' but avoids '1' elsewhere, we can construct a \(2^{|w|}\times 2^{|w|}\) block matrix \(A_{w}\) in the former case, or a \(2^{|w|-1}\times 2^{|w|-1}\) block matrix \(A^{\prime}_{w}\) in the latter case, which occupies a distinct diagonal block in \(A\). Here by "distinct", we mean that its rows and columns do not coincide with the rows and columns of any of the analogous block matrices corresponding to basis elements with other composition components. All the entries in \(A\) in the rows occupied by these blocks and to the right of them, are zero. Thus for general \(n\), the signature coefficient matrix \(A\) is indeed a lower block triangular matrix. Further, from our results in Step 6, we know that each of the block matrices \(A_{w}\) and \(A^{\prime}_{w}\) has full rank. Even further, from our results in Step 7, we know that the total number of basis elements corresponding to such composition components and generating the rows of such diagonal blocks, exactly equals the total number of monomial generators generating the columns of such diagonal blocks. This means that
if we ignore the basis elements with composition components in the complementary set for the moment, then we can proceed block by block, starting with \(A_{0}\), and solve for the corresponding set of Poppe polynomial coefficients in \(C\), until we precisely exhaust the blocks and uniquely recover \(C\).
The rest of the proof is now concerned with demonstrating the consistency of the remaining rows/linear equations for the coefficients in \(C\) associated with those basis elements with composition components that contain a '1', not including the instances of compositions ending in '1', but avoiding it elsewhere. We heavily rely on the fact that the linear system of algebraic equations for \(C\) is almost homogeneous apart from the single unit entry in the first position when \(n\) is odd and in the second position when \(n\) is even. The first phase of this section of the proof focuses on the blocks associated with basis elements with 1 and 2-part composition components. Assume for the moment that \(n\) is odd. In this instance the first two linear equations for the coefficients \(c_{n}\) and \(c_{0(n-2)0}\) are \(c_{n}+2\cdot c_{0(n-2)0}=1\) and \(-2\cdot c_{0(n-2)0}=0\). Thus when \(n\) is odd, we always have \(c_{0(n-2)0}=0\). This means that all the entries in the second column of the signature coefficient matrix \(A\) are not relevant to the linear system of equations for \(C\) and we thus eliminate this column in \(A\) when \(n\) is odd. Implementing this, and ignoring the first row in \(A\) corresponding to the basis element \([\mathbf{0}n\mathbf{0}]\), the top left corner of \(A\) has the form shown in the top part of Table 5. Now assume that \(n\) is even. The first two linear equations for the coefficients \(c_{n}\) and \(c_{0(n-2)0}\) in this instance are \(c_{n}+2\cdot c_{0(n-2)0}=0\) and \(-2\cdot c_{0(n-2)0}=1\). When \(n\) is even we swap over the first two rows in the signature coefficient matrix \(A\) which is equivalent to swapping the order of the first two linear equations shown. If we ignore the _new_ first row of \(A\) corresponding to the nonhomogeneous equation \(-2\cdot c_{0(n-2)0}=1\), then the top left corner of \(A\) has the form shown in the bottom part of Table 5. All the remaining rows and columns in the signature coefficient matrix \(A\), whether \(n\) is odd or even, remain the same. However, with the first rows ignored, the system of linear equations that remains is homogeneous. And, of course, the top left corners in the case that \(n\) is odd
\begin{table}
\begin{tabular}{|l|cccc|} \hline\hline
\(n\) odd & \([\boldsymbol{n}]\) & \([\boldsymbol{n-2}]\,[\mathbf{0}]^{2}\) & \([\mathbf{0}]\,[\boldsymbol{n-3}]\,[\mathbf{1}]\) & \(\ldots\) \\ \hline
\([\mathbf{0}(n-1)\mathbf{0}1\mathbf{0}]\) & \((n-1)1\) & \(2\cdot\big((n-2){\otimes}0{\otimes}0\big)\) & \(2\cdot\big(0{\otimes}(n-3){\otimes}1\big)\) & \\
\([\mathbf{0}(n-1)\mathbf{0}^{\dagger}1\mathbf{0}^{\dagger}]\) & & \(-2\cdot\big((n-2){\otimes}0{\otimes}0\big)\) & \(2\cdot\big(0{\otimes}(n-3){\otimes}1\big)\) & \\ \hline\hline
\(\vdots\) & & & & \\ \hline\hline
\(n\) even & \([\boldsymbol{n}]\) & \([\mathbf{0}]\,[\boldsymbol{n-2}]\,[\mathbf{0}]\) & \([\boldsymbol{n-2}]\,[\mathbf{0}]^{2}\) & \([\mathbf{0}]\,[\boldsymbol{n-3}]\,[\mathbf{1}]\) \\ \hline
\([\mathbf{0}n\mathbf{0}]\) & \(n\) & \(2\cdot\big(0{\otimes}(n-2){\otimes}0\big)\) & & \\
\([\mathbf{0}(n-1)\mathbf{0}1\mathbf{0}]\) & \((n-1)1\) & \(2\cdot\big(0{\otimes}(n-2){\otimes}0\big)\) & \(2\cdot\big((n-2){\otimes}0{\otimes}0\big)\) & \(2\cdot\big(0{\otimes}(n-3){\otimes}1\big)\) \\
\([\mathbf{0}(n-1)\mathbf{0}^{\dagger}1\mathbf{0}^{\dagger}]\) & & \(2\cdot\big(0{\otimes}(n-2){\otimes}0\big)\) & \(-2\cdot\big((n-2){\otimes}0{\otimes}0\big)\) & \(2\cdot\big(0{\otimes}(n-3){\otimes}1\big)\) \\ \hline\hline
\end{tabular}
\end{table}
Table 5: The top left block entries in the signature coefficient matrix \(A\) at any order \(n\), depending on whether \(n\) is odd (top) or \(n\) is even (bottom). The coefficients are the \(\chi\)-images of the signature entries shown. The forms shown for the cases when \(n\) is odd or even are used to prove the consistency of the overdetermined linear system of algebraic equations for the Poppe polynomial coefficients. The first rows are not shown.
or even have the forms shown in Table 5. In both cases in Table 5 we trace a diagonal starting from the top left non-zero coefficient, which is \(\chi\big((n-1)1\big)=n\) when \(n\) is odd, and \(\chi(n)=1\) when \(n\) is even. When \(n\) is even the next two entries along this diagonal are \(\chi\big(2\cdot(0_{\otimes}(n-2)_{\otimes}0)\big)=2\) and \(\chi\big(-2\cdot((n-2)_{\otimes}0_{\otimes}0)\big)=-2\). When \(n\) is odd the next diagonal entry is \(\chi\big(-2\cdot((n-2)_{\otimes}0_{\otimes}0)\big)=-2\). Thereafter the diagonal entries for the cases of \(n\) being even or odd are the same. In Table 2, when \(n\) is even, after swapping the first two rows, though not ignoring the new first row yet, we can view the diagonal we have identified as the diagonal just below the leading diagonal. Similarly in Tables 3 and 4, when \(n\) is odd, after eliminating the second column but still retaining the first row, we can again view the diagonal we have identified as that just below the leading diagonal in those Tables. In either Table 2 or in Tables 3 and 4, let us call the diagonal just below the leading diagonal the 'sub-diagonal'. Consider for example Tables 3 and 4. If we follow the sub-diagonal with the view of retaining non-zero terms along it, we observe we meet an obstruction in the first \(4\times 4\) block with matrix \(A_{2}\), characterised by the composition component \(32\). The problem is that while the sub-diagonal of \(A_{2}\) has non-zero entries, the next term along the diagonal, which lies in the last column of that \(A_{2}\) block but beneath the entire block, in the row corresponding to the basis element \([23]\times(1,0,0,0)\), is zero. However there is a quick fix to this obstruction. That is to simply swap the two columns in the signature coefficient matrix \(A\) corresponding to the final two columns of the \(A_{2}\) block characterised by the composition component \(32\). Such a swap simply corresponds to changing the order of the monomial generators. We can see from Tables 3 and 4 that this column-swap procedure guarantees that the next entry in the sub-diagonal is non-zero. We can then continue to consider the sub-diagonal in the \(A_{2}\) block corresponding to the basis elements \([23]\times\boldsymbol{\beta}_{i}\) for \(i=1,2,3,4\). However we observe a similar obstruction necessitating an analogous swap of the columns of \(A\) corresponding to the final two columns of this second \(A_{2}\) block. This is again enacted to ensure that the term in the final column immediately below this \(A_{2}\) block is non-zero. The term in question corresponds to the signature coefficient \(\chi(0\hat{\otimes}0_{\otimes}3)\) in the row corresponding to \([14]\times(1,0,0,0)\). We have thus established, for all the basis elements with \(1\) and \(2\)-part composition components, a complete diagonal with all entries non-zero, which can act as pivots. Since all the linear equations corresponding to the rows we are considering (we are ignoring the top row) are homogeneous, we can use Gaussian elimination to render all the entries in the columns below the sub-diagonal to be zero.
The procedure for the case of general \(n\) proceeds in exactly the same manner as the \(n=5\) case we have just outlined, except that now we need to establish that when we swap the columns over as just outlined, the corresponding sub-diagonal entry is guaranteed to be non-zero. We also need to guarantee that the entry immediately below the final column of the \(2\times 2\) block \(A_{1}\), corresponding to the rows \([(n-1)1]\times(1,0,0,0)\) and \([(n-1)1]\times(0,0,0,1)\), is also non-zero. This case corresponds to the column given by \(\boldsymbol{[0]}\,\boldsymbol{[n-3]}\,\boldsymbol{[1]}\). In the other cases of the \(A_{2}\) blocks corresponding to the basis elements \([(n-m)m]\times\boldsymbol{\beta}_{i}\) for \(i=1,2,3,4\), the column in question is the third column in the \(A_{2}\) block, corresponding to the column given by \(\boldsymbol{[0]}\,\boldsymbol{[n-m-2]}\,\boldsymbol{[m]}\). In particular this means we can treat the \(m=1\) and \(m=2,\ldots,n-2\) cases simultaneously. Indeed, using Corollary 6 in Step 4, we observe that at leading order we have,
\[\boldsymbol{[0]}\,\boldsymbol{[n-m-2]}\,\boldsymbol{[m]}=[(n-m)m]\times(2,0,0, 2)+[(n-m-1)(m+1)]\times(1,1,1,1)+\cdots,\]
where we have used that at leading order \(\boldsymbol{[m]}=[m]\times(1,0)+\cdots\). The first term on the right is the term we expect at leading order for this generator, while the second term on
the right is the column corresponding to the rows in the next block down--we see that \((n-m-1)(m+1)\) is obtained from \((n-m)m\) by replacing \(m\) by \(m+1\), which is one composition further down in descent order. Thus indeed we are guaranteed that the next entry in the sub-diagonal is non-zero when we enact the column swap. Further we observe that in the case of the final \(A_{2}\) block, corresponding to the composition \(2(n-2)\) for which \(m=n-2\), the corresponding generator is \(\left[\mathbf{0}\right]\left[\mathbf{0}\right]\left[\mathbf{n-2}\right]\) while the corresponding row containing the sub-diagonal entry of interest is \([1(n-1)]\times(1,0,0,0)\). At leading order we have,
\[\left[\mathbf{0}\right]\left[\mathbf{0}\right]\left[\mathbf{n-2}\right]=[2(n- 2)]\times(2,0,0,2)+[1(n-1)]\times(2,2,0,0)+\cdots,\]
which thus guarantees a final sub-diagonal non-zero entry. We are thus in the exact same situation as described for the \(n=5\) case just above, and we can use the sub-diagonal entries as pivots to render all the entries in \(A\), in all the sub-diagonal columns, below the sub-diagonal to be zero.
The second phase of this section of the proof now focuses on all the blocks associated with basis elements with composition components with \(k\)-parts with \(k\geqslant 3\). This phase is more straightforward. Let us focus on the 3-part composition cases to begin with. The first 3-part composition in descent order is \((n-2)11\), and we know from Step 5 that there are no "new" generators associated with any such composition that lies in the set of compositions complementary to those avoiding '1' or ending in '1', but avoiding it elsewhere. Hence we can use Gaussian elimination, using the pivots from the sub-diagonal established for the 1 and 2-part composition cases just outlined, to render the entries in the two rows/basis elements concerned here, \([(n-2)11]\times\boldsymbol{\beta}_{1}\) and \([(n-2)11]\times\boldsymbol{\beta}_{8}\), equal to zero. Next we consider the block of rows/basis elements corresponding to the composition \((n-3)21\). As outlined in Step 5 this block is associated with 4 generators and the block matrix \(A_{3}^{\prime}\). We know this has full rank and we can thus use the leading diagonal as pivots to render all entries in the corresponding columns of \(A\) below this diagonal to be zero. The next block is associated with the composition \((n-3)12\), which with a '1' in the middle is not associated with any new generators, and from our Gaussian elimination processes thus far has all row entries rendered zero. The next blocks are associated with the compositions \((n-4)31\), \((n-4)22\), \((n-4)13\). The first two of these compositions are associated with separate copies of the \(8\times 8\) matrix \(A_{3}\) (with the columns mentioned in Step 5 suitably scaled) and a total of 16 generators (one set of 8 each). The leading diagonals of both copies of \(A_{3}\) can again be used as pivots to render all the entries, below this diagonal in the columns of \(A\) associated with these two copies, equal to zero. The entries in the rows associated with the third composition \((n-4)13\) will have been rendered zero in the Gaussian elimination process just outlined for the other two compositions. And so forth: we can proceed in descent order through the blocks associated with 3-part compositions, either, in the case of compositions that avoid '1' or end in '1' but avoid it elsewhere, using the diagonals of the blocks associated with \(A_{3}\) or \(A_{3}^{\prime}\) to render the corresponding entries in \(A\) below these diagonals to be zero, or, for the blocks associated with the complementary set of compositions, recognising that the entries in the rows of those blocks will already have been rendered zero. The procedure for all further blocks associated with compositions of 4 or more parts proceeds exactly analogously. Naturally, that the corresponding diagonals with non-zero entries exist for all blocks associated with compositions that avoid '1' or end in '1' but avoid it elsewhere, is guaranteed by the results in Step 6, in particular Proposition 3.
We have thus rendered all the entries in all the rows corresponding to basis elements with composition components which lie in the set complementary to those that avoid '1' or end in '1' but avoid it elsewhere, equal to zero. Briefly returning to the rows/blocks associated with the 1 and 2-part compositions, and still ignoring the top row as indicated in Table 5, a quick count reveals that when \(n\) is even, we have \(3+4(n-1)\) rows, i.e. homogeneous linear equations, in \(4+4(n-1)\) unknowns, while when \(n\) is odd, we have \(2+4(n-1)\) homogeneous linear equations, in \(3+4(n-1)\) unknowns. In either case when \(n\) is even or odd, proceeding through all the other blocks associated with compositions of three or more parts, the remaining number of homogeneous linear equations equals the remaining number of unknowns (as outlined in the first section of this proof). Hence, in either case when \(n\) is even or odd, we can solve the entire system of linear homogeneous equations, with one less equation than the total number of unknowns, to find expressions for all the unknowns in terms of only one of them. In the case that \(n\) is odd, we solve for all of them in terms of \(c_{n}\). In the case that \(n\) is even, we solve for all of them in terms of \(c_{0(n-2)0}\). We now re-introduce the very first row we ignored at the beginning of this second, "consistency", section of the proof. When \(n\) is odd, that first equation is \(c_{n}+2\cdot c_{0(n-2)0}=1\). Since we have an expression for \(c_{0(n-2)0}\) in terms of \(c_{n}\) from the homogeneous set of linear equations, we can substitute that expression into this non-homogeneous linear equation and determine \(c_{n}\). When \(n\) is even, the first equation is \(-2\cdot c_{0(n-2)0}=1\) or equivalently \(c_{0(n-2)0}=-1/2\). Since in this case we have expressions for all the other unknowns in terms of \(c_{0(n-2)0}\), this fixes the values of all the other unknowns. In either case, whether \(n\) is odd or even, we have established a unique solution \(C\), and the proof is complete.
**Remark 18**: As mentioned, the overall proof in Step 8 just above is analogous to that for the non-commutative potential Korteweg-de Vries hierarchy in Malham [69]. Therein we proceed by considering compositions with \(k=1\), \(k=2\), and so forth, parts. In that case there are no blocks as there are no \(\mathbb{R}\langle\mathbb{B}\rangle\) components. The basis elements are just compositions of \(n\), there are no skew forms. Also, as it is the potential equation, the generators are just monomials of the signature expansions \(\boldsymbol{n}\), with \(n\in\mathbb{N}\)--in particular there are no generators corresponding to \([\boldsymbol{0}]\). Further, since there are no skew forms, the generators can be of even or odd degree. See in particular Section 6 in Malham [69]. We observed at the end of Step 7 above that the total number of generators equalled the total number of basis elements with composition components that avoided '1' together with those that ended in '1' but avoided it elsewhere. It would be natural to wonder whether a similar situation occurs for the case of the non-commutative potential Korteweg-de Vries hierarchy, and indeed retrospectively, we can establish the exact same result for that hierarchy. In fact we can show, since there are no blocks and no generators akin to '\([\boldsymbol{0}]\)', that for each set of compositions with \(k\) parts, the number of monomial generators with \(k\) factors equals the number of compositions (the basis elements here) that avoid '1' together with those that ended in '1' but avoided it elsewhere. Again, that we single out those compositions ending in '1' is just an artefact of the descent order we impose. To see this fact, we observe from Section 6 in Malham [69], that the number of generators with \(k\) factors, say of the form \((\boldsymbol{n}_{1})(\boldsymbol{n}_{2})\cdots(\boldsymbol{n}_{k})\) with the Poppe product, is given by \(n-k\) choose \(k-1\). This is because each Poppe product adds a '1' to one of the composition parts in the eventual expansion in compositions. The complete set of such monomial generators is exhausted by those with \(k=1,2,\ldots,\frac{1}{2}(n+1)\) parts. We already know from Step 7 above, that the number of compositions of \(n\) with \(k\) parts that avoid '1' equals \(n-k-1\)
choose \(k-1\) for \(k=1,2,\ldots,\frac{1}{2}(n-1)\), and the number of compositions that end in '1' but avoid it elsewhere equals \(n-k-1\) choose \(k-2\) for \(k=2,\ldots,\frac{1}{2}(n+1)\). If we restrict ourselves to \(k=2,\ldots,\frac{1}{2}(n-1)\), we observe that the number of compositions satisfying either property is given by,
\[\binom{n-k-1}{k-1}+\binom{n-k-1}{k-2}=\binom{n-k}{k-1}\,,\]
with equality following by Pascal's rule, i.e. by simply adding the two binomial coefficients on the left. The cases \(k=1\) and \(k=\frac{1}{2}(n+1)\), for which the number of such compositions and generators is singular, just follow by inspection.
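These counts are easy to check computationally. The following short Python script (our addition, not part of the original argument) enumerates the compositions of a sample odd order \(n\) and verifies both counts and the identity above:

```python
from math import comb

def compositions(n, k):
    """Yield all compositions of n into k positive parts."""
    if k == 1:
        yield (n,)
        return
    for first in range(1, n - k + 2):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

n = 13  # sample odd order
for k in range(2, (n - 1) // 2 + 1):
    avoid1 = sum(1 for c in compositions(n, k) if 1 not in c)
    end1 = sum(1 for c in compositions(n, k)
               if c[-1] == 1 and 1 not in c[:-1])
    # Counts stated in the text ...
    assert avoid1 == comb(n - k - 1, k - 1)
    assert end1 == comb(n - k - 1, k - 2)
    # ... and the displayed binomial identity.
    assert avoid1 + end1 == comb(n - k, k - 1)
print("all checks passed")
```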
## 7 Conclusion
There are many open directions of research we intend to pursue based on the combinatorial algebraic approach we introduced herein. One direction we have not directly addressed is that of alternative formulations of the modified Korteweg-de Vries hierarchy members of orders 3, 5 and higher. See for example Liu and Athorne [66], Olver and Sokolov [82], Oevel and Rogers [81] and Gerdjikov [48]. For example, the alternative non-commutative modified Korteweg-de Vries equation has the form,
\[\partial_{t}g=\partial^{3}g+3\big{(}g(\partial^{2}g)-(\partial^{2}g)g\big{)} -6g(\partial g)g.\]
Note that the polynomial partial differential field includes even degree terms. Such alternative forms can be obtained from the non-commutative modified Korteweg-de Vries equation via a suitable gauge transformation as, for example, outlined in detail in Carillo and Schiebold [23]. The combinatorial algebraic structure we have developed would seem a natural context in which to investigate such alternative hierarchy forms further. Closely related is the _Miura transformation_. This is particularly simple in our context. Assuming the order \(n=2m+1\) with \(m\in\mathbb{N}\) is odd, then since \(\mathcal{I}^{2}=\mathrm{id}\), the base dispersion equation for \(P\) is,
\[\partial_{t}P=(-1)^{m+1}\partial^{2m+1}P.\]
We can assume this to be the base equation for the non-commutative potential Korteweg-de Vries hierarchy considered in Malham [69]. In that case the solution \(G^{\mathrm{pKdV}}\) is given by \(G^{\mathrm{pKdV}}=P(\mathrm{id}-P)^{-1}\). For the non-commutative modified Korteweg-de Vries hierarchy, we observe that when \(n\) is odd we can assume the solution \(G^{\mathrm{mKdV}}\) to have the form \(G^{\mathrm{mKdV}}=2P(\mathrm{id}+P)^{-1}(\mathrm{id}-P)^{-1}\), i.e. replacing the '\(iP\)' everywhere simply by \(P\), and all our results in Section 4 and thereafter follow through. This is because we carried through the quantity '\(iP\)' throughout our computations in Section 4 and, in particular, into our abstract encoding. For example, our computation for \(\partial_{t}[V]\) preceding Definition 12 carries through with this replacement with \(V\coloneqq(\mathrm{id}-P)^{-1}\), \(P^{\dagger}=-P\) and \(V^{\dagger}=(\mathrm{id}+P)^{-1}\). For convenience we set \(U^{\mathrm{pKdV}}\coloneqq(\mathrm{id}-P)^{-1}\) and \(U^{\mathrm{mKdV}}\coloneqq(\mathrm{id}+P)^{-1}(\mathrm{id}-P)^{-1}\). Note that by operator partial fractions we have \(U^{\mathrm{pKdV}}=\mathrm{id}+PU^{\mathrm{pKdV}}\) so \(\partial G^{\mathrm{pKdV}}=\partial U^{\mathrm{pKdV}}\). Then as in Doikou _et al._[30, Cor. 3.15] we observe that since \(U^{\mathrm{pKdV}}=(\mathrm{id}+P)U^{\mathrm{mKdV}}\) we have,
\[\partial U^{\mathrm{pKdV}} =\partial\big{(}PU^{\mathrm{mKdV}}\big{)}+\partial U^{\mathrm{ mKdV}}\] \[\Leftrightarrow \partial U^{\mathrm{pKdV}} =\partial\big{(}PU^{\mathrm{mKdV}}\big{)}+U^{\mathrm{mKdV}} \partial(P^{2})U^{\mathrm{mKdV}}\]
\[\Rightarrow\qquad\qquad\partial[G^{\rm pKdV}]=\partial[G^{\rm mKdV}]+[G^{\rm mKdV}]^{2}.\]
In the last step we used the Poppe product rule. This represents the Miura transformation giving the connection between the non-commutative potential, and modified, Korteweg-de Vries hierarchies. A natural question is: what is the (likely non-trivial) translation of this result at even orders?
A natural formulation for Hankel and Toeplitz operators is the \(L^{2}\) Hardy spaces \(\mathbb{H}_{\pm}\), corresponding to the upper and lower half complex plane; see for example Peller [84]. This can be thought of as the Fourier transform representation of the formulation we gave in Section 2. Recently this context has been used to prove interesting integrability results/connections for the cubic Szego equation, see Pocovnicu [85], Grellier and Gerard [51] and Gerard and Pushnitski [49], and to extend regularity results for the Korteweg-de Vries equation, see Grudsky and Rybkin [52, 53, 54]. There is a natural decomposition \(L^{2}(\mathbb{R})=\mathbb{H}_{+}\oplus\mathbb{H}_{-}\) and thus an immediate direction to pursue would be to consider our Marchenko equation and Fredholm Grassmannian flow in this context and establish a connection to the results of, for example, Grellier and Gerard [51] and Grudsky and Rybkin [54].
At the abstract algebra level, for the skew-Poppe algebra \(\mathbb{C}[\mathbb{Z}_{\bf 0}]\cong\mathbb{C}[\mathcal{C}]\times\mathbb{R}\langle\mathbb{B}\rangle\), there are many open questions as follows: (i) The skew-Poppe algebra \(\mathbb{C}[\mathbb{Z}_{\bf 0}]\), endowed with the triple product in Lemma 12, constitutes a _triple system_ or _ternary algebra_. See for example Meyberg [73, p. 21] or Ricciardo [92, p. 23]. Exploring this context is very much of interest. (ii) The Poppe products in Lemma 8 are quasi-Leibniz products in which the 'quasi' label refers to the term additional to the two expected Leibniz terms which essentially involves inserting a '1' between the two terms in the product (as well as a factor '2'). A natural question is, is it possible to establish an isomorphism between this skew-Poppe algebra and the corresponding skew-Poppe algebra endowed with the triple product based on the Poppe products in Lemma 8 without the 'quasi' terms? This will necessarily require a fix of the non-quasi product for low order terms, for example those involving products with '\([{\bf 0}]\)' and so forth. The analogy is the isomorphism between the shuffle algebra and the quasi-shuffle algebra proved by Hoffman [61]. Establishing such an isomorphism would significantly simplify the proofs of the results herein and would help to establish (iii) and (iv) just below. (iii) We observe that in our main result we sought Poppe polynomial expansions \(\pi_{n}=\pi_{n}([{\bf 0}],[{\bf 1}],\ldots,[{\bf n}])\) for the endomorphisms \([{\bf 0}n{\bf 0}]\) when \(n\) is odd, and \([{\bf 0}n{\bf 0}^{\dagger}]\) when \(n\) is even. However more generally we might ask the question of whether there exist Poppe polynomial expansions for any of the basis elements in \(\mathbb{C}[\mathbb{Z}_{\bf 0}]\)? In other words can we express any basis element in \(\mathbb{C}[\mathbb{Z}_{\bf 0}]\) as a linear combination of monomials of the form \([{\bf n_{1}}]\,[{\bf n_{2}}]\,\cdots\,[{\bf n_{k}}]\)? (iv) A directly related broader question then is, does there exist an isomorphism between the algebra of odd-degree monomial forms \([{\bf n_{1}}]\,[{\bf n_{2}}]\,\cdots\,[{\bf n_{k}}]\) with \(n_{i}\in\mathbb{N}\cup\{0\}\) endowed with the concatenation product, and the skew-Poppe algebra? The connection is provided by the signature expansions. The parametrising factors \(n_{1}n_{2}\cdots n_{k}\) of the odd-degree monomial forms are actually weak compositions. (v) Can we establish a _co-algebra_ associated with the skew-Poppe algebra \(\mathbb{C}[\mathcal{C}]\times\mathbb{R}\langle\mathbb{B}\rangle\)? This was achieved for the Poppe algebra in Malham [69, Sec. 5]. Here we have to deal with the \(\mathbb{R}\langle\mathbb{B}\rangle\)-components. Indeed, we have already started in this direction. (vi) Establishing such a co-algebra, or at least a refined de-Poppe co-product \(\Delta_{n}\) associated with \(\mathbb{C}[\mathcal{C}]\times\mathbb{R}\langle\mathbb{B}\rangle\), would be useful. Consider the odd-degree monomial \([{\bf n_{1}}]\,[{\bf n_{2}}]\,\cdots\,[{\bf n_{k}}]\) with \(n_{1}n_{2}\cdots n_{k}\) a weak composition of \(n\). Using the signature expansions for each of the factors, \([{\bf n_{1}}]\,[{\bf n_{2}}]\,\cdots\,[{\bf n_{k}}]\) can be
expressed in the form,
\[\sum\chi(w_{1}\otimes w_{2}\otimes\cdots\otimes w_{k})\cdot\big{(}[w_{1}]\times \boldsymbol{\beta}_{1}(|w_{1}|)\big{)}\,\big{(}[w_{2}]\times\boldsymbol{\beta} _{1}(|w_{2}|)\big{)}\,\cdots\,\big{(}[w_{k}]\times\boldsymbol{\beta}_{1}(|w_{k }|)\big{)},\]
where the sum is over all basis elements \([w_{i}]\times\boldsymbol{\beta}_{1}(|w_{i}|)\) for \(i=1,\ldots,k\), with \(w_{i}\in\mathcal{C}(n_{i})\) and \(\boldsymbol{\beta}_{1}(|w_{i}|)\in\mathbb{R}\langle\mathbb{B}\rangle\) of length \(2^{|w_{i}|}\) with the first component equal to '1' as the only non-zero component. If we compute all the odd-degree Poppe products on the right, we generate the following form,
\[\sum_{w\in\mathcal{C}(n)}\sum_{i=1}^{2^{|w|}}\chi_{\boldsymbol{\beta}_{i}}\big{(}\Delta_{k}([w]\times\boldsymbol{\beta}_{i})\big{)}\cdot\big{(}[w]\times\boldsymbol{\beta}_{i}\big{)}.\]
In this expression, the \(\boldsymbol{\beta}_{i}\) are the natural basis elements of \(\mathbb{R}\langle\mathbb{B}\rangle\) of length \(2^{|w|}\), containing a '1' in the \(i\)th position and zeros elsewhere. The combined pair of sums corresponds to a sum over all possible basis elements, for example over all the left-most column elements in Tables 2 and 3. The co-product \(\Delta_{k}\) generates all forms \(w_{1}\otimes w_{2}\otimes\cdots\otimes w_{k}\) such that the odd-degree Poppe product \(\big{(}[w_{1}]\times\boldsymbol{\beta}_{1}(|w_{1}|)\big{)}\,\cdots\,\big{(}[w_{k}]\times\boldsymbol{\beta}_{1}(|w_{k}|)\big{)}\) generates \([w]\times\boldsymbol{\beta}_{i}\). The homomorphic map \(\chi_{\boldsymbol{\beta}_{i}}\) records the signature coefficient \(\chi(w_{1}\otimes w_{2}\otimes\cdots\otimes w_{k})\) together with the factor in the \(2^{|w|}\times 2^{|w|}\) block associated with the composition \([w]\) as outlined in Section 6. See for example the matrix \(A_{3}\) from that section. It records the factor associated with the \(\boldsymbol{\beta}_{i}\) row and the \([\boldsymbol{\sigma(w_{1})}]\,[\boldsymbol{\sigma(w_{2})}]\,\cdots\,[\boldsymbol{\sigma(w_{k})}]\) column, where \(\sigma(w_{i})\) represents the sum of all the factors in the composition \(w_{i}\). Any terms resulting from the 'quasi' term in the Poppe product are included. Recall we can systematically generate all the blocks and all such factors using the appropriate three standard actions--see Step 5 in Section 6. Let \(\mathcal{C}^{*}(n)\) denote the set of all odd-length weak compositions \(v\) such that \(\sigma(v)+|v|-1=n\). Assuming we have established such a co-product \(\Delta_{k}\), we observe that we can express any Poppe polynomial, or even an arbitrary sum of Poppe polynomials, in the form,
\[\sum_{n\geqslant 1}\pi_{n}=\sum_{w\in\mathcal{C}}\Pi\big{(}[w]\big{)} \cdot\big{(}[w]\times\boldsymbol{\beta}_{i}\big{)},\]
where
\[\Pi\big{(}[w]\big{)}=\sum_{i=1}^{2^{|w|}}\sum_{v\in\mathcal{C}^{*}(\sigma(w))} c_{v}\,\chi_{\boldsymbol{\beta}_{i}}\big{(}\Delta_{|v|}([w]\times\boldsymbol{ \beta}_{i})\big{)}.\]
Thus, in principle, we can express the whole hierarchy as the co-algebra sum \(\Pi\).
###### Acknowledgements.
SJAM would like to thank the EPSRC for the Mathematical Sciences Small Grant EP/X018784/1. It is also a pleasure to acknowledge very interesting discussions with Alexander Pushnitski and Alexei Rybkin in connection with our work herein.
## 8 Declarations
### Funding and/or Conflicts of interests/Competing interests
SJAM received funding from the EPSRC for the Mathematical Sciences Small Grant EP/X018784/1. There are no conflicts of interests or competing interests.
### Data availability statement
No data was used in this work.
|
2303.08017
|
Reliable Beamforming at Terahertz Bands: Are Causal Representations the
Way Forward?
|
Future wireless services, such as the metaverse, require high information
rate, reliability, and low latency. Multi-user wireless systems can meet such
requirements by utilizing the abundant terahertz bandwidth with a massive
number of antennas, creating narrow beamforming solutions. However, existing
solutions lack proper modeling of channel dynamics, resulting in inaccurate
beamforming solutions in high-mobility scenarios. Herein, a dynamic,
semantically aware beamforming solution is proposed for the first time,
utilizing novel artificial intelligence algorithms in variational causal
inference to compute the time-varying dynamics of the causal representation of
multi-modal data and the beamforming. Simulations show that the proposed
causality-guided approach for Terahertz (THz) beamforming outperforms classical
MIMO beamforming techniques.
|
Christo Kurisummoottil Thomas, Walid Saad
|
2023-03-14T16:02:46Z
|
http://arxiv.org/abs/2303.08017v1
|
# Reliable beamforming at terahertz bands: are causal representations the way forward?
###### Abstract
Future wireless services, such as the metaverse, require high information rate, reliability, and low latency. Multi-user wireless systems can meet such requirements by utilizing the abundant terahertz bandwidth with a massive number of antennas, creating narrow beamforming solutions. However, existing solutions lack proper modeling of channel dynamics, resulting in inaccurate beamforming solutions in high-mobility scenarios. Herein, a dynamic, semantically aware beamforming solution is proposed for the first time, utilizing novel artificial intelligence algorithms in variational causal inference to compute the time-varying dynamics of the causal representation of multi-modal data and the beamforming. Simulations show that the proposed causality-guided approach for Terahertz (THz) beamforming outperforms classical MIMO beamforming techniques.
Christo Kurisummoottil Thomas and Walid Saad Wireless@VT, Bradley Department of Electrical and Computer Engineering,
Virginia Tech, Arlington, VA, USA,
Emails: {christokt,walids}@vt.edu
This research was supported by the Office of Naval Research (ONR) under MURI grant N00014-19-1-2621.
## 1 Introduction
Future wireless systems must communicate massive multi-modal sensory information to enable emerging applications such as the metaverse and extended reality (XR) [1, 2, 3]. However, sub-6 GHz and millimeter wave (mmWave) bands have limited bandwidth and cannot satisfy the stringent quality-of-service (QoS) requirements of XR applications in terms of delivering high data rates, low latency, and high reliability. Integrating XR services over high-frequency terahertz (THz) bands is a promising solution, but the wireless channel at THz frequencies is highly susceptible to blockage effects, which limits the available multipath components. As such, pencil-like narrow beamforming (BF) solutions that can be dynamically adjusted are required to provide seamless connectivity for users, especially in high-mobility scenarios.
To enable pencil-like BF solutions, the usage of (ultra-)massive multiple-input multiple-output (MIMO) antenna systems (which become practical due to the smaller antenna spacing) is expected [4]. However, accurately tracking time-varying user channels and mitigating the downlink interference caused by inaccurate BF solutions (i.e., leakage from sidelobes) to other users in the network are critical challenges in providing high-rate communication links using BF. Conventional approaches [5] either use a codebook-based beam-tracking method, or channel-information-based BF that exploits Kalman filtering (KF) methods [6] to track the time-varying channels while assuming a specific user mobility model. The first method suffers from a large codebook size (hence, a larger overhead) to follow a narrow beam direction along the user, particularly at THz bands. The second scheme may not be practical because the assumed user mobility model may be inaccurate and cannot be easily specified. Another major direction is utilizing artificial intelligence (AI) based algorithms, as in [7]. However, those techniques require a larger amount of data and higher training overheads. A promising candidate that has not been explored yet is to design AI-native wireless systems [8] that can extract the causal aspects (why and how the data gets generated) present in the wireless environment and the content to be transmitted. Such causal information would represent the semantics of the channel and data. We envision that transmitting just the semantic substance instead of the irrelevant components present in the data has two benefits. First, it reduces the number of physical bits to be transmitted, thus improving communication resource efficiency [9, 10, 11, 12]. Second, in a THz system, we envision that a causality-aware BF solution could improve reliability at the end user due to the increased BF gain. This means that a semantic-aware system can judiciously choose the null space of the BF vectors, thus leaving enough dimensions to increase the gain along the specific users that have semantically rich data. However, none of the existing works in the literature deal with multi-user semantics except [13], which, however, limits the discussion to the design of the encoding architecture while relying on a conventional BF approach. Contrary to [13], for the design of multi-user semantic communications in a THz system for a metaverse application, we address the following questions:
* What is the best way to represent multiple modalities at the base station (BS) in a shared semantic subspace that can capture causal reasoning across modalities while preserving the modality-specific causal aspects?
* How to design a BF vector for each user in a THz system, guided by the causality dynamics across users so that it can mitigate the impact of multi-user interference and simultaneously preserve the causality aspects of the multi-modal data while ensuring that the users can decode it reliably?
The main contribution of this paper is, thus, a novel multi-user, multi-modal communication system that can deliver reliable THz transmission by exploiting the causality dynamics present in the channel and source data. In particular, we propose a novel BF solution that can be represented as a linear combination of a static component and a dynamic component. The static part is a non-linear function of the causality dynamics of the channel and user data and is learned using variational causal networks (VCN) [14]. The dynamic component must be adjusted based on instantaneous channel estimates and is solved by maximizing the minimum semantic information across all users. The resulting BF solution is semantics (causality) aware; see, e.g., Figure 1. This means that the subspace dimension over which the BF must mitigate interference varies depending on the semantic richness of the information at interfering UEs. Compared to classical AI schemes that rely on the uncertainty or statistics present in the data, exploiting the causal aspects enables the system to learn with dramatically less data and reduces the training overhead [15]. Our proposed framework achieves a high semantic information rate and semantic reliability compared to conventional schemes that use beam tracking or uplink channel-estimation-based beamforming solutions (without proper Doppler modeling).
## 2 System model
Consider a wireless network in which a BS transmits i.i.d. multi-modal metaverse content in the downlink (DL) to a set \(\mathcal{K}\) of \(K\) user equipment (UE) over THz bands. The BS has \(M\) antennas, and each UE has \(N\) antennas. The data to be transmitted to user \(k\) is \(\mathbf{x}_{k}=[x_{k,1},\cdots,x_{k,S}]\), with \(x_{k,m}\) corresponding to modality \(m\). Modality here refers to data from a separate source. Transmitting this raw multi-modal data directly to users is delay-critical and requires a high data rate. Instead, we propose to compute a latent representation that can explain the causes behind the data generation, called the _causal representation_. The causality (which represents the _semantics_ here) of the multi-modal data for UE \(k\) is defined as \(\mathbf{z}_{k}\in\mathcal{R}^{D}\). We assume that \(\min(M,N)\geq D\). The linear BF matrix \(\mathbf{V}_{k}\) for user \(k\) serves to project the semantics \(\mathbf{z}_{k}\) into a subspace (\(\subset\mathcal{R}^{M}\)) such that the UE can reconstruct the received
metaverse content with high semantic reliability. _Semantic reliability_ here is defined as the accuracy of the semantics reconstructed at the receiver compared to the intended semantics transmitted from the BS. We chose to represent the BF and the causality of the data using two components, motivated by the formulations in KF [16] under nonlinear state dynamics, with \(\widetilde{\mathbf{V}}_{k},\widetilde{\mathbf{z}}_{k}\) representing the prediction from the unknown nonlinear dynamics of the user channels and causality. The BF matrix at instance \(t\), of dimensions \(M\times D\), is \(\mathbf{V}_{k}^{(t)}=\lambda\widetilde{\mathbf{V}}_{k}^{(t)}+(1-\lambda)\widehat{\mathbf{V}}_{k}^{(t)}\), where \(\widetilde{\mathbf{V}}_{k}^{(t)}\) represents the component that is a function of the channel statistics (derived from the channel history) and semantics \(\mathbf{z}_{k}\) until time \(t-1\), and \(\widehat{\mathbf{V}}_{k}^{(t)}\) represents an instantaneous component of the BF vector that tracks dynamic variations in the channel and causality. \(\lambda\in[0,1]\) is a constant factor. Similarly, the causal representation is also assumed to have a linear representation whereby \(\mathbf{z}_{k}^{(t)}=\beta\widetilde{\mathbf{z}}_{k}^{(t)}+(1-\beta)\widehat{\mathbf{z}}_{k}^{(t)}\), with \(\beta\in[0,1]\) being a constant factor. The receive combiner \(\mathbf{W}_{k}\) is a matrix of dimension \(N\times D\) and is assumed to only have an instantaneous component. A semantic-aware receive combiner design follows similarly but is not discussed due to space limitations. We can write the received signal at user \(k\) after combining as follows (the index \(t\) is omitted for notational convenience):
\[\mathbf{y}_{k}=\mathbf{W}_{k}^{H}\mathbf{H}_{k}\sqrt{p_{k}}\mathbf{V}_{k}\mathbf{z}_{k}+\mathbf{W}_{k }^{H}\mathbf{H}_{k}\sum_{i\neq k}\sqrt{p_{i}}\mathbf{V}_{i}\mathbf{z}_{i}+\mathbf{W}_{k}^{H} \mathbf{v}_{k}. \tag{1}\]
Here, \(p_{k}\) is UE \(k\)'s transmit power, assumed to be fixed and equal for all users. The entries of the noise vector \(\mathbf{v}_{k}\) follow \(\mathcal{N}(0,1)\). The THz channel at multi-path delay \(d\) between any user and the BS is [17]:
\[\begin{array}{l}\mathbf{H}_{d}=\alpha_{0}G_{t}G_{r}p_{r}(dT_{s}-\tau_{0})e^{j2\pi\nu_{0}t}\mathbf{a}_{r}(\theta_{0})\mathbf{a}_{s}(\phi_{0})^{T}+\\ \sum\limits_{i=1}^{P}\alpha_{i}G_{t}G_{r}p_{r}(dT_{s}-\tau_{i})e^{j2\pi\nu_{i}t}\mathbf{a}_{r}(\theta_{i})\mathbf{a}_{s}(\phi_{i})^{T},\end{array} \tag{2}\]
where \(\alpha_{i}\) represents the complex path gain, \(\theta_{i},\phi_{i}\) are the AoA and AoD, respectively, and \(G_{t},G_{r}\) represent the antenna gains at the BS and UE, respectively. \(\nu_{i}\) is the Doppler frequency corresponding to path \(i\). Parameters with subscript \(0\) correspond to line-of-sight (LOS) components. The number of non-LOS (NLOS) paths \(P\) is considered small, as is typical in THz [1]. The LOS channel gain is [18] \(\alpha_{0}=\frac{c}{4\pi fr}e^{-\frac{\kappa(f)r}{2}}e^{-j2\pi f\tau_{0}}\), where \(\kappa(f)\) is the overall molecular absorption coefficient of the medium at THz bands, \(f\) is the operating frequency, \(c\) is the speed of light, and \(r\) is the distance between the user and the BS. Further, we can write the channel at any subcarrier \(f\) as \(\mathbf{H}_{f}=\sum\limits_{d=0}^{N_{p}-1}\mathbf{H}_{d}e^{\frac{-j2\pi fd}{K_{d}}}\). However, we consider the BF design for a single subcarrier for simplicity. Hence, hereinafter, we drop the subcarrier index \(f\). This is also motivated by the fact that the semantics transmitted across different subcarriers are independent, and hence we can consider the BF design separately for each subcarrier.
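A minimal sketch of this multipath model, assuming half-wavelength uniform linear arrays and folding the antenna gains and pulse shaping into the path gains (both our simplifications; the paper does not specify the array geometry), might read:

```python
import numpy as np

def ula_steering(n_ant, angle_rad):
    # Half-wavelength-spaced uniform linear array steering vector.
    # The array geometry is our assumption; the paper leaves it open.
    return np.exp(1j * np.pi * np.arange(n_ant) * np.sin(angle_rad))

def thz_channel(M, N, P, t, rng):
    """Toy rendering of Eq. (2): a LOS path (i = 0) plus P NLOS paths,
    each with a complex gain alpha_i, a Doppler shift nu_i, and receive/
    transmit steering vectors. Antenna gains and the pulse-shaping filter
    p_r are folded into alpha_i for brevity."""
    H = np.zeros((N, M), dtype=complex)
    for i in range(P + 1):
        alpha = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        theta, phi = rng.uniform(-np.pi / 2, np.pi / 2, size=2)  # AoA, AoD
        nu = rng.uniform(-100.0, 100.0)                          # Doppler [Hz]
        H += alpha * np.exp(1j * 2 * np.pi * nu * t) * \
             np.outer(ula_steering(N, theta), ula_steering(M, phi))
    return H

rng = np.random.default_rng(1)
H = thz_channel(M=64, N=8, P=2, t=1e-3, rng=rng)
print(H.shape)  # (8, 64)
```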
We now briefly touch upon the channel estimation characterization that motivates the BF and \(\mathbf{z}_{k}\). Consider that the BS has the linear minimum mean squared error (LMMSE) channel estimates computed using uplink (UL) pilots, whose behavior can be characterized as \(\mathbf{H}_{k}=\widehat{\mathbf{H}}_{k}+\widetilde{\mathbf{H}}_{k}\), where the estimate \(\widehat{\mathbf{H}}_{k}\) and the error \(\widetilde{\mathbf{H}}_{k}\) lie in orthogonal subspaces, \(\widehat{\mathbf{H}}_{k}\perp\widetilde{\mathbf{H}}_{k}\). \(\widetilde{\mathbf{H}}_{k}\) includes both the white noise component in the received pilots and the time-varying part (assuming the DL transmission slots differ from the UL pilots). Motivated by the above formulation for the channel estimates, we split the problem into two parts: a) the time-varying component (\(\widetilde{\mathbf{V}}_{k},\widetilde{\mathbf{z}}_{k}\)), computed using an AI approach as a function of the channel dynamics, and thus of the statistics \(\mathbb{E}(\widetilde{\mathbf{H}}_{k}\widetilde{\mathbf{H}}_{k}^{H})\); and b) the instantaneous tracking part (\(\widehat{\mathbf{V}}_{k},\widehat{\mathbf{z}}_{k}\)) using \(\widehat{\mathbf{H}}_{k}\), formulated as a non-convex optimization to maximize the minimum semantic information. Specifically, we propose to learn the time-varying dynamics (computed as the posterior distribution) of the BF and causal components from the channel estimation history. Components \(\widetilde{\mathbf{V}}_{k}\) and \(\widetilde{\mathbf{z}}_{k}\) are learned as the mode of the posterior distribution:
\[[\widetilde{\mathbf{V}}_{k}^{t},\widetilde{\mathbf{z}}_{k}^{t}]=\arg\max_{\mathbf{V}_{k},\mathbf{z}_{k}}p(\mathbf{z}_{k},\mathbf{V}_{k}\mid\mathcal{H}^{0:t-1},\mathcal{Z}^{0:t-1}), \tag{3}\]
where \(\mathcal{H}^{0:t-1}\) and \(\mathcal{Z}^{0:t-1}\) are the sets of all channel matrices and causal representations up to time \(t-1\). Next, we look at the problem formulation to compute the BF matrices, combiners, and \(\mathbf{z}_{k}\).
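Before moving to the problem formulation, the following toy NumPy sketch (our illustration; the dimensions and random placeholders are not from the paper) ties together the two-component beamformer and the received-signal model in (1):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, D, K = 64, 8, 4, 3   # BS antennas, UE antennas, semantic dim, users
lam, p = 0.5, 1.0          # combination factor lambda and transmit power

# Random placeholders for channels, the two BF components, combiners, semantics.
H = [rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)) for _ in range(K)]
V_stat = [rng.standard_normal((M, D)) for _ in range(K)]  # statistics-based part
V_inst = [rng.standard_normal((M, D)) for _ in range(K)]  # instantaneous part
V = [lam * Vs + (1 - lam) * Vi for Vs, Vi in zip(V_stat, V_inst)]
W = [rng.standard_normal((N, D)) for _ in range(K)]
z = [rng.standard_normal(D) for _ in range(K)]

k = 0
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
# Received signal at user k after combining, mirroring Eq. (1).
y_k = W[k].conj().T @ (
    H[k] @ (np.sqrt(p) * V[k] @ z[k])
    + sum(H[k] @ (np.sqrt(p) * V[i] @ z[i]) for i in range(K) if i != k)
    + noise
)
print(y_k.shape)  # (D,)
```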
## 3 Problem Formulation
Given instantaneous (and perfect) channel information, our objective is to compute the BF/combiner matrices \(\mathbf{V}_{k}\) and \(\mathbf{W}_{k}\) as well as the causal representation \(\mathbf{z}_{k}\) such that the semantic reliability at UE \(k\) is above a determined threshold. We denote \(\hat{\mathbf{z}}_{k}\) as the reconstructed semantics at the UE. We consider max-min average semantic information as the optimization criterion so as to ensure fairness with respect to user rates. We now introduce the relevant metrics, whose detailed definitions appear in [15]. The channel imperfections result in a _semantic distortion_, which can be captured by the Frobenius norm of the error in prediction using the transmitted state (\(\mathbf{z}_{k}\)) and the user's extracted state (\(\hat{\mathbf{z}}_{k}\)): \(E(\mathbf{z}_{k},\hat{\mathbf{z}}_{k})=\left\|\mathbf{z}_{k}-\hat{\mathbf{z}}_{k}\right\|^{2}\). We also define the error measure \(E(S,\widehat{S})=|S-\widehat{S}|^{2}\) in terms of the difference in semantic information (\(S\)) conveyed by the transmitter and that learned by the receiver. These semantic distortion measures help to quantify how much semantic information the receiver can extract from a corrupted version of the message. The semantic message space (representing semantic similarity) corresponding to \(\hat{\mathbf{z}}_{k}\) can be described as \(E(\mathbf{z}_{k},\hat{\mathbf{z}}_{k})\leq\delta\), s.t. \(E(S(\mathbf{z}_{k}),\widehat{S}(\hat{\mathbf{z}}_{k};\mathbf{y}_{k}))=0\). Here \(\delta\) represents the threshold in the Euclidean space between two \(D\)-dimensional causal vectors \(\mathbf{z}_{k}\), \(\hat{\mathbf{z}}_{k}\) within which the received semantic information is the same (the _semantic space_ corresponding to \(\mathbf{z}_{k}\)). As long as a decoded message \(\hat{\mathbf{z}}_{k}\) is within the semantic space, it is successfully decoded; hence the _semantic reliability_ can be captured by the probability of successful transmission \(p(E(\mathbf{z}_{k},\hat{\mathbf{z}}_{k})\leq\delta)\). Using these metrics, we formulate our problem (for communication instance \(t\)) as:
\[\begin{array}{l}\mathcal{P}_{1}:\max\limits_{\mathcal{V},\mathcal{W},\mathcal{Z }}\min\limits_{k}\mathbb{E}_{q}[S_{k}(\mathbf{z}_{k};\mathbf{y}_{k})]\\ \text{subject to }p\left(E(\mathbf{z}_{k},\hat{\mathbf{z}}_{k})<\delta\right)\geq 1- \epsilon,\end{array} \tag{4}\]
where \(\mathcal{V},\mathcal{W},\mathcal{Z},\mathcal{H}\) and \(\mathcal{Y}\) represent the sets of all \(\mathbf{V}_{k},\mathbf{W}_{k},\mathbf{z}_{k},\widehat{\mathbf{H}}_{k}\) and \(\mathbf{y}_{k}\), respectively, and \(\epsilon\) is close to \(0\). Here, the expectation is over \(p(\mathbf{z}_{k},\mathbf{V}_{k}\mid\mathcal{H}^{0:t-1},\mathcal{Z}^{0:t-1})\) and represents the (learned) statistics over the channel and data history (see Section 3.3). Moreover, our problem formulation relies on novel semantic information metrics, unlike state-of-the-art semantic communication systems that exploit classical information-theoretic schemes [19]. Next, we define the semantic information measure required to compute the objective in \(\mathcal{P}_{1}\).
### Semantic Information Measure
Using category theory, we can define the concept of semantic information, based on [15], as follows. The extracted semantic information at the receiver can be written as in (8) below, derived in [15], with \(Z_{\hat{\mathbf{z}}_{k},\mathbf{z}_{k}}\) being the similarity between the transmitted and extracted causal states, as also defined in [15], and \(\mathbf{R}_{\overline{k}}=\sum\limits_{i\neq k}\mathbb{E}_{q}(\mathbf{W}_{k}^{H}\widetilde{\mathbf{H}}_{k}\mathbf{V}_{i}\mathbf{z}_{i}\mathbf{z}_{i}^{T}\mathbf{V}_{i}^{H}\widetilde{\mathbf{H}}_{k}^{H}\mathbf{W}_{k})\) denoting the covariance of the residual multi-user interference at user \(k\).
\[\begin{array}{l}\mathbb{E}_{q}\left[S_{k}(\hat{\mathbf{z}}_{k};\mathbf{y}_{k}\mid\mathbf{z}_{k},\mathbf{H})\right]=\sum\limits_{\mathbf{z}_{k}}p\left(\mathbf{y}_{k}\mid\mathbf{z}_{k}\right)S_{s}(\mathbf{z}_{k})\\ \overset{(a)}{\leq}\sum\limits_{\mathbf{z}_{k}}p\left(\mathbf{y}_{k}\mid\mathbf{z}_{k}\right)S_{s}(\mathbf{z}_{k})\big{[}\sum\limits_{\hat{\mathbf{z}}_{k}}\log\det(\mathbf{R}_{\overline{k}}^{-1}\mathbf{R}_{k})Z_{\hat{\mathbf{z}}_{k},\mathbf{z}_{k}}\big{]},\end{array} \tag{8}\]
In (a), we moved the expectation operator inside the log using Jensen's inequality, resulting in an upper bound to the semantic information measure. We can expand the expectation (for a fixed \(\mathbf{z}_{k}\), \(\mathbf{W}_{k}\)) as follows:
\[\mathbb{E}_{q}\left(\mathbf{W}_{k}^{H}\widehat{\mathbf{H}}_{k}\mathbf{V}_{i}\mathbf{z}_{i}\mathbf{z}_{i}^{T}\mathbf{V}_{i}^{H}\widehat{\mathbf{H}}_{k}^{H}\mathbf{W}_{k}\right)=\mathbf{W}_{k}^{H}\widehat{\mathbf{H}}_{k}\left(\lambda^{2}\mathbb{E}_{q}\left(\widetilde{\mathbf{V}}_{i}\mathbf{z}_{i}\mathbf{z}_{i}^{T}\widetilde{\mathbf{V}}_{i}^{H}\right)+\lambda^{2}\,\mathbf{\mathrm{tr}}(\mathbf{z}_{i}\mathbf{z}_{i}^{T})\,\mathbb{E}_{q}(\widetilde{\mathbf{V}}_{i}\widetilde{\mathbf{V}}_{i}^{H})+(1-\lambda)^{2}\,\widehat{\mathbf{V}}_{i}\mathbf{z}_{i}\mathbf{z}_{i}^{T}\widehat{\mathbf{V}}_{i}^{H}\right)\widehat{\mathbf{H}}_{k}^{H}\mathbf{W}_{k}+f(\widehat{\mathbf{V}}_{i}). \tag{9}\]
Here, we assume that the mean \(\mathbb{E}_{q}(\widetilde{\mathbf{V}}_{i})\) is zero. \(f(\widehat{\mathbf{V}}_{i})\) represents the quadratic terms that depend only on \(\widehat{\mathbf{V}}_{i}\). To compute the expectations in (9), we need to know the posterior given the history of observations and channels, which we propose to approximate using variational causal networks as discussed in Section 3.3.
### Causality Aware BF with Generalized Eigenvectors (GEV)
Since the optimization problem in (4) is non-convex with respect to the joint variables \(\{\mathbf{V}_{k},\mathbf{W}_{k},\mathbf{z}_{k}\}\), we adopt the standard technique of alternating optimization. We propose to solve \(\mathcal{P}_{1}\) via two sub-problems. The BF and combining matrices (for any given \(\mathbf{z}_{k}\)) can be computed alternately by solving the following problem.
\[\mathcal{P}_{2}:\max_{\mathbf{V}_{k},\mathbf{W}_{k}}\underbrace{\sum\limits_{\hat{\mathbf{z}}_{k}}w_{\hat{\mathbf{z}}_{k}}\log\det(\mathbf{R}_{\overline{k}}^{-1}\mathbf{R}_{k})}_{\text{Concave part w.r.t. }\mathbf{V}_{k}}+\underbrace{\sum\limits_{i\neq k}\sum\limits_{\hat{\mathbf{z}}_{i}}w_{\hat{\mathbf{z}}_{i}}\log\det(\mathbf{R}_{\overline{i}}^{-1}\mathbf{R}_{i})}_{\text{Convex part w.r.t. }\mathbf{V}_{k}} \tag{10}\]
where \(w_{\hat{\mathbf{z}}_{k}}=p\left(\mathbf{y}_{k}\mid\mathbf{z}_{k}\right)S_{s}(\mathbf{z}_{k})Z_{\hat{\mathbf{z}}_{k},\mathbf{z}_{k}}\). (10) is non-concave since it is a summation of concave and convex functions. Hence, it can be solved by constructing an approximate function that is a lower bound to (10). Next, we derive the BF and combiner matrices by alternating optimization of the resulting approximate function.
**Theorem 1**.: _The BF and combiner matrices corresponding to any user \(k\) can be obtained as a GEV of two matrices representing a compromise between maximizing the useful signal power part and minimizing the leakage power part. The corresponding expressions can be written as follows:_
\[\text{vec}(\widehat{\mathbf{V}}_{k})=\text{G.E.V}(\mathbf{S}_{t,k},\mathbf{I}_{t,k}),\ \mathbf{W}_{k}=\text{G.E.V}_{1:D}(\mathbf{S}_{r,k},\mathbf{I}_{r,k}), \tag{11}\] \[\text{where},\ \mathbf{S}_{t,k}=\sum\limits_{\hat{\mathbf{z}}_{k}}w_{\hat{\mathbf{z}}_{k}}\left(\mathbf{z}_{k}\mathbf{z}_{k}^{H}\otimes\widehat{\mathbf{H}}_{k}^{H}\mathbf{R}_{k}^{-1}\widehat{\mathbf{H}}_{k}\right),\] (12) \[\mathbf{S}_{r,k}=\widehat{\mathbf{H}}_{k}\mathbf{V}_{k}\mathbf{z}_{k}\mathbf{z}_{k}^{H}\mathbf{V}_{k}^{H}\widehat{\mathbf{H}}_{k}^{H},\] (13) \[\mathbf{I}_{t,k}=\sum\limits_{i\neq k}\sum\limits_{\hat{\mathbf{z}}_{i}}w_{\hat{\mathbf{z}}_{i}}\left(\mathbf{z}_{k}\mathbf{z}_{k}^{H}\otimes\widehat{\mathbf{H}}_{i}^{H}(\mathbf{R}_{\overline{i}}^{-1}-\mathbf{R}_{i}^{-1})\widehat{\mathbf{H}}_{i}\right),\] (14) \[\mathbf{I}_{r,k}=\sum\limits_{i\neq k}\sum\limits_{\hat{\mathbf{z}}_{i}}w_{\hat{\mathbf{z}}_{i}}\widehat{\mathbf{H}}_{k}\mathbf{V}_{i}\mathbf{z}_{i}\mathbf{z}_{i}^{H}\mathbf{V}_{i}^{H}\widehat{\mathbf{H}}_{k}^{H}, \tag{15}\]
_where \(\mathbf{vec}(\mathbf{X})\) represents the vectorized version (by stacking column by column) of the matrix \(\mathbf{X}\)._
Proof.: To obtain this, we can linearize the non-concave part using a first-order Taylor series expansion. This follows a similar approach, called the difference of convex functions, as in [20]; hence we skip the detailed derivations and convergence analysis. This leads to:
\[\max_{\mathbf{V}_{k}}\sum\limits_{\hat{\mathbf{z}}_{k}}w_{\hat{\mathbf{z}}_{k}}\log\det(\mathbf{R}_{\overline{k}}^{-1}\mathbf{R}_{k})-\sum\limits_{i\neq k}\sum\limits_{\hat{\mathbf{z}}_{i}}w_{\hat{\mathbf{z}}_{i}}\,\mathbf{\mathrm{tr}}\big{(}\widehat{\mathbf{H}}_{i}^{H}(\mathbf{R}_{\overline{i}}^{-1}-\mathbf{R}_{i}^{-1})\widehat{\mathbf{H}}_{i}\mathbf{V}_{k}\mathbf{z}_{k}\mathbf{z}_{k}^{H}\mathbf{V}_{k}^{H}\big{)}. \tag{16}\]
Taking the derivative of (16) with respect to \(\mathbf{V}_{k}\) leads to the following generalized eigenvector (G.E.V) solution. Here, we make use of the relation \(\text{vec}(\mathbf{AX}\mathbf{B})=(\mathbf{B}^{T}\otimes\mathbf{A})\text{vec}(\mathbf{X})\).
\[\sum\limits_{\hat{\mathbf{z}}_{k}}w_{\hat{\mathbf{z}}_{k}}\left(\mathbf{z}_{k} \mathbf{z}_{k}^{H}\otimes\widehat{\mathbf{H}}_{k}^{H}\mathbf{R}_{k}^{-1}\widehat{\mathbf{H}}_{k} \right)\text{vec}(\mathbf{V}_{k})= \tag{17}\] \[\sum\limits_{i\neq k}\sum\limits_{\hat{\mathbf{z}}_{i}}w_{\hat{\mathbf{z}} _{i}}\left(\mathbf{z}_{k}\mathbf{z}_{k}^{H}\otimes\widehat{\mathbf{H}}_{i}^{H}(\mathbf{R}_{ \overline{i}}^{-1}-\mathbf{R}_{i}^{-1})\widehat{\mathbf{H}}_{i}\right)\text{vec}(\mathbf{V}_{k})\] (18) \[\implies\text{vec}(\widehat{\mathbf{V}}_{k})=\text{G.E.V}(\mathbf{S}_{t,k}, \mathbf{I}_{t,k}).\]
In (18), we take the dominant \(D\) eigenvectors as the solution for \(\mathbf{V}_{k}\). Following a similar procedure, with \(\mathbf{V}_{k}\) fixed, we can derive the combiner matrices as in (11).
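In practice, the G.E.V solutions of Theorem 1 can be computed with a standard generalized Hermitian eigensolver. The sketch below is ours; toy positive-definite matrices stand in for \(\mathbf{S}_{t,k}\) and \(\mathbf{I}_{t,k}\):

```python
import numpy as np
from scipy.linalg import eigh

def gev_dominant(S, I, D):
    """Return the D dominant generalized eigenvectors of the pencil
    (S, I), i.e. the directions maximizing signal power over leakage,
    mirroring the G.E.V solutions in Theorem 1. S and I must be
    Hermitian with I positive definite."""
    _, vecs = eigh(S, I)     # generalized eigenvalues in ascending order
    return vecs[:, -D:]      # keep the D dominant eigenvectors

rng = np.random.default_rng(2)
n, D = 16, 4
A = rng.standard_normal((n, n)); S = A @ A.T + np.eye(n)  # toy "signal" matrix
B = rng.standard_normal((n, n)); I = B @ B.T + np.eye(n)  # toy "leakage" matrix
V = gev_dominant(S, I, D)
print(V.shape)  # (16, 4)
```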
Intuitively, (18) implies that \(\mathbf{V}_{k}\) should lie in the orthogonal complement of the semantically aware leakage component. This means that the leakage channel space is weighted by the power in the causal component, and, hence, the null space of \(\mathbf{V}_{k}\) contains only those user channels that have semantically rich information. A similar interpretation follows for the combiner matrices, too, with the BF or nulling along the effective channel matrices \(\widehat{\mathbf{H}}_{k}\mathbf{V}_{k}\) or \(\widehat{\mathbf{H}}_{k}\mathbf{V}_{i}\), respectively. Further, given the BF and combiner matrices, and following classical max-min optimization algorithms, we recast the problem as a maximization over an auxiliary variable \(\alpha\).
\[\mathcal{P}_{3}:\qquad\max_{\mathbf{z}}\alpha \tag{19}\] \[\text{subject to}\ p\left(E(\mathbf{z}_{k},\widehat{\mathbf{z}}_{k})<\delta \right)\geq 1-\epsilon,\] \[\mathbb{E}_{\mathbf{z}_{k}}S_{k}(\mathbf{z}_{k};\mathbf{y}_{k})\geq\alpha,\forall k.\]
This can be solved using the bisection algorithm, as shown in Algorithm 1. The feasibility problem in (19) can be solved using semidefinite programming [21] after rewriting it in terms of the lifted variable \(\widehat{\mathbf{Z}}_{k}=\hat{\mathbf{z}}_{k}\hat{\mathbf{z}}_{k}^{H}\).
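Algorithm 1 itself is not reproduced here; the following generic sketch (ours) shows the bisection loop over \(\alpha\), with the SDP feasibility check of (19) abstracted into a user-supplied predicate:

```python
def bisection_max_min(feasible, alpha_lo=0.0, alpha_hi=10.0, tol=1e-4):
    """Find (approximately) the largest alpha for which the feasibility
    subproblem succeeds, as in problem P3. In the paper the feasibility
    check is a semidefinite program; here it is an abstract predicate."""
    while alpha_hi - alpha_lo > tol:
        alpha = 0.5 * (alpha_lo + alpha_hi)
        if feasible(alpha):
            alpha_lo = alpha   # achievable: search higher
        else:
            alpha_hi = alpha   # infeasible: search lower
    return alpha_lo

# Toy usage: pretend any alpha below 3.2 is feasible.
print(bisection_max_min(lambda a: a < 3.2))  # ~3.2
```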
\[p(\mathbf{z}_{k}^{t}\mid\mathbf{z}_{k}^{t-1})\]
|
2305.05865
|
An adaptable JSON Diff Framework
|
In this paper, we present an implementation of the JSON-diff framework JYCM,
extending the existing framework by introducing the concept of "unordered"
comparisons and allowing users to customize their comparison scenarios
flexibly. Furthermore, we provide a diff-result renderer to better visualize
and understand the differences between JSON objects. Our work enables more
adaptable and comprehensive comparisons to accommodate a wider range of use
cases and requirements.
|
Ao Sun
|
2023-05-10T03:18:01Z
|
http://arxiv.org/abs/2305.05865v2
|
# An adaptable JSON Diff Framework
###### Abstract
In this paper, we present an implementation of the JSON-diff framework JYCM, extending the existing framework by introducing the concept of "unordered" comparisons and allowing users to flexibly customize their comparison scenarios. Furthermore, we provide a diff-result renderer to better visualize and understand the differences between JSON objects. Our work enables more adaptable and comprehensive comparisons to accommodate a wider range of use cases and requirements.
json, json-diff, testing, unit test
## I Introduction
JSON [1] as a protocol has become prevalent in web applications [2], where it is used as the most common input and output format. Many test cases have been created on top of it to ensure the quality of large web applications. A typical use case is that we periodically send JSON input to an idempotent web API and obtain JSON output, then use scripts to verify whether the output JSON meets expectations by diffing the output against a target one, a process usually called JSON diff. As these technologies are increasingly used, we face several challenges.
First, the returned JSON often contains fields, like timestamps, that naturally need to be excluded during the JSON diff process. We note that these fields are not always at the top level; they could appear anywhere in the JSON. Second, in scenarios involving large JSON composed of complex nested structures and long arrays, it can be challenging for users to view and analyze the JSON diff. Third, for a particular field in JSON, whether it "has been changed or its change is OK" really depends on the context. For example, for an API that outputs the bounding box of a person in a picture based on an input image URL, we should usually compare it with the benchmark bounding box using the IoU metric rather than checking whether the four coordinates of the bounding box are identical. Additionally, when an API returns an array, its meaning in the context could be a set, so it should be compared as two sets instead of ordered arrays.
In this paper, we propose a JSON diff framework, JYCM, which is available on GitHub 1 and makes several contributions to the field of JSON diff:
Footnote 1: [https://github.com/eggachecat/jycm](https://github.com/eggachecat/jycm)
* We have presented an implementation of a JSON diff framework, demonstrating high efficiency, adaptability, and scalability for various use cases.
* Our framework introduces the concept of unordered comparison for JSON arrays, which enhances its applicability in scenarios where the order of elements is not crucial.
* Our framework allows users to flexibly customize the comparison logic according to their specific requirements, such as comparing or matching only the IDs of objects within a collection or defining a domain-specific similarity function, which further increases its versatility and suitability for a wide range of scenarios.
* We have developed a dedicated renderer for diff results, which enables users to conveniently visualize and analyze the differences identified by our JSON diff framework.
These contributions collectively demonstrate the value of our JSON diff framework as a powerful and flexible tool for JSON diff in various applications and contexts.
The structure of this paper is as follows:
First, we provide a survey of the related works, highlighting the theoretical foundations underpinning the current state-of-the-art and practical implementations found in open-source communities.
Second, we give an overview of the architecture of our proposed framework, outlining the design principles and key components that contribute to its efficacy and extendability. We then introduce various concepts, assumptions, and notations that enable us to define the problem clearly and maintain consistency throughout the subsequent discussions. With these tools, we formulate the JSON diff problem and present the proposed framework.
Following this, we describe the design and implementation of our framework, detailing the relevant algorithms and their applications in real-world scenarios. As we discuss these algorithms, we also demonstrate how they can be adapted to suit the specific requirements of different use cases.
Finally, we conclude the paper with a summary of our findings and a discussion of our research's implications and potential future directions. This section summarizes the key takeaways from our study and provides
insights into areas that warrant further exploration, ultimately contributing to the ongoing advancement of the research field.
## II Related Works
The study of diffing structured data has received widespread attention for years. One notable research paper, [3], examined the problem of efficiently obtaining the shortest delta operation given a tree structure. The paper focused on two key issues: effectively representing and detecting changes in hierarchical data, and utilizing these changes to optimize data synchronization and version control processes. The paper proposed an algorithm that uses a top-down approach to compare the two trees. Starting from the root node, the algorithm systematically compares child nodes until a minimum edit script is found that can transform one tree into another, providing a valuable foundation for further research in this field.
Besides, [4] extensively discussed the algorithms used for entire JSON diffs, providing a detailed analysis and discussion of algorithmic complexity. [4] standardized the calculation of "similarity" using the concept of edit distance, and its algorithmic complexity surpassed that of previous frameworks. Compared to this paper, our work focuses more on introducing the implementation of JYCM and demonstrating its high customizability and visualization capabilities.
Furthermore, subsequent research, such as [5] and [6], has examined best practices for working with specific data structures like XML and HTML. In addition, open-source communities have contributed many excellent JSON diff frameworks, such as [7], [8], and [9], which implement various algorithms with different contributions and focuses. For example, [7] provides a comprehensive JSON diff framework and includes a deephash library, while [8] has made significant progress in fuzzy matching.
## III Overview and Preliminary
### _Json_
JSON, short for JavaScript Object Notation, is a lightweight data-interchange format that is both human-readable and machine-readable. It has become a widely adopted standard for data exchange between web applications and servers due to its simplicity and compatibility with various programming languages, and is often used in RESTful APIs, configuration files, and data storage [1]. A JSON object is an unordered collection of key-value pairs enclosed in curly braces ({ }). The keys are strings, and the values can be strings, numbers, booleans, null, objects, or arrays. Objects can be nested within one another, providing a flexible way to model complex data structures.
A JSON array is an ordered collection of values enclosed in square brackets ([ ]). The values within an array can be any valid JSON data type, including objects, arrays, strings, numbers, booleans, or null. Arrays can also be nested within one another to represent multi-dimensional data structures.
JSON supports several primitive data types as shown in Code 1:
* String: A sequence of Unicode characters enclosed in double quotes (" ").
* Number: A numeric value can be an integer or a floating-point number. JSON does not differentiate between the two.
* Boolean: Represents true or false values.
* Null: Represents an empty or non-existent value.
```
{
  "Image": {
    "Width": 800,
    "Height": 600,
    "Title": "View from 15th Floor",
    "Thumbnail": {
      "Height": 125,
      "Width": "100"
    },
    "IDs": [116, 943, 234, 38793]
  }
}
```
Code 1: Example JSON from [1]
### _Design factors_
In designing our framework, we primarily considered several factors:
* High coverage: The diff functionality should encompass a wide range of scenarios.
* High extensibility and ease of use: The framework must allow users to define scenarios flexibly.
* Friendly UI: Human-readable results should also be easy to analyze, even for large JSON files.
We divided the diff capabilities according to the components of a JSON, which include Primitive components, Dictionary objects, and Array objects. For each component, we provide various diff strategies tailored to the specific component type.
By incorporating different diff strategies for each JSON component and supporting nested structures, our framework achieves high coverage and extensibility, enabling users to define scenarios flexibly and efficiently.
In addition to the component-specific diff strategies, we adopted a similarity-based design. This approach enables a more nuanced comparison between objects beyond a simple binary distinction of "identical" or
"different". As a result, the framework's extensibility and flexibility are enhanced, providing users with more granular control over the comparison process.
Since our algorithm is recursive, it is essential to define the terminal state, in which no further recursion is required, and the actual algorithm execution can take place.
### _Similarity_
In our design, a single terminal state exists: given two objects, a similarity score can be calculated between them. The similarity \(\Phi\) is defined as a scalar real value ranging from 0 to 1, where a value of 1 indicates complete equality between the objects and 0 indicates complete inequality; its formula is given in (1):
\[\Phi:x,y\rightarrow[0,1] \tag{1}\]
where
\[x,y\in\{\textbf{STR},\textbf{NUM},\textbf{NULL},\textbf{BOOL},\textbf{OBJ}, \textbf{ARR}\}\]
For simplicity, we point out that \(\Phi\) should be symmetric, that is,
\[\Phi(x,y)=\Phi(y,x) \tag{2}\]
and by default, we define
\[\Phi(x,\textbf{NONE})=0 \tag{3}\]
where **NONE** denotes a non-existing value.
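As a tiny illustration of this terminal-state contract (our sketch, covering only the primitive case of (1)-(3)):

```python
NONE = object()  # sentinel for a non-existing value

def phi(x, y):
    """Similarity obeying the contract of Eqs. (1)-(3): values lie in
    [0, 1], the function is symmetric, and any comparison against a
    missing value scores 0. Containers are handled by dedicated routines
    in the full framework; only the primitive terminal state is sketched
    here."""
    if x is NONE or y is NONE:
        return 0.0                   # Eq. (3)
    return 1.0 if x == y else 0.0    # primitive default, cf. Algorithm 1

assert phi(1, NONE) == 0.0
assert phi("a", "a") == 1.0
assert phi(1, 2) == phi(2, 1)        # Eq. (2): symmetry
```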
### _JSON path_
In our approach, we utilize JSON path notation to effectively locate elements within JSON objects. While there is no official standard for JSON path, the basic concepts are widely shared, as outlined in [10] and [11], and implemented in [12] and [13]. JSON objects exhibit a tree-like structure, allowing each node to be accessed by tracing the path from the root node to the desired target node.
To represent these paths, we adopt a unique symbol (\(\rightarrow\)) to connect nodes sequentially along the path, ultimately referencing the target element. Furthermore, to offer a more intuitive depiction of a node's position within an array object, we include the array index within square brackets, denoted as \([index]\). This notation enhances the overall readability and comprehension of JSON paths in our framework, providing a clearer understanding of a node's location and its relation to surrounding elements.
We have also extended our JSON path notation to support regular expressions, allowing for more flexible and powerful pattern matching when locating elements within JSON objects. This enables users to find and target nodes based on specific patterns and conditions, improving the versatility and adaptability of JSON paths in our framework.
```
{
  "a": 1,
  "b": [
    { "c": 1 },
    { "d": 2 }
  ]
}
```
Code 2: Example JSON for JSON path
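To make the path semantics concrete, here is a minimal, hypothetical resolver (our sketch, not JYCM's implementation) applied to the JSON of Code 2:

```python
import re

def resolve(obj, path):
    """Resolve a simplified JSON path such as 'b -> [*] -> d' against
    obj, returning the list of matched values; '[*]' fans out over all
    elements of an array. This toy resolver is for illustration only."""
    nodes = [obj]
    for step in (s.strip() for s in path.split("->")):
        nxt = []
        idx = re.fullmatch(r"\[(\*|\d+)\]", step)
        for node in nodes:
            if idx and isinstance(node, list):
                nxt.extend(node if idx.group(1) == "*" else [node[int(idx.group(1))]])
            elif isinstance(node, dict) and step in node:
                nxt.append(node[step])
        nodes = nxt
    return nodes

doc = {"a": 1, "b": [{"c": 1}, {"d": 2}]}  # the JSON of Code 2
print(resolve(doc, "a"))               # [1]
print(resolve(doc, "b -> [*]"))        # [{'c': 1}, {'d': 2}]
print(resolve(doc, "b -> [1] -> d"))   # [2]
```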
In Table I, we provide examples of JSON paths and the values they retrieve given Code 2.
JSON path empowers users by enabling them to define custom similarity functions for comparing two objects based on the JSON path of the objects currently being diffed, as shown in Code 3. It also gives users an accessible and efficient way to analyze the diff results. By offering this level of control, our framework caters to the specific needs of users, allowing for more precise and meaningful comparisons within the context of their applications.
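As an illustration of this idea (with hypothetical names; JYCM's actual API may differ), a registry can map JSON-path patterns to custom similarity functions, such as the IoU-based bounding-box comparison mentioned in the introduction:

```python
import re

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Hypothetical registry: a JSON-path regex selects a similarity function.
CUSTOM_RULES = [
    (re.compile(r"^detections -> \[\d+\] -> bbox$"), iou),
]

def similarity(path, x, y):
    for pattern, fn in CUSTOM_RULES:
        if pattern.match(path):
            return fn(x, y)           # path-specific comparison
    return 1.0 if x == y else 0.0     # default primitive comparison

print(similarity("detections -> [0] -> bbox",
                 [0, 0, 10, 10], [1, 1, 10, 10]))  # 0.81
```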
### _Pairing_
In addition to the primary task of computing the differences between JSON objects, our framework also addresses the challenge of rendering and collecting the diff results. To achieve this, it is crucial to record the optimal similarity pairs, described by JSON paths, identified during the execution of the diff algorithm.
In our framework, recording the specific operations is of utmost importance. For example, we must document the transformation process of our diff algorithm, which converts the array \(x\) into the array \(y\): which elements need to be deleted, added, modified, and preserved. By tracking these operations, we can not only determine the similarity between JSON objects but also provide a clear and concise representation of the changes that have occurred.
Moreover, this approach allows for a more in-depth analysis of the paired JSON objects. For example, by examining the JSON diff, users can gather insights into the primary locations of differences based on JSON path patterns. This information can provide valuable insights to users and guide them in identifying the key areas of change between the JSON objects.
\begin{table}
\begin{tabular}{|c|c|} \hline
**JSON PATH** & **VALUE** \\ \hline \(a\) & 1 \\ \hline \(b\rightarrow[0]\) & \{ ’c’:1 \} \\ \hline \(b\rightarrow[\ast]\) & \{ ’c’:1 \} and \{ ’d’:2 \} \\ \hline \(b\rightarrow[1]\to d\) & 2 \\ \hline \end{tabular}
\end{table} TABLE I: Retrieving values by JSON path on Code 2
Given two JSON objects \(A\) and \(B\), we use \(\theta\) to denote the set of optimal similarity pairs identified during the execution of the diff algorithm. Each pair in \(\theta\) consists of elements (or pointers to those elements) from \(A\) and \(B\).
### _Formulation_
With the above definitions and notation, we can now formally describe the JSON diff problem as an optimization problem: given two JSON objects \(x\) and \(y\), we want to find the pairing that maximizes the similarity of these two objects, which can be expressed as in (4).
\[\max_{\theta}\Phi\Big{(}x,y;\theta\Big{)} \tag{4}\]
If we denote
\[\theta^{*}=\operatorname*{arg\,max}_{\theta}\Phi\Big{(}x,y;\theta\Big{)}\,\ \phi^{*}=\Phi\Big{(}x,y;\theta^{*}\Big{)}\]
then our diff algorithm \(\tilde{A}\) can be expressed as in (5).
\[\tilde{A}:x,y\rightarrow\theta^{*},\phi^{*} \tag{5}\]
## IV Design and Implementation
### _Primitive Similarity_
For primitive data types (such as strings, numbers, and boolean values), the default similarity function is relatively straightforward, comparing their equality as shown in Algorithm 1. However, users can also customize the similarity function by hooking into this functionality, for instance, by utilizing the edit distance to calculate the similarity between two strings. This flexibility allows for more tailored comparisons that cater to the specific needs of the users and their datasets.
```
1: procedure \(\Phi\)(A, B)
2:   if \(A == B\) then
3:     return 1
4:   else
5:     return 0
6:   end if
7: end procedure
```
**Algorithm 1** Default Primitive Similarity
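As a hedged illustration of such a hook (not the framework's actual API), the following snippet swaps the default equality check for a normalized edit-based string similarity using the standard library's `difflib`:

```python
from difflib import SequenceMatcher

def default_similarity(a, b):
    """Algorithm 1: plain equality for primitive values."""
    return 1.0 if a == b else 0.0

def string_similarity(a, b):
    """User-supplied alternative: a normalized edit-based similarity
    for strings; SequenceMatcher.ratio is one convenient stand-in."""
    if isinstance(a, str) and isinstance(b, str):
        return SequenceMatcher(None, a, b).ratio()
    return default_similarity(a, b)

print(string_similarity("colour", "color"))  # ~0.91 rather than 0.0
```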
### _Object Similarity_
For the similarity function of JSON objects, the default similarity function computes the average similarity score over the key pairs of objects \(A\) and \(B\), as shown in Algorithm 2, where **keys** retrieves all keys of an object (dictionary). This approach takes into consideration the individual similarity scores for each corresponding key pair, ultimately producing an overall average score that represents the similarity between the two JSON objects under comparison.
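Algorithm 2 is not reproduced in this excerpt, so the minimal Python sketch below is our reading of it, assuming the average is taken over the union of keys and that a key present in only one object contributes zero:

```python
def object_similarity(a, b, sim):
    """Sketch of the default object similarity: average the per-key
    similarity over the union of keys; keys present in only one
    object contribute 0 to the sum."""
    keys = set(a) | set(b)
    if not keys:
        return 1.0  # two empty objects are trivially identical
    total = sum(sim(a[k], b[k]) for k in keys if k in a and k in b)
    return total / len(keys)
```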
### _Array Similarity_
Array comparison in JSON data is classified into two main categories in our framework: Ordered and Unordered comparisons. Furthermore, each category can be divided into two subcategories: Exact matching and Fuzzy matching.
The reasoning behind this design is as follows.
JSON arrays are generally considered to have an order. However, to accommodate a broader range of scenarios, we allow users to request that the framework treat arrays as unordered "sets" when comparing them.
When comparing arrays with distinct orderings, such as (a, b, c) and (c, b, a), different conclusions may be reached depending on whether the order is considered or not. Therefore, it is essential to treat these cases separately.
The distinction between exact and fuzzy matching is crucial for our framework, as fuzzy matching is fundamentally a pairing problem: finding a combination of pairs with minimum cost. Using the ordered arrays (a, b, c) and (a, z, c) as an example, the cost of matching \(b\) and \(z\) may be too high, causing the algorithm to incorrectly pair \(a\) with \(z\). This result may not be reasonable in some scenarios, hence the need to treat fuzzy matching separately.
It is important to note that our work does not introduce any fundamentally new algorithms. Instead, our main contribution lies in combining and adapting existing algorithms to suit common JSON usage scenarios in web applications. Consequently, we will not provide proofs for the fundamental algorithms but will focus on our definitions and on how these algorithms fit into the context of the framework.
The fundamental algorithms we use are summarized by matching scenario in Table II.
By default, we utilize Algorithm 3 to compute the similarity between two arrays. This algorithm takes two input arrays, and the pairs obtained through various matching methods. Consequently, the subsequent algorithms discussed in this paper are primarily employed for matching purposes. Once the matching process is completed, we use this function to calculate the real-valued similarity between the arrays.
```
1: procedure \(\Phi_{\text{arrayHelper}}\)(A, B, pairs)
2:   Initialize \(score \gets 0\)
3:   Initialize \(n \gets \text{len}(A) + \text{len}(B)\)
4:   record(A, B, pairs)
5:   for \(pair\) in \(pairs\) do
6:     \(score \gets score + \Phi(pair[0], pair[1])\)
7:   end for
8:   return \(score / n\)
9: end procedure
```
**Algorithm 3** Array Similarity Helper
#### IV-C1 Ordered Array Similarity under Exact Matching
For this matching type, we use the Longest Common Subsequence (LCS) [14] algorithm to find the longest common subsequence between two arrays.
The Longest Common Subsequence (LCS) algorithm is a dynamic programming method that finds the longest subsequence common to two sequences. In the context of matching elements from two ordered arrays, the LCS algorithm identifies the longest subsequence of elements shared by the arrays, taking into account their order but not necessarily their contiguity. This method is particularly useful for ordered, exact array comparisons where the elements' relative positions matter.
```
1: procedure \(\Phi\)(A, B)
2:   Initialize \(dp \gets \text{LCS}(A, B)\)
3:   Initialize \(pairs \gets \text{BacktrackLCS}(A, B, dp)\)
4:   return \(\Phi_{\text{arrayHelper}}(A, B, pairs)\)
5: end procedure
```
**Algorithm 4** Ordered Array Exact Matching
The whole algorithm used here is described in Algorithm 4 and is composed of two parts: first we apply the LCS, described in Algorithm 5, and then we use a second procedure, described in Algorithm 6, to backtrack and recover the common elements.
```
1: procedure LCS(A, B)
2:   Initialize \(n \gets \text{length}(A)\)
3:   Initialize \(m \gets \text{length}(B)\)
4:   Initialize \(dp[0 \ldots n, 0 \ldots m]\) with all zeros
5:   for \(i = 1\) to \(n\) do
6:     for \(j = 1\) to \(m\) do
7:       if \(1 == \Phi(A[i], B[j])\) then
8:         \(dp[i, j] \gets dp[i-1, j-1] + 1\)
9:       else
10:        \(dp[i, j] \gets \max(dp[i-1, j], dp[i, j-1])\)
11:      end if
12:    end for
13:  end for
14:  return \(dp\)
15: end procedure
```
**Algorithm 5** Longest Common Subsequence
As shown in Fig. 1, this type of matching is useful when the order of the elements is crucial and only identical matches are considered valid, for instance where swapping the order of operations or introducing different events could result in unexpected behavior or even errors. An example would be comparing the outputs of two APIs that provide lists of chronological events, such as user activity logs or transaction histories, where the order of events is essential and the exact details of each event need to match.
```
1: procedure BacktrackLCS(A, B, dp)
2:   Initialize \(pairs\) as an empty list
3:   Initialize \(i \gets \text{length}(A)\)
4:   Initialize \(j \gets \text{length}(B)\)
5:   while \(i > 0\) and \(j > 0\) do
6:     if \(1 == \Phi(A[i], B[j])\) then
7:       Prepend \([i, j]\) to \(pairs\)
8:       \(i \gets i - 1\), \(j \gets j - 1\)
9:     else if \(dp[i-1][j] > dp[i][j-1]\) then
10:      \(i \gets i - 1\)
11:    else
12:      \(j \gets j - 1\)
13:    end if
14:  end while
15:  return \(pairs\)
16: end procedure
```
**Algorithm 6** LCS backtrack
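The following self-contained Python sketch combines Algorithms 5 and 6 into one runnable function; the similarity predicate `phi` is passed in, and 0-based index pairs are returned for convenience:

```python
def lcs_pairs(A, B, phi):
    """Algorithms 5 and 6 combined: fill the LCS table using the
    similarity predicate phi, then backtrack the matched index pairs."""
    n, m = len(A), len(B)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if phi(A[i - 1], B[j - 1]) == 1:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if phi(A[i - 1], B[j - 1]) == 1:
            pairs.insert(0, (i - 1, j - 1))
            i, j = i - 1, j - 1
        elif dp[i - 1][j] > dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return pairs

eq = lambda x, y: 1 if x == y else 0
print(lcs_pairs(["a", "b", "c"], ["a", "c"], eq))  # [(0, 0), (2, 1)]
```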
#### IV-C2 Ordered Array Similarity under Fuzzy Matching
In ordered fuzzy matching, we use a variation of the Edit Distance algorithm [15] to find the minimum-cost matching between two arrays. In this case, the default cost of "editing" two elements is the negative of their similarity; that is, the more similar they are, the less they need to be edited.
\begin{table}
\begin{tabular}{|c|c|c|} \hline & **Exact matching** & **Fuzzy matching** \\ \hline
**Ordered** & LCS [14] & Edit distance [15] \\ \hline
**Unordered** & Brute force & Hungarian [16] \\ \hline \end{tabular}
\end{table} TABLE II: Matching algorithms for arrays
The Edit Distance algorithm, also known as the Levenshtein distance, is a dynamic programming technique used to determine the minimum number of edit operations required to transform one sequence into another.
```
1: procedure \(\Phi\)(A, B)
2:   Initialize \(dp \gets \text{EditDistance}(A, B)\)
3:   Initialize \(pairs \gets \text{BacktrackEditDistance}(A, B, dp)\)
4:   return \(\Phi_{\text{arrayHelper}}(A, B, pairs)\)
5: end procedure
```
**Algorithm 7** Ordered Array Fuzzy Matching
The whole algorithm used here is described in Algorithm 7 and is composed of two parts: first we fill the dynamic-programming table, described in Algorithm 8, where **zeros** is a helper function that creates a matrix filled with zeros; then we use the BacktrackEditDistance procedure to recover the optimal element pairs.
In the context of array comparison, the Edit Distance algorithm can quantify the similarity between two arrays by calculating the minimum number of element insertions, deletions, and substitutions needed to make the arrays identical.
This method is particularly useful for ordered, approximate array comparisons where the elements' relative positions matter.
```
1: procedure EditDistance(A, B)
2:   Initialize \(m \gets 1 + \textbf{len}(A)\)
3:   Initialize \(n \gets 1 + \textbf{len}(B)\)
4:   Initialize \(dp \gets \textbf{zeros}(m, n)\)
5:   for \(x \in \{m-2, \dots, 0\}\) do
6:     for \(y \in \{n-2, \dots, 0\}\) do
7:       \(dp[x][y] \gets \max(dp[x+1][y],\ dp[x][y+1],\ \Phi(A[x], B[y]) + dp[x+1][y+1])\)
8:     end for
9:   end for
10:  return \(dp\)
11: end procedure
```
**Algorithm 8** Edit Distance
Just for clarification: in the backtracking procedure that reconstructs \(B[j]\) from \(A[i]\), line 11 removes \(A[i]\) and line 15 adds \(B[j]\).
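A runnable Python rendering of Algorithm 8's table-filling step is shown below; note that despite the name, the recurrence maximizes total similarity rather than counting edits:

```python
def fuzzy_ordered_table(A, B, phi):
    """Algorithm 8: dp[x][y] holds the best total similarity
    achievable when aligning A[x:] with B[y:] in order; 'editing'
    two elements costs the negative of their similarity."""
    m, n = len(A) + 1, len(B) + 1
    dp = [[0.0] * n for _ in range(m)]  # the 'zeros' helper
    for x in range(m - 2, -1, -1):
        for y in range(n - 2, -1, -1):
            dp[x][y] = max(dp[x + 1][y],
                           dp[x][y + 1],
                           phi(A[x], B[y]) + dp[x + 1][y + 1])
    return dp
```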
As shown in Fig. 2, this type of matching is useful when the order of the elements is essential, but some degree of flexibility is allowed in terms of matching the elements themselves. One such application is in the field of natural language processing, where the order of words or phrases is significant, but synonyms or paraphrasing can still convey the same meaning. An example of this would be comparing the outputs of two APIs that provide ranked lists of search results, such as product listings or top news articles, where the order of the results is essential, but the exact details of each result might vary slightly.
#### IV-C3 Unordered Array Similarity under Exact Matching
We use a brute-force approach to find matching pairs between two arrays in unordered exact matching (i.e., treating the arrays as two sets). Ordinarily, hashing would handle such a task efficiently. However, hashing is difficult to reconcile with flexible, user-defined similarity: recalling the IoU example from the previous section, no reasonable hash function can map two high-IoU-yet-different coordinate sets to the same value. We therefore use a doubly nested for-loop.
The procedure is described in Algorithm 10, and the brute-force matching itself in Algorithm 11.
```
1: procedure \(\Phi\)(A, B)
2:   Initialize \(pairs \gets \text{BruteForceMatching}(A, B)\)
3:   return \(\Phi_{\text{arrayHelper}}(A, B, pairs)\)
4: end procedure
```
**Algorithm 10** Unordered Array Exact Matching
As shown in Fig. 3, an ideal scenario for this matching algorithm is when the order of the elements is not essential but only identical matches are considered valid. One such application is inventory management: comparing the outputs of two inventory management APIs that return lists of items in stock, where the order of items is not important but the exact items and their properties need to match.
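Algorithm 11 is not reproduced in this excerpt; the sketch below is our reading of the described doubly nested loop, greedily pairing each element of \(A\) with the first unused exactly-matching element of \(B\):

```python
def brute_force_matching(A, B, phi):
    """Our reading of Algorithm 11: pair each element of A with the
    first not-yet-used element of B that matches exactly (phi == 1)."""
    pairs, used = [], set()
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            if j not in used and phi(a, b) == 1:
                pairs.append((i, j))
                used.add(j)
                break
    return pairs
```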
#### IV-C4 Unordered Array Similarity under Fuzzy Matching
Before delving into this scenario, we first formulate this matching problem as follows: Given two sets of elements, where the similarity between any two elements can be calculated using (1), we aim to find a pairing method that maximizes the **total similarity** of the pairings.
To achieve this, we must define the **total similarity**. By default, we employ Algorithm 3 to compute it.
This problem formulation aligns well with the Hungarian algorithm [16], also known as the Kuhn-Munkres algorithm. This efficient method solves the assignment problem, which involves assigning tasks to agents in a manner that minimizes the total cost of the assignments.
Due to space constraints, we will not provide a detailed explanation of this algorithm, but we shall define its input and output. Given an \(m\times n\) cost matrix \(costMatrix=[costMatrix_{ij}]\), where \(costMatrix_{ij}\) represents the cost of assigning the \(i\)-th worker to the \(j\)-th job, the goal is to find a permutation \(\sigma\) that minimizes the total cost \(\sum_{i=1}^{n}costMatrix_{i,\sigma(i)}\). This is where the **hungarian** function comes into play, as described in (6), where \(\sigma(i)\) can be deduced from the \(pairs\) variable:
\[\textbf{hungarian}:costMatrix\to pairs \tag{6}\]
The algorithm operates by constructing a cost matrix representing the dissimilarity between each pair of elements in the two sets. It then iteratively modifies the cost matrix by subtracting the smallest element in each row and column until a complete set of assignments can be made with zero total cost. The optimal matching is obtained from the modified cost matrix by identifying the unique assignments corresponding to zero-cost pairs.
We can now describe our matching algorithm for this scenario in Algorithm 12.
```
1: procedure \(\Phi\)(A, B)
2:   Initialize \(costMatrix \gets -1 \times sm^{*}\)
3:   return \(\Phi_{\text{arrayHelper}}(A, B, \textbf{hungarian}(costMatrix))\)
4: end procedure
```
**Algorithm 12** Unordered Array Fuzzy Matching
\({}^{*}sm\) is calculated as below:
\[sm=\begin{bmatrix}\Phi(A_{1},B_{1})&\Phi(A_{1},B_{2})&\cdots&\Phi(A_{1},B_{n})\\ \Phi(A_{2},B_{1})&\Phi(A_{2},B_{2})&\cdots&\Phi(A_{2},B_{n})\\ \vdots&\vdots&\ddots&\vdots\\ \Phi(A_{m},B_{1})&\Phi(A_{m},B_{2})&\cdots&\Phi(A_{m},B_{n})\end{bmatrix}\]
As shown in Fig. 4, an ideal scenario would be when the order of the elements is not essential, and some degree of flexibility is allowed in terms of matching the elements themselves. An example of this would be comparing the output of two search engine APIs that return similar but not identical results.
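As a sketch (not the framework's implementation), SciPy's `linear_sum_assignment` provides a ready-made Hungarian solver, so the whole matching step reduces to building \(sm\) and negating it; the `toy_sim` function below is an illustrative stand-in for a user-defined similarity:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuzzy_unordered_pairs(A, B, phi):
    """Algorithm 12 via SciPy's Hungarian solver: build sm, negate it
    into a cost matrix, and keep the maximum-similarity assignment."""
    sm = np.array([[phi(a, b) for b in B] for a in A])
    rows, cols = linear_sum_assignment(-1 * sm)  # costMatrix = -1 x sm
    return list(zip(rows.tolist(), cols.tolist()))

toy_sim = lambda a, b: 1.0 / (1.0 + abs(a - b))  # stand-in similarity
print(fuzzy_unordered_pairs([1, 9, 5], [5, 1], toy_sim))  # [(0, 1), (2, 0)]
```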
### _Renderer_
We have utilized React [17], an open-source front-end library, to implement our renderer. Leveraging React's context feature, we make it easy for users to override our rendering logic, such as the coloring scheme and the presentation of specific diff information. For example, users can customize the rendering of the edit distance between two strings according to their preferences. The code for our renderer can be found in the react-jycm-viewer repository. Furthermore, our renderer supports the display of large JSON objects and seamless navigation between pairings. This is made possible by the integration of the monaco-editor [18] project, which enables efficient browsing and searching within large JSON files. Thanks to this feature, our JYCM renderer can handle substantial JSON objects and diff results.
## V Conclusion
This paper presents a comprehensive framework for comparing and analyzing JSON objects by identifying their differences. Our approach emphasizes the computation of optimal similarity between JSON objects and the rendering of diff results in a user-friendly manner, accommodating various scenarios and empowering users to define custom similarity functions that fit within the framework easily. As discussed, no universal rule defines "what has been changed" without considering realistic scenarios. We have employed JSON path notation to locate and represent elements within JSON objects and have introduced regular expression support for more flexible path matching. Moreover, our renderer, built using the React library, enables users to customize rendering logic, such as color schemes and diff presentation styles.
We have demonstrated the effectiveness of our framework in various practical scenarios by applying different algorithms, such as LCS for ordered exact matching and the Hungarian algorithm for unordered fuzzy matching. Our framework also supports the implementation of user-defined similarity functions for more specific use cases.
|
2302.00762
|
AmbiCoref: Evaluating Human and Model Sensitivity to Ambiguous
Coreference
|
Given a sentence "Abby told Brittney that she upset Courtney", one would
struggle to understand who "she" refers to, and ask for clarification. However,
if the word "upset" were replaced with "hugged", "she" unambiguously refers to
Abby. We study if modern coreference resolution models are sensitive to such
pronominal ambiguity. To this end, we construct AmbiCoref, a diagnostic corpus
of minimal sentence pairs with ambiguous and unambiguous referents. Our
examples generalize psycholinguistic studies of human perception of ambiguity
around particular arrangements of verbs and their arguments. Analysis shows
that (1) humans are less sure of referents in ambiguous AmbiCoref examples than
unambiguous ones, and (2) most coreference models show little difference in
output between ambiguous and unambiguous pairs. We release AmbiCoref as a
diagnostic corpus for testing whether models treat ambiguity similarly to
humans.
|
Yuewei Yuan, Chaitanya Malaviya, Mark Yatskar
|
2023-02-01T21:25:34Z
|
http://arxiv.org/abs/2302.00762v2
|
# AmbiCoref: Evaluating Human and Model Sensitivity to Ambiguous Coreference
###### Abstract
Given a sentence "Abby told Brittney that she upset Courtney", one would struggle to understand who "she" refers to, and ask for clarification. However, if the word "upset" were replaced with "hugged", "she" unambiguously refers to Abby. We study if modern coreference resolution models are sensitive to such pronominal ambiguity. To this end, we construct **AmbiCoref**, a diagnostic corpus of minimal sentence pairs with ambiguous and unambiguous referents. Our examples generalize psycholinguistic studies of human perception of ambiguity around particular arrangements of verbs and their arguments. Analysis shows that (1) humans are less sure of referents in ambiguous AmbiCoref examples than unambiguous ones, and (2) most coreference models show little difference in output between ambiguous and unambiguous pairs. We release AmbiCoref as a diagnostic corpus for testing whether models treat ambiguity similarly to humans.1
Footnote 1: Our dataset and code is available at [https://github.com/LucyYYW/AmbiCoref](https://github.com/LucyYYW/AmbiCoref).
## 1 Introduction
Ambiguity is a fundamental feature of language [20] that some linguists believe arises because of a pressure for efficient communication [1, 17]. Recently, several works have highlighted the existence of ambiguity in tasks such as question answering [14, 15], frame disambiguation [21], anaphora resolution [16] and language modeling [1]. Yet systematic evaluation of how models react to ambiguity across many types of language processing problems is missing. We contribute one such study about coreference resolution.
Coreference resolution is crucial to natural language understanding, especially in long contexts, such as dialog. Ambiguity may arise naturally in dialog, but existing models do not have well-defined target behavior for such coreferences. In contrast, when people encounter coreferential ambiguity, they recognize it, and can ask for clarification. Existing resources, such as OntoNotes [15], do not provide fine-grained annotations of such instances to evaluate model behavior. This may result in models not being calibrated to handle the uncertainty in interpretations of ambiguous statements. In this work, we ask how sensitive to ambiguity are models trained on these resources?
To understand how existing coreference models react to ambiguity, we construct a diagnostic corpus, **AmbiCoref**. AmbiCoref is composed of minimal pairs with ambiguous and unambiguous referents, created from four types of templates. Ambiguity is achieved by reducing context sizes to one sentence, and creating sentences where participating verbs under-constrain the interpretation of their arguments. For example, in Table 1, line 2, our first template leverages ambiguity around verbs expressing subjective experiences.2 The templates are designed by drawing on psycholinguistic studies [18, 19, 10] and a core contribution of our work is to generalize their observations to create thousands of instances. We achieve this by identifying VerbNet [13] classes that are likely to contain appropriate verbs, and manually assigning them to templates. Combined with variability we introduce using noun lists, AmbiCoref contains over 96 thousand sentences.
Footnote 2: Such instances require specific syntactic arrangements: the ambiguous instance in line 2 is unambiguous if the pronoun is moved to the object position of bored.
We verify that humans perceive instances in AmbiCoref in intended ways by crowdsourcing judgements (§3). Annotators are asked to find the coreferent for a pronoun in a sentence, and rate their confidence, to account for the gradience of
ambiguity judgements Schütze (1995). We find that, for unambiguous instances, humans strongly associate the pronoun with the intended noun, but for ambiguous ones they show reduced confidence across all templates, where the majority of participants are either not confident or mark them as ambiguous. This suggests that humans process ambiguous and unambiguous sentences in AmbiCoref in qualitatively different ways.
AmbiCoref can be used to evaluate model behavior in the presence of ambiguity. We analyze five representative English models: three in CoreNLP Manning et al. (2014), SpanBERT Joshi et al. (2020), and NeuralCoref 4.0 Wolf et al. (2020) (§4). Our main evaluation involves comparing coreference cluster assignments of the pronoun between ambiguous and unambiguous samples. 4 out of the 5 models we analyze show almost no behavioral change. Unlike humans, coreference models largely do not alter their decisions in the presence of ambiguity. Our analysis implies models likely need to explicitly account for ambiguity to achieve human-like behavior in the face of ambiguous input.
## 2 Dataset Construction
To understand model sensitivity towards coreferential ambiguity, we build AmbiCoref using four types of templates, shown in Table 1. The templates are created in minimal pairs, and the only difference between the ambiguous and unambiguous counterparts lies in the choice of verb phrase. Note that while ambiguity is a graded phenomenon, we use the term "ambiguous" for instances that are _more likely_ to elicit ambiguous human judgements, and vice versa. Verb phrases are extracted from suitable verb classes in VerbNet Schuler (2005), identified by manual annotation of VerbNet clusters.3 Each template is instantiated with verbs, names, noun-phrases, and gender-appropriate pronouns, greatly expanding the variation in cases identified in previous studies.
Footnote 3: We consider verbs from verb classes 31: Psych-Verbs (Verbs of Psychological State), 13: Verbs of Change of Possession, and 37: Verbs of Communication, as they conceptually align well with conditions required for ambiguity. Verbs within clusters were individually evaluated by the authors for appropriateness for templates.
### Template Types
**Experiencer Constraint for Objects (ECO)** Springston (1976) proposes the Experiencer Constraint for complement constructions, which we operationalize in our templates. Verbs that mark their object as the experiencer of an emotion restrict the assignment of an object-position pronoun to the subject of a declarative communication verb. Conversely, the assignment is unconstrained when the pronoun is the subject of an experiencer verb. For example, in row 2 of Table 1, a pronoun in the subject position of "bored" is ambiguous (but would not be so in the object position). If the main verb does not impose an experiencer constraint, as in row 1, then a pronoun in the subject position is unambiguous. We instantiate two variants with names (rows 1, 2) and general entities (rows 3, 4).
**Experiencer Constraint for Subjects (ECS)** The Experiencer Constraint also suggests that verbs that mark their subjects as the experiencer of the emotion restrict the assignment of a subject-position pronoun. The assignment of the pronoun is unconstrained when it is in the object position.
\begin{table}
\begin{tabular}{|c|l|c|l|c|} \hline & **Type** & **Ambig.** & **Template** & **Count** \\ \hline
1 & Experiencer Obj (ECO-1) & ✗ & \([Emily]_{A}\) told \([Jessica]_{B}\) that \([she]_{A}\) [saw] [Brian]. & 11336 \\
2 & Experiencer Obj (ECO-1) & ✓ & \([Emily]_{A}\) told \([Jessica]_{B}\) that \([she]_{?}\) [bored] [Brian]. & 11336 \\
3 & Experiencer Obj (ECO-2) & ✗ & \([The\ mother]_{A}\) told \([the\ sister]_{B}\) that \([she]_{A}\) [saw] the client. & 11336 \\
4 & Experiencer Obj (ECO-2) & ✓ & \([The\ mother]_{A}\) told \([the\ sister]_{B}\) that \([she]_{?}\) [bored] the client. & 11336 \\ \hline
5 & Experiencer Sub (ECS-1) & ✗ & \([The\ aunt]_{A}\) told \([Sarah]_{B}\) that [the daughter] [met with] \([her]_{A}\). & 4472 \\
6 & Experiencer Sub (ECS-1) & ✓ & \([The\ aunt]_{A}\) told \([Sarah]_{B}\) that [the daughter] [liked] \([her]_{?}\). & 4472 \\
7 & Experiencer Sub (ECS-2) & ✗ & \([The\ father]_{A}\) told \([the\ son]_{B}\) that the client [met with] \([him]_{A}\). & 4472 \\
8 & Experiencer Sub (ECS-2) & ✓ & \([The\ father]_{A}\) told \([the\ son]_{B}\) that the client [liked] \([him]_{?}\). & 4472 \\ \hline
9 & Implicit Causality (IC) & ✗ & \([Abby]_{A}\) [called] \([Jane]_{B}\) because \([she]_{A}\) [wanted to apologize]. & 8424 \\
10 & Implicit Causality (IC) & ✓ & \([Abby]_{A}\) [called] \([Jane]_{B}\) because \([she]_{?}\) [is leaving soon]. & 8424 \\ \hline
11 & Transfer (TOP) & ✗ & \([Daniel]_{A}\) [baked] \([the\ boy]_{B}\) [a cake] [after] \([he]_{B}\) [asked for one]. & 8424 \\
12 & Transfer (TOP) & ✓ & \([Daniel]_{A}\) [baked] \([the\ boy]_{B}\) [a cake] [before] \([he]_{?}\) [had lunch]. & 8424 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the six template pairs that make up AmbiCoref. Template slots are indicated in square brackets, and clusters are marked with subscripts and color. All templates pair an unambiguous sentence with an ambiguous sentence, where they differ only in the choice of verb phrase.
For example, in Table 1, row 6, "liked" is ambiguous when a pronoun is placed in the object position (but not in the subject position). We instantiate variants with names (rows 5, 6) and entities (rows 7, 8).
**Implicit Causality (IC)** Caramazza et al. (1977) hypothesize that the implicit causality of a verb can determine the direction of pronoun assignment. For example, in Table 1 row 9, the phrase "wanted to apologize" establishes a cause for why "Abby called", so the pronoun is constrained to the subject of "call". Conversely, in row 10, the phrase "is leaving soon" fails to create such a relationship, leaving the pronoun ambiguous. For these templates (rows 9, 10), we vary the names of the entities involved, and manually pair verbs (i.e., called) with constructed phrases that imply causality (i.e., apologizing).
**Transfer of Possession (TOP)** Rohde and Kehler (2014) suggest that in transfer-of-possession contexts such as "John passed the comic to Bill. _He..._", the pronoun is equally likely to refer back to the subject and the non-subject. We draw upon this observation and create a template around verbs that involve source-goal possession transfers. We distill the example to one sentence and pair the transfer event with a reason. For example, in Table 1 row 11, the phrase "asked for one" constrains the pronoun to be the receiver of "bake". Conversely, "before he had lunch" provides no such constraint, because either the receiver or the giver could have "had lunch" before the event. Templates vary the names, verbs, objects, reasons, and preposition (rows 11, 12).
### Filling Template Slots
For each template, we construct a list of appropriate verb phrases, reasons (for IC and TOP templates), and shared list of gendered names and noun-phrases. Verb phrases were constructed by manually inspecting VerbNet classes. To control for name bias, we randomly sample names from popular name lists4 from the last 50 years, and reuse gendered noun-phrase lists from Wino-Bias Zhao et al. (2018). Excluding name and noun-phrase variations, templates have 114, 45, 81, 82 instances for ECO, ECS, IC, and TOP, respectively.
Footnote 4: [https://www.ssa.gov/oact/babynames/decades/](https://www.ssa.gov/oact/babynames/decades/)
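As a hedged illustration of this instantiation process (the lists and helper below are toy stand-ins, not the actual AmbiCoref resources), a cut-down ECO-1-style generator might look like:

```python
import itertools

# Toy stand-ins for the real VerbNet-derived verb lists and the
# name/noun resources described above.
NAMES = [("Emily", "she"), ("Daniel", "he")]
OBJECTS = ["Brian", "the client"]
VERB_PAIRS = [("saw", "bored")]  # (unambiguous, object-experiencer)

def eco1(subj, pron, verb, obj):
    return f"{subj} told Jessica that {pron} {verb} {obj}."

minimal_pairs = [
    (eco1(name, pron, unambig, obj), eco1(name, pron, ambig, obj))
    for (name, pron), obj, (unambig, ambig)
    in itertools.product(NAMES, OBJECTS, VERB_PAIRS)
]
print(minimal_pairs[0][0])  # Emily told Jessica that she saw Brian.
print(minimal_pairs[0][1])  # Emily told Jessica that she bored Brian.
```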
## 3 Human Judgements
The templates used to create AmbiCoref generalize several psycholinguistic studies using lexical resources. Next, we verify that humans perceive ambiguity in these examples in the intended ways. We extract a subset of data for each template and ask Amazon Mechanical Turk workers which person a pronoun refers to (marked as \(A\) or \(B\) in Table 1) and to rate their confidence (_definitely_ or _likely_). Annotators were also allowed to mark the referent as entirely _ambiguous_. One sentence was sampled for each template and verb slot, uniformly at random. We collected 3 annotations per instance.5 See Appendix A for details on the collection of human judgements.
Footnote 5: In ambiguous cases, annotators do not reliably annotate a particular category, but often guess with low confidence. As such, we do not only report a majority opinion per instance, but instead simply report multiple annotations per sentence to see overall trends.
Figure 1 summarizes our results. Human judgments for unambiguous templates favor the intended coreference decision. For unambiguous ECO, ECS, IC, and TOP instances, the intended reading is selected as likely or definitely 83.2%, 91.9%, 85.8%, and 68.3% of the time, respectively. For ambiguous instances, annotations display a substantial shift toward ambiguity. As shown in previous work, humans display substantial disagreement on ambiguous instances Poesio et al. (2019). This is
Figure 1: Human annotation of ambiguous and unambiguous sentences. We abbreviate human annotations by whether they identified noun A or B and whether they annotated definitely or likely (likely is marked with ?). For example, A? indicates noun A, likely. The ground truth for unambiguous instances, from left to right, corresponds to A, A, A, A, A, B. Annotators read unambiguous examples as intended, and reduce their confidence on ambiguous examples.
reflected in many templates, such as TOP, where humans produce almost uniform responses.
## 4 Model Evaluation
We now examine if we can detect sensitivity to ambiguity in existing coreference resolution models by evaluating on AmbiCoref. We experiment 6 with five representative models: NeuralCoref 4.0 model from Hugging Face 7, SpanBERT Joshi et al. (2020) representation within the independent framework for end-to-end coreference Joshi et al. (2019), and the three models in Stanford CoreNLP Manning et al. (2014): deterministic Lee et al. (2013), statistical Clark and Manning (2015) and neural mention ranking Clark and Manning (2016). All models were trained on the CoNLL 2012 dataset Pradhan et al. (2012).
Footnote 6: Roughly one week of continuous Colab GPU compute.
Footnote 7: [https://github.com/huggingface/neuralcoref](https://github.com/huggingface/neuralcoref)
Here, we evaluate the models' final predictions, not their distributions over possible choices. The reason is two-fold: (1) not all models produce a distribution, and (2) initial analysis revealed that the models are miscalibrated, as in other settings Desai and Durrett (2020); Jiang et al. (2021), making it unreliable to interpret their output scores directly.
### Setup
In this section, we ask, are there differences between how models process similar unambiguous and ambiguous examples? As our examples are synthetically generated, we use the unambiguous examples as a form of control. If a model is unable to link the pronoun with the correct noun on unambiguous examples for at least 40% of examples, we omit that template during evaluation.
We analyze model behavior by breaking it into cases that cover all possible cluster assignments for the pronoun in a single sentence. We compute the percentage of time a model outputs a cluster with:
* case **A**: the pronoun and noun A
* case **B**: the pronoun and noun B
* case **S**: the pronoun as a singleton
* case **M**: the pronoun, noun A, and noun B
* case **O**: the pronoun and any other span
For example, Figure 2 contains SpanBERT's output distribution over these cases for each template. For each such distribution where the model's performance is above threshold, we compare ambiguous (red bar) and unambiguous (grey bar) distributions using Earth Mover's Distance (EMD) Pele and Werman (2009)8. Table 2 reports the number of templates above threshold, and their mean EMD.
Footnote 8: Earth Mover’s distances represent the amount of probability mass required to match two probability distributions. Hence, they help us compare distributions for ambiguous and unambiguous instances in a more interpretable way, than other possible measures like KL divergence.
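As a simple stand-in for the cited EMD, SciPy's 1-D Wasserstein distance with the five cases placed at unit spacing reproduces the idea; the distributions below are illustrative, not the paper's numbers:

```python
from scipy.stats import wasserstein_distance

# Distributions over the five cases (A, B, S, M, O), placed at unit
# spacing; the probability values here are made up for illustration.
cases = [0, 1, 2, 3, 4]
unambiguous = [0.70, 0.10, 0.05, 0.10, 0.05]
ambiguous = [0.45, 0.30, 0.05, 0.15, 0.05]

emd = wasserstein_distance(cases, cases,
                           u_weights=unambiguous, v_weights=ambiguous)
print(f"EMD = {emd:.3f}")
```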
### Results
Overall, most models we evaluated show essentially no change in output distribution over cases between ambiguous and unambiguous templates, as evidenced by near zero EMD. Most models are evaluated on five of six templates, but TOP is often excluded, representing a hard unambiguous case for most systems in its own right.
Of the models we evaluated, only SpanBERT shows significant deviation in behavior with ambiguous inputs.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Model & Mean EMD \% & Templates \\ \hline SpanBERT & 11.7 & 5 \\ \hline CoreNLP Neural & 3.5 & 5 \\ \hline NeuralCoref 4.0 & 4.0 & 5 \\ \hline CoreNLP Statistical & 1.2 & 3 \\ \hline CoreNLP Deterministic & 0.6 & 5 \\ \hline \end{tabular}
\end{table}
Table 2: Mean Earth Mover’s Distance between matched ambiguous and unambiguous case distributions and the number of templates where models get at least 40% of unambiguous cases correct.
Figure 2: Percentage of ambiguous (red) and unambiguous (grey) instances that fall into each of our five cases for the SpanBERT-based model across all templates. All other models show negligible shifts (red and grey distributions are almost identical). The ground truth for unambiguous instances, from left to right, corresponds to A, A, A, A, A, B.
Figure 2 breaks down SpanBERT's performance on each template. While its average EMD is higher than for other models, it still largely doesn't change predictions. When decisions do change, the pronoun is often linked with the other noun. For example, in ambiguous cases of ECO-1, SpanBERT reduces merged outputs and instead links the pronoun with noun B more frequently. In ambiguous cases, other models largely link the first noun-phrase (A) to the pronoun.
## 5 Discussion and Conclusion
Overall, our results suggest that model behavior significantly deviates from how humans treat ambiguous coreference. We lend more evidence that models miss aspects of how people understand language, especially in discourse Upadhye et al. (2020). The reason is likely, in part, that models are trained on resources which do not account for distributions in judgments. As a result, models do not have well-defined behavior when ambiguity arises and are poorly calibrated.
Training models with finer-grained coreference judgments could allow models to better align with human behavior. Techniques to improve model calibration could also be effective, allowing models to abstain or seek clarification when ambiguity arises. We hope that AmbiCoref can serve as a diagnostic set for future modeling approaches in evaluating their sensitivity to instances of ambiguity in language.
## 6 Limitations
Our study focuses entirely on coreference in the English language with models trained in high-resource settings. Furthermore, the cases of ambiguity we identify are English-specific and the names we insert into templates are popular American names. It is an open question as to how our results generalize to low-resource non-American-English settings.
The language we use to evaluate models is templated. While we attempt to control for unnatural data by only evaluating templates on which models perform well, models still struggle to completely solve all our unambiguous examples. This presents a challenge for future model builders. On the other hand, our templates may not reflect the particular real-world distribution on which models will be tested.
## Acknowledgements
We thank Chris Callison-Burch and the PennNLP group for their helpful comments on this work.
|
2308.01427
|
Differential Evolution VQE for Crypto-currency Arbitrage. Quantum
Optimization with many local minima
|
Crypto-currency markets are known to exhibit inefficiencies, which presents
opportunities for profitable cyclic transactions or arbitrage, where one
currency is traded for another in a way that results in a net gain without
incurring any risk. Quantum computing has shown promise in financial
applications, particularly in resolving optimization problems like arbitrage.
In this paper, we introduce a differential evolution (DE) optimization
algorithm for Variational Quantum Eigensolver (VQE) using Qiskit framework. We
elucidate the application of crypto-currency arbitrage using different VQE
optimizers. Our findings indicate that the proposed DE-based method effectively
converges to the optimal solution in scenarios where other commonly used
optimizers, such as COBYLA, struggle to find the global minimum. We further
test this procedure's feasibility on IBM's real quantum machines up to 127
qubits. With a three-currency scenario, the algorithm converged in 417 steps
over a 12-hour period on the "ibm_geneva" machine. These results suggest the
potential for achieving a quantum advantage in solving increasingly complex
problems.
|
Gines Carrascal, Beatriz Roman, Guillermo Botella, Alberto del Barrio
|
2023-08-02T20:58:24Z
|
http://arxiv.org/abs/2308.01427v1
|
Differential Evolution VQE for Crypto-currency Arbitrage. Quantum Optimization with many local minima.
###### Abstract
Crypto-currency markets are known to exhibit inefficiencies, which presents opportunities for profitable cyclic transactions or arbitrage, where one currency is traded for another in a way that results in a net gain without incurring any risk. Quantum computing has shown promise in financial applications, particularly in resolving optimization problems like arbitrage. In this paper, we introduce a differential evolution (DE) optimization algorithm for Variational Quantum Eigensolver (VQE) using Qiskit framework. We elucidate the application of crypto-currency arbitrage using different VQE optimizers. Our findings indicate that the proposed DE-based method effectively converges to the optimal solution in scenarios where other commonly used optimizers, such as COBYLA, struggle to find the global minimum. We further test this procedure's feasibility on IBM's real quantum machines up to 127 qubits. With a three-currency scenario, the algorithm converged in 417 steps over a 12-hour period on the "ibm_geneva" machine. These results suggest the potential for achieving a quantum advantage in solving increasingly complex problems.
Quantum Computing, Optimization, Differential Evolution, VQE, Arbitrage
## 1 Introduction
For millennia, commercial practice was based on a combination of barter and other means of exchange and, it was not until the introduction of coinage in the Greek regions in the late seventh century BC, that the first roots of arbitrage were born. Ancient arbitrage was related to the trading of coins and ingots for profit, thus giving rise to the first desire to develop this practice [16].
Arbitrage can be defined as a financial strategy in which the investor profits from the difference in price of the same financial asset between different markets, this being a risk-free operation. The basic idea is to carry out complementary operations (buying and selling) taking advantage of market irregularities [19]. The simultaneity of transactions in arbitrage is crucial to avoid exposure to market risk or execution risk, the latter being the risk that prices change before the purchase and sale of the asset has been completed. Arbitrage thus runs counter to the idea that the market is perfectly efficient and that all assets converge to the same price [9].
Due to their properties, financial products that are digitally traded are ideally suited for arbitrage. That is why, with the emergence of digital assets, the crypto-currency market has burst with great
force. In this case, and unlike traditional markets, they lack a centralized exchange, so it is an immature market where there are periods with great arbitrage opportunities [9].
To acquire cryptocurrencies it is necessary to go to an exchange or cryptobank. There are currently more than six hundred exchanges worldwide, with _Binance, FTX or Coinbase_ being the best-valued exchanges with the highest trading volume to date [5]. Arbitrage opportunities therefore emerge from the price differences between exchanges, as it is an unregulated market. Along these lines, there are several types of cryptocurrency arbitrage depending on the number of assets traded and the different exchanges involved. If two cryptocurrencies are bought and sold between two different exchanges, we speak of parallel arbitrage. If three cryptocurrencies and two or even three different exchanges are involved, it is a triangular arbitrage. It is important to note that, in order to carry out these two types of arbitrage, it is necessary to hold the cryptocurrencies to be traded on all the exchanges involved. The rebalancing technique, i.e., sending cryptocurrencies between different exchanges, is usually subject to the payment of commissions. This is why intra-exchange arbitrage is one of the most used and therefore the one that allows the most operations to be made. The process consists of transferring capital from one cryptocurrency to another, always starting and ending in the same one, thus avoiding commissions [1]. Throughout this paper we will focus on this latter type of arbitrage.
This paper is organized as follows: Section 2 contextualizes the current approaches to arbitrage through quantum computing. Section 3 introduces the idea of quantum optimization using VQE. Section 4 explains the implications of the arbitrage problem. Section 5 describes the possibilities of using quantum computing for arbitrage. Section 6 introduces differential evolution as well as its benefits and drawbacks. Section 7 introduces the model with 3 currencies, Section 8 expands the model to 4 currencies, and Section 9 describes the quantum algorithms that can be applied to real data from 5 crypto-currencies and details the considerations we take with real quantum computers. Section 10 contains our conclusions.
## 2 State of the art
Given the multitude of opportunities offered by the cryptocurrency market [12], trading algorithms have been developed to exploit short-term arbitrage opportunities, thus making a profit from optimization algorithms [9]. This is why quantum computing emerges as a complementary and more efficient alternative in certain processes with a greater speed and versatility unattainable by classical computing.
The rise of machine learning research has also reached finance. Consequently, initial machine-learning-based statistical arbitrage strategies have emerged [6]. There are also authors that propose mechanisms to self-regulate the cryptocurrency market and compensate for arbitrage opportunities [10, 22].
Especially in the banking sector, the importance of quantum computing in solving optimization problems has been demonstrated. In this sector, problems are simplified by reducing the number of possible solutions, which is why quantum computers can become very useful when the number of variables, and therefore the complexity of the problem, increases. Given the computational intensity of financial problems, increasingly more institutions are betting on the use of quantum technology to solve arbitrage problems as well as portfolio optimization and price scoring. It is expected that the limits of quantum computing will be progressively reduced in order to take advantage of the full potential of this technology in the near future [2].
Many industries are trying to develop quantum algorithms to address financial problems. In particular, companies such as _IBM_ and _McKinsey_ see optimization as one of the most in-demand use cases in the financial industry [3]. This is because most financial problems can be formulated as optimization problems, which is a particularly challenging task for classical computers. This is where the application of quantum algorithms makes it possible to easily solve such problems [14].
In the case of financial arbitrage, the problem can be formulated as a graph in which the different assets are the vertices. In 2007, Wanmei Soon and Heng-Qing Ye proposed a binary optimization model based on binary integer programming (BIP) by which the unboundedness problem of classical linear programming (LP) solving was solved. In this model, the authors established the existence of a feasible
solution and a bounded objective function value [20]. Later, in 2016, Rosenberg introduced several key ideas in arbitrage algorithms. First, the difference between detecting an arbitrage opportunity and detecting the optimal arbitrage solution, thus creating an alternative that not only finds the best solution, but also those closest to it. To this end, he introduced quantum resolution by transforming the Wanmei Soon and Heng-Qing Ye problem into a quadratic unconstrained binary optimization (QUBO) problem by rewriting the constraints as penalty terms. In addition, Rosenberg developed an alternative model that included backward arbitrage strategies, that is, the same asset (vertex) could be revisited [14].
To the best of our knowledge, this work is the first to apply quantum optimization with a differential evolution approach to the arbitrage problem. There is work by Zhuang et al. that proposes quantum algorithms for high-frequency statistical arbitrage trading, which is not exactly the same problem and takes a different approach [23].
In March 2017 Qiskit was launched, a software designed to create quantum algorithms, connect them to a back-end device and run them on simulators and real hardware [18]. In it, quadratic optimization models can be created using DocPLEX. IBM Decision Optimization CPLEX for python, known as DocPLEX, is an optimization software that solves quadratic models (QP), i.e. models with linear constraints and objective function with one or more quadratic terms. CPLEX can also solve convex problems efficiently [7]. Once the quadratic problem is modeled, it is converted to QUBO to be solved applying quantum computing.
To date, there is still a large gap between the resources available on current hardware and those demanded by some of the most relevant quantum applications. However, quantum research is advancing rapidly and increasingly more companies are betting on the research and use of quantum software. Big companies or new startups specialized in quantum computing in finance and economics, are working on the creation of new combinatorial optimization algorithms applicable to arbitrage with main banking institutions [11].
## 3 Variational Quantum Eigensolver (VQE)
Despite the evolution of quantum software, many quantum optimization algorithms have hardware requirements that exceed the capacity of current quantum computers and may even fail to run.
That is why, in 2014, Peruzzo and McClean et al. [15] developed the Variational Quantum Eigensolver (VQE), a hybrid quantum-classical algorithm capable of finding variational solutions to problems that are not tractable for classical computers. Fig. 1 shows the processing cycle between a quantum and a classical processor. VQE has an advantage over classical algorithms in that a quantum processing unit can represent and store the exact wave function of the problem, which is extremely difficult for a classical computer [13].
Figure 1: Hybrid quantum-classical algorithm.
The VQE algorithm starts with a parameterized quantum circuit called an ansatz and searches for the optimal parameters for this circuit using a classical optimizer. The optimizer varies the ansatz, via its set of parameters, working towards a state that minimizes the measured expectation value of the input operator, which is a Hamiltonian as in (1). The Ising Hamiltonian, \(H\), can be decomposed as a sum of Pauli terms, with its ground state (minimal-energy state) corresponding to the optimal solution of the original optimization problem.
Given \(H\), with an unknown minimum eigenvalue \(\lambda_{min}\) associated with an eigenstate \(|\psi_{min}\rangle\), VQE provides an estimate \(\lambda_{\theta}\) bounding \(\lambda_{min}\), where \(|\psi(\theta)\rangle\) is the eigenstate associated with \(\lambda_{\theta}\). By applying the ansatz, \(U(\theta)\), to some arbitrary starting state \(|\psi\rangle\), VQE obtains an estimate \(U(\theta)|\psi\rangle\equiv|\psi(\theta)\rangle\) of \(|\psi_{min}\rangle\). This estimate is iteratively optimized by the classical optimizer changing the parameters \(\theta\), minimizing the expectation value \(\langle\psi(\theta)|H|\psi(\theta)\rangle\) [13].
\[\lambda_{min}\leq\lambda_{\theta}\equiv\langle\psi(\theta)|H|\psi(\theta) \rangle. \tag{1}\]
Using Qiskit, the Ising Hamiltonian can be obtained from a QUBO type problem created with DocPLEX. That is, a quadratic, binary and unconstrained problem. This aspect will be detailed in future sections.
In addition, it is possible to define the quantum circuit of the ansatz. There are several ways to create the ansatz; one of the most used is a heuristic circuit called Real Amplitudes. This type of circuit consists of alternating layers of Y-rotation gates (purple gates in Fig. 2) and CNOT entanglement gates (blue gates in Fig. 2). Depending on the type of entanglement between the qubits we can create different ansatz, as shown in Fig. 2.
It is also possible to set the number of repetitions; in this case, Fig. 2 has two repetitions, which can be distinguished by the barriers in the circuit. Different ansatz can lead to different solutions in the optimization algorithm, so it is worthwhile to tune these two hyper-parameters.
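For reference, such an ansatz can be built directly from Qiskit's circuit library (exact import paths may vary across Qiskit versions):

```python
from qiskit.circuit.library import RealAmplitudes

# 9 qubits: one per binary decision variable of the 3-currency model.
ansatz = RealAmplitudes(num_qubits=9, entanglement="sca", reps=2)
print(ansatz.num_parameters)  # (reps + 1) * num_qubits = 27
```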
## 4 Arbitrage as an optimization problem
As mentioned, there are several types of crypto-currency arbitrage and, in this case, we will illustrate the problem of intra-exchange arbitrage in which \(n\) currencies are involved and the exchange rate between these \(n\) currencies is calculated. As we recall, in this type of arbitrage only a single exchange is involved.
To illustrate the problem, the exchange prices of three crypto-assets will be simulated. This is because, to solve the problem with quantum methods, it is necessary to connect to a quantum simulator in the cloud, and therefore the execution time is much higher than if we executed the problem on a real quantum computer, which would take milliseconds. That is why we will first pose a small-scale problem with simulated data to test which optimizers work better; later, we will increase the number of crypto-assets and simulate a problem with real data.
Figure 2: Types of entanglement on the Ansatz.
Originally, and in order to set up the model, the problem can be presented as a graph as shown in the Fig. 3. In this case we have three crypto-assets with a clear arbitrage opportunity.
### Initial formulation of the problem
The first step in solving the arbitrage problem is to set up the optimization model. In this case, in (2) an objective function is posed where the profit is maximized based on the exchange rates. We take logarithms of the elements of the transit matrix, \(\Sigma\in\mathbb{R}^{n\times n}\), and two constraints are established. The first establishes a closed arbitrage circle, i.e., it must start and end in the same currency, while the second establishes that each currency can only be traded once. Notice that \(x\in\{0,1\}^{n\times n}\) denotes the matrix of binary decision variables, which indicate which edge to pick (\(x_{ij}=1\)) and which not to pick (\(x_{ij}=0\)). In this problem we will assume that the commissions are included in the transit matrix. Remember that the model will be computationally created with DocPLEX for easy conversion to quantum mode.
\[\begin{split}&\max_{x\in\{0,1\}^{n\times n}}\sum_{i=1}^{n}\sum_{j=1}^{n}\log(\Sigma_{ij})\,x_{ij}\\ \text{s.t.:}\quad&\sum_{j=1}^{n}x_{ij}=\sum_{j=1}^{n}x_{ji}\quad\forall i\,\\ &\sum_{j=1}^{n}x_{ij}\leq 1\quad\forall i\.\end{split} \tag{2}\]
Fig. 4 shows the result of running the optimization algorithm classically.
In this case, the profit rate was 23, that is, the difference between the initial value and the value obtained after completing the loop. The disadvantage of using only the classical method is that the arbitrage problem scales poorly: the classical algorithm calculates the result by "brute force", trying all possible combinations, which is why, if we add more crypto-assets to the problem and therefore to the graph, the problem grows exponentially. For this reason, a quantum-classical method is necessary.
### QUBO transformation
To apply the quantum method it is necessary to have the problem in QUBO form, that is, a binary, unconstrained, quadratic problem. The first step is to convert the inequality constraints into equalities using slack variables, whose signs depend on the inequality symbol. Successively, our model was originally a binary model but, once the slack variables are introduced, these must be converted to binary as well.
Figure 3: Exchange rates between three crypto-currencies and the transit price matrix.
The last step is to convert the model to an unconstrained one. Up to this point, the model had equality constraints; to remove them, they are converted into additional quadratic penalty terms in the objective function. All in all, we obtain a quadratic, binary, unconstrained problem, just as needed. These four steps are reflected in Fig. 5, where the evolution of the model up to the QUBO format can be seen. Solving this problem will require 9 qubits, as we have 9 variables.
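A hedged sketch of this pipeline using DocPLEX together with the `qiskit-optimization` converters is shown below; the rate values are illustrative, and the API names correspond to qiskit-optimization releases current at the time of writing:

```python
import numpy as np
from docplex.mp.model import Model
from qiskit_optimization.translators import from_docplex_mp
from qiskit_optimization.converters import QuadraticProgramToQubo

n = 3
log_rate = np.log([[1.0, 0.8, 1.1],
                   [1.2, 1.0, 0.9],
                   [0.9, 1.1, 1.0]])  # illustrative exchange rates

mdl = Model("arbitrage")
x = {(i, j): mdl.binary_var(f"x_{i}_{j}")
     for i in range(n) for j in range(n)}
mdl.maximize(mdl.sum(log_rate[i][j] * x[i, j]
                     for i in range(n) for j in range(n)))
for i in range(n):
    # closed cycle: flow in equals flow out of every currency
    mdl.add_constraint(mdl.sum(x[i, j] for j in range(n)) ==
                       mdl.sum(x[j, i] for j in range(n)))
    # each currency is traded at most once
    mdl.add_constraint(mdl.sum(x[i, j] for j in range(n)) <= 1)

qp = from_docplex_mp(mdl)                    # quadratic program
qubo = QuadraticProgramToQubo().convert(qp)  # slacks + penalty terms
print(qubo.get_num_binary_vars())
```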
## 5 Quantum computing for arbitrage
As seen in the previous section, quantum computing makes use of a hybrid quantum-classical algorithm, VQE, to solve optimization problems. If we recall, three elements are needed to apply VQE: the Ising Hamiltonian, a classical optimizer and the ansatz.
First we need to calculate the Ising Hamiltonian; for this, the QUBO model is used. As we already know, the Hamiltonian is expressed as a Pauli gate decomposition; (3) shows part of the Hamiltonian of the problem. As we can see, it uses only I and Z gates.
\[\begin{array}{l}-1.395\cdot IIIIIIIIZ\\ -1.539\cdot IIIIIIIZI\\ -1.742\cdot IIIIIIZII\\ -2.089\cdot IIIIIZIII\\ \ldots\end{array} \tag{3}\]
Figure 4: Solution of the arbitrage optimization problem: best arbitrage path.
Figure 5: QUBO steps.
One of the most used classical optimizers in quantum computing, due to its good results, is Constrained Optimization BY Linear Approximation (COBYLA). This optimizer is suitable when noise is not present in the cost-function evaluation, and it performs only one evaluation of the objective function per optimization iteration (the number of evaluations is independent of the cardinality of the parameter set) [17].
To solve the arbitrage problem using VQE in conjunction with COBYLA, the ansatz quantum circuit must be established. As we saw in the previous section, the ansatz has two hyper-parameters, entanglement and repetitions. Therefore, different combinations of these hyper-parameters have been tested in search of relevant conclusions.
As we can see in Table 1, the ansatz entanglement configuration that reached the optimal solution regardless of the number of repetitions was SCA. However, with three repetitions the optimal solution is also achieved for all entanglement types.
In the entanglements where the optimal solution has not been reached, the algorithm has found a solution that does not correspond to the global maximum of the problem. Fig. 6 shows a comparison of a local arbitrage solution with a 1.99 profit rate achieved by an ansatz using two repetitions and full entanglement versus the optimal solution where a gain of 23 is achieved. The main difference is that, although the local solution is feasible, it involves only two currencies.
For the rest of this study, we will use the SCA ansatz with only one repetition, to keep the depth of the circuits at a minimum for best performance on real devices. Under these circumstances, as can be seen in Fig. 2, the SCA and circular entanglements coincide.
## 6 Differential Evolution Global Optimization
Although finding local solutions is feasible, when it comes to arbitrage we want to find the global solution, the one that brings the greatest benefit. As the number of crypto-currencies increases and, therefore,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{Entanglement} \\ \hline Reps & Full & Linear & Circular & SCA \\ \hline
1 & \(\times\) & ✓ & \(\times\) & ✓ \\
2 & \(\times\) & \(\times\) & \(\times\) & ✓ \\
3 & ✓ & ✓ & ✓ & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Hyper parameters that reached the optimal solution.
Figure 6: Comparation of a local and a global solution.
the complexity of the problem grows, a multitude of local solutions can arise, and in the case of COBYLA we will not always find the optimal solution: given the multitude of possibilities, the optimizer may stop partway toward it.
Fig. 7 shows a possible inverse (minimization) scenario of this type of problem. As can be seen, there is a multitude of local minima, which increases the likelihood that the algorithm settles into one of them. We want to reach the global one, and for this reason a new algorithm is proposed to try to find it.
Differential evolution is an evolutionary global optimization algorithm, closely related to genetic algorithms, that does not use gradient information and is therefore well suited to nonlinear, non-differentiable objective functions. Its operation is based on maintaining a population of candidate solutions represented as real-valued vectors. New candidate solutions are created as modified versions of existing solutions, and at each iteration they replace a large part of the population: a base solution is replaced by its child if the child achieves a better evaluation of the objective function.
This algorithm has a number of hyper-parameters, such as _strategy_, which controls the type of differential evolution search performed. We set a _best1bin_ search, which creates new candidate solutions by selecting random solutions from the population, subtracting one from the other, and adding a scaled version of the difference to the best candidate solution in the population. Other important parameters are _popsize_, which controls the number of candidate solutions kept in the population, and the _tolerance_: the lower this value, the less permissive the algorithm [21].
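These hyper-parameters map directly onto SciPy's implementation, as the following minimal sketch shows; the objective here is a placeholder standing in for the Hamiltonian expectation value.

```python
import numpy as np
from scipy.optimize import differential_evolution

def cost(theta):
    # placeholder for the expectation value of the Hamiltonian
    return float(np.sum(np.cos(theta)))

bounds = [(-np.pi, np.pi)] * 9          # one interval per ansatz parameter
result = differential_evolution(cost, bounds,
                                strategy="best1bin",  # search strategy
                                popsize=15,           # candidates kept
                                tol=0.001)            # permissiveness
print(result.x, result.fun)
```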
Figure 7: Global solution scenario.
```
Require: an operator representing the problem, an ansatz (parametrized circuit) to represent each individual, a number of individuals n, a Qiskit Runtime Session.
Ensure: an eigenstate of the operator.
1: Create an initial generation of n ansatz instances (random parameters).
2: Use the Qiskit Runtime Estimator to calculate the expectation values of the operator for all individuals in the generation within a Session.
3: Apply scipy.optimize.differential_evolution to calculate the next generation and iterate.
4: Once convergence is reached, use the Qiskit Runtime Sampler to evaluate the ansatz with the selected parameters.
5: Return the eigenstate.
```
**Algorithm 1** Differential Evolution Minimum Eigensolver.
We will use differential evolution as an optimizer to apply VQE to the arbitrage problem. However, Qiskit does not have a default function for applying differential evolution. Therefore, as described in Algorithm 1, a function has been created from scratch to apply this method. The optimal solution with this optimizer was reached with a tolerance of 0.001, a population size of 15 and at most 5 iterations. Likewise, an ansatz with 1 repetition and SCA-type entanglement was used.
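A condensed sketch of that function is shown below. It assumes the original (V1) Qiskit Runtime primitives interface, a stand-in Hamiltonian, and configured account credentials, and it evaluates one candidate per Estimator call, whereas Algorithm 1 batches a whole generation into a single run.

```python
from scipy.optimize import differential_evolution
from qiskit.circuit.library import TwoLocal
from qiskit.quantum_info import SparsePauliOp
from qiskit_ibm_runtime import QiskitRuntimeService, Session, Estimator

hamiltonian = SparsePauliOp.from_list([("IIIIIIIIZ", -1.395)])  # stand-in
ansatz = TwoLocal(9, "ry", "cx", entanglement="sca", reps=1)

service = QiskitRuntimeService()
with Session(service=service, backend="ibm_geneva") as session:
    estimator = Estimator(session=session)

    def expectation(theta):
        # expectation value of the Hamiltonian for one bound ansatz
        job = estimator.run(ansatz, hamiltonian, parameter_values=[list(theta)])
        return float(job.result().values[0])

    bounds = [(-3.1416, 3.1416)] * ansatz.num_parameters
    result = differential_evolution(expectation, bounds, strategy="best1bin",
                                    popsize=15, tol=0.001, maxiter=5)
# result.x parametrizes the approximate eigenstate; a Runtime Sampler call
# on the bound ansatz would then read out the solution bit-string.
```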
Both COBYLA and the differential evolution optimizer managed to reach the global optimum; however, there is a substantial difference between them. Both seek to minimize the expected value of the Hamiltonian, since both are used in conjunction with VQE. Yet, as can be seen in Fig. 8, COBYLA behaves similarly to gradient descent: when it finds a minimum, it searches for solutions around that minimum. Differential evolution, on the other hand, tries different candidate solutions by modifying the population through the popsize parameter, which is why its expected value does not show such a marked decrease but fluctuates as new candidates are evaluated.
## 7 Arbitrage with three crypto-currencies
As shown in Section 5, we have used the three-currency scenario in simulators to perform the hyper-parameter search for the most interesting combination to test on real hardware.
For execution on real hardware we have selected the 1-repetition, circular-entangled ansatz due to its better fit to the topology of IBM's real quantum computers [4].
The main result concerns the three-currency scenario, which required a 9-qubit ansatz, executed on "ibm_geneva", a Falcon r4 QPU with 27 qubits and a quantum volume of 32:
* Basis gates: CX, ID, RZ, SX, X
Figure 8: Expected value for individual circuit evaluations during the optimization. COBYLA (left), Differential Evolution(right).
* Largest Pauli Error: 9.966e-4
* Largest CNOT Error: 7.725e-3
* Median T1: 353.86 \(\mu s\)
* Median T2: 197.83 \(\mu s\)
The differential evolution algorithm executes batches of 64 circuits with 4000 shots each, in an average of 104 seconds per generation. Convergence to the global minimum was reached in 417 steps, with a total time of 12 hours.
Table 2 summarizes the results obtained with 27-qubit machines on the three-asset problem.
## 8 Arbitrage with four crypto-currencies
Analogous to the arbitrage problem with three crypto-assets, a problem involving four crypto-assets is proposed to test scalability. In this case, Fig. 9 shows the exchange-rate graph and the price transition matrix, with a clear arbitrage opportunity. The classical solution of this problem yielded a profit rate of 118.99, an optimistic result that involves all four crypto-currencies.
To solve the problem in quantum form it is necessary to follow the steps detailed in the previous section, i.e., convert the problem to QUBO, create the Hamiltonian and configure the ansatz. However, in this case we will use VQE with the differential evolution optimizer due to the good results obtained in the model with three assets and to try to find the global maximum.
Fig. 10 shows the optimal solution and the expected-value plot of the differential evolution optimizer. It can be seen that the algorithm takes more iterations, and thus more time, to reach the minimal expected value.
To reach this solution, the ansatz was configured with one repetition and SCA entanglement. This gives an idea of the importance of choosing both the hyper-parameters of the ansatz and those of the optimizer itself. To obtain the most suitable parameters we performed an extensive hyper-parameter search and validation.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline name & QPU & circuits/batch & time(sec)/batch \\ \hline ibm\_geneva & Falcon r4 & 64 & 104 \\ \hline \end{tabular}
\end{table}
Table 2: Three currencies on 27 qubit machine.
Figure 9: Exchange rates between four crypto-currencies and the transit price matrix.
## 9 Real-data application with five crypto-currencies
In this section we present a case of real intra-exchange crypto-currency arbitrage with five assets. One of the main complications of using real data is that, given the variety of crypto-currencies and the great disparity in their prices and quotes, the optimization problem presents a transition matrix with exchange rates very close to zero, which makes it difficult to find a solution.
For this reason, in order to tackle this problem it is necessary to normalize the transition matrix. Starting from the idea that each crypto-currency can be bought in amounts as small as cents, a vector with one normalization coefficient per asset is established.
In this way, we get the transition matrix of Fig. 11.
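One plausible form of this normalization is sketched below; this is our illustration, and both the coefficient choice and the dummy rate matrix are assumptions, not taken from the paper. Each per-asset coefficient rescales the corresponding row and column so that all entries of the transition matrix become of comparable magnitude.

```python
import numpy as np

# Dummy 3x3 rate matrix standing in for the real 5-asset data.
rates = np.array([[1.0,    2.5e4, 4.0e-2],
                  [4.0e-5, 1.0,   1.6e-6],
                  [2.5e1,  6.2e5, 1.0  ]])
c = 1.0 / rates[:, 0]                    # one normalization coefficient per asset
normalized = np.diag(c) @ rates @ np.diag(1.0 / c)
print(normalized)                        # entries now of comparable magnitude
```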
The procedure to solve this problem is analogous to the previous cases, i.e., the problem needs to be transformed to QUBO to apply VQE with the differential evolution optimizer.
Due to the multitude of variables, more qubits are needed to solve the problem, specifically 25.
Figure 11: Transition matrix and graph of the crypto-currency of five assets.
Figure 10: Differential evolution solution with four crypto-currencies.
### Qiskit Runtime on IBM cloud "simulator_mps"
We run a simulation using Qiskit Runtime on the IBM cloud "simulator_mps". Using the differential evolution optimizer, we send batches of 256 parametric circuits to the Qiskit Runtime, with an average execution time of 6 minutes per batch. The optimization converges after 469 steps of differential evolution (Fig. 12), taking 47 hours overall.
The solution obtained is shown in Fig. 13.
### "ibm_hanoi", a Falcon r5.11
To compare the performance with the 27 qubit real machines, we have executed the 5 currency scenario on "ibm_hanoi", a Falcon r5.11 QPU with 27 qubits, a quantum volume of 64 and 2.3K CLOPS:
* Basis gates: CX, ID, RZ, SX, X
Figure 12: Simulator convergence in 5 real currencies.
Figure 13: Transition matrix and graph of the crypto-currency of five assets.
* Median CX Error: 9.232e-3
* Median SX Error: 1.977e-4
* Median Readout Error: 1.250e-2
* Median T1: 157.54 \(\mu s\)
* Median T2: 158.15 \(\mu s\)
This machine executed 256 circuits with 4000 shots each in 525 seconds. A limited number of steps was run in order to measure timings. The experiment was stopped before reaching convergence, after less than 24 hours.
### "ibm_washington" Eagle r1
To evaluate the applicability of the methodology, we have executed differential evolution on real 127-qubit machines. Executing on a bigger machine makes it possible to find a more suitable path on the chip to map the circuit efficiently, avoiding swap gates and yielding shorter circuits, and thus less accumulated noise. On the other hand, bigger machines have fewer CLOPS, so the execution time is longer.
We have tested the five-real-currencies scenario on two different generations of the IBM Eagle processor. Fig. 14 shows the 25-qubit ansatz used for these experiments.
The "ibm_washington" ia a first generation Eagle, with Quantum Volume of 64 and CLOPS of 850:
* Basis gates: CX, ID, RZ, SX, X
* Median CX Error: 1.258e-2
* Median SX Error: 2.870e-4
* Median Readout Error: 1.260e-2
* Median T1: 96.7 \(\mu s\)
* Median T2: 82.25 \(\mu s\)
For each step, it ran 256 circuits with 4000 shots each in 1147 seconds. The experiment was stopped before reaching convergence, after less than 24 hours.
### "ibm_brisbane" Eagle r3
A third-generation Eagle. The novelty of this machine is the change in basis gates, implementing the less noisy two-qubit ECR gate instead of CX.
* Basis gates: ECR, ID, RZ, SX, X
* Median ECR Error: 7.601e-3
* Median SX Error: 2.184e-4
* Median Readout Error: 1.360e-2
* Median T1: 242.56 \(\mu s\)
* Median T2: 133.59 \(\mu s\)
It ran 256 circuits with 4000 shots each in 926 seconds. Like the previous ones, this experiment was also stopped before reaching convergence, after less than 24 hours.
Table 3 summarizes the time needed per differential evolution generation (batch) on the real quantum machines.
It is important to notice that we achieved convergence and found the global minimum using a simulator. On real machines we ran some tests to validate the viability of the methodology, but based on the number of generations needed in the simulator and the time taken on real computers, on current machines the algorithm would probably need several days of computation, which is too time-consuming for this use case. These experiments show the importance of parallelization for the future of quantum computing.
Figure 14: Ansatz for the 5 currencies scenario (25 qubits, circular entanglement).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline name & QPU & circuits/batch & entanglement & time(sec)/batch \\ \hline ibm\_hanoi & Falcon r5.11 & 256 & linear & 525 \\ \hline ibm\_brisbane & Eagle r3 & 256 & circular & 926 \\ \hline ibm\_washington & Eagle r1 & 256 & circular & 1147 \\ \hline \end{tabular}
\end{table}
Table 3: Executions for five currencies on real quantum machines.
The only related study we have found on applying quantum computing to arbitrage is that of Wang et al. [22], which proposes quantum algorithms for high-frequency statistical arbitrage trading, a quantitative trading strategy that exploits price differences between correlated assets. Their quantum algorithms use variable-time condition-number estimation and quantum linear regression to reduce the complexity of finding cointegrated pairs of stocks. That is a completely different approach to the arbitrage problem: in our work we study the overall landscape of possible exchange paths to find arbitrage opportunities along long chains of exchanged assets, rather than instantaneous differences between two assets.
## 10 Conclusion
Quantum computing emerges as a new technology complementary to classical computing to address complex problems, with a special focus on financial problems such as arbitrage. In a world where digital assets represent the future, crypto-currency arbitrage arises as an optimization problem.
In the present work, we have addressed crypto-currency arbitrage by proposing a quantum-classical approach based on hybrid algorithms such as VQE. For this purpose, the scalability of the problem has been tested, starting with three assets and finally including up to five crypto-currencies. The proposed model has been converted to QUBO format to make use of VQE, and several optimizers, such as COBYLA and differential evolution, together with suitable configurations of the ansatz quantum circuit, have been tested in search of the global maximum of the problem.
Differential evolution has pros and cons. We have demonstrated that it has the ability to reach the global minimum, but to do so it pays the price of an increased number of quantum-circuit executions. An interesting property of this algorithm is that all the circuits of a generation can be run in parallel. This could become an advantage when multiple parallel quantum systems are available [8].
It is precisely this scalability that means that adding more assets increases the complexity of the problem. Despite the good results for three assets, in the remaining cases the importance of hyper-parameter adjustment is highlighted in order to reach the global solution. Moreover, due to the disparity of exchange rates in the crypto-currency field, solving a problem with real data requires normalization of the exchange-rate transition matrix to obtain accurate results.
As quantum computing evolves and begins to tackle practical problems, we must pay greater attention to how much work quantum computing systems can do in a given unit of time. Increased quantum processor speed is critical to support near-term algorithms based on the variational method, which requires thousands of iterations.
Nevertheless, it is a growing technology, and especially in the field of crypto-currency arbitrage there is still a lot of research to be done. As future lines of research, we are working on novel ways of encoding the problem into the Hamiltonian, in order to represent more QUBO variables using fewer qubits. Fine-tuning of the many hyper-parameters of differential evolution also remains to be done, to enhance convergence and reduce the number of generations needed to achieve it.
## Acknowledgment
This work was supported by grant PID2021-123041OB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and by "ERDF A way of making Europe" and by the CM under grant S2018/TCS-4423.
We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors and do not reflect the official policy or position of IBM or the IBM Quantum team.
|
2305.08399
|
Loop Vertex Representation for Cumulants
|
In this paper we construct cumulants for stable random matrix models with
single trace interactions of arbitrarily high even order. We obtain explicit
and convergent expansions for the cumulants. We show that any cumulant is an
analytic function inside a cardioid domain in the complex plane and we prove
their Borel-LeRoy summability at the origin of the coupling constant. Our proof
is uniform in the external variables.
|
Vincent Rivasseau
|
2023-05-15T07:30:41Z
|
http://arxiv.org/abs/2305.08399v3
|
# Loop Vertex Representation for Cumulants
###### Abstract
In this paper we construct cumulants for stable random matrix models with single trace interactions of arbitrarily high even order. We obtain explicit and convergent expansions for the cumulants. We show that any cumulant is an analytic function inside a cardioid domain in the complex plane and we prove their Borel-LeRoy summability at the origin of the coupling constant. We also prove Borel-LeRoy summability of the topological expansion in the genus of the associated maps. Our proof is uniform in the external variables.
**keywords** Random Matrix; Cumulants; Constructive Field Theory
Mathematics Subject Classification 81T08
## 1 Introduction
Random matrix theory [1, 2] studies probability laws for matrices. Application of random matrices to 2d quantum gravity [3] relies on their associated _combinatorial maps_, which depend on (at least) two parameters: a coupling constant \(\lambda\) and the size of the matrix, \(N\). A _formal_ expansion in the parameter \(\lambda\) yields generating functions for maps of arbitrary genus. The coupling constant \(\lambda\) roughly measures the size of the map while the parameter \(1/N\) turns out to measure the genus of the map [4].
We are interested in the loop vertex representation (LVR) [5]. This is an improvement of the loop vertex expansion (LVE) [6]. The LVE was itself introduced as a tool of constructive field theory to deal with _random matrix
fields_. A main feature common to the LVR and the LVE is that they are written in terms of trees, which are exponentially bounded. This means that the outcome of the LVR-LVE is convergent and _is the Borel-LeRoy sum in \(\lambda\)_, whereas the usual perturbative quantum field theory expansion _diverges_ at the point \(\lambda=0\).
For a general exposition of constructive field theory, see [7, 8, 9]; for an early application to the generating function of connected Schwinger functions - which in this paper are denoted _cumulants_ - see [10]; for the actual mechanism of replacing Feynman graphs, which are not exponentially bounded, by trees, see [11]; for a review of the LVE, we suggest consulting [12]; and for the LVE applied to cumulants, we refer to [13]. Together with [14]-[16], these are our main sources of inspiration for this paper. In accordance with [13] we also provide a constructive Borel-LeRoy theorem on the topological expansion in \(1/N\).
We think that the LVR has _more power_ than the LVE, since the LVR can treat more models, with higher polynomial interactions. The added ingredients of the LVR are essentially combinatorial, based on selective Gaussian integration [5] and on the Fuss-Catalan numbers and their generating function [17]. The authors of [14] joined this formalism to Cauchy holomorphic matrix calculus and applied it to the simplest complex matrix model with stable monomial interaction. In [15] the same authors extended it to the case of _Hermitian_ or _real symmetric_ matrices, in a manner both _simpler and more powerful_. The basic formalism is still the LVR, but while [5, 14] used contour integral parameters attached to every _vertex_ of the loop rep
Figure 1: In blue the cardioid domain considered in [16], in red the cardioid domain considered in [22].
resentation, [15] introduces more contour integrals, one for each _loop vertex corner_. This results in simpler bounds for the norm of the corner operators.
But we should remember that the LVE is older and its authors have had more time to fine-tune their models. They construct their models with the coupling constant in a cardioid-shaped domain (see Figure 1) whose opening angle is arbitrarily close to \(2\pi\) [16] or even exceeds \(2\pi\) [22]. In this case the LVE is capable of computing some typically non-perturbative effects like instantons by resumming perturbative field theory!
In [14, 15] we only prove analyticity and Borel-LeRoy summability inside a pacman domain like Figure 2 (see [23, 24]). For this article, in Appendix B, we extend to the more up-to-date cardioid domain of Nevanlinna-Sokal [25, 26, 27].
In [18] Sazonov has recently applied selective Gaussian integration in the quantum field theory formulation of the Jacobian conjecture [19, 20, 21].
**Acknowledgement** We would like to thank T. Krajewski, L. Ferdinand, R. Gurau, P. Radpay and V. Sazonov for comments on the present work when we were in some preliminary stage and simply for expressing some interest and motivating us to pursue. We also acknowledge the support of the CEA-LIST through the chair "Artificial Intelligence and Complexity".
The model
Consider a complex square matrix model with stable interaction of order \(2p\), where \(p\geq 2\) is an integer fixed throughout this paper1. Let us recall some basics of our LVR in the scalar and \(d=0\) case [5]. One of the key elements of the LVR construction is the Fuss-Catalan numbers of order \(p\), which we denote by \(C_{n}^{(p)}\), and their generating function \(T_{p}\)[17]. This generating function \(T_{p}\) is defined by
Footnote 1: In this paper we consider only moments or cumulants for complex matrices of _even order_, since the moments or cumulants of odd order vanish identically. We remark that in the case \(p=2\) LVE is sufficient, but LVR still works.
\[T_{p}(z)=\sum_{n=0}^{\infty}C_{n}^{(p)}z^{n}. \tag{1}\]
It is analytic at the origin and obeys the algebraic equation
\[zT_{p}^{p}(z)-T_{p}(z)+1=0. \tag{2}\]
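As a quick sanity check (ours, not from the paper), the series solution of (2), in the equivalent form \(T_{p}=1+zT_{p}^{p}\), reproduces the closed form \(C_{n}^{(p)}=\binom{pn}{n}/((p-1)n+1)\) of the Fuss-Catalan numbers:

```python
import sympy as sp

z = sp.symbols("z")
p, order = 3, 6
T = sp.Integer(1)
for _ in range(order + 1):               # fixed-point iteration T = 1 + z*T^p
    T = sp.expand(1 + z * T**p)
    T = sum(T.coeff(z, n) * z**n for n in range(order + 1))  # truncate
for n in range(order + 1):
    assert T.coeff(z, n) == sp.binomial(p * n, n) / ((p - 1) * n + 1)
```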
In the case \(p=3\) the LVR is somewhat simplified; the Fuss-Catalan equation is
\[zT_{3}^{3}(z)-T_{3}(z)+1=0. \tag{3}\]
which is soluble by radicals. We give in Appendix A the details derived from Cardano's solution.
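A short sketch of ours lets sympy produce those radicals and select, numerically, the branch with \(T_{3}(0)=1\):

```python
import sympy as sp

z, T = sp.symbols("z T")
branches = sp.solve(z * T**3 - T + 1, T)       # the three Cardano branches
# select the branch that tends to 1 as z -> 0 (the other two blow up)
physical = [b for b in branches
            if abs(sp.N(b.subs(z, sp.Rational(1, 10**6))) - 1) < 1e-2]
print(physical[0])                              # T_3(z) in radical form
```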
We shall only present our main result for _complex square matrices_ with a perturbation \((MM^{\dagger})^{p}\). The generalisation to cumulants of other cases, for instance rectangular complex matrices, Hermitian matrices, or real symmetric matrices, is easy for someone familiar with [14, 15]2.
Footnote 2: For practical applications such as data analysis, the case \(p=3\) seems to be the main one; however we encourage our readers interested by data analysis to treat the case of _real symmetric_ matrices and _rectangular matrices_.
In this paper, Tr always means _the trace on_\({\bf C}^{N}\), Tr\({}_{\otimes}\) always means _the trace on_\({\bf C}^{N\times N}\), and we call \(J\) the source field. The generating function of the cumulants for a Gaussian matrix model is then defined by the integral over complex \(N\times N\) matrices \(M\)
\[Z_{0}(N,J) := \int\,dM\,e^{-\,{\rm Tr}[MM^{\dagger}+\sqrt{N}JM^{\dagger}+\sqrt {N}J^{\dagger}M]}, \tag{4}\]
where \(dM\) is defined by the probability measure
\[dM := \pi^{N}\prod_{1\leq i,j\leq N}d{\rm Re}(M_{ij})d{\rm Im}(M_{ij})\,. \tag{5}\]
We translate the Gaussian measure by \(M+\sqrt{N}J\to\bar{M}\); then we rename \(\bar{M}\to M\), and equation (4) becomes
\[Z_{0}(N,J)=\int\,dM\,e^{-\,{\rm Tr}[MM^{\dagger}-NJ^{\dagger}J]}. \tag{6}\]
The same matrix model with perturbation \(\frac{\lambda}{N^{p-1}}(MM^{\dagger})^{p}\)3 is defined by
Footnote 3: The normalization in \(N\) has been chosen in a same way as in [13], so that the amplitude of a ribbon graph is \(N^{\chi(G)}\), with \(\chi(G)\) the Euler characteristic of the graph. Recall that [13] is dealing with the LVE, thus with a \(p=2\) (quartic) perturbation.
\[Z_{p}(\lambda,N,J) := \frac{\int\,dMe^{-\,{\rm Tr}[MM^{\dagger}+\frac{\lambda}{N^{p-1} }(MM^{\dagger})^{p}-NJ^{\dagger}J]}}{\int\,dMe^{-\,{\rm Tr}[MM^{\dagger}+\frac {\lambda}{N^{p-1}}(MM^{\dagger})^{p}]}} \tag{7}\]
where the stable case corresponds to \({\rm Re}\,\lambda>0\).
The case _without sources_, of the partition function and its logarithm, has been treated in [14]. Therefore in this paper we deal only with the case _with source fields_ \(J\). Our quantity of interest is the relation between the moments and the cumulants, for \(1\leq k\leq k_{\rm max}\), defined by
\[{\mathfrak{S}}^{k}_{p}(\lambda,N,J) := \Big{[}\frac{\partial^{2}}{J^{\dagger}_{a_{1}b_{1}}J_{c_{1}d_{1}} }\cdots\frac{\partial^{2}}{J^{\dagger}_{a_{k}b_{k}}J_{c_{k}d_{k}}}Z_{p}( \lambda,N,J)\Big{]}_{\{J\}=0}\,, \tag{8}\] \[{\mathfrak{K}}^{k}_{p}(\lambda,N,J) := \Big{[}\frac{\partial^{2}}{J^{\dagger}_{a_{1}b_{1}}J_{c_{1}d_{1}} }\cdots\frac{\partial^{2}}{J^{\dagger}_{a_{k}b_{k}}J_{c_{k}d_{k}}}\log Z_{p}( \lambda,N,J)\Big{]}_{\{J\}=0}. \tag{9}\]
From now on we will switch to an integral notation more adapted to the LVR, and more reminiscent of the functional integration in quantum field theory. We assume the reader is familiar with the notations of [14] and with Appendix B of the book [31]. We omit the subscript \(p\) from \(T,Z,{\mathfrak{S}},{\mathfrak{K}}\), etc... when no confusion is possible.
For any square matrix \(X\) we also define the matrix-valued function
\[A(\lambda,X) := XT(-\lambda X^{p-1})\,, \tag{10}\]
so that from (3)
\[X=A(\lambda,X)+\lambda A^{p}(\lambda,X)\,. \tag{11}\]
We often write \(A(X)\) for \(A(\lambda,X)\), or even simply \(A\), when no confusion is possible. In a simplification with respect to [14], we consider only square matrices. We define
\[d\mu(M):=dMe^{-\,\mathrm{Tr}\,MM^{\dagger}}, \tag{12}\]
the normalized Gaussian measure of the complex square matrix associated to \(M\), and the \(N\) by \(N\) square matrix \(X_{l}\) through \(X_{l}:=\frac{MM^{\dagger}}{N}\). From now on we simply write \(X\), \(\mathbf{1}_{\otimes}\), instead of \(X_{l}\), \(\mathbf{1}_{ll}\) etc...4. In terms of \(X\), equation (7) reads
Footnote 4: Of course there are similar formulas with \(X_{r}\) instead of \(X_{l}\).
\[Z(\lambda,N,J):=\frac{\int\,dMe^{-N\,\mathrm{Tr}[X+\lambda X^{p}-J^{\dagger}J ]}}{\int\,dMe^{-N\,\mathrm{Tr}\,\lambda X^{p}]}}=\frac{\int\,d\mu(M)e^{N\, \mathrm{Tr}\,J^{\dagger}J-N\,\mathrm{Tr}\,\lambda X^{p}}}{\int\,d\mu(M)e^{-N \,\mathrm{Tr}\,\lambda X^{p}}}. \tag{13}\]
In [14] the authors have computed the partition function without sources
\[Z(\lambda,N)=\int\,dMe^{-N\,\mathrm{Tr}[X+\lambda X^{p}]}:=\int\,d\mu(M)\,e^{- S(\lambda,X)}, \tag{14}\]
where \(dM\) is defined by (5) and where \(S(\lambda,X)\), the _loop vertex action without sources_ is:
\[S(\lambda,X) = -\,\mathrm{Tr}_{\otimes}\log\Big{[}\mathbf{1}_{\otimes}+\lambda [\sum_{k=0}^{p-1}A^{k}(X)\otimes A^{p-k-1}(X)]\Big{]} \tag{15}\] \[= -\,\mathrm{Tr}_{\otimes}\log\big{[}\mathbf{1}_{\otimes}+\lambda \Sigma(\lambda,X)\big{]}, \tag{16}\]
where \(\Sigma(\lambda,X):=\sum_{k=0}^{p-1}A^{k}(X)\otimes A^{p-1-k}(X)\). In line with [16], our starting point is to formally compute the moments
\[\mathfrak{S}^{k}(\lambda,N,J) = \left\{\frac{\partial^{2}}{J_{a_{1}b_{1}}^{\dagger}J_{c_{1}d_{1 }}}\cdots\frac{\partial^{2}}{J_{a_{k}b_{k}}^{\dagger}J_{c_{k}d_{k}}}\!\int d \mu(M)e^{S(\lambda,X,J)}\right\}_{J=0}, \tag{17}\]
where \(S(\lambda,X,J)\), the _loop vertex action with sources_ is
\[S(\lambda,X,J) = K_{J,N}(\lambda,X)-S(\lambda,X), \tag{18}\]
and \(K_{J,N}(\lambda,X)\) is defined by
\[K_{J,N}(\lambda,X) = N(J^{\dagger},\big{[}\mathbf{1}_{\otimes}+\lambda\Sigma( \lambda,X)\big{]}^{-1}J)\, \tag{19}\]
and our goal is to deduce from (17) the formula for the cumulants.
## 3 LVR for J-dependent Cumulants
Let us now come to the heart of our article. We have to factorize
\[Z(\lambda,N,J) = \int d\mu(M)e^{K_{J,N}(\lambda,X)-S(\lambda,X)}. \tag{20}\]
In the first step we define \(W:=W_{1}+W_{2}\), \(W_{1}:=e^{K_{J,N}(\lambda,X)}\), \(W_{2}:=e^{-S(\lambda,X)}\), and we expand to infinity the exponential of the interaction
\[Z(\lambda,N,J) = \sum_{n=0}^{\infty}\frac{1}{n!}\int d\mu(M)\ W^{n}\ =\sum_{n=0}^{\infty}\frac{1}{n!}\int d\mu(M)\ \prod_{i=1}^{n}W^{i}, \tag{21}\]
provided \(\forall i\,W^{i}=W\).
The second step is to introduce replicas and to replace (for the term of order \(n\)) the integral over the single \(N\times N\) complex matrix \(M\) by an integral over an \(n\)-tuple of such \(N\times N\) matrices \(M_{i},1\leq i\leq n\). The Gaussian part of the integral is replaced by a normalized Gaussian measure \(d\mu_{C}\) with a degenerate covariance \(C_{ij}=1\,\ \forall(i,j)\). Recall that for any real positive symmetric matrix \(C_{ij}\) one has
\[\int d\mu_{C}M_{i|ab}^{\dagger}M_{j|cd}=C_{ij}\delta_{ad}\delta_{bc}, \tag{22}\]
where \(M_{i|ab}\) denote the matrix element in the row \(a\) and column \(b\) of the matrix \(M_{i}\). That Gaussian integral with a degenerate covariance is indeed equivalent to a single Gaussian integral, say over \(M_{1}\) times a product of \(n-1\) Dirac distributions \(\delta(M_{1}-M_{2})\cdots\delta(M_{n-1}-M_{n})\). From the perturbative point of view, this degenerate covariance produces all the edges in a Feynman graph expansion that connect the various vertices together.
In order to exploit the fact that the matrix of covariance of \(d\mu_{C}\) has coefficients \(1\) everywhere, we shall write _in a subtle way_ the factor \(W^{i}=W_{1}^{i}+W_{2}^{i}\) to depend on \(X^{i}=\frac{1}{N}M_{i}M_{i}^{\dagger}\) instead of \(X\) and to list the \(i\) indices first of all in \(W_{1}\), say \(\{1,...,k^{\prime}\}\), then in \(W_{2}\), say \(\{k^{\prime}+1,...,n\}\). Next we move across the integral \(\sum_{n=0}^{\infty}\frac{1}{n!}\int d\mu(M)\) the product \(\prod_{i=1}^{k}\frac{\partial^{2}}{J_{a_{i}b_{i}}^{\dagger}J_{c_{i}d_{i}}}\). Hence we write
\[\mathfrak{S}^{k}(\lambda,N,J)=\sum_{n=0}^{\infty}\frac{1}{n!}\int\!d\mu(M) \bigg{[}\prod_{i=1}^{k}\frac{\partial^{2}}{J_{a_{i}b_{i}}^{\dagger}J_{c_{i}d_ {i}}}\prod_{i=1}^{k^{\prime}}W_{1}^{i}(X^{i})\bigg{]}_{\{J\}=0}\prod_{i=k^{ \prime}+1}^{n}W_{2}^{i}(X^{i}). \tag{23}\]
\(\mathfrak{S}^{k}(\lambda,N,J)\) can be represented as a sum over the set \(\mathfrak{F}\) of _oriented forests_5 by applying the BKAR formula [32, 33] to (23). For readers who want to look further into the BKAR formula and oriented forests, ordered or not, see [11, 28, 13].
Footnote 5: Oriented forests simply distinguish edges \((i,j)\) and \((j,i)\) so have edges with arrows. It allows to distinguish below between operators \(\frac{\partial}{\partial M_{i}^{\dagger}}\frac{\partial}{\partial M_{j}}\) and \(\frac{\partial}{\partial M_{j}^{\dagger}}\frac{\partial}{\partial M_{i}}\).
We start by replacing the covariance \(C_{ij}=1\) by \(C_{ij}(x)=\frac{x_{ij}+x_{ji}}{2}\) evaluated at \(x_{ij}=1\) for \(i\neq j\) and \(C_{ii}(x)=1\ \forall i\). Then the Taylor BKAR formula for oriented forests \(\mathfrak{F}_{n}\) on \(n\) labeled vertices yields
\[\mathfrak{S}^{k}(\lambda,N,J) = \sum_{n=0}^{\infty}\frac{1}{n!}\ \sum_{\mathcal{F}\in\mathfrak{F}_{n}}\ \int dw_{\mathcal{F}}\ \partial_{\mathcal{F}}\ \mathfrak{S}^{k}_{n}(\lambda,N,J)\ \Big{|}_{x_{ij}=x_{ij}^{\mathcal{F}}(w)} \tag{24}\] \[\mbox{where}\ \ \int dw_{\mathcal{F}} := \prod_{(i,j)\in\mathcal{F}}\int_{0}^{1}dw_{ij}\,\quad\partial_{ \mathcal{F}}:=\prod_{(i,j)\in\mathcal{F}}\frac{\partial}{\partial x_{ij}}\,\] (25) \[\mathfrak{S}^{k}_{n}(\lambda,N,J) := \int d\mu_{C\{x^{\mathcal{F}}\}}(M)\prod_{i=1}^{n}W,\] (26) \[x_{ij}^{\mathcal{F}}(w) := \left\{\begin{array}{ll}\inf_{(k,l)\in P_{i\leftrightarrow j} ^{\mathcal{F}}}w_{kl}&\mbox{if $P_{i\leftrightarrow j}^{\mathcal{F}}$ exists}\,,\\ 0&\mbox{if $P_{i\leftrightarrow j}^{\mathcal{F}}$ does not exist}\,.\end{array}\right. \tag{27}\]
In this formula \(w_{ij}\) is the weakening parameter of the edge \((i,j)\) of the forest, and \(P_{i\leftrightarrow j}^{\mathcal{F}}\) is the unique path in \(\mathcal{F}\) joining \(i\) and \(j\) when it exists.
Next we perform the \(\frac{\partial^{2}}{J_{a_{i}b_{i}}^{\dagger}J_{c_{i}d_{i}}}\) derivatives and set all the \(J\) to \(0\). This inductive partial computation gives, recalling \(\frac{\partial^{2}}{J_{a_{i}b_{i}}^{\dagger}J_{c_{i}d_{i}}}\)\(1=0\):
\[\bigg{\{}\prod_{i=1}^{k}(\frac{\partial^{2}}{J_{a_{i}b_{i}}^{ \dagger}J_{c_{i}d_{i}}})\prod_{i=1}^{k^{\prime}}e^{N(J^{\dagger},[\mathbf{1}_{ \otimes}+\lambda\Sigma(\lambda,X^{i})]^{-1}J)}\bigg{\}}_{\{J\}=0}\] \[= \!N^{k}\prod_{i=1}^{k}\big{(}J_{a_{i}b_{i}}^{\dagger},\big{[} \mathbf{1}_{\otimes}+\lambda\Sigma(\lambda,X^{i})\big{]}^{-1}J_{c_{i}d_{i}} \big{)} \tag{28}\]
whenever \(k^{\prime}\geq k\), and
\[\bigg{\{}\prod_{i=1}^{k}(\frac{\partial^{2}}{J_{a_{i}b_{i}}^{ \dagger}J_{c_{i}d_{i}}})\prod_{i=1}^{k^{\prime}}e^{N(J^{\dagger},[\mathbf{1}_ {\otimes}+\lambda\Sigma(\lambda,X^{i})]^{-1}J)}\bigg{\}}_{\{J\}=0}=\!0 \tag{29}\]
whenever \(k^{\prime}\leq k-1\). The result is (taking into account that the partial computations over indices \(k+1\leq k^{\prime\prime}\leq k^{\prime}\) have value 1, hence can be deleted)
\[\mathfrak{S}^{k}(\lambda,N,J) = N^{k}\sum_{n=k}^{\infty}\frac{1}{n!}\sum_{\mathcal{F}}\int dw_{ \mathcal{F}}\partial_{\mathcal{F}}\int d\mu_{C\{x^{\mathcal{F}}\}}(M)\] \[\prod_{i=1}^{k}(J_{a_{i}b_{i}}^{\dagger},\left[\mathbf{1}_{ \otimes}+\lambda\Sigma(\lambda,X^{i})\right]^{-1}J_{c_{i},d_{i}})\prod_{i=k+1}^ {n}e^{-S(\lambda,X^{i})},\]
defined as a Gaussian integral over \(X^{i}=\frac{1}{N}M_{i}M_{i}^{\dagger}\).
Now there is a constraint: the cumulants must be connected. Remember that a main property of the forest formula is that the symmetric \(n\) by \(n\) matrix \(C\{x^{\mathcal{F}}\}_{ij}=\frac{x_{ij}^{\mathcal{F}}(w)+x_{ji}^{\mathcal{F}}( w)}{2}\) is positive for any value of \(w_{kl}\), hence the Gaussian measure \(d\mu_{C\{x^{\mathcal{F}}\}}(M)\) is well-defined. Since the fields, the measure and the integrand are now factorized over the connected components of \(\mathcal{F}\), its _logarithm_ is easily computed as exactly the same sum but restricted to the spanning trees (remember that an empty sum is 0, and _an empty product is_ 1):
\[\mathfrak{K}^{k}(\lambda,N,J) = N^{k}\sum_{n=k}^{\infty}\frac{1}{n!}\sum_{T}\int dw_{T}\partial _{T}\int d\mu_{C\{x^{T}\}}(M)\] \[\prod_{i=1}^{k}(J_{a_{i}b_{i}}^{\dagger},\left[\mathbf{1}_{ \otimes}+\lambda\Sigma(\lambda,X^{i})\right]^{-1}J_{c_{i},d_{i}})\prod_{i=k+1} ^{n}e^{-S(\lambda,X^{i})}.\]
Equation (32) is our definition of the LVR for cumulants depending on \(J\). But we want to make a distinction between J-dependent cumulants \(\mathfrak{K}^{k}(\lambda,N,J)\) and _scalar_ cumulants \(\mathfrak{K}^{k}_{\pi}(\lambda,N)\) (see the section below). For this task we have to introduce combinatorial maps, a refinement of the usual Feynman graphs. A combinatorial map is a graph with a distinguished cyclic ordering of the half edges incident at each vertex. Combinatorial maps are conveniently represented as _ribbon graphs_ whose vertices are disks and whose edges are ribbons (allowing one to encode graphically the ordering of the half edges incident at a vertex). When applied to cumulants, it is based on combinatorial maps with cilia. A _cilium_ is a half edge hooked to a vertex.
We denote \(k(G)\), \(v(G)\), \(e(G)\), \(f(G)\) and \(c(G)\) the sets of cilia, vertices, edges, faces and _corners_ of \(G\). A corner of \(G\) is a pair of consecutive half edges attached to the same vertex. The faces of \(G\) are partitioned between
the faces which do not contain any cilium (which we sometimes call internal faces) and the ones which contain at least a cilium, which we call _broken faces_. We denote \(b(G)\) the set of broken faces of \(G\). Each broken face corresponds to a puncture in the Riemann surface in which \(G\) is embedded, and the Euler characteristic of the graph \(G\) is:
\[\chi(G)=|v(G)|-|e(G)|+|f(G)|-|b(G)|=2-2\mathfrak{g}(G)-|b(G)|, \tag{32}\]
where \(|a|\) denotes the cardinality of \(a\) and \(\mathfrak{g}(G)\) is the genus of the graph \(G\).
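As a quick illustration (ours, not from [13]): the one-vertex map with two edges whose half edges alternate around the vertex has \(|v(G)|=1\), \(|e(G)|=2\), a single face and no cilia, so \(\chi(G)=1-2+1-0=0\), i.e. \(\mathfrak{g}(G)=1\) and \(|b(G)|=0\): it is the standard map drawn on the torus.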
**Definition 1** (Labeled ribbon graphs with cilia).: _A labeled ribbon graph with cilia \(G\) is an ordinary ribbon graphs having furthermore:_
* _a labeling of the edges of_ \(G\)_,_
* _at most one cilium per vertex._
_Its amplitude is_
\[\mathcal{A}_{G}(\lambda,N,J) = \frac{(-\lambda)^{|e(G)|}N^{\chi(G)}}{|v(G)|!}\prod_{f\in f(G)} \operatorname{Tr}\bigg{\{}\prod_{c\in\partial f}^{\longrightarrow}(J^{ \dagger}J)^{\eta_{c}}\bigg{\}}\;, \tag{33}\]
_where:_
* \(\prod_{c\in\partial f}^{\longrightarrow}\) _is the oriented product around the corners_ \(c\) _on the boundary_ \(\partial f\) _of the face_ \(f\)_,_
* \(i_{c}\) _is the label of the vertex the corner_ \(c\) _belongs to,_
* \(\eta_{c}=1,0\) _depending on whether_ \(c\) _is followed by a cilium (1) or not (0)._
We turn to the definition below, _common to LVE and to LVR_ [13]:
**Definition 2** (LVR graphs and trees).: _A LVR graph \((G,T)\) is a connected ribbon graph \(G\) with labels on its vertices having furthermore:_
* _a distinguished spanning tree_ \(T\subset G\)_,_
* _a labeling of the edges of_ \(G\) _not in_ \(T\) _(loop edges in physics parlance),_
* _at most one cilium per vertex._
_A LVR tree is a graph such that the set \(L(G,T):=e(G)-e(T)\) is empty, so \((G,T)=(T,T)\)._
We associate to every LVR graph \((G,T)\) _its amplitude_ \(\mathcal{A}_{(G,T)}(\lambda,N,J)\). We emphasize here that the following definition of amplitudes is almost the same as in [13], but is subtly _different for the LVE and for the LVR_; we have replaced the intermediate field by \(\{X^{i}\}\), a family of functions depending only on the fields \(M_{i},M_{i}^{\dagger}\).
**Definition 3** (Amplitude of LVR graphs).: \[\mathcal{A}_{(G,T)}(\lambda,N,J)=\frac{(-\lambda)^{|e(G)|}N^{|v (G)|-|e(G)|}}{|v(G)|!}\int dw_{T}\partial_{T}\] (34) \[\int d\mu_{C\{x^{T}\}}(M)\prod_{f\in f(G)}\mathrm{Tr}\left\{ \prod_{c\in\partial f}^{\longrightarrow}\left[\mathbf{1}_{\otimes}+\lambda \Sigma(\lambda,X^{i_{c}})\right]^{-1}(J^{\dagger}J)^{\eta_{c}}\right\}\,,\]
_where:_
* \(\prod_{c\in\partial f}^{\longrightarrow}\) _is the oriented product around the corners_ \(c\) _on the boundary_ \(\partial f\) _of the face_ \(f\)_,_
* \(i_{c}\) _is the label of the vertex the corner_ \(c\) _belongs to,_
* \(\eta_{c}=1,0\) _depending on whether_ \(c\) _is followed by a cilium (_1_) or not (_0_),_
* _the Gaussian measure_ \(\int d\mu_{C\{x^{T}\}}(M)\) _can also be written as the differential operator:_ \[\int d\mu_{C\{x^{T}\}}(M)F\{X^{i}\}=\begin{bmatrix}e^{\frac{x_{ij}^{T}+x_{ji}^ {T}}{2}}\frac{\partial}{\partial M_{i}}\frac{\partial}{\partial M_{j}^{ \dagger}}\ F\Big{\{}\frac{M_{i}M_{i}^{\dagger}}{N}\Big{\}}\end{bmatrix}_{\{M_ {i}\}=0}.\] (35)
To allow this definition to be valid, simply apply the _oriented BKAR formula_ defined in (24)-(25)-(26)-(27) to the amplitude of LVR graphs.
If the graph \((G,T)\) has \(k\) cilia we use for its amplitude the notation \(\mathcal{A}_{G,T}^{k}(\lambda,N,J)\); if the graph \((G,T)\) is reduced to a tree we use the shorthand notation \(\mathcal{A}_{T}^{k}(\lambda,N,J)\) instead of \(\mathcal{A}_{(T,T)}^{k}(\lambda,N,J)\). The amplitude simplifies
drastically in this case: one trace is obtained (as trees have only one face). Hence
\[\mathcal{A}_{T}^{k}(\lambda,N,J)=\frac{(-\lambda)^{|e(T)|}N^{|v(T)|-|e(T)|}}{|v(T)|!}\int dw_{T}\,\partial_{T}\int d\mu_{C\{x^{T}\}}(M)\,\operatorname{Tr}\bigg\{\prod_{c\in\partial f}^{\longrightarrow}\big[\mathbf{1}_{\otimes}+\lambda\Sigma(\lambda,X^{i_{c}})\big]^{-1}(J^{\dagger}J)^{\eta_{c}}\bigg\}\,, \tag{36}\]
where \(f\) is the single face of the tree \(T\).
**Theorem 1** (Constructive expansion for the J-cumulants).: _Let \(k\leq k_{\max}\). The cumulants \(\mathfrak{K}^{k}(\lambda,N,J)\) are
given by the following absolutely convergent expansion_
\[\mathfrak{K}^{k}(\lambda,N,J) = \mathcal{P}^{k}_{n}(\lambda,N,J)+\mathcal{R}^{k}_{n}(\lambda,N,J), \tag{38}\] \[\mathcal{P}^{k}_{n}(\lambda,N,J) = \sum_{\stackrel{{ G\text{ labeled ribbon graph}}}{{ \text{with }k\text{ cilia}}}}\frac{(-\lambda)^{|e(G)|}N^{\chi(G)}}{|v(G)|!}\prod_{f\in b (G)}\mathrm{Tr}[J^{\dagger}J]^{c(f)}\] (39) \[+ \sum_{\stackrel{{(G,T)\text{ LVR graph}}}{{ \text{with }k\text{ cilia}}}}\mathcal{A}^{k}_{(G,T)}(\lambda,N,J),\] \[\mathcal{R}^{k}_{n}(\lambda,N,J) = \sum_{\stackrel{{ T\text{ LVR tree}}}{{ \text{with }k\text{ cilia}}}}\mathcal{A}^{k}_{T}(\lambda,N,J). \tag{40}\]
_This expansion is analytic for any \(\lambda\in\mathcal{C}\) and the remainder at order \(n\) obeys, for a constant \(\sigma\) large enough, the analog of (103)_
\[|\mathcal{R}^{k}_{n}(\lambda,N,J)|=\left|\mathfrak{K}^{k}(\lambda,N,J)-\sum_{ m=0}^{n}a_{m}(N,J)\lambda^{m}\right|\leq\sigma^{n}\left[(p-1)n\right]!\,| \lambda|^{n+1}, \tag{41}\]
_uniformly in \(N\in\mathbb{N}^{*}\), \(J\) such that \(\|J^{\dagger}J\|<\epsilon_{\lambda}\). So it obeys the theorem stated in Appendix B (Borel-LeRoy-Nevanlinna-Sokal) with \(q\to p-1\), \(z\to\lambda\)._
## 4 LVR for Scalar Cumulants
In this section we want to make a distinction between J-dependent cumulants \(\mathfrak{K}^{k}(\lambda,N,J)\) and scalar cumulants \(\mathfrak{K}^{k}_{\pi}(\lambda,N)\). First of all we want to make explicit the dependency on the indices of J
\[\mathfrak{K}_{a_{1}b_{1}c_{1}d_{1},\ldots,a_{k}b_{k}c_{k}d_{k}}(\lambda,N,J):= \mathfrak{K}^{k}(\lambda,N,J). \tag{42}\]
Next we introduce Weingarten functions defined in [13]. As the authors of [13] remark, scalar cumulants arise when integrating over unitary matrices \(\mathrm{U}(N)\) with the invariant normalized Haar measure. Denoting \(U^{*}_{ab}\) the complex conjugate of \(U_{ab}\) we have [29, 30]:
\[\int dU\ U_{a_{1}b_{1}}\ldots U_{a_{k}b_{k}}U^{*}_{c_{1}d_{1}} \ldots U^{*}_{c_{l}d_{l}}= \tag{43}\] \[\delta_{kl}\sum_{\sigma,\tau\in\mathfrak{S}_{k}}\delta_{a_{\tau( 1)c_{1}}}\ldots\delta_{a_{\tau(k)c_{k}}}\delta_{b_{\sigma(1)}d_{1}}\ldots \delta_{b_{\sigma(k)}d_{k}}\mathrm{Wg}(\tau\sigma^{-1},N)\;.\]
The functions \(\mathrm{Wg}(\zeta=\tau\sigma^{-1},N)\) only depend on the cycle structure of \(\zeta\). Here is a few examples of Weingarten functions:
\[\mathrm{Wg}\big{(}(1),N\big{)} =\frac{1}{N} \mathrm{Wg}\big{(}(1,1,1),N\big{)} =\frac{N^{2}-2}{N(N^{2}-1)(N^{2}-4)}\] \[\mathrm{Wg}\big{(}(1,1),N\big{)} =\frac{-1}{N^{2}-1} \mathrm{Wg}\big{(}(1,2),N\big{)} =\frac{-1}{(N^{2}-1)(N^{2}-4)}\] \[\mathrm{Wg}\big{(}(2),N\big{)} =\frac{-1}{N(N^{2}-1)} \mathrm{Wg}\big{(}(3),N\big{)} =\frac{2}{N(N^{2}-1)(N^{2}-4)}\;.\]
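The \(k=1\) case of (43), \(\int dU\,U_{ab}U^{*}_{cd}=\delta_{ac}\delta_{bd}\,\mathrm{Wg}((1),N)=\delta_{ac}\delta_{bd}/N\), is easy to verify numerically; the sketch below is ours, using Haar-random unitaries from scipy:

```python
import numpy as np
from scipy.stats import unitary_group

N, samples = 4, 20000
acc = np.zeros((N, N, N, N), dtype=complex)
for _ in range(samples):
    U = unitary_group.rvs(N)                      # Haar-distributed unitary
    acc += np.einsum("ab,cd->abcd", U, U.conj())  # U_{ab} * conj(U)_{cd}
acc /= samples
print(acc[0, 1, 0, 1].real, 1 / N)                # ~1/N; other entries ~0
```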
For any permutation \(\zeta\in\mathfrak{S}_{k}\) of \(k\) elements, let us write \(C(\zeta)\) for the integer partition of \(k\) associated to the cycle decomposition of \(\zeta\) and \(|C(\zeta)|\) for the number of cycles it contains. Let us also denote by \(\Pi_{k}\) the set of integer partitions of \(k\) (recall that a partition \(\pi\in\Pi_{k}\) is an increasing sequence of \(|\pi|\) integers \(0<k_{1}\leq\cdots\leq k_{|\pi|}\) such that \(k_{1}+\cdots+k_{|\pi|}=k\)). To any integer partition of \(k\) we associate a _trace invariant_:
\[\mathrm{Tr}_{\pi}(X)=\mathrm{Tr}(X^{k_{1}})\cdots\mathrm{Tr}(X^{k_{|\pi|}})\;. \tag{44}\]
Let us choose a permutation \(\zeta\in\mathfrak{S}_{k}\) whose cycle decomposition reproduces the contribution of the _broken faces_ to the amplitude of a graph. Specifically, if there are \(b=|b(G)|\) broken faces with \(k_{1},\ldots,k_{b}\) cilia, we choose \(\zeta\) to have a cycle decomposition of the form:
\[\zeta=(i_{1}^{1}\ldots i_{k_{1}}^{1})\cdots(i_{1}^{b}\ldots i_{k_{b}}^{b})\;. \tag{45}\]
This permutation defines a labeling of the cilia in such a way that the product of traces over the broken faces can be expressed as:
\[\prod_{1\leq m\leq b}\mathrm{Tr}\left[J^{\dagger}J\prod_{1\leq r\leq k_{m}}^{ \longrightarrow}X^{i_{r}^{m}}\right]=\sum_{1\leq p_{1},q_{1}\cdots\leq N} \prod_{1\leq l\leq k}(J^{\dagger}J)_{p_{l}q_{l}}X^{l}_{q_{l}p_{\zeta(l)}}\;, \tag{46}\]
where \(X^{l}\) is the product of the resolvents located on the corners separating the cilia labeled \(l\) and \(\zeta(l)\).
Then the amplitude of a graph in (34) expands in trace invariants as:
\[\mathcal{A}_{(G,T)}(\lambda,N,J)=\sum_{\pi\in\Pi_{k}}\mathcal{A}_{(G,T)}^{\pi }(\lambda,N)\;\,\mathrm{Tr}_{\pi}(J^{\dagger}J)\;, \tag{47}\]
with
\[\mathcal{A}^{\pi}_{(G,T)}(\lambda,N) = \frac{(-\lambda)^{|e(G)|}N^{|v(G)|-|e(G)|}}{|v(G)|!}\int dw_{T} \partial_{T}\int d\mu_{C\{x^{T}\}}(M) \tag{48}\] \[\prod_{f\in f(G)-b(G)}\mathrm{Tr}\left\{\,\prod_{c\in\partial f} ^{\longrightarrow}\left[\mathbf{1}_{\otimes}+\lambda\Sigma(\lambda,X^{i_{c}}) \right]^{-1}\right\}\] \[\sum_{\begin{subarray}{c}\tau,\zeta\in\mathfrak{e}_{k}\\ C(\sigma=\tau\zeta^{-1})=\pi\end{subarray}}\sum_{1\leq p_{1},\ldots,p_{k} \leq N}\mathrm{Wg}(\sigma,N)\prod_{1\leq l\leq k}X^{l}_{p_{\tau(l)}p_{\zeta(l) }}.\]
Simply make explicit the trace invariants of the broken faces of (46) and leave the unbroken faces as they are in (34). If the graph \((G,T)\) is reduced to a tree we use the shorthand notation \(\mathcal{A}^{\pi}_{T}(\lambda,N)\) instead of \(\mathcal{A}^{\pi}_{(T,T)}(\lambda,N)\). So
\[\mathcal{A}^{\pi}_{T}(\lambda,N) = \frac{(-\lambda)^{|e(T)|}N^{|v(T)|-|e(T)|}}{|v(T)|!}\int d\mu_{C \{x^{T}\}}(M) \tag{49}\] \[\sum_{\begin{subarray}{c}\tau,\zeta\in\mathfrak{e}_{k}\\ C(\sigma=\tau\zeta^{-1})=\pi\end{subarray}}\sum_{1\leq p_{1},\ldots,p_{k} \leq N}\mathrm{Wg}(\sigma,N)\prod_{1\leq l\leq k}X^{l}_{p_{\tau(l)}p_{\zeta(l) }}.\]
**Definition 4** (Scalar cumulants).: _The order \(k\) J-cumulants can be written as a sum over partitions of \(k\) and over two permutations of \(k\) elements:_
\[\mathfrak{K}_{a_{1}b_{1}c_{1}d_{1},\ldots,a_{k}b_{k}c_{k}d_{k}}(\lambda,N,J) =\sum_{\pi\in\Pi_{k}}\mathfrak{K}_{\pi}^{k}(\lambda,N)\sum_{\rho, \sigma\in\mathfrak{S}_{k}}\prod_{1\leq l\leq k}\delta_{d_{l},a_{\rho\tau\pi \sigma^{-1}(l)}}\delta_{c_{l},b_{\rho\xi\pi\sigma^{-1}(l)}}, \tag{50}\]
_where \(\tau_{\pi}\) and \(\xi_{\pi}\) are arbitrary permutations such that \(\tau_{\pi}(\xi_{\pi})^{-1}\) has a cycle structure corresponding to the partition \(\pi\) and the scalar cumulants \(\mathfrak{K}_{\pi}^{k}(\lambda,N)\) are given by the expansion:_
\[\mathfrak{K}_{\pi}^{k}(\lambda,N)=\sum_{T\text{ LVR tree with $k$ cilia}}\mathcal{A}^{\pi}_{T}(\lambda,N)\;. \tag{51}\]
Choosing any other pair of permutations \(\tau_{\pi}\) and \(\xi_{\pi}\) leads to an identical result, after reorganizing the sum over \(\rho\) and \(\sigma\). This explains why we call it a _scalar cumulant_: \(\mathfrak{K}_{\pi}^{k}(\lambda,N)\) only depends on the partition \(\pi\) and not on the index structure of \(\mathfrak{K}_{a_{1}b_{1}c_{1}d_{1},\ldots,a_{k}b_{k}c_{k}d_{k}}(\lambda,N,J)\).
The main goal of this section is to establish some analyticity results as well as bounds for the scalar cumulants \(\mathfrak{K}_{\pi}^{k}(\lambda,N)\) regarded as functions of \(\lambda\) inside a cardioid domain, with \(N\) considered as a parameter.
The series restricted to scalar cumulants
\[\mathfrak{K}_{\pi}^{k}(\lambda,N)=\sum_{T\text{ LVR tree with $k$ cilia}}\mathcal{A}_{T}^{\pi}( \lambda,N) \tag{52}\]
defines an analytic function of \(\lambda\in\mathcal{C}\). By further expanding loop edges on each tree, we obtain a perturbative expansion with a well controlled remainder. In order to identify the graphs contributing to \(\mathfrak{K}_{\pi}^{k}(\lambda,N)\), we say that a ciliated ribbon graph has broken faces corresponding to \(\pi\) if the partition of the cilia defined by the broken faces agrees with the partition \(\pi\).
**Theorem 2** (Constructive expansion for scalar cumulants).: _Let \(k\leq k_{\max}\). The expansion_
\[\mathfrak{K}_{\pi}^{k}(\lambda,N)=\sum_{T\text{ LVR tree with $k$ cilia}}\mathcal{A}_{T}^{\pi}( \lambda,N) \tag{53}\]
_defines an analytic function of \(\lambda\in\mathcal{C}\). Moreover, each term in this sum is bounded as:_
\[\left|\mathcal{A}_{T}^{\pi}(\lambda,N)\right|\leq\frac{N^{2-|\pi|}|\lambda|^{ |e(T)|}\,(k!)^{2}\,2^{2k}}{(\cos\frac{\arg\lambda}{(p-1)})^{2|e(T)|+k}\,|v(T)|! }\;, \tag{54}\]
_where \(|\pi|\) is the number of integers in the partition \(\pi\). This expansion reads:_
\[\mathfrak{K}_{\pi}^{k}(\lambda,N)=\sum_{\begin{subarray}{c}G\text{ labeled ribbon graph with $k$ cilia},\\ broken faces corresponding to $\pi$, and $|e(G)|\leq n$}\end{subarray}\frac{(-\lambda)^{|e(G)|}N^{ \chi(G)}}{|v(G)|!}+\mathcal{R}_{\pi,n}^{k}(\lambda,N)\;, \tag{55}\]
_where \(\mathcal{R}_{\pi,n}^{k}(\lambda,N)\) is a sum over LVR graphs with \(k\) cilia, at least \(n+1\) edges and at most \(n+1\) loop edges. This remainder is uniformly analytic for \(\lambda\in\mathcal{C}\), and it obeys the bound, for a constant \(\sigma\) large enough,_
\[\left|\mathcal{R}_{\pi,n}^{k}(\lambda,N)\right| \leq \sigma^{n}\,N^{2-|\pi|}\,[(p-1)n]!\;|\lambda|^{n+1}. \tag{56}\]
_So it obeys the theorem stated in Appendix B (Borel-LeRoy-Nevanlinna-Sokal) with \(q\to p-1\), \(z\to\lambda\)._
## 5 Topological expansion for scalar cumulants
The Taylor expansion at the origin of the cumulants leads to ribbon graphs drawn on surfaces with boundary. The Euler characteristic of such a surface
determines the power of \(N\). This is known as the topological expansion. While it is well known that the contributions of Feynman graphs of fixed genus are analytic functions in a disk of fixed radius \(\frac{1}{12}\), less is known about the remainder. We state an analyticity result and a bound for the remainder of the topological expansion for scalar cumulants of the LVR.
Let us define the cardioid domain
\[\widetilde{\mathcal{C}}=\left\{\lambda\in\mathbb{C}\quad\text{with}\quad|\lambda|<\frac{1}{12}\cos^{2}\Big{(}\frac{\arg\lambda}{2}\Big{)}\right\} \tag{57}\]
as Figure 4.
**Theorem 3** (Constructive topological expansion for the scalar cumulants).: _Let \(k\leq k_{\max}\). The scalar cumulants \(\mathfrak{K}_{\pi}^{k}(\lambda,N)\) are expanded in inverse powers of \(N\) as [13]_
\[\mathfrak{K}_{\pi}^{k}(\lambda,N)=\sum_{h=0}^{\mathfrak{g}}N^{2-2h-|\pi|}\mathfrak{K}_{\pi,h}^{k}(\lambda)+\widetilde{R}_{\pi,\mathfrak{g}}^{k}(\lambda,N)\;. \tag{58}\]
_This topological expansion converges uniformly for \(\lambda\in\widetilde{\mathcal{C}}\); there, the topological remainder is bounded, for a constant \(\sigma\) large enough:_
\[\left|\widetilde{R}_{\pi,\mathfrak{g}}^{k}(\lambda,N)\right| \leq \sigma^{\mathfrak{g}}\,(4\mathfrak{g})!\;N^{2-2(\mathfrak{g}+1)- |\pi|}. \tag{59}\]
_Therefore the rescaled scalar cumulants_
\[N^{-2+|\pi|}\mathfrak{K}_{\pi}^{k}(\lambda,N) \tag{60}\]
_obey the Borel-LeRoy-Nevanlinna-Sokal theorem of Appendix B with \(q=2\), \(n\to 2\mathfrak{g}\), \(z\to 1/N\) and \(\omega,\Omega\to\lambda,\widetilde{\mathbf{C}}\)._
Figure 4: Analyticity domain \(\widetilde{\mathcal{C}}\) of the topological expansion.
Holomorphic Matrix Calculus
We turn now to holomorphic matrix calculus and to contour integrals. In this section we make heavy use of the results of [14], and it is crucial to distinguish between \({\rm Tr}\) and \({\rm Tr}_{\otimes}\). To simplify the notations of this section we often drop the dependency on \(\lambda\) and the superscript \(i\) from \(X^{i}\) when no confusion is possible. For example we often write simply \(A^{i}\) for \(A(\lambda,X^{i})\) and \(\Sigma^{i}\) for \(\Sigma(\lambda,X^{i})\).
Given a holomorphic function \(f\) on a domain containing the spectrum of a square matrix \(X\), Cauchy's integral formula yields a convenient expression for \(f(X)\):
\[f(X)=\oint_{\Gamma}du\frac{f(u)}{u-X}, \tag{61}\]
provided the contour \(\Gamma\) is a _finite_ keyhole contour enclosing all the spectrum of \(X\) (see Figure 5).
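For an entire function the keyhole can be replaced by a simple circle enclosing the spectrum, and (61) is easy to check numerically. In the sketch below (ours) the \(1/2\pi i\) factor is written out explicitly, assuming it is absorbed into \(du\) in the text:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0.6, 0.2],
              [0.1, 0.3]])                   # spectrum inside the unit circle
K = 400                                      # contour sample points
u = np.exp(2j * np.pi * np.arange(K) / K)    # u(theta) on the unit circle
du = u * (2j * np.pi / K)                    # i*u*dtheta with dtheta = 2*pi/K
fX = sum(np.exp(uk) * np.linalg.inv(uk * np.eye(2) - X) * duk
         for uk, duk in zip(u, du)) / (2j * np.pi)
print(np.max(np.abs(fX - expm(X))))          # agrees with exp(X) to ~1e-13
```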
In [14] it is established that
\[A(\lambda,X)=\oint_{\Gamma}du\;a(\lambda,u)\,\frac{1}{u-X}, \tag{62}\]
Figure 5: A _finite_ keyhole contour \(\Gamma^{f}_{r,R,\psi}\) encircling a segment on the real positive \(Ox\) axis, which includes the spectrum of \(X\). The spectrum of \(X\) lies on a real axis positive segment, like the one shown in boldface.
where \(a(\lambda,u)=uT_{p}(-\lambda u^{p-1})\) (see (2),(10)). On the other hand by a useful lemma also proven in [14], we know that
\[\frac{\partial A}{\partial X}=\big{[}\mathbf{1}_{\otimes}+\lambda\Sigma( \lambda,X)\big{]}^{-1}=\big{[}\mathbf{1}_{\otimes}+\lambda\Sigma\big{]}^{-1}. \tag{63}\]
Then we can write the matrix derivative acting on a resolvent. Writing again the superscript \(i\), we obtain
\[\frac{\partial A}{\partial X^{i}}=\big{[}\mathbf{1}_{\otimes}+\lambda\Sigma^{ i}\big{]}^{-1}=\oint_{\Gamma}du\;a(\lambda,u)\frac{1}{u-X^{i}}\otimes\frac{1}{u-X^{i}}. \tag{64}\]
Now we reapply the holomorphic calculus, but in different ways6 depending on the term chosen in the sum over \(k\).
Footnote 6: Our choices below are made in order to allow for the bounds of Section 7.
* For \(k=0\), we apply the holomorphic calculus to the right \(\frac{A^{p-1}(\lambda,X)}{u-X}\) factor, with a contour \(\Gamma_{2}\) surrounding \(\Gamma_{0}\) for a new variable called \(v_{2}\), and we rename \(u\) and \(\Gamma_{0}\) as \(v_{1}\) and \(\Gamma_{1}\) (see Figure 6),
* for \(k=p-1\), we apply the holomorphic calculus to the left \(\frac{A^{p-1}(\lambda,X)}{u-X}\) factor, with a contour \(\Gamma_{2}\) surrounding \(\Gamma_{0}\) for a new variable called \(v_{2}\), and we rename \(u\) and \(\Gamma_{0}\) as \(v_{1}\) and \(\Gamma_{1}\); we obtain a contribution identical to the previous case,
* in all other cases, hence for \(1\leq k\leq p-2\), we apply the holomorphic calculus both to the left and to the right factors in the tensor product, with two variables \(v_{1}\) and \(v_{2}\) and two equal contours \(\Gamma_{1}\) and \(\Gamma_{2}\) enclosing the contour \(\Gamma_{0}\).
The first \(\frac{\partial}{\partial X}\) derivative is a bit special as it destroys forever the logarithm in \(S(\lambda,X)\) and gives
\[\left[\frac{\partial}{\partial X}\right]\operatorname{Tr}_{\otimes}\log\big{[} \mathbf{1}_{\otimes}+\Sigma\big{]}=\big{[}\mathbf{1}_{\otimes}+\Sigma\big{]}^ {-1}\frac{\partial\Sigma}{\partial X}. \tag{65}\]
In this way, defining the loop resolvent
\[\mathcal{R}(v_{1},v_{2},X):=\Big{[}\operatorname{Tr}\frac{1}{v_{1}-X}\Big{]} \Big{[}\operatorname{Tr}\frac{1}{v_{2}-X}\Big{]}, \tag{66}\]
we obtain
\[\frac{\partial S}{\partial\lambda} = -\oint_{\Gamma_{1}}dv_{1}\oint_{\Gamma_{2}}dv_{2}\Big{\{}\oint_{ \Gamma_{0}}du\ a(\lambda,u)\sum_{k=1}^{p-2}\frac{\partial_{\lambda}[\lambda a^ {k}(\lambda,v_{1})a^{p-k-1}(\lambda,v_{2})]}{(v_{1}-u)(v_{2}-u)} \tag{67}\] \[\qquad+\ 2a(\lambda,v_{1})\frac{\partial_{\lambda}\big{[}\lambda a ^{p-1}(\lambda,v_{2})\big{]}}{v_{1}-v_{2}}\Big{\}}{\cal R}(v_{1},v_{2},X).\]
We recall the expression for the J-cumulants
\[{\mathfrak{K}}^{k}(\lambda,N,J) = N^{k}\sum_{n=k}^{\infty}\frac{1}{n!}\sum_{T}\int dw_{T}\partial_ {T}\int d\mu_{C\{x^{T}\}}(M) \tag{68}\] \[\prod_{i=1}^{k}(J_{a_{i}b_{i}}^{\dagger},\big{[}{\bf 1}_{\otimes} +\lambda\Sigma(\lambda,X^{i})\big{]}^{-1}J_{c_{i},d_{i}})\prod_{i=k+1}^{n}e^{ -S(\lambda,X^{i})}.\]
In the manner of [14] we can now commute the functional integral and the contour integration. This results in
\[{\mathfrak{K}}^{k}(\lambda,N,J) = N^{k}\sum_{n=k}^{\infty}\frac{1}{n!}\,\sum_{T}\int\{dw_{T}dtdudv \}\Phi_{n} \tag{69}\] \[\int d\mu_{C\{x^{T}\}}(M)\partial_{T}^{M}[{\cal R}_{n}^{k}(J){ \cal R}_{n}^{k}]\Big{|}_{x_{ij}=x_{ij}^{T}(w)},\]
Figure 6: A keyhole contour \(\Gamma_{1}\) encircling a keyhole contour \(\Gamma_{0}\).
where
\[\partial_{T}^{M} := \prod_{(i,j)\in T}{\rm Tr}_{\otimes}\Big{[}\frac{x_{ij}^{T}+x_{ji}^{ T}}{2}\frac{\partial}{\partial M_{i}}\frac{\partial}{\partial M_{j}^{\dagger}} \Big{]}, \tag{70}\] \[{\cal R}_{n}^{\,k}(J) := \prod_{i=1}^{k}(J_{a_{i}b_{i}}^{\dagger},{\cal R}(v_{1}^{i},v_{2}^ {i},X_{i})J_{c_{i},d_{i}}),\] (71) \[{\cal R}_{n}^{\,k} := \prod_{i=k+1}^{n}{\cal R}(v_{1}^{i},v_{2}^{i},X_{i}), \tag{72}\]
and the symbol \(\int\{dw_{T}dtdudv\}\Phi_{n}\) stands for
\[\int\{dw_{T}dtdudv\}\Phi_{n} = \prod_{i,j\in T}\int_{0}^{1}dw_{ij}\prod_{i=1}^{n}\Bigg{[}\int_{0 }^{\lambda}dt^{i}\oint_{\Gamma_{1}^{i}}dv_{1}^{i}\oint_{\Gamma_{2}^{i}}dv_{2}^ {i} \tag{73}\] \[\Big{\{}\oint_{\Gamma_{0}^{i}}du^{i}\phi(t^{i},u^{i},v_{1}^{i},v_ {2}^{i})+\psi(t^{i},v_{1}^{i},v_{2}^{i})\Big{\}}\Bigg{]}.\]
The trace \({\rm Tr}_{\otimes}\) in (70) can also be thought of as two independent traces \({\rm Tr}\) associated to ordinary loops (hence the name "loop vertex representation"). In (69) they are only coupled through the scalar factors of (73). The nice property of this LVR representation is that it does not break the symmetry between the two factors of the tensor product in (64).
The conditions on the contours \(\Gamma_{r_{j},\psi_{j},R_{j}}^{f}\) for \(j=0,1,2\) can be written as
\[0<\psi_{0}<\min(\psi_{1},\psi_{2})\leq\max(\psi_{1},\psi_{2})<\delta, \tag{74}\] \[0<r_{0}<\min(r_{1},r_{2});\quad\|X\|+1\leq R_{0}<\min(R_{1},R_{ 2}). \tag{75}\]
\(S\) is _not uniformly bounded in \(X\)_ but grows logarithmically at large \(X\). However it _fully disappears in the LVR formulas below_, because these formulas do not use \(S\) but derivatives of \(S\) with respect to the field \(M\) or \(M^{\dagger}\). Hence we may use _infinite_ contours \(\Gamma_{r,\psi}^{\infty}\) which are completely independent of \(X\)[14].
The outcome of applying \(\partial_{T}^{M}\) to \([{\cal R}_{n}^{k}(J){\cal R}_{n}^{k}]\) is a bit cumbersome to write, but the combinatorics has been treated in [14]. For a single loop the Faa di Bruno formula allows us to write this outcome as a sum over a set \(\Pi_{r}^{q,\bar{q}}\) of Faa di Bruno terms, each with prefactor \(1\):
\[\frac{\partial^{r}}{\partial M_{1}\cdots\partial M_{q}\partial M _{1}^{\dagger}\cdots\partial M_{\bar{q}}^{\dagger}}\frac{1}{v-X} = \sum_{\pi\in\Pi_{r}^{q,\bar{q}}}\ {\rm Tr}\,\Big{[}O_{0}^{\pi}\sqcup O_{1}^{\pi} \sqcup\cdots\sqcup O_{r}^{\pi}\Big{]}. \tag{76}\]
In the sum (76) there are exactly \(r\) symbols \(\sqcup\), separating \(r+1\) corner operators \(O_{c}^{\pi}\).
The result of this computation is obtained by identifying the two ends of each pair of \(\sqcup\) symbols along each edge of \(T\). This pairing of the \(2n-2\) \(\sqcup\) symbols then exactly glues the \(2n\) traces of the tensor products present in the \(n\) vertices into \(n+1\) traces. These corner operators can be of four different types: resolvents \(\frac{1}{v-X}\), \(M\)-resolvents \(\frac{1}{v-X}M\), \(M^{\dagger}\)-resolvents \(M^{\dagger}\frac{1}{v-X}\), or the identity operator \({\bf 1}\). We call \(r_{\pi}\), \(r_{\pi}^{M}\), \(r_{\pi}^{M^{\dagger}}\) and \(i_{\pi}\) the number of corresponding operators in \(\pi\). By a lemma proven in [14] we know
\[|\Pi_{r}^{q,\bar{q}}|\leq 2^{r}r!,\quad r_{\pi}=1+i_{\pi},\quad r_{\pi}^{M}+r_{ \pi}^{M^{\dagger}}=r-2i_{\pi}. \tag{77}\]
Applying (76) at each of the two loops of each loop vertex, we get for any tree \(T\)
\[\partial_{T}^{M}\mathop{\rm Tr}\nolimits\frac{1}{v-X}=\prod_{i=1}^{n}\Big{\{} \prod_{j=1}^{2}\Big{[}\sum_{\pi_{j}^{i}\in\Pi_{r_{j}^{i}}^{q_{j}^{i},q_{j}^{i} }}\mathop{\rm Tr}\nolimits\big{(}O_{0}^{\pi_{j}^{i}}\sqcup O_{1}^{\pi_{j}^{i} }\sqcup\cdots\sqcup O_{r_{j}^{i}}^{\pi_{j}^{i}}\big{)}\Big{]}\Big{\}} \tag{78}\]
where the indices of the previous (76) are simply all decomposed into indices for each loop \(j=1,2\) of each loop vertex \(i=1,\cdots,n\).
Exactly as in [14], we simply glue the \(\sqcup\) symbols of (78) into \(n+1\) traces \(\mathop{\rm Tr}\nolimits\). This is the fundamental common feature of the LVE-LVR. Each trace acts on the product of all corners operators \(O^{c}\) cyclically ordered in the way obtained by turning around the connected components \(\bar{T}\). Hence we obtain, with hopefully transparent notations,
\[\partial_{T}^{M}[{\cal R}_{n}^{k}(J){\cal R}_{n}^{k}]\Big{|}_{x_{ ij}=x_{ij}^{T}(w)} = \prod_{i=1}^{n}\prod_{j=1}^{2}\sum_{\pi_{j}^{i}\in\Pi_{r_{j}^{i}} ^{q_{j}^{i},q_{j}^{i}}}\big{[}\mathop{\rm Tr}\nolimits\prod_{c\;\circ\;\bar{T} }O^{c}(J^{\dagger}J)^{\eta_{c}}\big{]}. \tag{79}\]
## 7 The Bounds
We now turn to the proof of Theorem 1, which we decompose into two different parts: the Borel-LeRoy part of the usual perturbative expansion \({\cal P}_{n}^{k}(\lambda,N,J)\) defined in (39), and the Borel-LeRoy part of the remainder, defined in (40).
For the remainder part \({\cal R}^{k}_{n}(\lambda,N,J)\), it is a sum over the _LVR tree amplitudes_ for J-cumulants, and therefore we can use (79). Hence we write
\[{\cal R}^{k}_{n}(\lambda,N,J) = N^{k}\sum_{n=k}^{\infty}\frac{1}{n!}\sum_{\begin{subarray}{c}T\text{ LVR tree}\\ \text{with }k\text{ cilia}\end{subarray}}\int\{dw_{T}dtdudv\}\Phi_{n}F^{k}_{T}(\lambda,N,J,v),\] \[F^{k}_{T}(\lambda,N,J,v) := \int d\mu_{C\{x^{T}\}}\prod_{i=1}^{n}\prod_{j=1}^{2}\sum_{\pi^{i}_{j}\in\Pi^{q^{i}_{j},\bar{q}^{i}_{j}}_{r^{i}_{j}}}\big{[}\operatorname{Tr}\prod_{c\;\circ\;\bar{T}}O^{c}(J^{\dagger}J)^{\eta_{c}}\big{]}. \tag{80}\]
We now bound the functional integral. Since there are exactly \(n+1\) traces, the factors _N exactly cancel_, all operator norms now commute and taking into account (77) we are left with
\[|F^{k}_{T}(\lambda,N,J,v)|\leq K^{n}\int d\mu_{C\{x^{T}\}}\prod_{i=1}^{n}r_{i}!\Big{[}\!\prod_{c\in\bar{T}}\|O^{c}(J^{\dagger}J)^{\eta_{c}}\|\Big{]}_{x_{ij} =\frac{x^{T}_{ij}(w)+x^{T}_{ji}(w)}{2}}. \tag{81}\]
where, like in [14], \(K\) is a constant.
Exactly as in [14], using that \(\sup\{\|M\|,\|M^{\dagger}\|\}\leq\|X\|^{1/2}\), it is easy to now bound, for \(v\)'s on these keyhole contours, the norm of resolvent factors such as \(\|\frac{1}{v^{i}_{j}-X^{i}}\|\) by a constant times \((1+|v^{i}_{j}|)^{-1}\) and the norm of resolvent factors such as \(\|\frac{1}{v^{i}_{j}-X^{i}}M^{i}\|\) or \(\|{M^{i}}^{\dagger}\frac{1}{v^{i}_{j}-X^{i}}\|\) by a constant times \((1+|v^{i}_{j}|)^{-1/2}\). Plugging into (81) we can use again (77) to prove that we get exactly a decay factor \((1+|v^{i}_{j}|)^{-(1+r^{i}_{j}/2)}\) for each of the \(2n\) loops. The corresponding bound being _uniform_ in all \(\pi,\{w\},\{M\}\), and since the integral \(\int d\mu_{C\{x^{T}\}}\) is _normalized_, we get
\[|F^{k}_{T}(\lambda,N,J,v)|\leq K^{n}\;\|J^{\dagger}J\|^{k}\prod_{i=1}^{n}\Big{\{} r_{i}!\prod_{j=1}^{2}(1+|v^{i}_{j}|)^{-(1+r^{i}_{j}/2)}\Big{\}}. \tag{82}\]
Recall that with our notations, \(r_{i}=r^{i}_{1}+r^{i}_{2}\). Since all integrals with respect to \(w\) are _normalized_, i.e.
\[\int dw_{T}\phi(w_{T})=\prod_{(i,j)\in T}\int_{0}^{1}dw_{ij}\phi(\{w_{ij}\}) \leq\|\phi(\{w_{ij}\})\|, \tag{83}\]
we have simply to bound
\[\int\{dw_{T}dtdudv\}\Phi_{n}\leq\|\int\{dtdudv\}\Phi_{n}\|. \tag{84}\]
Now we bound the contour integral \(\int\{dtdudv\}\Phi_{n}\) exactly like in [14]. Finally, since each vertex has at least one contour operator, the number of \(|\lambda|^{\frac{1}{4p^{2}}}\) factors in the bound is at least \(n\). Taking into account that the number of (labeled) trees \(T\) is bounded by \(K^{n}n!\) for some constant \(K\), we arrive at
\[|\mathcal{R}^{k}_{n}(\lambda,N,J)| \leq N^{k}\|J^{\dagger}J\|^{k}\sum_{n=k}^{\infty}K^{n}|\lambda|^{n+2+\frac{n}{4p^{2}}} \tag{85}\] \[\leq K^{k}N^{k}\|J^{\dagger}J\|^{k}, \tag{86}\]
uniformly in \(N\in\mathbb{N}^{*}\), \(J\).
So where is, one may ask, the crux of Theorem 1? It is in the "Borel-LeRoy part" of the perturbative expansion! For the perturbative expansion, defined in (39),
\[\sum_{\begin{subarray}{c}G\text{ labeled ribbon graph}\\ \text{with }k\text{ cilia}\\ |e(G)|\leq n\end{subarray}}\cdots\]
**Lemma 2**.: _For \(N\) large enough,_
\[|\mbox{Wg}(\sigma,N)|<\frac{2^{2k}}{N^{2k-|C(\sigma)|}}. \tag{90}\]
This second lemma can be deduced from the asymptotic behavior of the Weingarten functions [29, 30]. Armed with these two lemmas, it is easy to prove Theorem 3.
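As a quick numerical sanity check of Lemma 2 (an illustration only, not part of the proof), the unitary Weingarten function is known in closed form at low orders: \(\mathrm{Wg}=1/N\) for \(k=1\), and \(1/(N^{2}-1)\), \(-1/(N(N^{2}-1))\) for the identity and the transposition at \(k=2\). The sketch below compares these values with the bound (90).

```python
# Sanity check of the bound (90) at orders k = 1, 2, using the standard
# closed forms of the unitary Weingarten function (an illustration only).
def wg(k, cycles, N):
    if k == 1:                    # sigma = identity in S_1
        return 1.0 / N
    if k == 2 and cycles == 2:    # sigma = identity in S_2
        return 1.0 / (N**2 - 1)
    if k == 2 and cycles == 1:    # sigma = the transposition (12)
        return -1.0 / (N * (N**2 - 1))
    raise ValueError("only k = 1, 2 are tabulated here")

def rhs_bound(k, cycles, N):
    return 2.0 ** (2 * k) / N ** (2 * k - cycles)   # right-hand side of (90)

for N in (2, 5, 10, 100):
    ok = all(abs(wg(k, c_, N)) < rhs_bound(k, c_, N)
             for k, c_ in ((1, 1), (2, 2), (2, 1)))
    print(f"N = {N:>3}: |Wg| < 2^(2k)/N^(2k-|C|) at k = 1, 2 -> {ok}")
```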
## 8 Appendix A: the \((M^{\dagger}M)^{3}\) theory
In this Appendix we define \(g=(-\lambda)^{\frac{1}{3}}=e^{i\pi/3}\lambda^{1/3}\), \(\lambda=-g^{3}\) and \(z=\mbox{Tr}\,g^{3}(\frac{M^{\dagger}M}{N})^{2}=-\lambda\,\mbox{Tr}(\frac{M^{\dagger}M}{N})^{2}\); moreover we again simplify the notation by writing \(T\), \(Z\), \(S\) rather than \(T_{3}\), \(Z_{3}(\lambda,N,J)\), \(S_{3}(\lambda,X)\), etc.
We recall Equation (3) in the case \(p=3\)
\[zT^{3}(z)-T(z)+1=0, \tag{91}\]
which is soluble by radicals. Introducing
\[u:=-\frac{27z}{4}=-\frac{27}{4}\,\mbox{Tr}\,g^{3}(\frac{M^{\dagger}M}{N})^{2} =\frac{27}{4}\lambda\,\mbox{Tr}(\frac{M^{\dagger}M}{N})^{2}, \tag{92}\]
Cardano's solution is
\[T(z)=\frac{\Delta_{+}(u)-\Delta_{-}(u)}{\sqrt{-3z}}=1+z+3z^{2}+\cdots, \tag{93}\]
where
\[\Delta_{\pm}(u):=\left(\sqrt{1+u}\pm\sqrt{u}\right)^{1/3}=1\pm\frac{1}{3}\sqrt {u}+\frac{u}{18}\mp\frac{4u^{3/2}}{81}-\frac{35u^{2}}{1944}+\cdots. \tag{94}\]
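The expansion (94) and the identity \(\Delta_{+}\Delta_{-}=1\) (used in the simplification leading to (100)) can be checked symbolically. Here is a minimal sketch, assuming SymPy is available; substituting \(u=s^{2}\) turns the half-integer powers of \(u\) into an ordinary Taylor series in \(s\).

```python
# Symbolic check of the expansion (94), assuming SymPy: substituting
# u = s**2 turns the half-integer powers of u into a Taylor series in s.
import sympy as sp

s = sp.symbols('s', positive=True)
delta_plus  = (sp.sqrt(1 + s**2) + s) ** sp.Rational(1, 3)
delta_minus = (sp.sqrt(1 + s**2) - s) ** sp.Rational(1, 3)

print(sp.series(delta_plus, s, 0, 5))
#  1 + s/3 + s**2/18 - 4*s**3/81 - 35*s**4/1944 + O(s**5)
print(sp.series(delta_minus, s, 0, 5))
#  1 - s/3 + s**2/18 + 4*s**3/81 - 35*s**4/1944 + O(s**5)

# The product Delta_+ * Delta_- equals 1, the identity exploited before (100).
print((delta_plus * delta_minus).subs(s, 0.37).evalf())  # -> 1.00000000000000
```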
Defining \(h(u):=\frac{1}{\sqrt{1+u}}\), we can compute the derivatives
\[\Delta^{\prime}_{\pm}=\frac{d}{du}\Delta_{\pm}(u) = \frac{1}{6}\bigg{(}(1+u)^{-1/2}\pm u^{-1/2}\bigg{)}\bigg{(}\sqrt{ 1+u}\pm\sqrt{u}\bigg{)}^{-2/3} \tag{95}\] \[= \pm\frac{1}{6\sqrt{u(1+u)}}\Delta_{\pm}(u)=\pm\frac{h}{6\sqrt{u} }\Delta_{\pm}(u).\]
Hence
\[zT^{\prime}(z)=\frac{27\sqrt{-z}}{4\sqrt{3}}[\Delta_{+}^{\prime}(u)-\Delta_{-}^{\prime}(u)]-\frac{1}{2\sqrt{-3z}}[\Delta_{+}(u)-\Delta_{-}(u)]. \tag{96}\]
It gives
\[S = \log Z=-\frac{1}{2}\log(1+u)+\log\frac{\Delta_{+}+\Delta_{-}}{2} \tag{97}\]
The \(u\) derivatives of \(S\) give access to its \(g\) derivatives since \(u=-\frac{27}{4}g^{3}\,{\rm Tr}(\frac{M^{\dagger}M}{N})^{2}\), hence
\[\frac{\partial u}{\partial g}=-\frac{81}{4}g^{2}\,{\rm Tr}(\frac{M^{\dagger}M }{N})^{2}. \tag{98}\]
For instance
\[\frac{\partial S}{\partial g} = \frac{81g^{2}\,{\rm Tr}(\frac{M^{\dagger}M}{N})^{2}}{4}\biggl{(} \frac{1}{2(1+u)}-\frac{\Delta_{+}^{\prime}+\Delta_{-}^{\prime}}{\Delta_{+}+ \Delta_{-}}\biggr{)} \tag{99}\] \[= \frac{81g^{2}\,{\rm Tr}(\frac{M^{\dagger}M}{N})^{2}}{8}\biggl{(} h^{2}-\frac{h}{3\sqrt{u}}\frac{\Delta_{+}-\Delta_{-}}{\Delta_{+}+\Delta_{-}} \biggr{)}.\]
We remark that the quotient \(\frac{\Delta_{+}-\Delta_{-}}{\Delta_{+}+\Delta_{-}}=\frac{A-B}{A+B}\) for \(A=\Delta_{+}\), \(B=\Delta_{-}\) simplifies, using that \((A+B)(A^{2}-AB+B^{2})=A^{3}+B^{3}\) and \((A-B)(A^{2}-AB+B^{2})=A^{3}-B^{3}-2AB(A-B)\). Remarking that in our case \(AB=\Delta_{+}\Delta_{-}=1\) we find
\[\frac{\Delta_{+}-\Delta_{-}}{\Delta_{+}+\Delta_{-}} = h[\sqrt{u}-(\Delta_{+}-\Delta_{-})] \tag{100}\] \[\frac{\partial S}{\partial g} = \frac{81g^{2}\,{\rm Tr}(\frac{M^{\dagger}M}{N})^{2}}{8}\biggl{(} h^{2}-\frac{h^{2}}{3\sqrt{u}}[\sqrt{u}-(\Delta_{+}-\Delta_{-})]\biggr{)}\] (101) \[= \frac{81g^{2}\,{\rm Tr}(\frac{M^{\dagger}M}{N})^{2}}{8}\biggl{[} \frac{2}{3}h^{2}+\frac{h^{2}}{3\sqrt{u}}(\Delta_{+}-\Delta_{-})\biggr{]}\] \[= \frac{27g^{2}\,{\rm Tr}(\frac{M^{\dagger}M}{N})^{2}}{4(1+u)} \biggl{[}1+\frac{\Delta_{+}-\Delta_{-}}{2\sqrt{u}}\biggr{]}\]
from which we find
\[\mathfrak{K}^{1}_{abcd}(\lambda,N,J)=\delta(a,d)\delta(b,c)\biggl{[} 1+\frac{g}{Z(\lambda)}\int dMdM^{\dagger}\frac{\partial S}{\partial g}e^{-S} \biggr{]}\] \[=\delta(a,d)\delta(b,c)\biggl{[}1-\frac{27\lambda}{4Z(\lambda)} \int dMdM^{\dagger}\frac{{\rm Tr}(\frac{M^{\dagger}M}{N})^{2}}{1+u}\bigl{[}1 +\frac{\Delta_{+}-\Delta_{-}}{2\sqrt{u}}\bigr{]}e^{-S}\biggr{]}\] \[\lim_{N\to\infty}\mathfrak{K}^{1}_{abcd}(\lambda,N,J)=\delta(a, d)\delta(b,c)[1-18\lambda+\cdots].\]
## 9 Appendix B
We recall the Borel-LeRoy-Nevanlinna-Sokal theorem [25, 26, 27].
**Theorem 4**.: _Let \(q\in\mathbb{N}^{*}\). Let \(F_{\omega}(z)\) be a family of analytic functions on the domain_
\[D_{R}=\{z:\Re z^{-\frac{1}{q}}>(2R)^{-1}\}=\{z:|z|<(2R)^{q}\cos^{q}(\frac{ \arg z}{q})\} \tag{102}\]
_depending on some parameter \(\omega\in\Omega\), and such that, for some \(\sigma\in\mathbb{R}_{+}\),_
\[|R_{n}(z)|=\left|F_{\omega}(z)-\sum_{m=0}^{n}a_{m}(\omega)z^{m}\right|\leq \sigma^{n}(qn)!\,|z|^{n+1} \tag{103}\]
_uniformly in \(D_{R}\) and \(\omega\in\Omega\). Then the formal expansion_
\[\sum_{n=0}^{\infty}s^{qn}\frac{a_{n}(\omega)}{(qn)!} \tag{104}\]
_is convergent for small \(s\) and determines a function \(B_{\omega}(s^{q})\) analytic in_
\[\Sigma_{\sigma}=\{s:\mbox{dist}(s,\mathbb{R}_{+})<\sigma^{-1}\} \tag{105}\]
_and such that_
\[|B_{\omega}(s^{q})|\leq B\exp\big{(}\frac{|s|}{R}\big{)} \tag{106}\]
Figure 7: Domain of analyticity of \(F\) and of its Borel transform for \(q=1\).
_uniformly in \(\Sigma_{\sigma}\) (in (106), \(B\) is a constant, that is, independent of \(\omega\)). Moreover, setting \(t=s^{q}\),_
\[F_{\omega}(z)=\frac{1}{qz}\int_{0}^{\infty}B_{\omega}(t)\,\left(\frac{t}{z} \right)^{\frac{1}{q}-1}\exp\bigg{(}-\left(\frac{t}{z}\right)^{\frac{1}{q}} \bigg{)}\;dt \tag{107}\]
_for all \(z\in D_{R}\). Conversely, if \(F_{\omega}(z)\) is given by (107), with the above properties for \(B_{\omega}(s^{q})\), then it satisfies remainder estimates of the type (103) uniformly, in any \(D_{r}\) such that \(0<r<R\), and in \(\omega\in\Omega\)._
For Theorems 1 and 2 in the core of this article, take \(q\to p-1\), \(z\to\lambda\). For Theorem 3, take \(q\to 2\), \(n\to\mathfrak{g}\), \(z\to 1/N\).
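To make the statement concrete, here is a minimal numerical sketch of Theorem 4 in the textbook case \(q=1\), \(B(t)=1/(1+t)\) (the Euler series \(\sum(-1)^{n}n!\,z^{n}\)); the choice of example and the use of SciPy are illustrative assumptions, not part of the theorem.

```python
# A toy instance of Theorem 4 with q = 1 (an illustration only): the Euler
# series sum((-1)^n n! z^n) is divergent, but its Borel transform
# B(t) = 1/(1+t) reproduces F through the integral (107).
import math
from scipy.integrate import quad

def F(z):
    # F(z) = (1/z) * int_0^oo B(t) exp(-t/z) dt  with  B(t) = 1/(1+t),
    # rewritten as int_0^oo exp(-t) / (1 + z t) dt after rescaling t -> z t.
    val, _ = quad(lambda t: math.exp(-t) / (1.0 + z * t), 0.0, math.inf)
    return val

z = 0.1
partial = 0.0
for n in range(8):
    partial += (-1) ** n * math.factorial(n) * z ** n
    remainder = abs(F(z) - partial)
    bound = math.factorial(n + 1) * z ** (n + 1)   # (103) with sigma = 1, q = 1
    print(f"n = {n}: |R_n| = {remainder:.3e} <= {bound:.3e}: {remainder <= bound}")
```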
|
2306.11398
|
Revisiting the Direct Fourier Filtering Technique for the Maximal Decay Rate of Boundary-damped Wave Equation by Finite Differences and Finite Elements
|
The one-dimensional PDE model of the wave equation with a state feedback
controller at its boundary, which describes wave dynamics of a wide-range of
controlled mechanical systems, has exponentially stable solutions. However, it
is known that the reduced models of the wave equation by the standard Finite
Differences and Finite Elements suffer from the lack of exponential stability
(and exact observability without a state feedback controller) uniformly as the
discretization parameter tends to zero. This is due to the loss of uniform gap
among the high-frequency eigenvalues as the discretization parameter tends to
zero. One common remedy to overcome this discrepancy is the direct Fourier
filtering of the reduced models, where the high-frequency spurious eigenvalues
are filtered out. After filtering, besides the strong convergence, the
exponential decay rate, mimicking the one for the partial differential equation
counterpart, can be retained uniformly. However, the existing results in the
literature are solely based on an observability inequality of the control-free
model, to which the filtering is implemented. Moreover, the decay rate as a
function of the filtering parameter is implicit. In this paper, exponential
stability results for both filtered Finite Difference and Finite Element
reduced models are established directly by a Lyapunov-based approach and a
thorough eigenvalue estimation. The maximal decay rate is explicitly provided as
a function of the feedback gain and filtering parameter. Our results,
expectedly, mimic the ones of the PDE counterpart uniformly as the
discretization parameter tends to zero. Several numerical tests are provided to
support our results.
|
Ahmet Ozkan Ozer, Rafi Emran
|
2023-06-20T09:04:08Z
|
http://arxiv.org/abs/2306.11398v1
|
Revisiting the Direct Fourier Filtering Technique for the Maximal Decay Rate of Boundary-damped Wave Equation by Finite Differences and Finite Elements
###### Abstract
The one-dimensional PDE model of the wave equation with a state feedback controller at its boundary, which describes wave dynamics of a wide-range of controlled mechanical systems, has exponentially stable solutions. However, it is known that the reduced models of the wave equation by the standard Finite Differences and Finite Elements suffer from the lack of exponential stability (and exact observability without a state feedback controller) uniformly as the discretization parameter tends to zero. This is due to the loss of uniform gap among the high-frequency eigenvalues as the discretization parameter tends to zero. One common remedy to overcome this discrepancy is the direct Fourier filtering of the reduced models, where the high-frequency spurious eigenvalues are filtered out. After filtering, besides the strong convergence, the exponential decay rate, mimicking the one for the partial differential equation counterpart, can be retained uniformly. However, the existing results in the literature are solely based on an observability inequality of the control-free model, to which the filtering is implemented. Moreover, the decay rate as a function of the filtering parameter is implicit. In this paper, exponential stability results for both filtered Finite Difference and Finite Element reduced models are established directly by a Lyapunov-based approach and a thorough eigenvalue estimation. The maximal decay rate is explicitly provided as a function of the feedback gain and filtering parameter. Our results, expectedly, mimic the ones of the PDE counterpart uniformly as the discretization parameter tends to zero. Several numerical tests are provided to support our results.
wave equation; boundary feedback stabilization; computational issues; model reductions; finite differences; finite elements; numerical Fourier filtering; maximal decay rate.
35Q60, 35Q93; 74F15; 35Q74, 93B52
## 1 Introduction
Let dots denote the derivatives with respect to the time variable \(t\). Consider the standard one-dimensional wave equation clamped at the left. The boundary feedback is injected at the free end:
\[\left\{\begin{array}{l}\ddot{v}-c^{2}v_{xx}=0,\quad(x,t)\in(0,L)\times \mathbb{R}^{+}\\ v\left(0,t\right)=0,\quad c^{2}v_{x}\left(L,t\right)=-\xi v_{t}\left(L,t \right),\ t\in\mathbb{R}^{+}\\ \left[v,v_{t}\right]\left(x,0\right)=\left[v_{0},v_{1}\right]\left(x\right), \quad x\in\left[0,L\right]\end{array}\right. \tag{1}\]
where \(c>0\) is the wave propagation speed and \(\xi>0\) is the feedback gain. The closed-loop system (1) is widely used in the literature to model transverse vibrations of a string [11], longitudinal vibrations of a piezoelectric beam [17], or sound vibrations in a duct [26], etc.
The natural energy of the solutions for (1) is defined as
\[E(t)=\frac{1}{2}\int_{0}^{L}\left[|\dot{v}|^{2}+c^{2}|v_{x}|^{2}\right]dx. \tag{2}\]
Defining the state \(\vec{\psi}=(\psi^{1},\psi^{2})^{\rm T}=(v,\dot{v})^{\rm T}\), the system (1) can be formed into the first-order form
\[\left\{\begin{array}{l}\dot{\vec{\psi}}=\mathcal{A}\vec{\psi}=\begin{bmatrix}0&I\\ c^{2}\frac{\partial^{2}}{\partial x^{2}}&0\end{bmatrix}\vec{\psi},\quad t\in\mathbb{R}^{+}\\ \psi^{1}(0)=0,\quad c^{2}\psi^{1}_{x}(L)=-\xi\psi^{2}(L),\\ \vec{\psi}(x,0)=\vec{\psi}_{0}(x).\end{array}\right. \tag{3}\]
In fact, the eigenvalues \(\{\lambda_{k}(\xi)\}_{k\in\mathbb{Z}}\) of the non-self-adjoint system operator \(\mathcal{A}\) in (3) are calculated explicitly [2] as the following
\[\lambda_{k}(\xi)=-\tfrac{c}{2L}\ln\left|\tfrac{\xi+c}{\xi-c}\right|+\tfrac{(2k+1)\pi c}{2L}i,\quad\xi<c. \tag{4}\]
Therefore, the fed-back sensor measurement \(\dot{v}(L,t)\) is sufficient to push all the eigenvalues to the left-half plane, uniformly bounded away from the imaginary axis. This leads to the known exponential stability result [2, 10].
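For a concrete picture of (4), the following sketch (taking \(c=L=1\) for illustration) evaluates the common real part of the eigenvalues for several gains; it drifts to \(-\infty\) as \(\xi\to c^{-}\).

```python
# Evaluating (4) for c = L = 1 (illustrative values): every eigenvalue has
# the same real part, which drifts to -infinity as the gain xi approaches c.
import numpy as np

c, L = 1.0, 1.0
for xi in (0.1, 0.5, 0.9, 0.99):
    re = -(c / (2 * L)) * np.log(abs((xi + c) / (xi - c)))
    ims = (2 * np.arange(3) + 1) * np.pi * c / (2 * L)   # Im parts, k = 0, 1, 2
    print(f"xi = {xi:4.2f}: Re = {re:8.3f}, first Im parts = {np.round(ims, 3)}")
```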
The same exponential stability result can also be proved directly by a Lyapunov approach. A Lyapunov function \(E_{\delta}(t)\) is constructed by a \(\delta-\)perturbation of the energy \(E(t)\) by a standard energy-like functional \(F(t)\) as the following
\[F(t):=\int_{0}^{L}2\dot{v}\,x\,v_{x}\,dx,\quad E_{\delta}(t):=E(t)+\delta F(t). \tag{5}\]
**Theorem 1**.: [15, Chap. 5] _Let \(H^{1}_{L}(0,L):=\{v\in H^{1}(0,L)\ :\ v(0)=0\}\). There exist constants \(M,\delta,\sigma>0\) such that for all initial data \((v^{0},v^{1})\in H^{1}_{L}(0,L)\times L^{2}(0,L)\), the solutions \((v,\dot{v})\) of the controlled system (1) with \(0<\xi<c\) are exponentially stable, i.e._
\[E(v,\dot{v};t)\leq M(\delta)E(v_{0},v_{1};0)e^{-\sigma(\delta(\xi))t}\]
_where_
\[\delta(\xi)=\tfrac{1}{2}\min\left\{\tfrac{c}{2L},\quad\tfrac{c\xi^{2}}{L(c^{2 }+\xi^{2})}\right\},\quad\sigma(\delta)=2\delta\left(1-\tfrac{2L\delta}{c} \right),\quad M(\delta)=\tfrac{c+2L\delta}{c-2L\delta}. \tag{6}\]
_Hence, the maximal decay rate \(\sigma_{max}=\tfrac{c}{4L}\) is attained with the optimal feedback gain \(\xi=c\). Indeed, at \(\xi=c\), the spectral abscissa approaches \(-\infty\). Therefore, the solutions disappear in finite time._
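The constants in (6) are explicit and easy to evaluate; the sketch below (again with \(c=L=1\), an illustrative choice) tabulates \(\delta(\xi)\), \(\sigma(\delta(\xi))\) and \(M(\delta)\) for a few gains, showing the guaranteed rate improving toward \(\sigma_{max}=c/4L\).

```python
# Tabulating the constants (6) of Theorem 1 for c = L = 1 (illustrative):
# the guaranteed rate sigma(delta(xi)) increases toward sigma_max = c/(4L).
c, L = 1.0, 1.0
for xi in (0.2, 0.5, 0.8, 0.95):
    delta = 0.5 * min(c / (2 * L), c * xi**2 / (L * (c**2 + xi**2)))
    sigma = 2 * delta * (1 - 2 * L * delta / c)
    M = (c + 2 * L * delta) / (c - 2 * L * delta)
    print(f"xi = {xi:4.2f}: delta = {delta:.4f}, sigma = {sigma:.4f}, M = {M:.3f}")
```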
It is also known that the system (1) with \(\xi=0\) is exactly observable in the energy space \(H^{1}_{L}(0,L)\times L^{2}(0,L)\). Indeed, proving Theorem 1 is mathematically equivalent to proving a so-called observability inequality for the control-free problem, i.e. \(\xi=0\).
**Theorem 2**.: [10, Chap. 3] _Consider the solutions \((v,\dot{v})\in H^{1}_{L}(0,L)\times L^{2}(0,L)\) of the control-free system (1), i.e. \(\xi=0\). For any \(T>\tfrac{2L}{c}\), there exists a constant \(C(T)>0\) such that_
\[\int_{0}^{T}|\dot{v}(L,t)|^{2}\ dt\geq C(T)E(0) \tag{7}\]
_for all initial data \((v_{0},v_{1})\in H^{1}_{L}(0,L)\times L^{2}(0,L)\)._
It is widely known that Theorem 1 above fails to hold as the discretization parameter \(h\) in Finite Difference and Finite Element-based approximations tends to zero. This discrepancy was first observed in the pioneering work [2], where the widely used model reductions failed to mimic the PDE counterpart (1). The lack of exponential stability of Finite Difference and Finite Element-based model reductions as the discretization parameter approaches zero is shown rigorously in [21] by thorough spectral estimates. Indeed, it is observed that the high-frequency eigenvalues tend to the imaginary axis as the discretization parameter approaches zero. Later, it is shown by a multipliers approach that the lack of exponential stability is due to the lack of exact observability of the control-free
model, i.e. \(\xi\equiv 0\), as the discretization parameter in both Finite Difference and Finite Element approximations tends to zero. In other words, the observability constant \(C(T)\) in (7) of Theorem 2 turns out to blow up to infinity as the discretization parameter approaches zero. Investigating this discrepancy further shows that the high-frequency eigenvalues tend to a specific value as the discretization parameter approaches zero. To remedy this issue, a "direct Fourier filtering technique", controlling only the low-frequency part of the solution in order to eliminate the short wave-length (high-frequency) components of the solutions, is proposed for the first time in [9] for fully-clamped boundary conditions. With this approach, the observability result is fully recovered; see also the detailed review paper on this issue [25]. Since the numerical approximation is proved to converge to the PDE counterpart, the high-frequency components of the solutions can be retained by choosing the discretization parameter small enough.
The first attempt at a Finite-Difference-based numerical scheme for (1) is reported in [23]. The proof of the exponential stability result is solely based on an observability result, and a numerical viscosity term is artificially added to the system (1), which is referred to as the "indirect filtering" technique in the literature. The proof of exponential stability uses the decomposition of the PDE model (1) into a control-free problem with non-zero initial conditions and a controlled problem with zero initial conditions. The techniques in the proof prevent a rigorous investigation of the maximal decay rate in terms of the numerical filtering parameter. Later on, with the direct Fourier filtering technique, as in [9], the exponential stability for the Finite Difference-based model is shown to be retained [5] as the discretization parameter approaches zero. The proof outlined in [5] is based on a decomposition technique similar to that in [23], yet the "direct Fourier filtering" is implemented on the control-free model, \(\xi=0\) in (1). The major drawback of the exponential stability results in both [5] and [23] is that the proof solely relies on an observability result for the control-free model. This, together with the decomposition argument, makes the maximal decay rate analysis, and therefore the analysis of finding the optimal feedback gain to achieve the maximal decay rate, more complicated.
It is worthwhile to mention that an alternate semi-discretized Finite-Difference-based model reduction, based on reducing the order of the system (1) in the time and space variables, is reported in [14, 22]. This type of model reduction, similar to the one by mixed finite elements [2, 3, 16], does not need any numerical filtering. Several other remedies worth reading about have been proposed in the literature, e.g. Tikhonov regularization [8], non-uniform meshes [6], etc.
In this paper, following the work in [21], the spectral investigation of the non-self-adjoint system operator \(\mathcal{A}\) in (3) is extended to the estimation of the maximum modulus of the eigenvalues of the system matrices for the Finite Difference and Finite Element-based model reductions of (1). It is proved that both reduced models lack uniform observability as the mesh parameter \(h\to 0\). By implementing the direct Fourier filtering technique on the closed-loop model reductions directly, the exponential stability of the model reductions is recovered uniformly with respect to the mesh parameter. The exponential stability results mimic the ones in Theorem 1, and the proofs are solely based on a Lyapunov approach, discrete multipliers, and thorough spectral estimates. Maximal decay rates are also provided for each approximation technique as a function of the filtering parameter.
The overall methodology presented here not only extends the results in [7, 5, 23] but also provides better insight into the overall exponential stability of the Fourier-filtered solutions of (1) by Finite Differences and Finite Elements [2, 21]. To the best of our knowledge, the exponential stability of the Finite Elements with direct Fourier filtering has not been reported at all. More importantly, our analysis is applicable to large collections of wave and beam models.
## 2 Semi-discretizations of (1) in the \(x-\)variable
Let \(N\in\mathbb{N}\) be given, and define the mesh size \(h:=\frac{L}{N+1}\). Consider a uniform discretization of the interval \([0,L]\): \(0=x_{0}<x_{1}<...<x_{N-1}<x_{N}<x_{N+1}=L\).
### Finite Differences with \(\xi\equiv 0\)
Let \(v_{j}=v_{j}(t)\approx v(x_{j},t)\) be the approximation of the solution \(v(x,t)\) of (1) at the grid point \(x_{j}=j\cdot h\) for any \(j=0,1,...,N,N+1\), and \(\vec{v}=[v_{1},v_{2},...,v_{N}]^{T}\). Now consider the central-difference approximation \(v_{xx}(x_{j})\approx(-A_{h}\vec{v})_{j}\) with the matrix \(A_{h}\) defined by
\[A_{h}^{FD}:=\frac{c^{2}}{h^{2}}\begin{bmatrix}2&-1&0&\ldots&\ldots&\ldots&0\\ -1&2&-1&0&\ldots&\ldots&0\\ &\ddots&\ddots&\ddots&\ddots&\ddots&\\ 0&\ldots&\ldots&0&-1&2&-1\\ 0&\ldots&\ldots&\ldots&0&-1&1\end{bmatrix}_{N\times N} \tag{8}\]
whose eigen-pairs \((\mu_{k}^{FD}(h),\vec{\phi}_{k}(h))\) are [5]:
\[\left\{\begin{array}{l}\mu_{k}^{FD}(h)=\frac{4c^{2}}{h^{2}}\sin^{2}\left( \frac{(2k-1)\pi h}{2(2-h)L}\right),\\ \phi_{k,j}=\sin\left(\frac{(2k-1)j\pi h}{L(2-h)}\right),\quad k,j=1,2,...,N. \end{array}\right. \tag{9}\]
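The eigenvalue formula (9) can be verified directly against a numerical eigensolver; the sketch below (an illustration assuming \(L=c=1\), so that \(h=1/(N+1)\) and the denominator in (9) reads \(2(2-h)\)) does so with NumPy.

```python
# Verification of (9) against a dense eigensolver, assuming L = c = 1 so
# that h = 1/(N+1); an illustrative check, not part of the proof.
import numpy as np

N, c = 8, 1.0
h = 1.0 / (N + 1)
A = (c**2 / h**2) * (2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
A[-1, -1] = c**2 / h**2          # last row of (8) ends with (-1, 1), not (-1, 2)
mu_numeric = np.sort(np.linalg.eigvalsh(A))
k = np.arange(1, N + 1)
mu_formula = np.sort((4 * c**2 / h**2)
                     * np.sin((2 * k - 1) * np.pi * h / (2 * (2 - h)))**2)
print(np.max(np.abs(mu_numeric - mu_formula)))   # agreement up to rounding
```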
Considering no feedback \(\xi\equiv 0\) in (1), the following discretization is obtained
\[\left\{\begin{array}{l}\ddot{\vec{v}}+A_{h}^{FD}\vec{v}=0,\quad t\in\mathbb{R}^{+}\\ v_{0}=0,\quad v_{N+1}-v_{N}=0,\\ v_{j}(0)=v_{j}^{0},\ \dot{v}_{j}(0)=v_{j}^{1},\quad j=0,...,N+1.\end{array}\right. \tag{10}\]
The discretized energy corresponding to (2) is
\[E_{h,0}^{FD}(t):=\tfrac{h}{2}\sum\limits_{j=0}^{N}\left[\rho\left|\dot{v}_{j}\right|^{2}+c^{2}\left|\tfrac{v_{j+1}-v_{j}}{h}\right|^{2}\right]. \tag{11}\]
Defining \(\mathcal{A}_{h}^{FD}\) by \(\mathcal{A}_{h}^{FD}(\vec{u}_{1},\vec{u}_{2}):=\left(\vec{u}_{2},-A_{h}^{FD} \vec{u}_{1}\right),\) and calling \(\vec{y}_{h}=(\vec{v},\dot{\vec{v}})\), (10) can be reformulated as
\[\frac{d}{dt}\vec{y}_{h}=\mathcal{A}_{h}^{FD}\vec{y}_{h}. \tag{12}\]
**Lemma 1**.: [5] _For \(K:=\{-N,...,-1,1,...N\}\), the eigen-pairs \(\{(\lambda_{k}^{FD}(h),\vec{\varphi}_{k}(h))\}_{K}\) of \(\mathcal{A}_{h}^{FD}\) are_
\[(\lambda_{k}^{FD},\vec{\varphi}_{k})=\left(i\sqrt{\mu_{k}^{FD}(h)},\left[\tfrac{1}{i\sqrt{\mu_{k}^{FD}(h)}}\vec{\phi}_{k}(h),\ \vec{\phi}_{k}(h)\right]\right), \tag{13}\]
_with \(\lambda_{k}^{FD}(h):=-\lambda_{-k}^{FD}(h)\) and \(\vec{\varphi}_{k}:=\vec{\varphi}_{-k}\) for \(k=-1,-2,...,-N\). Therefore, the solutions to (10) can be expressed as_
\[\vec{v}(t)=\sum_{k\in K}a_{k}e^{\lambda_{k}^{FD}(h)t}\vec{\phi}_{k}(h). \tag{14}\]
### Finite Elements with \(\xi\equiv 0\)
First, multiply both sides of the equation in (1) by a test function \(\phi(x)\in C_{0}^{\infty}[0,L]\), and integrate both sides of the equation over \([0,L]\) to get
\[\int_{0}^{L}v_{tt}\phi\ dx+c^{2}\int_{0}^{L}v_{x}\phi_{x}\ dx=0. \tag{15}\]
At each node \(\left\{x_{i}\right\}_{i=1}^{N}\), the following linear splines are defined
\[\phi_{i}(x)=\left\{\begin{array}{ll}\frac{1}{h}(x-x_{i-1}),&x_{i-1}<x<x_{i}\\ \frac{1}{h}(x_{i+1}-x),&x_{i}<x<x_{i+1}\\ 0,&otherwise,\end{array}\right. \tag{16}\]
\[\phi_{N+1}(x)=\left\{\begin{array}{ll}\frac{1}{h}(x-x_{N}),&x_{N}<x<x_{N+1} \\ 0,&otherwise.\end{array}\right. \tag{17}\]
Defining the \((N+1)\times(N+1)\) matrices \(A_{h}\) and \(M\) by
\[A_{h}^{FEM}:=\frac{c^{2}}{h^{2}}\begin{bmatrix}2&-1&0&\dots&\dots&\dots&0\\ -1&2&-1&0&\dots&\dots&0\\ &\ddots&\ddots&\ddots&\ddots&\ddots\\ 0&\dots&\dots&0&-1&2&-1\\ 0&\dots&\dots&\dots&0&-1&1\end{bmatrix}, \tag{18}\]
\[M:=\left(\begin{array}{cccccc}2/3&1/6&0&0&\dots&0\\ 1/6&2/3&1/6&0&\dots&0\\ 0&1/6&2/3&1/6&\dots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&1/6&2/3&1/6\\ 0&0&\dots&0&1/6&1/3\end{array}\right), \tag{19}\]
and seeking solutions to (15) with \(\xi\equiv 0\) of the form \(v(x,t)=\sum\limits_{i=0}^{N+1}v_{i}(t)\phi_{i}(x)\) leads to
\[\left\{\begin{array}{l}\ddot{\vec{v}}+M^{-1}A_{h}^{FEM}\vec{v}=0,\quad t\in\mathbb{R}^{+}\\ v_{0}=0,\quad h\frac{2\ddot{v}_{N+1}+\ddot{v}_{N}}{6}+c^{2}\frac{v_{N+1}-v_{N}}{h}=0,\\ v_{j}(0)=v_{j}^{0},\ \dot{v}_{j}(0)=v_{j}^{1},\quad j=0,...,N+1.\end{array}\right. \tag{20}\]
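The entries of \(M\) in (19) and the pattern of \(A_{h}^{FEM}\) in (18) come from the overlap integrals of the hat functions (16)-(17), after dividing the weak form (15) by \(h\). A short symbolic check (assuming SymPy; illustrative only) is given below.

```python
# Symbolic check (assuming SymPy) of the overlap integrals of the hat
# functions (16)-(17): after dividing by h these give the mass entries
# 2/3, 1/6 of (19) and the stiffness pattern 2, -1 behind (18).
import sympy as sp

x, h = sp.symbols('x h', positive=True)
left  = x / h              # rising part of phi_i on [0, h]   (x_{i-1} = 0)
right = (2*h - x) / h      # falling part of phi_i on [h, 2h] (x_{i+1} = 2h)
nbr   = (x - h) / h        # rising part of phi_{i+1} on [h, 2h]

mass_diag  = sp.integrate(left**2, (x, 0, h)) + sp.integrate(right**2, (x, h, 2*h))
mass_off   = sp.integrate(right * nbr, (x, h, 2*h))
stiff_diag = sp.integrate(sp.diff(left, x)**2, (x, 0, h)) \
           + sp.integrate(sp.diff(right, x)**2, (x, h, 2*h))
stiff_off  = sp.integrate(sp.diff(right, x) * sp.diff(nbr, x), (x, h, 2*h))

print(sp.simplify(mass_diag / h), sp.simplify(mass_off / h))    # -> 2/3, 1/6
print(sp.simplify(h * stiff_diag), sp.simplify(h * stiff_off))  # -> 2, -1
```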
The discretized energy corresponding to (20) is defined by
\[E_{h}^{FEM}(t):=\frac{h}{12}\left[\left|\dot{v}_{N+1}\right|^{2}+\sum_{j=1}^{ N}\left(2\left|\dot{v}_{j}\right|^{2}+\left|\dot{v}_{j}+\dot{v}_{j+1}\right|^{2}+6c ^{2}\left|\frac{v_{j+1}-v_{j}}{h}\right|^{2}\right)\right]. \tag{21}\]
Defining \(\mathcal{A}_{h}^{FEM}(\vec{u}_{1},\vec{u}_{2}):=\left(\vec{u}_{2},-M^{-1}A_{h }^{FEM}\vec{u}_{1}\right),\) and calling \(\vec{y}_{h}=(\vec{v},\dot{\vec{v}}),\) (20) can be reformulated as
\[\frac{d}{dt}\vec{y}_{h}=\mathcal{A}_{h}^{FEM}\vec{y}_{h}. \tag{22}\]
Introduce an \((N+1)\times(N+1)\) diagonal matrix \(K\) by \(K=\mathrm{diag}(2,\dots,2,1)\) so that
\[\lambda_{j}\left(M^{-1}A_{h}^{FEM}\right)=\lambda_{j}\left(M^{-1}KK^{-1}A_{h} ^{FEM}\right)=\frac{\lambda_{j}\left(K^{-1}A_{h}^{FEM}\right)}{\lambda_{j} \left(K^{-1}M\right)},\ j=1,\dots,N+1, \tag{23}\]
where matrices \(K^{-1}A_{h}\) and \(K^{-1}M\) are band-matrices:
\[K^{-1}A_{h}^{FEM}=\frac{c^{2}}{2h^{2}}\left(\begin{array}{cccccc}2&-1&0&0&\dots&0\\ -1&2&-1&0&\dots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&-1&2&-1\\ 0&0&\dots&0&-2&2\end{array}\right), \tag{24}\]
\[K^{-1}M=\left(\begin{array}{cccccc}\frac{1}{3}&\frac{1}{12}&0&0&\dots&0\\ \frac{1}{12}&\frac{1}{3}&\frac{1}{12}&0&\dots&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\frac{1}{12}&\frac{1}{3}&\frac{1}{12}\\ 0&0&\dots&0&\frac{1}{6}&\frac{1}{3}\end{array}\right).\]
The next two lemmas are necessary for finding the eigenvalues of matrices (24).
**Lemma 2**.: _The eigenvalues and eigenfunctions of the matrix \(K^{-1}A_{h}^{FEM}\), respectively, are_
\[\left\{\begin{array}{l}\lambda_{j}(h)=\frac{2c^{2}}{h^{2}}\sin^{2}\left( \frac{(2j-1)\pi}{4N}\right),\\ u_{j,k}=\sin\left(\frac{(2j-1)k\pi}{2N}\right),\ j,k=1,2,...,N+1.\end{array}\right. \tag{25}\]
**Proof:** Letting \(u=[u_{1},u_{2},\ldots,u_{N}]^{T},\) the eigenvalue problem for \(k=1,2,\ldots,N+1\) is
\[\left\{\begin{array}{l}-u_{k-1}+\left(2-\frac{2h^{2}\lambda}{c^{2}}\right)u_{k}-u_{k+1}=0,\\ u_{0}=0,\ u_{N+1}=u_{N}.\end{array}\right. \tag{26}\]
With \(u_{k}=z^{k}\), \(u_{k-1}=z^{k-1}\) and \(u_{k+1}=z^{k+1}\), both \(z^{k-1}\neq 0\) and \(-1+\left(2-\frac{2h^{2}\lambda}{c^{2}}\right)z-z^{2}=0.\) Thus, the general solution to (26) is
\[u_{k}=c_{1}z^{k}(\lambda)+c_{2}z^{-k}(\lambda). \tag{27}\]
By the boundary conditions,
\[z_{j}=e^{\frac{i(2j-1)\pi}{2N}},\ j=1,\ldots,N+1. \tag{28}\]
Therefore, substituting \(z_{j}\) into (27) leads to
\[u_{j,k}=\sin\left(\frac{(2j-1)k\pi h}{2(L-h)}\right),\quad j,k=1,2,\ldots,N+1. \tag{29}\]
Lastly, the eigenvalues are solved by using \(z_{1}z_{2}=1\) and \(z_{1}+z_{2}=2-2h^{2}\lambda_{j}\):
\[2\left(1-2\sin^{2}\left(\frac{(2j-1)\pi}{4N}\right)\right)=2-\frac{2h^{2}\lambda_{j}}{c^{2}}. \tag{30}\]
Therefore, noting \(N=\frac{L-h}{h},\) (25) is obtained. \(\square\)
**Lemma 3.**_For \(j=1,\ldots,N+1,\) the eigenvalues of \(M^{-1}A_{h}^{FEM}\) in (22) are given by_
\[\lambda_{j}(M^{-1}A_{h}^{FEM})=\frac{c^{2}}{h^{2}}\,\frac{6-6\cos\left(\frac{(2j-1)\pi h}{2(L-h)}\right)}{2+\cos\left(\frac{(2j-1)\pi h}{2(L-h)}\right)}. \tag{31}\]
**Proof:** Define the \((N+1)\times(N+1)\) matrix \(J\) by \(J:=\mbox{tridiag}(1,0,1).\) One can readily verify that \(\frac{h^{2}}{c^{2}}K^{-1}A_{h}^{FEM}=I-K^{-1}J\) and \(6K^{-1}M=2I+K^{-1}J.\) Since \(\frac{h^{2}}{c^{2}}K^{-1}A_{h}^{FEM}\) and \(6K^{-1}M\) are diagonalizable, there exists an invertible matrix \(P\) such that \(6P^{-1}K^{-1}MP=2I+P^{-1}K^{-1}JP.\) Lastly, letting \(P^{-1}K^{-1}A_{h}^{FEM}P=D\) where \(D\) is a diagonal matrix, \(6P^{-1}K^{-1}MP=3I-\frac{h^{2}}{c^{2}}D,\) i.e. \(P^{-1}K^{-1}MP=\frac{1}{6}\left(3I-\frac{h^{2}}{c^{2}}D\right).\) Therefore, the eigenvalues of the matrix \(K^{-1}M\) lie on the diagonal of the matrix \(P^{-1}K^{-1}MP,\)
\[\lambda_{j}\left(K^{-1}M\right)=\frac{1}{6}\left(3-\frac{h^{2}}{c^{2}}\lambda_{j}\left(K^{-1}A_{h}^{FEM}\right)\right). \tag{32}\]
Therefore,
\[\lambda_{j}\left(M^{-1}A_{h}^{FEM}\right) =\frac{\lambda_{j}\left(K^{-1}A_{h}^{FEM}\right)}{\lambda_{j}(K^{-1}M)}=\frac{\frac{12c^{2}}{h^{2}}\sin^{2}\left(\frac{(2j-1)\pi h}{4(L-h)}\right)}{2\left(\frac{3}{2}-\sin^{2}\left(\frac{(2j-1)\pi h}{4(L-h)}\right)\right)}. \tag{33}\]
Following the sub-eigenvalue problem (23), (31) is obtained. \(\square\)
**Theorem 3.**_It can be shown that the eigen-pairs \(\{\lambda_{k}^{FEM}(h),\vec{\varphi}_{k}(h)\}_{K}\) of \(\mathcal{A}_{h}^{FEM}\) are_
\[\left(i\sqrt{\lambda_{k}(M^{-1}A_{h}^{FEM})},\left[\frac{1}{i\sqrt{\lambda_{k}(M^{-1}A_{h}^{FEM})}}\vec{\phi}_{k},\ \vec{\phi}_{k}\right]\right), \tag{34}\]
_where \(\sqrt{\lambda_{k}^{FEM}(h)}:=-\sqrt{\lambda_{-k}^{FEM}(h)}\) and \(\vec{\varphi}_{k}:=\vec{\varphi}_{-k}\) for \(k=-1,-2,...,-N.\) The solutions to (20) can be expressed as_
\[\vec{v}(t)=\sum_{k\in K}(a_{k}e^{i\sqrt{\lambda_{k}^{FEM}(h)}t})\vec{\phi}_{ k}(h). \tag{35}\]
## 3 Lack of Observability and Exponential Stability as \(h\to 0\)
It can easily be shown that the observability result stated in Theorem 2 does not hold uniformly for the discretized models since
\[\left\{\begin{array}{l}\frac{h^{2}|\lambda_{N+1}^{FD}(h)|^{2}}{c^{2}}\to 4, \qquad\frac{h^{2}|\lambda_{N+1}^{FEM}(h)|^{2}}{c^{2}}\to 12\quad\mbox{as $N\to\infty$}.\end{array}\right. \tag{36}\]
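The limits (36) are easy to reproduce numerically; the following sketch (with \(L=c=1\); matrix assembly following (8), (18) and (19); illustrative only) shows \(h^{2}\mu_{max}\) approaching \(4\) for Finite Differences and \(12\) for Finite Elements.

```python
# Numerical illustration of (36) with L = c = 1: the largest eigenvalues of
# A_h^FD and M^{-1}A_h^FEM scale like 4/h^2 and 12/h^2 respectively.
import numpy as np

def stencil(n, h):
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    A[-1, -1] = 1.0 / h**2       # free-end last row (-1, 1), as in (8)/(18)
    return A

for N in (10, 40, 160):
    h = 1.0 / (N + 1)
    mu_fd = np.linalg.eigvalsh(stencil(N, h)).max()
    A = stencil(N + 1, h)
    M = (2 * np.eye(N + 1) / 3 + np.eye(N + 1, k=1) / 6
         + np.eye(N + 1, k=-1) / 6)
    M[-1, -1] = 1.0 / 3          # last diagonal entry of (19)
    mu_fem = np.linalg.eigvals(np.linalg.solve(M, A)).real.max()
    print(f"N = {N:>4}: h^2 mu_max  FD = {h*h*mu_fd:5.3f},  FEM = {h*h*mu_fem:6.3f}")
```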
**Lemma 4**: _Considering either (10) or (20), for any \(T>0,\) respectively,_
\[\limsup_{h\to 0}\frac{E_{h,0}^{FD}(0)}{\int_{0}^{T}\left|\frac{\dot{v}_{N}}{h}\right|^{2}dt}=\infty,\qquad\limsup_{h\to 0}\frac{E_{h}^{FEM}(0)}{\int_{0}^{T}\left|\frac{\dot{v}_{N}}{h}\right|^{2}dt}=\infty. \tag{37}\]
**Proof**: The first limit is proved in [5]. The second limit follows from the same argument together with Theorem 3. \(\square\)
### Finite Differences with \(\xi>0\)
First, observe that the absorbing boundary conditions \(c^{2}v_{x}(L,t)=-\xi\dot{v}(L,t)\) in (1) can be approximated by
\[c^{2}v_{xx}(x_{N+1}) \approx c^{2}\frac{v_{x}(x_{N+1})-v_{x}(x_{N})}{h}=\frac{-\xi \dot{v}_{N+1}-c^{2}\left(\frac{v_{N+1}-v_{N}}{h}\right)}{h}.\]
Then, the following Finite-Difference approximation for (1) can be considered
\[\left\{\begin{array}{l}\left\{\ddot{v}_{j}-c^{2}\left(\frac{v_{j+1}+v_{j-1} -2v_{j}}{h^{2}}\right)=0\right\}_{j=1}^{N},\\ v_{0}=0,\ddot{v}_{N+1}+c^{2}\frac{v_{N+1}-v_{N}}{h^{2}}+\xi\frac{\dot{v}_{N+1 }}{h}=0,\\ v_{j}(0)=v_{j}^{0},\ \dot{v}_{j}(0)=v_{j}^{1},\quad j=0,...,N+1.\end{array}\right. \tag{38}\]
The energy \(E_{h,0}^{FD}(t)\), defined in (11), is now redefined for the case \(\xi\neq 0\) as the following
\[E_{h}^{FD}(t):=\frac{h}{2}\sum\limits_{j=0}^{N+1}\rho\left|\dot{v}_{j}\right| ^{2}+c^{2}\frac{h}{2}\sum\limits_{j=0}^{N}\left|\frac{v_{j+1}-v_{j}}{h}\right| ^{2}. \tag{39}\]
Letting \(\vec{u}_{1,h}=(v_{1},v_{2},...,v_{N+1})^{\rm T},\)\(\vec{u}_{2,h}=(\dot{v}_{1},...,\dot{v}_{N+1})^{\rm T},\)\(\vec{y}_{h}=(\vec{u}_{1,h},\vec{u}_{2,h})^{\rm T},\) the system (38) is written in the first-order form
\[\dot{\vec{y}}_{h}=\mathcal{A}_{h}^{FD}(\xi)\vec{y}_{h}=(\vec{y}_{2,h},-A_{h} \vec{y}_{1,h}+B_{h}\vec{y}_{2,h})^{\rm T} \tag{40}\]
where \(A_{h}\) is the matrix defined by (8), and \(B_{N+1,N+1}=\frac{-\xi}{h}\neq 0\) while \(B_{i,j}\equiv 0\) for any other \(i,j.\)
Consider the eigenvalue problem for (40):
\[\mathcal{A}_{h}^{FD}(\xi)\vec{y}=\lambda^{FD}(\xi,h)\vec{y}. \tag{41}\]
For \(0<\xi<c,\) it can be shown that the real parts of the eigenvalues are negative, e.g. \(\mbox{Re}\lambda^{FD}(\xi,h)=\frac{-L\xi|\lambda^{FD}(\xi,h)y_{1,N+1}|^{2}}{hc\left(|\lambda^{FD}(\xi,h)\vec{y}_{1,h}|^{2}-\vec{y}_{1,h}^{\rm T}A\vec{y}_{1,h}\right)}<0.\)
Seeking the solution of (41) of the form \(y_{1,k}=z^{2k}-z^{-2k}\) with \(k=1,2,\ldots,N+1\) and \(z\in\mathbb{C}\) leads to \(\lambda^{FD}(\xi,h)=\frac{c(N+1)}{L}(z-z^{-1})\), where \(z\) satisfies \(z^{2}\neq\mp 1\) and
\[p(z)=z^{4N+6}+\frac{\xi}{c}z^{4N+5}-\frac{\xi}{c}z+1=0. \tag{42}\]
Note that each eigenvalue \(\lambda^{FD}(\xi,h)\) of \(\mathcal{A}_{h}^{FD}(\xi)\) corresponds to a solution of (42), but some of the solutions of (42) are not related to the eigenvalues \(\lambda^{FD}(\xi,h)\). In fact, since \(\mathcal{A}_{h}^{FD}(\xi)\) is a real matrix, it is sufficient to identify all eigenvalues of \(\mathcal{A}_{h}^{FD}(\xi)\) with positive imaginary part.
Moreover, observe that \(z=\mp i\) are roots of \(p.\) On the other hand, if \(z\) is a root of \(p(z),\) then \(\bar{z}\) and \(-z^{-1}\) are also roots of \(p(z).\) As \(z\neq-\frac{\xi}{c},\) (42) is equivalent to
\[z^{4N+5}=\frac{\xi z-c}{\xi+cz}. \tag{43}\]
The following list of results are proved in [21].
**Lemma 5**.: [21, Prop. 3.3] _The polynomial \(p(z)\) satisfies the following:_
1. _The only roots of_ \(p(z)\) _on the unit circle are_ \(z=\mp i.\) _Moreover,_ \(z=\mp i\) _are the only roots of_ \(p(z)\) _that are purely imaginary._
2. _All roots of_ \(p(z)\) _with positive real parts must be within the unit circle. All roots of_ \(p(z)\) _with negative real parts must be outside the unit circle._
It is inferred from these results that each root \(z\) of \(p(z)\) in the first quadrant of the complex plane leads to three other roots of \(p(z)\): \(\bar{z}\), \(-z^{-1}\), \(-\bar{z}^{-1}\). Therefore, the eigenvalue of \(\mathcal{A}_{h}^{FD}(\xi)\) corresponding to the roots of \(p(z)\) has positive imaginary part if and only if \(z\) has positive imaginary part. As a result, we only estimate the roots of \(p(z)\) in the first quadrant.
Following the clever discussion in [21], the analysis of the distribution of the roots of (42) is solely based on the analysis of the mapping defined by \(\mathcal{T}(z):=\mathcal{G}^{\frac{1}{4N+5}}(z)\) where \(\mathcal{G}(z)=\frac{\xi z-c}{zc+\xi}\). Indeed, a fixed point \(z\) of \(\mathcal{T}\) is a root of the polynomial (42). Therefore, the root of \(p\) satisfies
\[z=[\mathcal{G}(z)]^{\frac{1}{4N+5}}=\left(\frac{\xi z-c}{zc+\xi}\right)^{\frac {1}{4N+5}}.\]
Define the following sector in the first quadrant of \(\mathbb{C}:\)
\[S=\{z\in\mathbb{C}\ |\ Re(z)\geq 0,Im(z)\geq 0,|z|\leq\frac{\xi+c}{2\xi}\}.\]
Since \(\mathcal{T}\) is multi-valued, consider the branches \(\mathcal{T}_{j}\) of \(\mathcal{T}\) as the following
\[\begin{array}{l}\mathcal{T}_{j}(z)=|\mathcal{G}(z)|^{\frac{1}{4N+5}}e^{\frac{i(\theta(z)+2j\pi)}{4N+5}},\ 1<j\leq 4N+4,\\ \theta(z)=\mathrm{Arg}(\mathcal{G}(z)).\end{array} \tag{44}\]
**Lemma 6**.: [21, Prop. 3.4 & 3.5, Cor. 3.1] _The following results hold:_
1. _The complex functions_ \(\mathcal{T}_{j},\,j=1,\ldots 4N+4,\) _are analytic over_ \(S,\) _and_ \(\mathcal{T}_{j}\) _is a contraction mapping over_ \(S\) _for_ \(N\) _large enough._
2. _Since_ \(\mathcal{G}\) _is analytic over_ \(S,\) _there exists a constant_ \(M_{G}>0\) _such that_ \(|\mathcal{G}(z)|<M_{G}\) _for all_ \(z\in S.\)_
3. _As well, for every_ \(j\in[0,N],\) _the subsections_ \(S_{j}\) _of_ \(S\) _defined by_ \[S_{j}:=\left\{z\in S,\ \mathrm{Arg}(z)\in\left[\frac{2j\pi}{4N+5},\frac{(2j+1)\pi}{4N+5}\right]\right\}\] _are invariant under_ \(\mathcal{T}_{j}\) _for large enough_ \(N\)_._
4. _The fixed point_ \(z\) _of_ \(\mathcal{T}_{j}\) _in_ \(S_{j}\) _satisfies_ \[|z|\geq\left(\frac{\xi c-\xi^{2}}{2\xi^{2}+\xi c+c^{2}}\right)^{\frac{1}{4N+5}}-\sqrt{2}\left(1+\frac{c}{\xi}\right)\frac{M_{G}}{4N+5}, \tag{45}\] \[|z|\leq\left(\frac{c}{\xi}\right)^{\frac{1}{4N+5}}+\left(1+\frac{c}{\xi}\right)\frac{M_{G}}{4N+5}. \tag{46}\]

Now, notice that
\[\begin{array}{l}\mathrm{Re}\,\lambda_{j}^{FD}(\xi,h)=\frac{c(N+1)}{L}\left(|z_{j}|-\frac{1}{|z_{j}|}\right)\cos{(\mathrm{Arg}\ z_{j})},\\ \mathrm{Im}\,\lambda_{j}^{FD}(\xi,h)=\frac{c(N+1)}{L}\left(|z_{j}|+\frac{1}{|z_{j}|}\right)\sin{(\mathrm{Arg}\ z_{j})},\end{array} \tag{47}\]
where
\[\mathrm{Arg}(z_{j})=\mathrm{Arg}(\mathcal{T}_{j}(z_{j}))=\frac{\theta(z_{j})+2j\pi}{4N+5},\quad j=0,1,\ldots,4N+4,\]
and
\[\theta(z_{j})=\pi-\arctan\left(\frac{\mathrm{Im}\ \mathcal{G}(z_{j})}{\mathrm{Re}\ \mathcal{G}(z_{j})}\right),\qquad\frac{\pi}{2}\leq\theta(z_{j})\leq\frac{3\pi}{2}. \tag{48}\]
Finally, the following results describe the eigenvalues of \(\mathcal{A}_{h}^{FD}(\xi)\) in terms of the roots of (42).
**Theorem 4**.: _[_21_, Thm 3.1]_ _For sufficiently large \(N\), all roots of \(p(z)\) are simple. There are exactly \(N+1\) roots in the first quadrant of the complex plane. Each subset \(S_{j}\) contains exactly one root \(z_{j}\) of \(p(z)\) for \(j=1,2,\ldots N\). As a result, the matrix \(\mathcal{A}_{h}^{FD}(\xi)\) has \(2N+2\) eigenvalues \(\lambda_{j}^{FD}(\xi,h)\). Moreover, this implies that \(Re\ \lambda_{j}^{FD}(\xi,h)\to 0\) as \(N\to\infty\). Hence, the system (38) lacks exponential stability uniformly as \(N\to\infty\)._
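Theorem 4 can be observed numerically by computing the roots of (42) and mapping them to eigenvalues through \(\lambda^{FD}(\xi,h)=\frac{c(N+1)}{L}(z-z^{-1})\); the sketch below (taking \(L=c=1\), \(\xi=0.5\); illustrative only) shows the spectral abscissa creeping toward \(0\) as \(N\) grows.

```python
# Numerical illustration of Theorem 4 (taking L = c = 1, xi = 0.5): the
# spectral abscissa of A_h^FD(xi) creeps toward 0 as N grows.
import numpy as np

c, L, xi = 1.0, 1.0, 0.5
for N in (5, 15, 40):
    coeffs = np.zeros(4 * N + 7)
    coeffs[0], coeffs[1] = 1.0, xi / c      # z^(4N+6) and z^(4N+5) terms of (42)
    coeffs[-2], coeffs[-1] = -xi / c, 1.0   # z and constant terms of (42)
    roots = np.roots(coeffs)
    sel = roots[(roots.real > 1e-8) & (roots.imag > 1e-8)]   # first quadrant
    lam = c * (N + 1) / L * (sel - 1.0 / sel)
    print(f"N = {N:>3}: spectral abscissa ~ {lam.real.max():8.5f}")
```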
Now we are ready to state one of our main results, which helps estimate the eigenvalues further:
**Theorem 5**.: _For all \(j=1,2,\ldots,2N+2\) and sufficiently large \(N\) (or small enough \(h\)), the eigenvalues \(\lambda_{j}^{FD}(\xi,h)\) of \(\mathcal{A}_{h}^{FD}(\xi)\) satisfy_
\[h\ |Re\lambda_{j}^{FD}(\xi,h)| =O(h), \tag{49}\] \[h\ |Im\lambda_{j}^{FD}(\xi,h)| =2c\left|\sin\left(\frac{2j\pi}{4N+5}\right)\right|+O(h). \tag{50}\]
_Hence,_
\[h^{2}|\lambda_{j}^{FD}(\xi,h)|^{2}\leq 4c^{2}\sin^{2}\left(\frac{2j\pi}{4N+5 }\right)+O(h). \tag{51}\]
**Proof:** First, by (48),
\[\begin{array}{ll}|h\ Re\lambda_{j}^{FD}(\xi,h)|&=ch\frac{N+1}{L}\big{|}(|z_{j}|-|z_{j}|^{-1})\cos(\mathrm{Arg}(z_{j}))\big{|}\\ &=c\left|\left(\frac{|z_{j}|^{2}-1}{|z_{j}|}\right)\cos\left(\frac{2j\pi}{4N+5}+\frac{\theta(z_{j})}{4N+5}\right)\right|\\ &=c\left|\frac{-1-|z_{j}|}{|z_{j}|}\right|\left|1-|z_{j}|\right|\left|\cos\left(\frac{2j\pi}{4N+5}\right)\cos\left(\frac{\theta(z_{j})}{4N+5}\right)-\sin\left(\frac{2j\pi}{4N+5}\right)\sin\left(\frac{\theta(z_{j})}{4N+5}\right)\right|,\end{array}\]
and by utilizing (45) and (46),
\[\begin{array}{ll}|h\ Re\lambda_{j}^{FD}(\xi,h)|&\leq c\left|-1-\left(\frac{\xi c-\xi^{2}}{2\xi^{2}+\xi c+c^{2}}\right)^{\frac{1}{4N+5}}+\sqrt{2}\Big{(}1+\frac{c}{\xi}\Big{)}\frac{M_{G}}{4N+5}\right|\\ &\quad\times\left|1-\left(\frac{\xi c-\xi^{2}}{2\xi^{2}+\xi c+c^{2}}\right)^{\frac{1}{4N+5}}+\sqrt{2}\Big{(}1+\frac{c}{\xi}\Big{)}\frac{M_{G}}{4N+5}\right|\times|1+O(h)|\\ &=c\left|-1-1+O(h)\right|O(h)=O(h),\end{array}\]
which proves (49).
On the other hand, analogously, by (45)-(48)
\[\begin{array}{ll}\left|h\ Im\lambda_{j}^{FD}(\xi,h)\right|&=h\frac{c(N+1)}{L}|(|z_{j}|+|z_{j}|^{-1})\sin(\mathrm{Arg}(z_{j}))|\\ &=c||z_{j}|+|z_{j}|^{-1}|\left|\sin\left(\frac{2j\pi}{4N+5}\right)\cos\left(\frac{\theta(z_{j})}{4N+5}\right)+\cos\left(\frac{2j\pi}{4N+5}\right)\sin\left(\frac{\theta(z_{j})}{4N+5}\right)\right|\\ &=c||z_{j}|+|z_{j}|^{-1}|\left|\sin\left(\frac{2j\pi}{4N+5}\right)+O(h)\right|\\ &\leq c\left|\left(\frac{c}{\xi}\right)^{\frac{1}{4N+5}}+\left(1+\frac{c}{\xi}\right)\frac{M_{G}}{4N+5}+\frac{1}{\left(\frac{\xi c-\xi^{2}}{2\xi^{2}+\xi c+c^{2}}\right)^{\frac{1}{4N+5}}-\sqrt{2}(1+\frac{c}{\xi})\frac{M_{G}}{4N+5}}\right|\\ &\qquad\qquad\times\left|\sin\left(\frac{2j\pi}{4N+5}\right)+O(h)\right|\\ &\leq c\left|\sin\left(\frac{2j\pi}{4N+5}\right)+O(h)\right|\left|\left(\frac{c}{\xi}\right)^{\frac{1}{4N+5}}+O(h)+\frac{1}{\left(\frac{\xi c-\xi^{2}}{2\xi^{2}+\xi c+c^{2}}\right)^{\frac{1}{4N+5}}+O(h)}\right|\\ &\leq c\left|\sin\left(\frac{2j\pi}{4N+5}\right)+O(h)\right|\left|1+O(h)+\frac{1}{1+O(h)}\right|\\ &\leq 2c\left|\sin\left(\frac{2j\pi}{4N+5}\right)\right|+O(h).\end{array}\]
Finally, (51) follows from (49) and (50). \(\square\)
### Finite Elements with \(\xi>0\)
Considering the discretized model by Finite Elements
\[\left\{\begin{array}{l}\left\{\frac{\ddot{v}_{j+1}+4\ddot{v}_{j}+\ddot{v}_{j-1}}{6}-c^{2}\frac{v_{j+1}-2v_{j}+v_{j-1}}{h^{2}}=0\right\}_{j=1}^{N},\\ v_{0}=0,\;\frac{2\ddot{v}_{N+1}+\ddot{v}_{N}}{6}+c^{2}\frac{v_{N+1}-v_{N}}{h^{2}}=-\frac{\xi}{h}\dot{v}_{N+1},\\ v_{j}(0)=v_{j}^{0},\;\dot{v}_{j}(0)=v_{j}^{1},\quad j=0,...,N+1,\end{array}\right. \tag{52}\]
the analysis and arguments above can be replicated analogously (see [21] for a more in-depth discussion) to get the following result.
**Theorem 6**: _For all \(j=1,2,\ldots,2N+2\) and sufficiently large \(N\) (or small enough \(h\)), the eigenvalues \(\lambda_{j}^{FEM}(\xi,h)\) of \(\mathcal{A}_{h}^{FEM}(\xi)\) satisfy_
\[h^{2}|\lambda_{j}^{FEM}(\xi,h)|^{2}\leq 12c^{2}\sin^{2}\left(\frac{2j\pi}{4N+ 5}\right)+O(h). \tag{53}\]
## 4 Exponential Stability as \(h\to 0\)
### Finite Differences with \(\xi\neq 0\)
**Lemma 7**: _The system (38) is dissipative, i.e._
\[\frac{dE_{h}^{FD}}{dt}+\xi|\dot{v}_{N+1}|^{2}=0. \tag{54}\]
**Proof:** Multiply both sides of (38) by \(h\dot{v}_{j}\) and take sum from \(j=1\) to \(N:\)
\[\sum\limits_{j=1}^{N}h\dot{v}_{j}\ddot{v}_{j}-c^{2}\sum\limits_{j=1}^{N}\frac{ v_{j+1}-2v_{j}+v_{j-1}}{h}\dot{v}_{j}=0. \tag{55}\]
Since
\[-hc^{2}\sum\limits_{j=1}^{N}\frac{v_{j+1}-2v_{j}+v_{j-1}}{h^{2}}\dot{v}_{j}= \frac{c^{2}}{h}(v_{N}-v_{N+1})\dot{v}_{N+1}+\frac{c^{2}}{h}\sum\limits_{j=0}^{ N}(v_{j+1}-v_{j})(\dot{v}_{j+1}-\dot{v}_{j}), \tag{56}\]
substituting (56) into (55) yields
\[\frac{dE_{h}^{FD}}{dt}-\frac{c^{2}}{h}(v_{N+1}-v_{N})\dot{v}_{N+1}-h\dot{v}_{N +1}\ddot{v}_{N+1}=0, \tag{57}\]
and this together with the boundary conditions in (38) yields (54). \(\square\)
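The identity (54) can also be checked by direct time integration of (38); the sketch below (assuming SciPy, with \(L=c=1\), \(\xi=0.5\) and a smooth compatible initial profile, all illustrative choices) compares the energy drop with \(-\xi\int_{0}^{T}|\dot{v}_{N+1}|^{2}dt\).

```python
# Time-domain check of the dissipation law (54), assuming SciPy; L = c = 1,
# xi = 0.5 and the profile sin(pi x / 2) are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

N, c, xi = 20, 1.0, 0.5
h = 1.0 / (N + 1)

def rhs(t, y):
    v, w = y[:N + 1], y[N + 1:]            # nodes v_1..v_{N+1} and velocities
    vf = np.concatenate(([0.0], v))        # clamped node v_0 = 0
    acc = c**2 * (vf[2:] - 2 * vf[1:-1] + vf[:-2]) / h**2      # interior rows of (38)
    acc_end = -c**2 * (v[-1] - v[-2]) / h**2 - xi * w[-1] / h  # damped boundary row
    return np.concatenate((w, acc, [acc_end]))

def energy(y):                             # discrete energy (39) with rho = 1
    v, w = y[:N + 1], y[N + 1:]
    vf = np.concatenate(([0.0], v))
    return 0.5 * h * np.sum(w**2) + 0.5 * h * np.sum((np.diff(vf) / h)**2)

x = np.linspace(h, 1.0, N + 1)
y0 = np.concatenate((np.sin(np.pi * x / 2), np.zeros(N + 1)))
sol = solve_ivp(rhs, (0.0, 2.0), y0, rtol=1e-10, atol=1e-12, dense_output=True)
ts = np.linspace(0.0, 2.0, 2001)
wend = sol.sol(ts)[-1]                     # velocity of the controlled node
dissipated = -xi * np.sum((wend[:-1]**2 + wend[1:]**2) / 2) * (ts[1] - ts[0])
print(energy(sol.y[:, -1]) - energy(y0), dissipated)   # should agree closely
```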
Following (5), define the following Lyapunov functional
\[L_{h}^{FD}(t):=E_{h}^{FD}+\delta F_{h}^{FD}(t) \tag{58}\]
where the auxiliary function \(F_{h}^{FD}\) is defined by
\[F_{h}^{FD}(t)=\sum\limits_{j=1}^{N}jh\dot{v}_{j}\left(\frac{v_{j+1}-v_{j-1}}{2} \right)+\frac{L}{2}(v_{N+1}-v_{N})\dot{v}_{N+1}-\frac{L\xi h}{4c^{2}}|\dot{v}_{ N+1}|^{2}. \tag{59}\]
**Lemma 8**: _For \(0<\delta<\frac{c}{L}\), \(L_{h}^{FD}(t)\) is equivalent to \(E_{h}^{FD}(t)\), i.e._
\[\left(1-\frac{L\delta}{c}\right)E_{h}^{FD}\leq L_{h}^{FD}\leq\left(1+\frac{L \delta}{c}\right)E_{h}^{FD}. \tag{60}\]
**Proof:** First of all, by Hölder's and Cauchy-Schwarz inequalities for sums, the first and second terms in (59) are estimated as the following
\[\left|\sum\limits_{j=1}^{N}jh\dot{v}_{j}\left(\frac{v_{j+1}-v_{j-1}}{2}\right)\right|\leq\frac{Lh}{2c}\left[\sum\limits_{j=0}^{N+1}|\dot{v}_{j+1}|^{2}+\sum\limits_{j=0}^{N}c^{2}\left|\frac{v_{j+1}-v_{j}}{h}\right|^{2}-\frac{c^{2}}{2}\left|\frac{v_{N+1}-v_{N}}{h}\right|^{2}-|\dot{v}_{N+1}|^{2}\right], \tag{61}\] \[\frac{L}{2}\left|(v_{N+1}-v_{N})\dot{v}_{N+1}\right|\leq\frac{Lh}{4c}\left[c^{2}\left|\frac{v_{N+1}-v_{N}}{h}\right|^{2}+\dot{v}_{N+1}^{2}\right]. \tag{62}\]
Considering (61) and (62), the following is immediate
\[|F_{h}^{FD}| \leq\frac{L}{c}E_{h}^{FD}(t)-\frac{Lh}{4c}|\dot{v}_{N+1}|^{2}-\frac{L\delta h}{4c^{2}}|\dot{v}_{N+1}|^{2}\leq\frac{L}{c}E_{h}^{FD}(t). \tag{63}\]
Therefore, (60) follows from (63). \(\Box\)
**Lemma 9**.: _The function \(F_{h}^{FD}\) satisfies_
\[\frac{dF_{h}^{FD}(t)}{dt}\leq-\left(1-\frac{\kappa^{FD}}{4c^{2}}\right)E_{h}^ {FD}(t)+\frac{L}{2}\left(1+\frac{1}{2N+2}+\frac{\xi^{2}}{c^{2}}\right)|\dot{v} _{N+1}|^{2}, \tag{64}\]
_where \(\kappa^{FD}:=\max\left\{h^{2}\left|\lambda_{j}^{FD}(\xi,h)\right|^{2}\right\}_{j=1}^{N+1},\) and the upper bound for \(h^{2}\left|\lambda_{j}^{FD}(\xi,h)\right|^{2}\) is given by (51)._
**Proof:** Finding the derivative of \(F_{h}^{FD}(t)\) along the solutions of (38) leads to
\[\begin{array}{ll}\frac{dF_{h}^{FD}}{dt}&=h\left[\sum\limits_{j=1}^{N}j\ddot{v}_{j}\Big{(}\frac{v_{j+1}-v_{j-1}}{2}\Big{)}+j\dot{v}_{j}\Big{(}\frac{\dot{v}_{j+1}-\dot{v}_{j-1}}{2}\Big{)}\right]+\frac{L}{2}(\dot{v}_{N+1}-\dot{v}_{N})(\dot{v}_{N+1})\\ &\quad+\frac{L}{2}(v_{N+1}-v_{N})(\ddot{v}_{N+1})-\frac{L\xi h}{2c^{2}}\dot{v}_{N+1}\ddot{v}_{N+1}\\ &=-\frac{hc^{2}}{2}\sum\limits_{j=0}^{N}|\frac{v_{j+1}-v_{j}}{h}|^{2}-\frac{h}{2}\sum\limits_{j=0}^{N}|\dot{v}_{j}|^{2}+\frac{h^{3}}{4}\sum\limits_{j=0}^{N}|\frac{\dot{v}_{j+1}-\dot{v}_{j}}{h}|^{2}+\frac{Lc^{2}}{2}\left|\frac{v_{N+1}-v_{N}}{h}\right|^{2}-\frac{h}{4}|\dot{v}_{N+1}|^{2}\\ &\quad-\frac{Lh^{2}}{2}\frac{|\dot{v}_{N+1}-\dot{v}_{N}|^{2}}{h^{2}}+\frac{L(|\ddot{v}_{N+1}|^{2}+|\dot{v}_{N}|^{2})}{h^{2}}+\frac{L}{2}(\dot{v}_{N+1}-\dot{v}_{N})(\dot{v}_{N+1})\\ &\quad+\frac{L}{2}(v_{N+1}-v_{N})\ddot{v}_{N+1}-\frac{L\xi h}{2c^{2}}\dot{v}_{N+1}\ddot{v}_{N+1}\\ &=-E_{h}^{FD}(t)+\frac{h}{2}|\dot{v}_{N+1}|^{2}+\frac{h^{3}}{4}\sum\limits_{j=0}^{N}|\frac{\dot{v}_{j+1}-\dot{v}_{j}}{h}|^{2}-\frac{h}{4}|\dot{v}_{N+1}|^{2}\\ &\quad+\frac{L}{2}|\dot{v}_{N+1}|^{2}-\frac{L\xi}{2h}(v_{N+1}-v_{N})\dot{v}_{N+1}-\frac{L\xi h}{2c^{2}}\dot{v}_{N+1}\ddot{v}_{N+1}\\ &=-E_{h}^{FD}(t)+\frac{h^{3}}{4}\sum\limits_{j=0}^{N}|\frac{\dot{v}_{j+1}-\dot{v}_{j}}{h}|^{2}+\frac{h}{2c^{2}}\dot{v}_{N+1}\ddot{v}_{N+1}\\ &\leq-E_{h}^{FD}(t)+\frac{h^{3}}{4}\sum\limits_{j=0}^{N}|\frac{\dot{v}_{j+1}-\dot{v}_{j}}{h}|^{2}+\frac{L}{2}\left(1+\frac{1}{2N+2}+\frac{\xi^{2}}{c^{2}}\right)|\dot{v}_{N+1}|^{2}.\end{array} \tag{65}\]
Defining the higher-order energy \(\tilde{E}_{h}^{FD}(t):=\frac{h}{2}\sum\limits_{j=0}^{N}\left[\rho\left|\ddot{v}_{j}\right|^{2}+c^{2}\left|\frac{\dot{v}_{j+1}-\dot{v}_{j}}{h}\right|^{2}\right],\) the inequality above is reduced to
\[\begin{array}{ll}\frac{dF_{h}^{FD}}{dt}&\leq-E_{h}^{FD}(t)+\frac{h^{2}}{4c^ {2}}\tilde{E}_{h}^{FD}(t)+\frac{L}{2}\left(1+\frac{1}{2N+2}+\frac{\xi^{2}}{c^{ 2}}\right)|\dot{v}_{N+1}|^{2}\end{array}\]
It follows from the definition of \(\kappa^{FD}\) that \(h^{2}\tilde{E}_{h}^{FD}(t)\leq\kappa^{FD}E_{h}^{FD}(t).\) Hence, (64) follows. \(\Box\)
**Theorem 7**.: _Let \(0<\delta<\frac{2\xi c^{2}}{L\left[c^{2}\left(1+\frac{1}{2N+2}\right)+\xi^{2}\right]}<\frac{2\xi c^{2}}{L\left[c^{2}+\xi^{2}\right]}\). Then, for all initial conditions \(\vec{v}^{0},\vec{v}^{1}\in\mathbb{R}^{N+2},\) the energy \(E_{h}^{FD}(t)\) corresponding to (38) satisfies_
\[E_{h}^{FD}(t)\leq\frac{c+\delta L}{c-\delta L}e^{-\delta(1-\frac{L\delta}{c})(1- \frac{\kappa^{FD}}{4c^{2}})t}E_{h}^{FD}(0),\quad\forall t>0. \tag{66}\]
**Proof**: Since \(\frac{dL_{h}^{FD}(t)}{dt}=\frac{dE_{h}^{FD}(t)}{dt}+\delta\frac{dF_{h}^{FD}(t)}{dt}\), by Lemmas 7 and 9
\[\begin{array}{ll}\frac{dL_{h}^{FD}(t)}{dt}&\leq-\delta\left(1-\frac{\kappa^{FD}}{4c^{2}}\right)E_{h}^{FD}(t)-\left(\xi-\frac{\delta L}{2}\left[1+\frac{1}{2N+2}+\frac{\xi^{2}}{c^{2}}\right]\right)|\dot{v}_{N+1}|^{2}\\ &\leq-\delta\left(1-\frac{L\delta}{c}\right)\left(1-\frac{\kappa^{FD}}{4c^{2}}\right)L_{h}^{FD}(t).\end{array}\]
By Gronwall's inequality,
\[L_{h}^{FD}(t)\leq e^{-\delta\left(1-\frac{L\delta}{c}\right)\left(1-\frac{ \kappa^{FD}}{4c^{2}}\right)t}L_{h}^{FD}(0), \tag{67}\]
and together with Lemma 8, (66) is obtained. \(\square\)
### Finite Elements with \(\xi\neq 0\)
Letting \(\vec{u}_{1,h}=(v_{1},v_{2},...,v_{N+1})^{\rm T}\), \(\vec{u}_{2,h}=(\dot{v}_{1},...,\dot{v}_{N+1})^{\rm T}\), \(\vec{y}_{h}=(\vec{u}_{1,h},\vec{u}_{2,h})^{\rm T}\), the system (52) is written in the first-order form
\[\dot{\vec{y}}_{h}={\cal A}_{h}^{FEM}(\xi)\vec{y}_{h}=(\vec{y}_{2,h},-M^{-1}A_ {h}^{FEM}\vec{y}_{1,h}+B_{h}\vec{y}_{2,h})^{\rm T} \tag{68}\]
where \(A_{h}\) is defined by (18), and \(B_{N+1,N+1}=\frac{-\xi}{h}\neq 0\) while \(B_{i,j}\equiv 0\) for any other \(i,j\).
Consider the eigenvalue problem for (68):
\[{\cal A}_{h}^{FEM}(\xi)\vec{y}=\lambda^{FEM}(\xi,h)\vec{y}. \tag{69}\]
For \(0<\xi<c\), it can be shown that the real parts of the eigenvalues are negative, e.g. \({\rm Re}\lambda^{FEM}(\xi,h)=\frac{-L\xi|\lambda^{FEM}(\xi,h)y_{1,N+1}|^{2}}{hc\left(|\lambda^{FEM}(\xi,h)\vec{y}_{1,h}|^{2}-\vec{y}_{1,h}^{\rm T}A\vec{y}_{1,h}\right)}<0\).
**Lemma 10**.: _The system (52), or (68), is dissipative, i.e._
\[\frac{dE_{h}^{FEM}}{dt}+\xi|\dot{v}_{N+1}|^{2}=0. \tag{70}\]
**Proof**: Multiply both sides of (52) by \(h\dot{v}_{j}\) and take sum from \(j=1\) to \(N:\)
\[\sum_{j=1}^{N}\frac{\ddot{v}_{j+1}+4\ddot{v}_{j}+\ddot{v}_{j-1}}{6}h\dot{v}_{j}-c^{2}\sum_{j=1}^{N}\frac{v_{j+1}-2v_{j}+v_{j-1}}{h}\dot{v}_{j}=0, \tag{71}\]
where
\[h\sum_{j=1}^{N}\frac{\ddot{v}_{j+1}+4\ddot{v}_{j}+\ddot{v}_{j-1}}{6}\dot{v}_{ j}=-\frac{h}{6}(\ddot{v}_{N+1}+\ddot{v}_{N})\dot{v}_{N+1}+\frac{h}{3}\sum_{j=0}^{N} \ddot{v}_{j}\dot{v}_{j}+\frac{h}{6}\sum_{j=0}^{N}(\dot{v}_{j+1}+\dot{v}_{j})( \ddot{v}_{j+1}+\ddot{v}_{j}), \tag{72}\]
\[-hc^{2}\sum_{j=1}^{N}\frac{v_{j+1}-2v_{j}+v_{j-1}}{h^{2}}\dot{v}_{j}=\frac{c^{2}}{h}(v_{N}-v_{N+1})\dot{v}_{N+1}+\frac{c^{2}}{h}\sum_{j=0}^{N}(v_{j+1}-v_{j})(\dot{v}_{j+1}-\dot{v}_{j}). \tag{73}\]
Substituting (72) and (73) into (71) yields (70). \(\square\)
Following (5), define the following Lyapunov functional
\[L_{h}^{FEM}(t):=E_{h}^{FEM}+\delta F_{h}^{FEM}(t) \tag{74}\]
where the auxiliary function \(F_{h}^{FEM}(t)\) is defined by
\[F_{h}^{FEM}(t)=h\sum_{j=1}^{N}\frac{\dot{v}_{j+1}+4\dot{v}_{j}+\dot{v}_{j-1}}{6 }j\frac{v_{j+1}-v_{j-1}}{2}+\frac{L}{6}\left(2\dot{v}_{N+1}+\dot{v}_{N}\right) (v_{N+1}-v_{N}). \tag{75}\]
**Lemma 11**.: _For \(0<\delta<\frac{c}{L}\), \(L_{h}^{FEM}\) is equivalent to \(E_{h}^{FEM}\), i.e._
\[\left(1-\frac{L\delta}{c}\right)E_{h}^{FEM}\leq L_{h}^{FEM}\leq\left(1+\frac{L \delta}{c}\right)E_{h}^{FEM}. \tag{76}\]
**Proof:** Applying the Cauchy-Schwarz inequality leads to
\[\begin{array}{ll}|F_{h}^{FEM}(t)|&\leq\frac{Lh}{6}\left|\frac{2\dot{v}_{N+1}+\dot{v}_{N}}{2}\right|^{2}+\frac{Lh}{6}\left|\frac{v_{N+1}-v_{N}}{h}\right|^{2}\\ &+\frac{Lh}{2}\sum\limits_{j=1}^{N}\left|\frac{v_{j+1}-v_{j}}{2h}\right|^{2}+\frac{Lh}{2}\sum\limits_{j=1}^{N}\left|\frac{\dot{v}_{j+1}+4\dot{v}_{j}+\dot{v}_{j-1}}{6}\right|^{2}\\ &\leq\frac{Lh}{9}\left|\frac{\dot{v}_{N+1}+\dot{v}_{N}}{2}\right|^{2}+\frac{Lh}{4}\left|\frac{v_{N+1}-v_{N}}{h}\right|^{2}+\frac{Lh}{18}|\dot{v}_{N+1}|^{2}+\frac{Lh}{4}\sum\limits_{j=1}^{N}\left|\frac{v_{j+1}-v_{j}}{h}\right|^{2}\\ &+\frac{Lh}{4}\sum\limits_{j=1}^{N}\left|\frac{v_{j}-v_{j-1}}{h}\right|^{2}+\frac{Lh}{9}\sum\limits_{j=1}^{N}\left|\frac{\dot{v}_{j+1}+\dot{v}_{j}}{2}\right|^{2}+\left|\frac{\dot{v}_{j}+\dot{v}_{j-1}}{2}\right|^{2}+\left|\dot{v}_{j}\right|^{2}\\ &\leq\frac{L}{c}E_{h}^{FEM}(t).\end{array} \tag{77}\]
Therefore, (76) follows. \(\square\)
**Lemma 12**.: _The function \(F_{h}^{FEM}\) satisfies_
\[\frac{dF_{h}^{FEM}}{dt}\leq\ \ -\left(1-\frac{\kappa^{FEM}}{12}\right)E_{h}^ {FEM}(t)+\frac{L}{2}\left(1+\frac{\xi^{2}}{c^{2}}\right)|\hat{v}_{N+1}|^{2} \tag{78}\]
_where \(\kappa^{FEM}=\max\left\{h^{2}\left|\lambda_{j}^{FEM}(\xi,h)\right|^{2}\right\}_{j=1}^{N+1}.\)_
**Proof:** Differentiating \(F_{h}^{FEM}(t)\) along the solutions of (52) leads to
\[\begin{array}{ll}\dot{F}_{h}^{FEM}(t)=h\sum\limits_{j=1}^{N}\frac{\ddot{v}_{j+1}+4\ddot{v}_{j}+\ddot{v}_{j-1}}{6}j\frac{v_{j+1}-v_{j-1}}{2}+h\sum\limits_{j=1}^{N}\frac{\dot{v}_{j+1}+4\dot{v}_{j}+\dot{v}_{j-1}}{6}j\frac{\dot{v}_{j+1}-\dot{v}_{j-1}}{2}\\ \ \ \ \ \ \ +\frac{L}{6}\left(2\ddot{v}_{N+1}+\ddot{v}_{N}\right)(v_{N+1}-v_{N})+\frac{L}{6}\left(2\dot{v}_{N+1}+\dot{v}_{N}\right)(\dot{v}_{N+1}-\dot{v}_{N})\\ =-\frac{c^{2}h}{2}\sum\limits_{j=0}^{N}\left|\frac{v_{j+1}-v_{j}}{h}\right|^{2}-\frac{h}{12}\sum\limits_{j=0}^{N}\left|\dot{v}_{j+1}+\dot{v}_{j}\right|^{2}-\frac{h}{6}\sum\limits_{j=1}^{N}\dot{v}_{j+1}\dot{v}_{j}+\frac{c^{2}L}{2}\left|\frac{v_{N+1}-v_{N}}{h}\right|^{2}\\ \ \ \ \ \ \ -c^{2}L\left|\frac{v_{N+1}-v_{N}}{h}\right|^{2}-\frac{L\xi}{h}\dot{v}_{N+1}(v_{N+1}-v_{N})+L\frac{(\dot{v}_{N+1}+\dot{v}_{N})^{2}}{12}+\frac{L}{6}\dot{v}_{N+1}\dot{v}_{N}\\ \ \ \ \ \ \ +L\frac{2\dot{v}_{N+1}^{2}-\dot{v}_{N+1}\dot{v}_{N}-\dot{v}_{N}^{2}}{6}\\ =-E_{h}^{FEM}(t)+\frac{h}{6}\sum\limits_{j=0}^{N}\left[|\dot{v}_{j}|^{2}-\dot{v}_{j+1}\dot{v}_{j}\right]-\frac{c^{2}L}{2}\left|\frac{v_{N+1}-v_{N}}{h}\right|^{2}\\ \ \ \ \ \ \ -\frac{L\xi}{h}\dot{v}_{N+1}(v_{N+1}-v_{N})+L\frac{5\dot{v}_{N+1}^{2}+2\dot{v}_{N+1}\dot{v}_{N}-\dot{v}_{N}^{2}}{12}\\ \leq-E_{h}^{FEM}(t)+\frac{h}{6}\sum\limits_{j=0}^{N}\left[|\dot{v}_{j}|^{2}-\dot{v}_{j+1}\dot{v}_{j}\right]+\frac{L}{2}\left(1+\frac{\xi^{2}}{c^{2}}\right)|\dot{v}_{N+1}|^{2}.\end{array}\]
(78) follows from this together with the following inequality
\[\begin{array}{ll}\frac{h}{6}\sum\limits_{j=0}^{N}\left[|\dot{v}_{j}|^{2}-\dot{v}_{j}\dot{v}_{j+1}\right]&=\frac{h}{12}\sum\limits_{j=0}^{N}|\dot{v}_{j}-\dot{v}_{j+1}|^{2}-\frac{h}{12}|\dot{v}_{N+1}|^{2}\\ &\leq\frac{\kappa^{FEM}h}{12c^{2}}\sum\limits_{j=0}^{N}|\dot{v}_{j}|^{2}\\ &\leq\frac{\kappa^{FEM}}{12c^{2}}E_{h}^{FEM}(t).\ \square\end{array}\]
**Theorem 8**.: _Let \(0<\delta<\frac{2\xi c^{2}}{L(c^{2}+\xi^{2})}\). Then, for all initial conditions \(\vec{v}^{0},\vec{v}^{1}\in\mathbb{R}^{N+2}\), the energy \(E_{h}^{FEM}(t)\) corresponding to (52) satisfies, for all \(t>0\),_
\[E_{h}^{FEM}(t)\leq\frac{c+\delta L}{c-\delta L}e^{-\delta\left(1-\frac{L\delta}{c }\right)\left(1-\frac{\kappa^{FEM}}{12c^{2}}\right)t}E_{h}^{FEM}(0). \tag{79}\]
**Proof:** Since \(\frac{dL_{h}^{FEM}(t)}{dt}=\frac{dE_{h}^{FEM}(t)}{dt}+\delta\frac{dF_{h}^{FEM}(t)}{dt}\), by (70) and Lemma 12,
\[\begin{array}{ll}\frac{dL_{h}^{FEM}(t)}{dt}&\leq-\left(\xi-\frac{L\delta}{2}\left(1+\frac{\xi^{2}}{c^{2}}\right)\right)|\dot{v}_{N+1}|^{2}-\delta\left(1-\frac{\kappa^{FEM}}{12c^{2}}\right)E_{h}^{FEM}(t)\\ &\leq-\delta\left(1-\frac{L\delta}{c}\right)\left(1-\frac{\kappa^{FEM}}{12c^{2}}\right)L_{h}^{FEM}(t).\end{array}\]
By Grönwall's inequality,
\[L_{h}^{FEM}(t)\leq e^{-\delta\left(1-\frac{L\delta}{c}\right)\left(1-\frac{\kappa^{FEM}}{12c^{2}}\right)t}L_{h}^{FEM}(0), \tag{80}\]
and then (79) follows from Lemma 11. \(\Box\)
## 5 Maximal Decay Rate and Implementation of Direct Fourier Filtering
For \(N\) large enough (or \(h\) small enough) and \(1\leq k\leq 2N+2\), Theorems 5 and 6 lead to
\[\begin{array}{ll}\frac{h^{2}}{4c^{2}}\max\left\{\left|\lambda_{k}^{FD}(\xi,h )\right|^{2}\right\}&=\frac{\kappa^{FD}}{4c^{2}}=:\Gamma^{FD}+O(h),\\ \frac{h^{2}}{12c^{2}}\max\left\{\left|\lambda_{k}^{FEM}(\xi,h)\right|^{2} \right\}&=\frac{\kappa^{FEM}}{12c^{2}}=:\Gamma^{FEM}+O(h).\end{array}\]
where \(\Gamma\) is the Fourier filtering parameter. Therefore, for both model reductions, \(0<\Gamma^{FD},\Gamma^{FEM}<1\). Now, consider the space of filtered solutions for (40) and (68):
\[\mathcal{C}_{h}(\Gamma):=\left\{\vec{z}_{h}=\sum_{k}a_{k}e^{i\sqrt{\lambda_{k}(\xi,h)}\,t}\vec{\phi}_{k}(h)\right\},\]
where the sum extends only over the retained (filtered) modes, i.e. those \(k\) for which \(\frac{h^{2}|\lambda_{k}(\xi,h)|^{2}}{4c^{2}}\leq\Gamma\) (FD) or \(\frac{h^{2}|\lambda_{k}(\xi,h)|^{2}}{12c^{2}}\leq\Gamma\) (FEM).
By Theorems 7 and 8, exponential stability as \(h\to 0\) is immediate once some filtering \(0<\Gamma<1\) is applied. If there is no filtering, i.e. \(\Gamma\approx 1\), the exponential stability is at stake since \(1-\Gamma\to 0\) as \(h\to 0\), e.g. see (67) and (80).
Notice that for each filtered solution in Theorems 7 and 8, the decay rate \(\sigma\) and the constant \(\delta\) are functions of \(\Gamma\) and \(\xi\):
\[\begin{array}{l}\sigma(\Gamma,\xi)=\left\{\begin{array}{ll}\delta\left(1-\frac{L\delta}{c}\right)\left(1-\Gamma^{FD}\right),&FD\\ \delta\left(1-\frac{L\delta}{c}\right)\left(1-\Gamma^{FEM}\right),&FEM\end{array}\right.\\ \delta(\xi)=\frac{c}{2L}\,\min\left(1,\frac{2c\xi}{c^{2}+\xi^{2}}\right),\quad FD\ \mbox{and}\ FEM.\end{array} \tag{81}\]
Note that \(\delta\) reaches its maximum at \(\xi=c\) :
\[\delta_{max}(\xi=c)=\frac{c}{2L},\qquad\mbox{FD and FEM}\]
at which \(\sigma\) reaches its maximum
\[\sigma_{max}\left(\delta_{max}\right)=\frac{c}{4L}(1-\Gamma),\qquad\mbox{FD and FEM}. \tag{82}\]
Our results perfectly mimic the maximal decay rate result (6) in Theorem 1.
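As a quick numerical illustration of (81)-(82), the short Python sketch below evaluates \(\delta(\xi)\) and \(\sigma(\Gamma,\xi)\); the values \(c=L=1\) and \(\Gamma=0.5\) are illustrative assumptions, not taken from the experiments.

```python
# Illustrative check of the decay-rate formulas (81)-(82), assuming
# c = L = 1 and a filtering parameter Gamma = 0.5.
c, L, Gamma = 1.0, 1.0, 0.5

def delta(xi):
    return c / (2 * L) * min(1.0, 2 * c * xi / (c**2 + xi**2))

def sigma(xi):
    d = delta(xi)
    return d * (1 - L * d / c) * (1 - Gamma)

# delta peaks at xi = c, where sigma = c/(4L) * (1 - Gamma) = 0.125.
print(sigma(0.5), sigma(1.0), sigma(2.0))
```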
## 6 Numerical Experiments
To show the strength of the Finite Difference (38) and Finite Element-based (52) model reductions of (1), with and without filtering, we consider a sufficiently large number of nodes, e.g. \(N=30\), for which the system matrix has \(60\) complex eigenvalues in total. Therefore, \(h=1/31\approx 0.0322\). For simplicity, consider \(c=1\) and \(L=1\). For the simulations, the following set of high-frequency initial conditions is considered
\[\left\{\begin{array}{l}v(x_{j},0)=v_{j}^{0}=10^{-3}\sum_{i=20}^{30}\sin(i \pi x_{j})\\ \dot{v}(x_{j},0)=v_{j}^{1}=10^{-3}\sum_{i=20}^{30}\sin(i\pi x_{j}).\end{array}\right. \tag{83}\]
The eigenvalues \(\lambda_{k}(\xi)\) of the PDE in (4) and the approximated eigenvalues \(\lambda_{k}^{FD}(\xi,h)\) and \(\lambda_{k}^{FEM}(\xi,h)\) of the FD and FEM models, respectively, are shown in Fig. 1. For \(\xi=0.9<c=1\), the spectral plot shows that \(40\) of the \(60\) complex conjugate eigenvalues in total are high-frequency and filtered out. The corresponding filtering parameters are \(\Gamma^{FEM}=1.4133\) and \(\Gamma^{FD}=1.017\). Therefore, the maximal decay rates in (81) are calculated as \(\sigma_{max}^{FEM}=0.2205\) and \(\sigma_{max}^{FD}=0.1864\). This case can be compared to the maximal decay rate analysis in Theorem 1, where the optimal decay rate is found to be \(\sigma_{max}=0.25\).
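For readers wishing to reproduce such spectral plots, the following minimal Python sketch assembles a first-order system for the FD reduction and applies the filtering rule. The interior stencil is the standard discrete Laplacian and the boundary row is an assumption, so it may differ in detail from (38); only the tip-damping entry \(B_{N+1,N+1}=-\xi/h\) is taken from the text.

```python
import numpy as np

# Sketch of the eigenvalue/filtering computation for the FD reduction.
# K: standard tridiagonal discrete Laplacian (boundary row is an assumption);
# B: carries the tip damping B_{N+1,N+1} = -xi/h as in the text.
c, L, xi, N = 1.0, 1.0, 0.9, 30
h = L / (N + 1)

K = (np.diag(-2.0 * np.ones(N + 1)) +
     np.diag(np.ones(N), 1) + np.diag(np.ones(N), -1)) * c**2 / h**2
B = np.zeros((N + 1, N + 1))
B[N, N] = -xi / h

# First-order form y' = A y with y = (v, v').
A = np.block([[np.zeros((N + 1, N + 1)), np.eye(N + 1)],
              [K, B]])
lam = np.linalg.eigvals(A)

# Direct Fourier filtering: retain modes with h^2 |lambda|^2 / (4 c^2) < Gamma.
Gamma = 0.8
keep = h**2 * np.abs(lam)**2 / (4 * c**2) < Gamma
print(f"{keep.sum()} of {lam.size} modes kept;",
      f"max Re(lambda) among kept modes = {lam[keep].real.max():.4f}")
```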
Next, simulations of \(v(x,t)\), \(\dot{v}(x,t)\), \(E(t)\), and the tip velocity feedback \(\dot{v}(L,t)\), obtained with the FD and FEM algorithms for \(\xi=0.9<c=1\), without and with filtering, are shown in Figures 2-3 for the set of initial conditions (83).
Note that simulations of various initial data (box, sawtooth, sinusoidal, pinch, square, and triangle types), with and without filtering, for both the FD and FEM algorithms are demonstrated in real time in the recently published Wolfram Demonstrations Project [24].
## 7 Conclusions
In conclusion, the Lyapunov approach laid out in this paper can be used to derive an explicit decay rate in terms of the filtering parameter \(\Gamma\) and the feedback gain \(\xi\). Our findings are in line with the conclusions in [9] that FEM provides a more accurate approximation of (1), and a better decay rate, than FD. Our approach can be adapted easily to most PDE models based on the wave [17, 18] and beam equations [1, 13, 12]. Indeed, an immediate application under consideration is the derivation of spectral estimates for coupled systems, where there are several branches of eigenvalues.
|
2303.13351
|
DBLP-QuAD: A Question Answering Dataset over the DBLP Scholarly
Knowledge Graph
|
In this work we create a question answering dataset over the DBLP scholarly
knowledge graph (KG). DBLP is an on-line reference for bibliographic
information on major computer science publications that indexes over 4.4
million publications published by more than 2.2 million authors. Our dataset
consists of 10,000 question answer pairs with the corresponding SPARQL queries
which can be executed over the DBLP KG to fetch the correct answer. DBLP-QuAD
is the largest scholarly question answering dataset.
|
Debayan Banerjee, Sushil Awale, Ricardo Usbeck, Chris Biemann
|
2023-03-23T15:29:21Z
|
http://arxiv.org/abs/2303.13351v3
|
# DBLP-QuAD: A Question Answering Dataset over the DBLP Scholarly Knowledge Graph
###### Abstract
In this work we create a question answering dataset over the DBLP scholarly knowledge graph (KG). DBLP is an on-line reference for bibliographic information on major computer science publications that indexes over 4.4 million publications published by more than 2.2 million authors. Our dataset consists of 10,000 question answer pairs with the corresponding SPARQL queries which can be executed over the DBLP KG to fetch the correct answer. DBLP-QuAD is the largest scholarly question answering dataset.
Question Answering, Scholarly Knowledge Graph, DBLP, Dataset
Several KGQA datasets exist [6]. However, not all datasets contain a mapping of natural language questions to the logical form (e.g. SPARQL, \(\lambda\)-calculus, S-expression). Some simply contain the question and the eventual answer. Such datasets cannot be used to train models in the task of semantic parsing.
In this work, we present a KGQA dataset called DBLP-QuAD, which consists of 10,000 questions with corresponding SPARQL queries. The question formation process begins with human-written templates, and later, we machine-generate more questions from these templates. DBLP-QuAD consists of a variety of simple and complex questions and also tests the compositional generalisation of the models. DBLP-QuAD is the largest scholarly KGQA dataset being made available to the public5.
Footnote 5: [https://doi.org/10.5281/zenodo.7643971](https://doi.org/10.5281/zenodo.7643971)
## 2 Related Work
ORKG-QA benchmark [7] is the first scholarly KGQA dataset grounded to ORKG. The dataset was prepared using the ORKG API and focuses on the content of academic publications structured in comparison tables. The dataset is relatively small in size with only 100 question-answer pairs covering only 100 research publications.
Several other QA datasets exist, both for IR-based QA [8, 9] and KGQA [10, 11] approaches. Several different approaches have been deployed to generate the KGQA datasets, ranging from fully manual to fully machine-generated. However, most datasets lie in between and use a combination of manual and automated processes.
A clear separation can be created between datasets that contain logical forms and those that do not. Datasets that do not require logical forms can be crowd-sourced and such datasets are generally large in size. Crowd sourcing is generally not possible for annotating logical forms because this task requires high domain expertise and it is not easy to find such experts on crowd sourcing platforms. We focus on datasets that contain logical forms.
Free917 and QALD [12, 13] datasets were created manually by domain experts; however, their sizes are relatively small (917 and 806 questions, respectively).
WebQuestionsSP and ComplexWebQuestions [14, 15] are developed using existing datasets. WebQuestionsSP is a semantic parsing dataset developed by using questions from WebQuestions [16]. Yih et al. [14] developed a dialogue-like user interface which allowed five expert human annotators to annotate the data in stages.
ComplexWebQuestions is a collection of 34,689 complex questions paired with answers and SPARQL queries grounded to the Freebase KG. The dataset builds on WebQuestionsSP by sampling question-query pairs from the dataset and automatically generating questions and complex SPARQL queries with composition, conjunction, superlative, and comparative functions. The machine-generated questions are manually annotated into natural questions and validated by 200 AMT crowd workers.
The OVERNIGHT (ON) approach is a semantic parsing dataset generation framework introduced by Wang et al. [17]. In this approach, the question-logical form pairs are collected with a three step process. In the first step, the logical forms are generated from a KG. Secondly, the logical forms are converted automatically into canonical questions. These canonical questions
are grammatically incorrect but successfully carry the semantic meaning. Lastly, the canonical questions are converted into natural forms via crowdsourcing. Following are some of the datasets developed using this approach.
GraphQuestions [18] consists of 5,166 natural questions accompanied by two paraphrases of the original question, an answer, and a valid SPARQL query grounded against the Freebase KG. GraphQuestions uses a semi-automated three-step algorithm to generate the natural questions for the KG.
LC-QuAD 1.0 [10] is another semantic parsing dataset, for the DBpedia KG. LC-QuAD 1.0 is relatively larger in size, with 5,000 natural language English questions and corresponding SPARQL queries. The generation process starts with a set of manually created SPARQL query templates, a list of seed entities, and a whitelist of predicates. Using the list of seed entities, two-hop subgraphs from DBpedia are extracted. The SPARQL query templates consist of placeholders for both entities and predicates, which are instantiated using triples from the subgraph. These SPARQL queries are then used to instantiate natural question templates, which form the basis for manual paraphrasing by humans.
LC-QuAD 2.0 [19] is the second iteration of LC-QuAD 1.0 with 30,000 questions, their paraphrases and their corresponding SPARQL queries compatible with both Wikidata and DBpedia KGs. Similar to LC-QuAD 1.0, in LC-QuAD 2.0 a sub-graph is generated using seed entities and a SPARQL query template is selected based on whitelist predicates. Then, the query template is instantiated using the sub-graph. Next, a template question is generated from the SPARQL query which is then verbalised and paraphrased by AMT crowd workers. LC-QuAD 2.0 has more questions and more variation compared to LC-QuAD 1.0 with paraphrases to the natural questions.
GrailQA [20] extends the approach in [18] to generate 64,331 question-S-expression pairs grounded to the Freebase Commons KG. Here, S-expressions are linearized forms of graph queries. Query templates extracted from graph queries generated from the KG are used to generate canonical logical forms grounded to compatible entities. The canonical logical forms are then validated by a graduate student as to whether they represent a plausible user query or not. Next, another graduate student annotated each validated canonical logical form with a canonical question. Finally, 6,685 Amazon Mechanical Turk workers wrote five natural paraphrases for each canonical question, which were further validated by multiple independent crowd workers.
KQA Pro [21] is a large collection of 117,000 complex questions paired with SPARQL queries for the Wikidata KG. KQA Pro dataset also follows the OVERNIGHT approach where firstly facts from the KG are extracted. Next, canonical questions are generated with corresponding SPARQL queries, ten answer choices and a golden answer. The canonical questions are then converted into natural language with paraphrases using crowd sourcing.
CFQ [22] (Compositional Freebase Questions) is a semantic parsing dataset developed completely using synthetic generation approaches that consists of simple natural language questions with corresponding SPARQL query against the Freebase KG. CFQ contains 239,357 English questions which are generated using hand-crafted grammar and inference rules with a corresponding logical form. Next, resolution rules are used to map the logical forms to SPARQL queries. The CFQ dataset was specifically designed to measure compositional generalization.
In this work, we loosely follow the OVERNIGHT approach to create a large scholarly KGQA dataset for the DBLP KG.
## 3 DBLP KG
DBLP, which used to stand for Data Bases and Logic Programming6, was created in 1993 by Michael Ley at the University of Trier, Germany [23]. The service was originally designed as a bibliographic database for research papers and proceedings from the fields of database systems and logic programming. Over time, the service has grown in size and scope, and today includes bibliographic information on a wide range of topics within the field of computer science. The DBLP RDF data models a person-publication graph shown in Figure 1.
Footnote 6: [https://en.wikipedia.org/wiki/DBLP](https://en.wikipedia.org/wiki/DBLP)
The DBLP KG contains two main entities: _Person_ and _Publication_, whereas other metadata, such as journals, conferences, and author affiliations, are currently only string literals. Henceforth, we use the terms _person_ and _creator_ interchangeably. At the time of its release, the RDF dump consisted of 2,941,316 person entities, 6,010,605 publication entities, and 252,573,199 RDF triples. DBLP currently does not provide a SPARQL endpoint, but the RDF dump can be downloaded and a local SPARQL endpoint such as Virtuoso Server can be set up to run SPARQL queries against the DBLP KG.
The live RDF data model on the DBLP website follows the schema shown in Figure 1. However, the RDF snapshots available for download have the _coCreatorWith_ and _authorOf_ predicates missing. Although these predicates are missing, the _authoredBy_ predicate can be used to derive the missing relations. DBLP-QuAD is based on the DBLP KG schema of the downloadable RDF graph.
## 4 Dataset Generation Framework
Figure 1: Example of entries in the DBLP KG with its schema
In this work, the aim is to generate a large variety of scholarly question and corresponding SPARQL query pairs for the DBLP KG. Initially, a small set of templates \(T\) is created, each containing a SPARQL query template \(s_{t}\) and a few semantically equivalent natural language question templates \(Q_{t}\). The question and query templates are created such that they cover a wide range of scholarly metadata user information needs while also being answerable using a SPARQL query against the DBLP KG. Next, we synthetically generate a large set of question-query pairs \((q_{i},s_{i})\) suitable for training a neural network semantic parser.
The core methodology of the dataset generation framework encompasses instantiating the templates using literals of subgraphs sampled from the KG. Moreover, to capture different representations of the literal values from a human perspective, we randomly mix in different augmentations of these textual representations. The dataset generation workflow is shown in Figure 2.
### Templates
Figure 2: **Motivating Example. The generation process starts with (1) selection of a template tuple followed by (2) subgraph generation. Then, literals in the subgraph are (3) augmented before being used to (4) instantiate the selected template tuple. The generated data is (5) filtered based on whether it produces answers or not.**
The first step in the dataset generation process is the creation of a template set. After carefully analyzing the ontology of the DBLP KG, we manually wrote 98 pairs, each consisting of a valid SPARQL query template and a set of semantically equivalent natural language question templates. The template set was written by one author and verified for correctness by another author. The query and question templates consist of placeholder markers instead of URIs, entity surface forms or literals. For example, in Figure 2 (Section 1), the SPARQL query template includes the placeholders \(?c1\) and \([VENUE]\) for a DBLP person URI and a venue literal respectively. Similarly, the question templates include placeholders \([CREATOR\_NAME]\) and \([VENUE]\) for a creator name and a venue literal respectively. The template set covers the two entities creator and publication, and additionally the foreign entity bibtex type. The templates also cover the \(11\) different predicates of the DBLP KG.
The template set consists of template tuples. A template tuple \(t=(s_{t},Q_{t},E_{t},P_{t})\) is composed of a SPARQL query template \(s_{t}\), a set of semantically equivalent natural language question templates \(Q_{t}\), a set of entity placeholders \(E_{t}\), and a set of predicates \(P_{t}\) used in \(s_{t}\). We also add a boolean indicating whether the query template is temporal and another boolean indicating whether to use the template while generating the \(train\) set. Each template tuple contains between four and seven paraphrased question templates, offering wide linguistic diversity. While most of the question templates use the _"Wh-"_ question keyword, we also include instruction-style paraphrases.
We group the template tuples as creator-focused or publication-focused \(\epsilon\) and further group them by query types \(\delta\). We have \(10\) different query types and they include Single Fact, Multiple Facts, Boolean, Negation, Double Negation, Double Intent, Union, Count, Superlative/Comparative, and Disambiguation. The question types are discussed in Section 4.6 with examples. The distribution of templates per entity and query type is shown in Table 1. During dataset generation, for each data instance we sample a template tuple from the template set using stratified sampling maintaining equal distribution of entity types and query types.
### Subgraph generation
The second part of the dataset generation framework is subgraph generation. Given a graph \(G=(V,E)\) where \(V\) are the vertices, and \(E\) are edges, we draw a subgraph \(g=(v,e)\) where
\begin{table}
\begin{tabular}{c|c|c|c} \hline Query Type & Creator-focused & Publication-focused & Total \\ \hline Single Fact & 5 & 5 & 10 \\ Multiple Facts & 7 & 7 & 14 \\ Boolean & 6 & 6 & 12 \\ Negation & 4 & 4 & 8 \\ Double Negation & 4 & 4 & 8 \\ Double Intent & 5 & 4 & 9 \\ Union & 4 & 4 & 8 \\ Count & 6 & 5 & 11 \\ Superlative/Comparative & 6 & 6 & 12 \\ Disambiguation & 3 & 3 & 6 \\ \hline Total & 50 & 48 & 98 \\ \hline \end{tabular}
\end{table}
Table 1: Total number of template tuples per query type grouped by entity type
\(v\subset V\), \(e\subset E\). For the DBLP KG, \(V\) are the creator and publication entity URIs or literals, and the \(E\) are the predicates of the entities.
The subgraph generation process starts with random sampling of a publication entity \(v_{i}\) from the DBLP KG. We only draw from the set of publication entities, as the RDF snapshot available for download has the \(authorOf\) and \(coCreatorWith\) predicates missing for creator entities. As such, a subgraph centered on a creator entity would not have end vertices that can be expanded further. With the sampled publication entity \(v_{i}\), we iterate through all the predicates \(e\) to extract creator entities \(v^{\prime}\) as well as the literal values. We further expand the creator entities and extract their literal values to form a two-hop subgraph \(g=(v,e)\), as shown in Figure 2 (Section 2).
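The sketch below illustrates this two-hop expansion in Python; the endpoint URL and the exact schema predicate URI are assumptions, not the paper's released code.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Sketch of the two-hop subgraph extraction. Assumes a local Virtuoso
# endpoint serving the downloadable DBLP RDF dump.
endpoint = SPARQLWrapper("http://localhost:8890/sparql")
endpoint.setReturnFormat(JSON)

def one_hop(uri):
    """Return all (predicate, object) pairs of an entity."""
    endpoint.setQuery(f"SELECT ?p ?o WHERE {{ <{uri}> ?p ?o }}")
    rows = endpoint.query().convert()["results"]["bindings"]
    return [(r["p"]["value"], r["o"]["value"]) for r in rows]

def two_hop_subgraph(publication_uri):
    """Expand a publication and each of its creators by one hop."""
    authored_by = "https://dblp.org/rdf/schema#authoredBy"  # assumed URI
    edges = [(publication_uri, p, o) for p, o in one_hop(publication_uri)]
    creators = [o for _, p, o in edges if p == authored_by]
    for creator in creators:
        edges += [(creator, p, o) for p, o in one_hop(creator)]
    return edges
```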
### Template Instantiation
Using the generated subgraph and the sampled template tuple, the template tuple is instantiated with entity URIs and literal values from the subgraph. In the instantiation process, a placeholder marker in a string is replaced by the corresponding text representation.
For the SPARQL query template \(s_{t}\), we instantiate the creator/publication placeholder markers with DBLP creator/publication entity URIs, or with literal values for affiliations and conferences or journals, to create a valid SPARQL query \(s\) that returns answers when run against the DBLP KG SPARQL endpoint.
In the case of natural language question templates, we randomly sample two from the set of question templates, \(q_{t}^{1},q_{t}^{2}\in Q_{t}\), and instantiate each using only the literal values from the subgraph to form one main natural language question \(q^{1}\) and one natural language question paraphrase \(q^{2}\). In natural language, humans can write the literal strings in various forms. Hence, to introduce this linguistic variation, we randomly mix in alternate string representations of these literal values in both natural language questions. The data augmentation process allows us to add heuristically manipulated alternate literal representations to the natural questions. An example of an instantiated template is shown in Figure 2 (Section 3).
### Data Augmentation
For the template instantiation process, we perform simple string manipulations to generate alternate literal representations. Then, we randomly select between the original literal representation and the alternate representation to instantiate the natural language questions. For each literal type, we apply different string manipulation techniques which we describe below.
**Names**: For names we generate four different alternatives involving switching parts of names or keeping only initials of the names (a code sketch of this heuristic follows the list below). Consider the name _John William Smith_, for which we produce _Smith, John William_, _J. William Smith_, _John W. Smith_, and _Smith, J. William_.
**Venues**: Venues can be represented using either their short form or their full form, e.g. _ECIR_ or _European Conference on Information Retrieval_. In DBLP, venues are stored in their short form. We use a curated list of conferences and journals7 containing each short form and its equivalent full form to obtain the full venue names.
Footnote 7: [http://portal.core.edu.au/conf-ranks/?search=&by=all&source=CORE2021&sort=atitle&page=1](http://portal.core.edu.au/conf-ranks/?search=&by=all&source=CORE2021&sort=atitle&page=1)
**Duration**: About 20% of the templates contain temporal queries, and some of them require dummy numbers to represent a duration. For example, the question _"In the last five years, which papers did Mante S. Nieuwland publish?"_ uses the dummy value _five_. We randomly select between the numerical representation and the textual representation for the dummy duration value.
**Affiliation**: In natural language questions, usually only the institution name is used to refer to the affiliation of an author. However, the DBLP KG stores the full address of an institution, including city and country names. Hence, using RegEx, we extract the institution names and randomly select between the institution name and the full institution address in the instantiation process.
**Keywords**: For disambiguation queries, we do not use the full title of a publication but rather a part of it by extracting keywords. For this purpose, we use SpaCy's Matcher API8 to extract noun phrases from the title.
Footnote 8: [https://spacy.io/api/matcher/](https://spacy.io/api/matcher/)
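A Python sketch of the name-variant heuristic referenced above; the exact rules are an assumption inferred from the listed example.

```python
import random

# Name-variant generator; reproduces the four alternates listed for
# "John William Smith" (the general rules are an assumption).
def name_variants(full_name: str):
    parts = full_name.split()
    if len(parts) < 3:
        return [full_name]
    first, middle, last = parts[0], parts[1], parts[-1]
    return [
        f"{last}, {first} {middle}",      # Smith, John William
        f"{first[0]}. {middle} {last}",   # J. William Smith
        f"{first} {middle[0]}. {last}",   # John W. Smith
        f"{last}, {first[0]}. {middle}",  # Smith, J. William
    ]

print(random.choice(name_variants("John William Smith")))
```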
```
GenerateDataset(T, x, N, G)
  inputs : template set T; dataset split to generate x;
           size of dataset to generate N; KG to sample subgraphs from G
  output : dataset D

  D <- {}
  n <- (N / |ε|) / |δ|
  foreach e ∈ ε do
      foreach s ∈ δ do
          i <- 0
          T_es <- T[e][s]
          if x == train then
              T_es <- Filter(T_es, test_only == True)
          while i < n do
              g1, g2 <- SampleSubgraph(G, 2)
              t_i <- random.sample(T_es)
              d_i <- Instantiate(t_i, g1, g2, x)
              answer <- Query(d_i)
              if answer then
                  D <- D ∪ {d_i}
                  i <- i + 1
  return D
```
**Algorithm 1** Dataset Generation Process
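For readers who prefer code, the following Python rendering of Algorithm 1 uses hypothetical stubs for the framework components of Sections 4.2-4.5.

```python
import random

# Python rendering of Algorithm 1. The three helpers below are hypothetical
# stubs standing in for the framework components of Sections 4.2-4.5.
def sample_subgraph(kg, k):                  # Section 4.2: subgraph generation
    return [kg for _ in range(k)]

def instantiate(template, g1, g2, split):    # Sections 4.3-4.4: instantiation
    return {"template": template, "graphs": (g1, g2), "split": split}

def query(instance):                         # Section 4.5: run SPARQL, check answer
    return True

def generate_dataset(templates, split, size, kg):
    """templates: {entity_group: {query_type: [template tuples]}}."""
    dataset = []
    n = (size // len(templates)) // len(next(iter(templates.values())))
    for by_type in templates.values():       # creator / publication groups
        for pool in by_type.values():        # the 10 query types
            if split == "train":             # withhold test-only templates
                pool = [t for t in pool if not t.get("test_only")]
            count = 0
            while count < n:
                g1, g2 = sample_subgraph(kg, 2)
                d = instantiate(random.choice(pool), g1, g2, split)
                if query(d):                 # keep only answerable instances
                    dataset.append(d)
                    count += 1
    return dataset
```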
### Dataset Generation
For each data instance \(d_{i}\), we sample \(2\) subgraphs (_SampleSubgraph(G,2)_) and instantiate a template tuple \(t_{i}\) (_Instantiate(\(t_{i}\), \(g_{1}\), \(g_{2}\), x)_). We sample \(2\) subgraphs as some template tuples need to be instantiated with two publication titles. Each data instance \(d_{i}=(s_{i},q_{i}^{1},q_{i}^{2},E_{i},P_{i},y,z)\) comprises a valid SPARQL query \(s_{i}\), one main natural language question \(q_{i}^{1}\), one semantically
equivalent paraphrase of the main question \(q_{i}^{2}\), a list of entities \(E_{i}\) used in \(s_{i}\), a list of predicates \(P_{i}\) used in \(s_{i}\), a Boolean indicating whether the SPARQL query is temporal or not \(y\), and another Boolean informing whether the SPARQL query is found only in \(valid\) and \(test\) sets \(z\). We generate an equal number \(n\) of questions for each entity group \(\epsilon\) equally divided for each query type \(\delta\).
To foster a focus on generalization ability, we manually marked \(20\) template tuples to withhold during generation of the \(train\) set. However, we use all the template tuples in the generation of \(valid\) and \(test\) sets. Furthermore, we also withhold \(2\) question templates when generating \(train\) questions but use all question templates when generating \(valid\) and \(test\) sets. This controlled generation process allows us to withhold some entity classes, predicates and paraphrases from \(train\) set. Our aim with this control is to create a scholarly KGQA dataset that facilitates development of KGQA models that adhere to _i.i.d_, _compositional_, and _zero-shot_[20] generalization.
Further, we validate each data instance \(d_{i}\) by running the SPARQL query \(s_{i}\) against the DBLP KG via a Virtuoso SPARQL endpoint9. We filter out data instances for which the SPARQL query is invalid or generates a blank response. A SPARQL query may generate a blank response if the generated subgraphs have missing literal values. In the DBLP KG, some of the entities have missing literals for predicates such as _primaryAffiliation_, _orcid_, _wikidata_, and so on. Additionally, we also store the answers produced by the SPARQL query against the DBLP KG, formatted according to the SPARQL 1.1 Query Results JSON format ([https://www.w3.org/TR/sparql11-results-json/](https://www.w3.org/TR/sparql11-results-json/)). The dataset generation process is summarized in Algorithm 1.
Footnote 9: [https://docs.openlinksw.com/virtuoso/whatisvirtuoso/](https://docs.openlinksw.com/virtuoso/whatisvirtuoso/)
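A minimal sketch of the answer-based filter, assuming results arrive in the W3C SPARQL 1.1 JSON results format referenced above:

```python
# An instance is kept only if its SPARQL query returns a non-empty result
# in the SPARQL 1.1 JSON results format.
def has_answer(results_json):
    if "boolean" in results_json:      # ASK queries always carry an answer
        return True
    bindings = results_json.get("results", {}).get("bindings", [])
    return len(bindings) > 0
```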
### Types of Questions
The dataset is composed of the following question types. The examples shown here are handpicked from the dataset.
* **Single fact**: These questions can be answered using a single fact. For example, "What year was 'SIRA: SNR-Aware Intra-Frame Rate Adaptation' published?"
* **Multiple facts**: These questions require connecting two or more facts to answer. For example, "In SIGCSE, which paper written by Darina Dicheva with Dichev, Christo was published?"
* **Boolean**: These questions answer whether a given fact is true or false. We can also add negation keywords to negate the questions. For example, "Does Szeider, Stefan have an ORCID?"
* **Negation**: These questions require negating the answer to a Boolean question. For example, "Did M. Hachani not publish in ICCP?"
* **Double negation**: These questions require negating the Boolean question twice, which returns the original Boolean answer. For example, "Wasn't the paper 'Multi-Task Feature Selection on Multiple Networks via Maximum Flows' not published in 2014?"
* **Count**: These questions pertain to the count of occurrence of facts. For example, "Count the authors of 'Optimal Symmetry Breaking for Graph Problems' who have Carnegie Mellon University as their primary affiliation."
* **Superlative/Comparative**: Superlative questions ask about the maximum or minimum for a subject, and comparative questions compare values between two subjects. We group both types together. For example, "Who has published the most papers among the authors of 'k-Pareto optimality for many-objective genetic optimization'?"
* **Union** questions cover a single intent but for multiple subjects at the same time. For example, "List all the papers that Pitas, Konstantinos published in ICML and ISCAS."
* **Double intent** questions pose two user intentions, usually about the same subject. For example, "In which venue was the paper 'Interactive Knowledge Distillation for image classification' published and when?"
* **Disambiguation** questions requires identifying the correct subject in the question. For example, "Which author with the name Li published the paper about Buck power converters?"
## 5 Dataset Statistics
DBLP-QuAD consists of 10,000 unique question-query pairs grouped into _train_, _valid_ and _test_ sets with a ratio of _7:1:2_. The dataset covers 13,348 creators and publications, and 11 predicates of the DBLP KG. For each query type in Table 1, the dataset includes 1,000 question-query pairs, equally divided between creator-focused and publication-focused questions. Additionally, among the questions in DBLP-QuAD, 2,350 are temporal questions.
**Linguistic Diversity.** In DBLP-QuAD, a natural language question has an average word length of 17.32 words and an average character length of 114.1 characters. Similarly, a SPARQL query has an average vocab length of 12.65 and an average character length of 249.48 characters. Between the natural language question paraphrases, the average Jaccard similarity for unigram and bigram are 0.62 and 0.47 (with standard deviations of 0.22 and 0.24) respectively. The average Levenshtein edit distance between them is \(32.99\) (with standard deviation of \(23.12\)). We believe the metrics signify a decent level of linguistic diversity.
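The diversity metrics above can be computed with a short sketch like the following (plain-Python implementations; whitespace tokenization is an assumption):

```python
# n-gram Jaccard similarity and Levenshtein edit distance between a
# question and its paraphrase, as reported above.
def ngrams(text, n):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def jaccard(q1, q2, n=1):
    a, b = ngrams(q1, n), ngrams(q2, n)
    return len(a & b) / len(a | b) if a | b else 0.0

def levenshtein(a, b):
    # classic dynamic-programming edit distance over characters
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

q1 = "Who wrote the paper about Buck power converters?"
q2 = "Which author published the paper on Buck power converters?"
print(jaccard(q1, q2, n=1), jaccard(q1, q2, n=2), levenshtein(q1, q2))
```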
**Entity Linking.** DBLP-QuAD also presents challenging entity linking with data augmentation performed on literals during the generation process. The augmented literals present more realistic and natural representation of the entity surface forms and literals compared to the entries in the KG.
**Generalization.** In the _valid_ set 18.9% and in the _test_ set 19.3% of instances were generated using the withheld templates. Hence, these SPARQL query templates and natural language question templates are unique to the _valid_ and _test_ sets. Table 2 shows the percent of questions with different levels of generalization in the _valid_ and _test_ sets of the dataset.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Dataset & I.I.D & Compositional & Zero-shot \\ \hline Valid & 82.8\% & 13.6\% & 3.6\% \\ Test & 81.2\% & 15.1\% & 3.8\% \\ \hline \end{tabular}
\end{table}
Table 2: Percent of questions with different levels of generalization in the _valid_ and _test_ sets of DBLP-QuAD
## 6 Semantic Parsing Baseline
To lay the foundation for future work on DBLP-QuAD, we also release baselines using the recent work by Banerjee et al. [24], where a pre-trained T5 model is fine-tuned [25] on the LC-QuAD 2.0 dataset.
Following Banerjee et al. [24], we assume the entities and the relations are linked, and only focus on query building. We formulate the source as shown in Figure 3, where for each natural language question a prefix "**parse text to SPARQL query**:" is added. The source string is further concatenated with entity URIs and relation schema URIs separated by a special token \([SEP]\). The target text is the corresponding SPARQL query, which is padded with the tokens \(<s></s>\). We also make use of the sentinel tokens provided by T5 to represent the DBLP prefixes (e.g. _<extra_id_1>_ denotes the prefix https://dblp.org/pid/), the SPARQL vocabulary, and symbols. This step helps the _T5-tokenizer_ to correctly fragment the target text during inference.
We fine-tune _T5-Base_ and _T5-Small_ on DBLP-QuAD train set with a learning rate of _1e-4_ for \(5\) epochs with an input as well as output text length of \(512\) and batch size of \(4\).
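A sketch of this source/target construction is given below; the exact sentinel-token mapping for the DBLP prefixes is an assumption based on the description above.

```python
# Source/target construction for T5 fine-tuning (cf. Figure 3).
PREFIX_TO_SENTINEL = {
    "https://dblp.org/pid/": "<extra_id_1>",  # assumed mapping
    "https://dblp.org/rec/": "<extra_id_2>",  # assumed mapping
}

def make_source(question, entity_uris, relation_uris):
    parts = [f"parse text to SPARQL query: {question}"]
    parts += entity_uris + relation_uris
    return " [SEP] ".join(parts)

def make_target(sparql):
    for prefix, sentinel in PREFIX_TO_SENTINEL.items():
        sparql = sparql.replace(prefix, sentinel)
    return f"<s> {sparql} </s>"
```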
### Experiment Results
We report the performance of the baseline models on the DBLP-QuAD test set. Firstly, we report the exact-match between the gold and the generated SPARQL query. For the exact-match accuracy, we compare the generated and the gold query token by token after removing whitespaces. Next, for each SPARQL query in the test set, we run both the gold query and the query generated by the T5 baseline models using a Virtuoso SPARQL endpoint to fetch answers from the DBLP KG. Based on the answers collected, we report the F1 score. The results are reported in Table 3.
## 7 Limitations
One of the drawbacks of our dataset generation framework is that natural questions are synthetically generated. (CFQ [22] has a similar limitation.) Although the question templates were human-written, only two people (the authors of the paper) worked on their creation, and the process was not crowd-sourced among a larger group of researchers. Additionally, the questions are generated by drawing data from a KG. Hence, the questions may not perfectly reflect the distribution of user information needs. However, the machine-generation process allows for programmatic configuration of the questions, setting question characteristics, and controlling dataset size. We utilize this advantage by programmatically augmenting text representations and generating a large scholarly KGQA dataset with complex SPARQL queries.
Figure 3: Representation of source and target text used to fine-tune the T5 model
Second, in generating the _valid_ and _test_ sets, we utilize an additional 19 template tuples, which account for about 20% of the template set. Therefore, the syntactic structure of 80% of the generated data in _valid_ and _test_ will already have been seen in the train set, resulting in test leakage. However, to limit the leakage on 80% of the data, we withhold \(2\) question templates in generating the \(train\) set. Moreover, the data augmentation steps carried out also add challenges in the \(valid\) and \(test\) sets.
Another shortcoming of DBLP-QuAD is that the paper titles do not perfectly reflect user behavior. When a user asks a question, they do not type in the full paper title and also some papers are popularly known by a different short name. For example, the papers "Language Models are Few-shot Learners" and "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" are also known as "GPT-3" and "BERT" respectively. This is a challenging entity linking problem which requires further investigation. Despite the shortcomings, we feel the large scholarly KGQA dataset would ignite more research interest in scholarly KGQA.
## 8 Conclusion
In this work, we presented a new KGQA dataset called DBLP-QuAD. The dataset is the largest scholarly KGQA dataset with corresponding SPARQL queries. The dataset contains a wide variety of questions and query types and we present the data generation framework and baseline results. We hope this dataset proves to be a valuable resource for the community.
As future work, we would like to build a robust question answering system for scholarly data using this dataset.
## 9 Acknowledgements
This research was supported by grants from NVIDIA and utilized NVIDIA 2 x RTX A5000 24GB. Furthermore, we acknowledge the financial support from the Federal Ministry for Economic Affairs and Energy of Germany in the project CoyPu (project number 01MK21007[G]) and the German Research Foundation in the project NFDI4DS (project number 460234259). This research is additionally funded by the "Idea and Venture Fund" research grant by Universität Hamburg, which is part of the Excellence Strategy of the Federal and State Governments.
\begin{table}
\begin{tabular}{c|c|c} \hline Evaluation metrics & T5-Small & T5-Base \\ \hline Exact-match Accuracy & 0.638 & 0.813 \\ F1 Score & 0.721 & 0.868 \\ \hline \end{tabular}
\end{table}
Table 3: Evaluation results of fine-tuned T5 to DBLP-QuAD
|
2301.12636
|
Exploring Image Augmentations for Siamese Representation Learning with
Chest X-Rays
|
Image augmentations are quintessential for effective visual representation
learning across self-supervised learning techniques. While augmentation
strategies for natural imaging have been studied extensively, medical images
are vastly different from their natural counterparts. Thus, it is unknown
whether common augmentation strategies employed in Siamese representation
learning generalize to medical images and to what extent. To address this
challenge, in this study, we systematically assess the effect of various
augmentations on the quality and robustness of the learned representations. We
train and evaluate Siamese Networks for abnormality detection on chest X-Rays
across three large datasets (MIMIC-CXR, CheXpert and VinDR-CXR). We investigate
the efficacy of the learned representations through experiments involving
linear probing, fine-tuning, zero-shot transfer, and data efficiency. Finally,
we identify a set of augmentations that yield robust representations that
generalize well to both out-of-distribution data and diseases, while
outperforming supervised baselines using just zero-shot transfer and linear
probes by up to 20%. Our code is available at
https://github.com/StanfordMIMI/siaug.
|
Rogier van der Sluijs, Nandita Bhaskhar, Daniel Rubin, Curtis Langlotz, Akshay Chaudhari
|
2023-01-30T03:42:02Z
|
http://arxiv.org/abs/2301.12636v2
|
# Exploring Image Augmentations for Siamese Representation Learning with Chest X-Rays
###### Abstract
Image augmentations are quintessential for effective visual representation learning across self-supervised learning techniques. While augmentation strategies for natural imaging have been studied extensively, medical images are vastly different from their natural counterparts. Thus, it is unknown whether common augmentation strategies employed in Siamese representation learning generalize to medical images and to what extent. To address this challenge, in this study, we systematically assess the effect of various augmentations on the quality and robustness of the learned representations. We train and evaluate Siamese Networks for abnormality detection on chest X-Rays across three large datasets (MIMIC-CXR, CheXpert and VinDr-CXR). We investigate the efficacy of the learned representations through experiments involving linear probing, fine-tuning, zero-shot transfer, and data efficiency. Finally, we identify a set of augmentations that yield robust representations that generalize well to both out-of-distribution data and diseases, while outperforming supervised baselines using just zero-shot transfer and linear probes by up to 20%.
**Keywords:** Data Augmentations, Self-Supervised Learning, Medical Imaging, Chest X-rays, Siamese Representation Learning.
## 1 Introduction
Deep learning algorithms enable high-accuracy medical image analysis, yet are constrained by limitations of labelled data. Determining ground-truth image labels for diagnostic and prognostic tasks typically involves multiple annotators with clinical expertise and is often costly, time-consuming, and subject to inter-reader variability (Kim et al., 2022). Such a scarcity of annotated datasets has spurred research in data-efficient deep learning techniques, such as transfer learning and self-supervision (Krishnan et al., 2022). ImageNet pretraining is common, yet transferring representations from natural images is not always successful, possibly due to the shifted distribution and visual features of medical images (Raghu et al., 2019). Self-supervision, on the other hand, exploits the intrinsic structure of unlabelled data to learn effective representations, which can then be used for fine-tuning or zero-shot transfer on downstream tasks. Self-supervision proves to be particularly useful in medicine, given the abundance of unlabelled imaging data. It also provides robustness to out-of-distribution data (Hendrycks et al., 2019) and concept drifts. Learning visual features without a strong supervisory signal, however, is challenging.
One particularly powerful technique used in self-supervision is to compare two or more augmented views of the same image using a Siamese network architecture (Bromley et al., 1993). A common denominator among variants of this technique, such as contrastive learning (Chen et al., 2020; He et al., 2020) and feature prediction (Grill et al., 2020; Caron et al., 2021; Chen and He, 2021), is
their reliance on an augmentation strategy to generate different views of the input data. The question "what makes good views" has been explored in-depth for natural images in the context of contrastive learning (Tian et al., 2020; Chen et al., 2020), but has not been answered for medical tasks. Efforts to transfer common augmentation strategies to pretrain representations on medical data have thus far had limited success compared with hand-crafted strategies (Azizi et al., 2021; Sowrirajan et al., 2021).
To address these limitations, we systematically evaluate the effectiveness, robustness, and generalizability of image augmentation strategies for representation learning on three large datasets of chest x-rays (Irvin et al., 2019; Johnson et al., 2019; Nguyen et al., 2022). In this study, we assess an extensive range of augmentations through linear probing, zero-shot transfer, fine-tuning, and data efficiency experiments and show that:
* Visual representations extracted with different augmentations result in substantial variations on downstream classification tasks (up to 18% difference). Random resized cropping largely defines optimal performance of the learned representations on downstream tasks.
* Representations learned with the optimal set of augmentations outperform supervised baselines on several occasions on both internal (by 13.6-20.0%) and external validation (up to 27.0%) sets.
* Zero-shot transfer, linear probing, and fine-tuning with limited data using pretrained representations surpass classification accuracy of their supervised counterparts on several occasions.
* The learned features are robust to forms of label drift and catastrophic forgetting, and show success in classification of diseases that are rare (e.g. fracture) and unseen across datasets (e.g. tuberculosis).
## 2 Related Work
**Self-supervised learning**. Self-supervision typically involves formulating a pretext task solely to learn a good representation of the data. This representation can subsequently be fine-tuned on a downstream task in a data-efficient manner. A broad range of such pretext tasks exist, such as solving jigsaw puzzles (Noroozi and Favaro, 2016; Taleb et al., 2021), image rotation prediction (Zhang et al., 2016), and context restoration (Pathak et al., 2016).
**Contrastive learning**. Contrastive visual representation learning seeks to contrast positive pairs of image views with negative pairs (Hadsell et al., 2006). Positive pairs are created from the input data, whereas negative pairs are sampled from a mini-batch (Chen et al., 2020) or queue (Chen and He, 2021). Traditional contrastive learning requires positive pairs and a large sample of negative pairs for effective training. Variations of contrastive methods use approaches that do not rely on negative pairs. BYOL (Grill et al., 2020) introduced a Siamese network trained to predict views of opposing branches. Extensions of this framework explore different architectural components, such as the loss function, projection heads, and the teacher-student architecture (Caron et al., 2021; Chen and He, 2021).
**Image augmentations for self-supervision**. Data augmentations are widely used in supervised learning to increase the diversity of the training data and to improve generalizability (Krizhevsky et al., 2017; Cubuk et al., 2018). RandAugment (Cubuk et al., 2020) is a powerful method that applies a randomly selected subset of predefined augmentations to the input data. In contrast, in self-supervised learning, augmentations are often applied to construct a pretext task (Tian et al., 2020). Common augmentations for contrastive learning were explored in SimCLR (Chen et al., 2020). In the medical domain, amongst others, affine transformations, elastic deformations (Chaitanya et al., 2020), and physics-driven augmentations (Desai et al., 2021) have been considered for self-supervised learning.
**Self-supervised learning for Chest X-Rays**. Chest X-Ray classification is a well-studied subject, and its recent role has been amplified in light of the COVID-19 pandemic (Wynants et al., 2020). Self-supervision has emerged as viable strategy to aid the detection of pathologies on chest x-rays (Gazda et al., 2021; Azizi et al., 2022). Multi-modal vision-language learning has shown to be effective (Zhang et al., 2020; Huang et al., 2021; Tiu et al., 2022; Delbrouck et al., 2022), but necessitates the availability of radiology reports. The current study is most closely aligned with the image-only augmentation strategies examined in MoCo-CXR (Sowrirajan et al., 2021) and MICLe (Azizi et al.,
2021). These studies, however, use contrastive methods that rely on negative sampling and were not designed to systematically explore augmentation strategies.
## 3 Methods
To evaluate the impact of data augmentations on the quality of the learned representations, we used SimSiam (Chen and He, 2021), a minimal Siamese network architecture. SimSiam does not rely on negative sampling, knowledge distillation, or prototype clustering, which allows us to most directly study the role of augmentations in Siamese learning.
### Architecture and Pretraining Objective
The architecture of SimSiam consists of two identical and weight-sharing branches that each take an augmented view (i.e. \(x_{1}\) and \(x_{2}\)) of the same image \(x\) as an input (Figure 1). Both views (\(x_{1}\) and \(x_{2}\)) are processed by an identical encoder network, \(f(\cdot)\), that outputs feature vectors \(f(x_{i})\). These feature vectors are passed on to a two-layered MLP projector network \(g(\cdot)\) that produces a low-dimensional latent representation \(z_{i}=g(f(x_{i}))\) of the data. As a final step, the latent representations produced by each branch (\(z_{i}\)) are input to a predictor network \(h(\cdot)\). The predictor network is an MLP that aims to predict the projection \(z\) of the opposing branch (i.e. \(p_{1}=h(z_{1})\) tries to predict \(z_{2}\), while \(p_{2}=h(z_{2})\) tries to predict \(z_{1}\)). The loss function \(\mathcal{L}\) is defined as the negative cosine similarity between the predictor outputs \(p_{1}\) and \(p_{2}\) and the projected feature vectors \(z_{2}\) and \(z_{1}\), respectively:
\[\mathcal{L}=-\frac{1}{2}\left(\frac{p_{1}}{\left\|p_{1}\right\|_{2}}\cdot\frac {z_{2}}{\left\|z_{2}\right\|_{2}}\right)-\frac{1}{2}\left(\frac{p_{2}}{\left\| p_{2}\right\|_{2}}\cdot\frac{z_{1}}{\left\|z_{1}\right\|_{2}}\right), \tag{1}\]
where \(\left\|\cdot\right\|_{2}\) is the \(l_{2}\) norm. Note that, unlike typical contrastive self-supervised learning, calculation of the loss does not involve negative samples.
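A PyTorch sketch of this loss is shown below; following the SimSiam recipe (Chen and He, 2021), gradients are blocked through the targets \(z_{i}\) via stop-gradient, which Eq. (1) leaves implicit.

```python
import torch.nn.functional as F

# Symmetric negative-cosine-similarity loss of Eq. (1); z.detach()
# implements SimSiam's stop-gradient on the target projections.
def simsiam_loss(p1, z1, p2, z2):
    def neg_cos(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=-1).mean()
    return 0.5 * neg_cos(p1, z2) + 0.5 * neg_cos(p2, z1)
```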
### Data Collection
Frontal chest x-rays from three publicly available datasets were used to train and evaluate our models. First, the MIMIC-CXR (Johnson et al., 2019) dataset (from Boston, USA) includes 377,110 images acquired from 277,835 imaging studies of patients, of which (\(n\)=200,000 and \(n\)=37,962) images were used for training and validation, respectively. Second, the CheXpert (Irvin et al., 2019) dataset (from Stanford, USA) contains 224,316 chest x-rays from 65,240 patients, of which (\(n\)=168,660 and \(n\)=22,367) were used for training and validation, respectively. In both CheXpert and MIMIC-CXR, an automatic radiology report labeller (Irvin et al., 2019) was used to annotate each report/image pair for the presence of 14 different conditions of which a diverse subset was included. Third, the VinDr-CXR (Nguyen et al., 2022) dataset (from Vietnam) contains 18,000 chest x-rays of which 15,000 were each manually labelled by three radiologists for 22 critical findings and 6 diagnoses in the training set. Every validation set image (\(n\)=3,000) was annotated by five radiologists. The sophisticated labelling makes VinDr-CXR an optimal dataset for evaluation purposes. Furthermore, the VinDr-CXR dataset contains both pathologies that overlap with MIMIC-CXR and CheXpert datasets (such as cardiomegaly) and completely unseen pathologies (such as tuberculosis).
### Experimental Setup and Study Design
#### 3.3.1 Training Pipeline
Our training pipeline consists of (i) self-supervised pretraining of an encoder, _ResNet_(\(\cdot\)), using unlabelled images via SimSiam (Section 3.1), (ii) supervised linear probing (i.e. training a single-layer classifier on top of a frozen encoder), and (iii) supervised fine-tuning of the entire encoder initialized with the weights of a pretrained encoder and a pretrained classification head.
Figure 1: SimSiam architecture.
#### 3.3.2 Datasets
We use the unlabelled MIMIC-CXR training split for all self-supervised pretraining experiments. We provide pretraining results with CheXpert in the Appendix (Tables 10 and 11). We perform supervised linear probing and fine-tuning on labelled train splits of MIMIC-CXR, CheXpert, and VinDr-CXR. For evaluation, we use held out data from an internal validation split (MIMIC-CXR) as well as external validation splits (CheXpert and VinDr-CXR). All dataset splits and the included labels are shown in Tables 1, 2 and 3. Data splits from the VinDr-CXR dataset were formed in two ways: balanced (i.e. stratified for the included conditions) and imbalanced, as given in Table 2. The three datasets encompass multi-site data and include different diseases and occurrence rates. Some labels are overlapping, while others are unseen in the pretraining data (e.g. tuberculosis in VinDr-CXR), forming a test-bed for comprehensive evaluations.
#### 3.3.3 Data Pre-Processing
Data was acquired in DICOM format for MIMIC-CXR and VinDr-CXR, while CheXpert images had been obtained as pre-processed images in JPEG format. Data was pre-processed on the basis of the DICOM headers. Images were corrected for photometric interpretation, windowed according to their respective window center and width, and scaled with an intercept and slope, if applicable. All images were resized to 224x224 pixels.
#### 3.3.4 Training Details
We use the SimSiam architecture (Chen and He, 2021) with a ResNet-50 encoder (He et al., 2016) for all experiments involving representation learning. The SGD optimizer (with _weight decay\(=0.0001\)_ and _momentum\(=0.9\)_) is used for pretraining (\(lr=0.05\)), linear probing (\(lr=30\), _weight decay\(=0\)_),
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{MIMIC-CXR} & \multicolumn{2}{c}{CheXpert} \\ \hline Pathologies & Train & Eval & Train & Eval \\ \hline Atelectasis & \(48,833(24\%)\) & \(9461(25\%)\) & \(51,892(30\%)\) & \(7691(34\%)\) \\ Cardiomegaly & \(44,206(22\%)\) & \(8404(22\%)\) & \(26,989(16\%)\) & \(3103(14\%)\) \\ Edema & \(35,440(18\%)\) & \(6718(18\%)\) & \(54,755(33\%)\) & \(6738(30\%)\) \\ Pleural Effusion & \(51,836(26\%)\) & \(10,306(27\%)\) & \(78,258(46\%)\) & \(8219(37\%)\) \\ Pneumonia & \(29,969(15\%)\) & \(5704(15\%)\) & \(18,235(11\%)\) & \(2421(11\%)\) \\ Pneumothorax & \(10,294(5\%)\) & \(1929(5\%)\) & \(18,674(11\%)\) & \(1727(8\%)\) \\ Rib Fracture & \(4444(2\%)\) & \(825(2\%)\) & \(6914(4\%)\) & \(1021(5\%)\) \\ No Finding & \(67,239(34\%)\) & \(12,647(33\%)\) & \(14,430(9\%)\) & \(2544(11\%)\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data distributions in MIMIC-CXR and CheXpert.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{VinDR Balanced} & \multicolumn{2}{c}{VinDR Imbalanced} \\ \hline Pathologies & Train & Eval & Train & Eval \\ \hline
**Pulmonary fibrosis** & 1017 (11\%) & 217 (11\%) & 1017 (6\%) & 217 (6\%) \\ Cardiomegaly & 1817 (20\%) & 309 (16\%) & 1817 (11\%) & 309 (9\%) \\
**Pleural thickening** & 882 (10\%) & 169 (9\%) & 882 (5\%) & 169 (5\%) \\
**Lung Opacity** & 547 (6\%) & 84 (4\%) & 547 (3\%) & 84 (2\%) \\ Pleural effusion & 634 (7\%) & 111 (6\%) & 634 (4\%) & 111 (3\%) \\ Pneumonia & 471 (5\%) & 246 (12\%) & 471 (3\%) & 246 (7\%) \\
**Tuberculosis** & 482 (5\%) & 164 (8\%) & 482 (3\%) & 164 (5\%) \\
**Nodule/Mass** & 409 (4\%) & 176 (9\%) & 409 (3\%) & 176 (5\%) \\ No finding & 3000 (32\%) & 500 (25\%) & 10601 (63\%) & 2051 (58\%) \\ \hline \hline \end{tabular}
\end{table}
Table 2: VinDr-CXR splits distribution. **Bold** refers to unseen concepts.
and fine-tuning (\(lr=0.00001\)) using a cosine decay learning rate scheduler (Chen and He, 2021). Batch sizes were fixed to 256. Experiments were trained with PyTorch on 8 NVIDIA A100 GPUs on a single node with 32-bit floating point precision. Representations pretrained for optimal augmentation selection were trained for 50 epochs, with training durations ranging from approximately 6 to 12 hours. The corresponding linear probes were trained for 40 epochs. Checkpoints from the final epochs were used for evaluation. The \(t_{\theta}\) set of augmentations was retrained for 100 epochs, and linear probes were trained for 90 epochs. Linear probes, fine-tuned models, and fully supervised models were trained free of augmentations to investigate the effectiveness of the pretrained embeddings. Fine-tuned and supervised models were trained for 90 (MIMIC-CXR and CheXpert) and 150 (VinDr-CXR) epochs.
#### 3.3.5 Evaluation Metrics
We evaluate the quality of our pretrained representations by measuring their downstream discriminative performance (averaged and per label), generalization capability, and data efficiency using the following multi-label metrics: (i) macro AUROC (area under receiver operating curve), (ii) label-wise AUROC, (iii) Hamming Loss, and (iv) Ranking Error (Tsoumakas et al., 2010).
Evaluating multi-label classification performance is more nuanced than evaluating typical multi-class classification scenarios. Common metrics such as accuracy and AUROC might grossly overestimate or underestimate classifier capability. As a result, we report three multi-label metrics, including **AUROC** (both macro AUROC and label-wise AUROC), that cover three different aspects of the classifier predictions. Class-wise or label-wise AUROC allows us to identify high-performing subgroups and quantify minority class performance for each model.
The **Hamming loss** is an example-based metric (Tsoumakas et al., 2010) and computes the fraction of misclassified labels across each sample and across each label. The lower the Hamming loss, the better. It is mathematically defined as \(H=\frac{1}{NK}\sum_{i=1}^{N}\sum_{j=1}^{K}[p_{ij}\neq y_{ij}]\) where \(p_{ij}\) is the prediction, \(y_{ij}\) is the label, \(K\) is the number of classes and \(N\) is the number of samples.
The **Ranking Error** (Tsoumakas et al., 2010) is a ranking-type metric that computes the number of times irrelevant labels (i.e., low probability labels) are ranked higher than relevant labels. The lower the Ranking Error, the better.
We report overall AUROC in the main manuscript and the rest in Appendix B.
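For concreteness, the four metrics can be computed as in the sketch below (NumPy/scikit-learn); tie handling in the ranking error may differ in minor ways from Tsoumakas et al. (2010).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def multilabel_metrics(y_true, y_prob, threshold=0.5):
    """y_true, y_prob: (N, K) arrays of binary labels and predicted scores."""
    # (i)-(ii) macro and label-wise AUROC.
    label_auroc = [roc_auc_score(y_true[:, k], y_prob[:, k])
                   for k in range(y_true.shape[1])]
    macro_auroc = float(np.mean(label_auroc))

    # (iii) Hamming loss: fraction of misclassified (sample, label) pairs.
    y_pred = (y_prob >= threshold).astype(int)
    hamming = float((y_pred != y_true).mean())

    # (iv) Ranking error: fraction of (relevant, irrelevant) label pairs
    # per sample in which the irrelevant label is ranked at least as high.
    errs = []
    for t, p in zip(y_true, y_prob):
        pos, neg = p[t == 1], p[t == 0]
        if len(pos) and len(neg):
            errs.append(float(np.mean([pj >= pi for pi in pos for pj in neg])))
    return macro_auroc, label_auroc, hamming, float(np.mean(errs))
```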
## 4 Experiments and Findings
### Optimal Augmentation Strategy
We seek to learn invariant features from augmented image views during pretraining. Inspired by the systematic study of augmentations for SimCLR by Chen et al. (2020), we first explore the efficacy of augmentations in isolation. We evaluate three common geometric/spatial transformations, namely resized cropping, rotation (Gidaris et al., 2018), and cutout (DeVries and Taylor, 2017), along with pixel-wise transformations of brightness/contrast adjustments, Gaussian noise, Gaussian blur, and Sobel filtering (Figure 2).
\begin{table}
\begin{tabular}{l c c} \hline \hline \multicolumn{3}{c}{VinDr In Distribution} \\ \hline \multicolumn{1}{c}{Pathologies} & \multicolumn{1}{c}{Train} & \multicolumn{1}{c}{Eval} \\ \hline Atelectasis & 62 (0\%) & 86 (3\%) \\ Cardiomegaly & 1817 (13\%) & 309 (11\%) \\ Edema & 1 (0\%) & 0 (0\%) \\ Rib fracture & 41 (0\%) & 11 (0\%) \\ Pleural effusion & 634 (5\%) & 111 (4\%) \\ Pneumonia & 471 (3\%) & 246 (9\%) \\ Pneumothorax & 58 (0\%) & 18 (1\%) \\ No finding & 10601 (77\%) & 2051 (72\%) \\ \hline \hline \end{tabular}
\end{table}
Table 3: VinDr-CXR In Distribution split. Only pathologies that are in distribution with respect to MIMIC-CXR are shown.
First, we apply the identity transformation to one branch of the Siamese network, and apply a single augmentation \(t_{i}\in\mathcal{T}\) to the other branch (i.e. \(t_{1}(x_{i})\)). We repeat this procedure with pairs of augmentations (i.e. \(t_{2}(t_{1}(x_{i}))\)) as shown in Figure 2. We pretrain our models on MIMIC-CXR and evaluate their performance based on supervised linear probing (Zhang et al., 2016) on the MIMIC-CXR validation set. We refer to the pair of augmentations with the highest macro AUROC on the MIMIC-CXR validation set as \(t_{\theta}\).
We find a combination of random resized cropping and brightness/contrast adjustments (i.e. pixel distortion) to be the optimal pair of augmentations \(t_{\theta}\) with an AUROC of 0.76 (Figure 2). Pairs of augmentations that include random resized cropping consistently outperform other compositions (AUROC improvements ranging from \(0.04-0.06\)). This is in contrast to natural images, in which cropping performs well, but mostly in conjunction with either color jittering or Sobel filtering (Chen et al., 2020). We further optimize the hyperparameters of \(t_{\theta}\) and find that strong cropping (\(scale=0.2-0.5\)) and large brightness/contrast distortions (\(\lambda=0.7\)) are favored for single-branch augmentations, while weaker cropping (\(scale=0.3-0.9\)) is favored for symmetrical dual-branch augmentations. A strategy without any augmentations yields surprisingly good results (AUROC of \(0.67\)). We report the Hamming Loss and Ranking Error for each of the augmentation pairs for MIMIC-CXR and CheXpert in the Appendix (Table 8-Table 11) and observe consistent trends.
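In torchvision terms, \(t_{\theta}\) can be sketched as below; mapping the distortion strength \(\lambda=0.7\) onto `ColorJitter` arguments is an assumption, and the exact parameterization may differ in the released code.

```python
import torchvision.transforms as T

# t_theta: random resized cropping followed by brightness/contrast distortion.
# Single-branch settings from the text: crop scale in (0.2, 0.5), strength 0.7.
t_theta = T.Compose([
    T.RandomResizedCrop(224, scale=(0.2, 0.5)),
    T.ColorJitter(brightness=0.7, contrast=0.7),
    T.ToTensor(),
])

def two_views(img):
    """Two independently augmented views feed the two Siamese branches."""
    return t_theta(img), t_theta(img)
```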
Finally, we compare \(t_{\theta}\) head-to-head with RandAugment and observe that RandAugment linear probing on MIMIC-CXR is effective (AUROC of 0.76) but not superior to the simpler \(t_{\theta}\) strategy. We also examine several less common augmentations as additions to \(t_{\theta}\), but do not retain them for further experiments (see Appendix A).
### Comparison to Fully Supervised Networks
We compare the performance of \(t_{\theta}\) with fully supervised models trained from scratch [FS (S)] and with ImageNet pretrained weights [FS (IN)] in Table 4. We observe that using the linear probe on \(t_{\theta}\) (or any pair of augmentations with resized cropping, see Appendix B) surpasses both fully supervised networks for the MIMIC-CXR (0.05 and 0.018 AUROC improvement) and VinDr-CXR (0.097 and 0.055 AUROC improvement) datasets (Table 4). Further stratifying the results, we show that the \(t_{\theta}\) representations outperform the fully supervised approaches for each condition, including improving the challenging minority class of _rib fracture_, which has \(<5\%\) prevalence, by 0.089 and 0.055 AUROC.
### Zero-shot Generalization of Pretrained Representations
We evaluate zero-shot transfer of our supervised MIMIC-CXR \(t_{\theta}\) representations to VinDr-CXR and CheXpert, which have differing disease distributions and dataset statistics. In zero-shot transfer to VinDr-CXR pathologies available in MIMIC-CXR, the \(t_{\theta}\) representations achieve 0.767 AUROC,
Figure 2: A selection of image augmentations (left: a-h) and their performance on MIMIC-CXR validation data (right), following pretraining and supervised learning on MIMIC-CXR training data. Combining random resize crop with distortion resulted in the best augmentation pair, \(t_{\theta}\) (left: i).
outperforming fully supervised VinDr-CXR networks by 0.099 and 0.057 AUROC when trained from scratch and from ImageNet weights, respectively (Table 5). This is striking, as the MIMIC-CXR \(t_{\theta}\) representations did not have any access to VinDr-CXR data or label distributions. However, such an effective zero-shot transfer was not the case with CheXpert. Here, the fully supervised CheXpert performance was higher than that of the MIMIC-CXR \(t_{\theta}\) representations (Table 6). We attribute this zero-shot discrepancy to the substantially larger number of labelled images in CheXpert than in VinDr-CXR (168,660 vs 18,000).
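Zero-shot transfer here amounts to running a frozen encoder and its MIMIC-CXR-trained linear head over another dataset's loader with no further training, roughly as in this sketch (the encoder, head, and loader objects are placeholders):

```python
import torch

@torch.no_grad()
def zero_shot_eval(encoder, linear_head, loader, device="cuda"):
    """Collect sigmoid probabilities of a frozen model on a new dataset."""
    encoder.eval()
    linear_head.eval()
    probs, labels = [], []
    for images, targets in loader:
        logits = linear_head(encoder(images.to(device)))
        probs.append(torch.sigmoid(logits).cpu())
        labels.append(targets)
    # Feed the concatenated outputs into the multi-label metrics above.
    return torch.cat(probs), torch.cat(labels)
```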
### Linear Probing MIMIC-CXR \(t_{\theta}\) Transfer to VinDr-CXR
Here, we linearly probe the MIMIC-CXR \(t_{\theta}\) representations using the VinDr-CXR dataset, which consists of seen and unseen pathologies. We term the VinDr-CXR dataset _VinDr-Imbalanced_, as it consists of a large number of "No Finding" labels (60% label prevalence). We create a separate subset, _VinDr-Balanced_, by undersampling this majority class to 30% label prevalence (all data splits in Table 2). When evaluating the MIMIC-CXR \(t_{\theta}\) pretrained classifiers on held out VinDr-CXR data, we observe 0.099 AUROC and 0.067 AUROC improvements over fully supervised models (same trends for CheXpert representations) in Table 5. This is consistent across all pathologies. Remarkably, the performance on _Tuberculosis_, an unencountered disease with very low prevalence in the US, is 0.127 AUROC and 0.101 AUROC better than the supervised baselines (Table 5). This shows that linear probing of strong pretrained representations can generalize to out-of-distribution, unseen data and pathologies.
### Generalization Capability by Fine-Tuning MIMIC-CXR \(t_{\theta}\) on CheXpert
We fine-tune the pretrained MIMIC-CXR \(t_{\theta}\) representations and the MIMIC-CXR trained linear classifier on labelled CheXpert training data. We see that the classification AUROC increases on fine-tuning from 0.649 to 0.768 (Table 6), even outperforming the fully supervised network trained from scratch (0.757 AUROC) (Table 6, _Eval on CheXpert_). We then evaluate this MIMIC-CXR \(t_{\theta}\) pretrained and CheXpert fine-tuned model for its zero-shot transfer capabilities on validation data from MIMIC-CXR and VinDr-CXR (Table 6, _Eval on MIMIC-CXR_). Upon fine-tuning these representations on CheXpert, we wish to assess through our zero-shot evaluations whether catastrophic forgetting or poorer generalization to MIMIC-CXR occurs. However, we see that zero-shot evaluation on MIMIC-CXR continues to show high performance (0.763 AUROC), which
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline Strategy & AT & CM & Edema & RF & PE & PNA & PTX & No Finding & macro AUROC \\ \hline \(t_{\theta}\) & **0.750** & **0.769** & **0.848** & **0.619** & **0.845** & **0.649** & **0.779** & **0.819** & **0.760** \\ FS (S) & 0.705 & 0.720 & 0.797 & 0.529 & 0.837 & 0.617 & 0.689 & 0.786 & 0.710 \\ FS (IN) & 0.744 & 0.753 & 0.832 & 0.563 & 0.856 & 0.667 & 0.748 & 0.770 & 0.742 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of MIMIC-CXR \(t_{\theta}\) (linear probe) to fully supervised models on MIMIC-CXR. Numbers are AUROC. Abbreviations: FS: Fully supervised, S: trained from scratch, IN: ImageNet, AT: Atelectasis, CM: Cardiomegaly, RF: Rib fracture, PE: Pleural Effusion, PNA: Pneumonia, PTX: Pneumothorax.
\begin{table}
\begin{tabular}{c c c c c|c c c c c|c c} \hline \hline & \multicolumn{3}{c|}{In-Distribution Pathologies} & \multicolumn{4}{c|}{Out-of-distribution Pathologies} & & \\ \hline \multirow{2}{*}{Strategy} & \multirow{2}{*}{CM} & \multirow{2}{*}{PE} & \multirow{2}{*}{PNA} & \multirow{2}{*}{No} & \multirow{2}{*}{PF} & \multirow{2}{*}{PT} & \multirow{2}{*}{LO} & \multirow{2}{*}{Mass} & \multirow{2}{*}{TB} & Macro & OOD \\ & & & & & & & & & & & AUROC & AUROC \\ \hline Zero-shot & 0.840 & 0.810 & 0.774 & 0.795 & NA & NA & NA & NA & NA & NA \\ Linear probe & 0.909 & 0.822 & 0.785 & **0.880** & **0.720** & **0.712** & 0.651 & 0.648 & 0.776 & 0.767 & 0.701 \\ Fine tune & **0.937** & 0.824 & **0.790** & 0.869 & 0.719 & 0.707 & **0.660** & **0.651** & **0.802** & **0.773** & **0.708** \\ FS (S) & 0.796 & 0.643 & 0.591 & 0.813 & 0.631 & 0.665 & 0.622 & 0.598 & 0.649 & 0.668 & 0.633 \\ FS (IN) & 0.888 & **0.872** & 0.672 & 0.778 & 0.631 & 0.694 & 0.613 & 0.571 & 0.675 & 0.710 & 0.637 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Transferring MIMIC-CXR \(t_{\theta}\) to VinDr for seen (in-distribution) and unseen (out-of-distribution, OOD) conditions in MIMIC. Numbers are AUROC. Abbreviations: FS (S), FS (IN): Fully supervised from scratch and from ImageNet, respectively, CM: Cardiomegaly, PE: Pleural Effusion, PNA: Pneumonia, PF: Pulmonary Fibrosis, PT: Pleural Thickening, LO: Lung Opacity, TB: Tuberculosis.
still outperforms fully supervised models by 0.053 and 0.021 AUROC, indicating no evidence of catastrophic forgetting. In fact, the performance on MIMIC-CXR validation data is nearly identical (AUROC difference of 0.003) when fine-tuning on CheXpert or MIMIC-CXR. Similarly, this fine-tuned model generalizes well to VinDr-CXR with 0.810 AUROC, which is 0.142 and 0.091 AUROC higher than fully supervised baselines trained on VinDr-CXR (Table 6, _Eval on VinDr-CXR_).
### Data-Efficiency in Fine-Tuning
We test the data-efficiency of our representations during fine-tuning by varying the percentage of labelled data they are exposed to. We create stratified splits of the MIMIC-CXR training set, maintaining the label distribution, with 100%, 50%, 25%, 12.5%, 10% and 1% of the labelled data. All smaller subset splits are members of the larger split (i.e. all images in the 1% split are included in the 10% split, and so on). We fine-tune our MIMIC-CXR \(t_{\theta}\) representations on each of the stratified splits from labelled MIMIC-CXR training data. We evaluate each fine-tuned network on held-out MIMIC-CXR validation data, and also assess zero-shot transfer to CheXpert and VinDr-CXR. We observe that fine-tuning, even with as little as 10% of the data, improves performance on all three datasets (Table 7), indicating that the representations are data-efficient. For CheXpert evaluation, we see that even 1% fine-tuning improves performance over zero-shot transfer by over 0.05 AUROC. However, the fine-tuned performance still lags that of fully supervised CheXpert training, likely due to the scale of available training labels. Interestingly, 1% fine-tuning on VinDr-CXR reduces performance, while 10% or more of the data improves it. We hypothesize that this may be because the model overfits to the 1% data split and cannot generalize to the distribution-shifted manifold of VinDr-CXR, which has a different label distribution (Table 3) than MIMIC-CXR.
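The nesting property of the splits can be obtained from a single seeded permutation, as in the sketch below; exact per-label stratification, which the splits above additionally enforce, is only approximated in expectation by random prefixes.

```python
import numpy as np

def nested_splits(n_samples, fractions=(0.01, 0.10, 0.125, 0.25, 0.50, 1.0), seed=0):
    """Nested index subsets: every smaller split is contained in all larger ones."""
    order = np.random.default_rng(seed).permutation(n_samples)
    return {f: order[: int(round(f * n_samples))] for f in fractions}

splits = nested_splits(200_000)
assert set(splits[0.01]).issubset(set(splits[0.10]))  # nesting holds by construction
```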
## 5 Conclusion
In this work, we systematically evaluate the effect of augmentations on the quality of representations learned through self-supervision with Siamese Networks. We find random resized cropping to be crucial to the augmentation strategy, and the simple addition of random contrast and brightness adjustments yields powerful representations. The learned representations prove to be robust to out-of-distribution data, surpass the classification accuracy of fully supervised models for various disease labels, and even generalize to unseen conditions.
\begin{table}
\begin{tabular}{c c c c|c c c c|c c c c} \hline \hline \multicolumn{4}{c|}{Eval on CheXpert} & \multicolumn{4}{c|}{Eval on MIMIC-CXR} & \multicolumn{4}{c}{Eval on VinDr-CXR} \\ \hline ZS & FT (Chex) & FS (S) (Chex) & FS (IN) (Chex) & ZS & FT (Chex) & FS (S) (Mimic) & FS (IN) (Mimic) & ZS & FT (Chex) & FS (S) (VinDr) & FS (IN) (VinDr) \\ \hline
0.649 & 0.768 & 0.757 & **0.789** & 0.760 & **0.763** & 0.710 & 0.742 & 0.765 & **0.810** & 0.668 & 0.719 \\ \hline \hline \end{tabular}
\end{table}
Table 6: MIMIC-CXR \(t_{\theta}\) to CheXpert transfer on fine-tuning. Macro AUROC. Abbreviations: ZS: Zero-shot transfer, FT: Fine-tuning, FS: Fully Supervised, S: trained from scratch, IN: ImageNet, Chex: CheXpert, Mimic: MIMIC-CXR, VinDr: VinDr-CXR, Eval: Evaluation.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline Eval Set & 1\% & 10\% & 12.5\% & 25\% & 50\% & 100\% & Zero-Shot & FS (S) & FS (IN) \\ \hline MIMIC-CXR & 0.783 & 0.792 & 0.797 & 0.800 & 0.805 & 0.810 & 0.760 & 0.710 & 0.742 \\ CheXpert & 0.679 & 0.687 & 0.690 & 0.696 & 0.701 & 0.707 & 0.649 & 0.757 & 0.788 \\ VinDr-CXR & 0.740 & 0.773 & 0.786 & 0.792 & 0.805 & 0.803 & 0.765 & 0.668 & 0.710 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Fine-tuning data efficiency: Macro AUROC for fine-tuning MIMIC-CXR \(t_{\theta}\) pretrained representations on three held out evaluation sets fine-tuned on stratified splits of MIMIC-CXR training data (in %). Results compared with zero-shot evaluations and fully supervised from scratch [FS (S)] or from ImageNet [FS (IN)] pretrained weights on their respective train sets.
## Data and Code availability
We use publicly available, large scale chest X-ray datasets: MIMIC-CXR, CheXpert, and VinDr-CXR. Data collection details are given in Section 3.2 and data preprocessing steps are outlined in Section 3.3.3. We open-source the code for all our experiments and analyses in the paper on GitHub.
## Acknowledgements
This work was supported by computational support from Stability.AI and the Institute for Human-Centered AI at Stanford. RS received support from the Dutch Research Council, independent of this work. We would also like to acknowledge the help of Pierre Chambon and Ashwin Paranjape in proofreading this manuscript. We would like to thank members of the Chaudhari Lab, Rubin Lab and Langlotz Lab for insightful discussions.
|
2306.08811
|
Equivariant Index on Toric Contact Manifolds
|
We compute the equivariant index of the twisted horizontal Dolbeault operator
on compact toric contact manifolds of Reeb type. The operator is elliptic
transverse to the Reeb foliation and its equivariant index defines a
distribution on the torus. Using the good cone condition, we show that the
symbol localises to the closed Reeb orbits corresponding to the edges of the
moment cone and obtain an Atiyah-Bott-Lefschetz type formula for the index. For
the horizontal Dolbeault operator, we obtain an expression for the index as a
sum over the lattice points of the moment cone, by applying an adaptation of
the Lawrence-Varchenko polytope decomposition to rational polyhedral cones.
|
Pedram Hekmati, Marcos Orseli
|
2023-06-15T01:58:14Z
|
http://arxiv.org/abs/2306.08811v2
|
# Equivariant index on toric contact manifolds
###### Abstract.
We compute the equivariant index of the twisted horizontal Dolbeault operator on compact toric contact manifolds of Reeb type. The operator is elliptic transverse to the Reeb foliation and its equivariant index defines a distribution on the torus. Using the good cone condition, we show that the symbol localises to the closed Reeb orbits corresponding to the edges of the moment cone and obtain an Atiyah-Bott-Lefschetz type formula for the index. For the horizontal Dolbeault operator, we obtain an expression for the index as a sum over the lattice points of the moment cone, by applying an adaptation of the Lawrence-Varchenko polytope decomposition to rational polyhedral cones.
## 1. Introduction
There has been considerable interest in \(K\)-contact and Sasaki manifolds in recent years, in part due to their role in theoretical physics as backgrounds in supersymmetric gauge theories [11, 25] and the AdS/CFT correspondence [12, 13, 22, 23]. Let \((M,H)\) be a \((2n+1)\)-dimensional compact co-oriented contact manifold with contact form \(\alpha\) and associated Reeb vector field \(R_{\alpha}\). We say that \((M,H)\) is toric if it carries an effective action by a torus \(G\) of dimension \(n+1\) preserving the contact structure. Recall that \((M,H)\) is of Reeb type if the Reeb vector field is generated by a one-parameter subgroup of \(G\). Toric contact manifolds of Reeb type carry an invariant Sasakian structure [4] and as shown by Lerman [17], they admit a combinatorial description in terms of their moment map images, which are strictly convex rational polyhedral cones. This structure has been exploited to compute various invariants of toric contact manifolds, such as the volume [14, 22], the first and second homotopy groups [18], the equivariant cohomology ring [21] and the cylindrical contact homology [2].
In this paper, we consider the index of the horizontal Dolbeault operator on compact toric contact manifolds of Reeb type endowed with an invariant Sasakian structure. This operator is the odd dimensional analogue of the Dolbeault operator in Kahler geometry. It appears for instance in [11, 25] in the calculation of perturbative partition functions of certain supersymmetric field theories, in [26] in relation to deformations of Sasakian structures and in [5] to compute the dimension of moduli spaces of instantons on contact \(5\)-manifolds. The operator is elliptic transverse to the Reeb foliation and on toric Sasaki manifolds, it is elliptic in directions transversal to the \(G\)-orbits.
In [3], Atiyah-Singer proved that a pseudodifferential operator \(A\) that is \(G\)-transversally elliptic may have an infinite-dimensional kernel and cokernel, but that these nevertheless define a virtual trace-class representation of \(G\). The index of \(A\) can therefore
be defined as a generalised function on \(G\) by
\[\operatorname{ind}_{G}^{M}(A)(t)=\operatorname{tr}(t|_{\ker A})-\operatorname{tr} (t|_{\operatorname{coker}A}).\]
Decomposing \(\ker A\) and \(\operatorname{coker}A\) into isotypical components, we have
\[\operatorname{ind}_{G}^{M}(A)(t)=\sum_{\mu\in\hat{G}}m(\mu)\chi_{\mu},\]
where \(m\colon\hat{G}\to\mathbb{Z}\) encodes the multiplicities of the irreducible \(G\)-representations appearing in the index character. We will compute explicitly the function \(m\) when \(A\) is the horizontal Dolbeault operator \(\overline{\partial}_{H}\) and more generally, we derive a localisation theorem for the index when \(\overline{\partial}_{H}\) is coupled to a holomorphic bundle.
Our first main result is an Atiyah-Bott-Lefschetz type formula for the twisted horizontal Dolbeault operator:
**Theorem 1.1**.: _Let \(\overline{\partial}_{H}^{E}\) be the horizontal Dolbeault operator on a compact toric Sasaki \((2n+1)\)-manifold \(M\) twisted by a \(G\)-equivariant transversally holomorphic bundle \(E\) over \(M\). For any \(t\in G\),_
\[\text{ind}_{G}^{M}(\overline{\partial}_{H}^{E})(t)=\sum_{L\in E(C)}\chi_{E|_{ L}}(t)\prod_{i=1}^{n}\left(\frac{1}{1-t^{-w_{L}^{i}}}\right)^{\pm}\delta(1-t^{\mu_{L} }),\]
_where \(E(C)\) is the set of edges of the moment cone \(C\), \(\{w_{L}^{1},\dots,w_{L}^{n}\}\) are the isotropy weights and \(\mu_{L}\) is the weight of the action of \(G\) on the closed Reeb orbit corresponding to \(L\)._
The signs \(\pm\) dictate whether the denominator is expanded about \(t=0\) or \(t=\infty\) and are fixed by the pairing of the isotropy weights with a polarizing vector, see Section 4 for detailed explanations.
Our method is based on Atiyah's algorithm outlined in [3] to stratify \(M\) using the torus action and reduce the index calculation to computations on lower dimensional submanifolds. Using the good cone property of toric contact manifolds of Reeb type, we construct a deformation vector field that in fact localises the index to contributions from a finite number of closed Reeb orbits corresponding to the edges of the moment cone. We note that a general cohomological formula for the index of \(G\)-transversally elliptic operators was obtained in [6, 7] and more specifically for contact manifolds in [10]. These formulas are however not well-adapted to computing the multiplicities since they provide an expression for the index that is valid only on a neighbourhood of each \(t\in G\). Even in the elliptic case, deducing the function \(m\) from the Atiyah-Segal-Singer fixed point theorem is not easy. Our approach is to exploit the combinatorial structure of the manifold \(M\) to determine the function \(m\) as explicitly as possible. For instance, Theorem 1.1 can be readily applied to compute the dimensions of moduli spaces of instantons [5] and transverse Seiberg-Witten monopoles [16] on toric Sasaki manifolds.
We further remark that in Theorem 1.1 the Sasakian structure is not assumed to be quasi-regular and the formula applies in particular to the irregular \(Y^{p,q}\) spaces [13]. When the Sasakian structure is regular, the manifold is a principal circle bundle over a toric Kahler manifold \(X\). In this case, \(\overline{\partial}_{H}\) descends to the usual
Dolbeault operator on \(X\) twisted by the character line bundles of the circle and the index can be computed using the Lefschetz fixed point formula.
Our second result is an expression for the index of \(\overline{\partial}_{H}\) in terms of the integral points of the moment cone:
**Theorem 1.2**.: _The index of the horizontal Dolbeault operator \(\overline{\partial}_{H}\) is given by_
\[\text{ind}_{G}^{M}(\overline{\partial}_{H})(t)=(-1)^{n}\sum_{\mu\in C^{\circ}\cap\mathbb{Z}_{G}^{*}}t^{\mu}+\sum_{\mu\in(-C)\cap\mathbb{Z}_{G}^{*}}t^{\mu},\]
_where \(\mathbb{Z}_{G}^{*}\) denotes the dual integral lattice of \(\mathfrak{g}\), \(C^{\circ}\) the interior of the moment cone and \(-C\) the negative cone._
This follows by adapting and applying a version of the Lawrence-Varchenko formula to the polar decomposition of cones over a polytope (Proposition 5.14). Another key ingredient in the proof is Lerman's local description of toric contact manifolds [17], that allows us to relate the weights of the torus action to the moment cone and identify the localisation formula in Theorem 1.1 with the Lawrence-Varchenko formula for rational polyhedral cones.
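The multiplicity function in Theorem 1.2 is easy to tabulate numerically on a window of the weight lattice. The sketch below (Python) uses the positive orthant in \(\mathbb{R}^{3}\), the moment cone of the standard contact sphere \(S^{5}\), as an illustrative good cone; any list of integral inward normals can be substituted.

```python
import itertools
import numpy as np

# Inward normals of an illustrative good cone in R^3 (the positive orthant).
normals = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
n = normals.shape[1] - 1  # contact dimension is 2n+1; here n = 2

def multiplicity(mu):
    """m(mu) per Theorem 1.2: (-1)^n on the open cone C°, +1 on -C, else 0."""
    pairings = normals @ mu
    if np.all(pairings > 0):       # mu in the interior of C
        return (-1) ** n
    if np.all(pairings <= 0):      # mu in the negative cone -C (includes 0)
        return 1
    return 0

for mu in itertools.product(range(-2, 3), repeat=3):
    m = multiplicity(np.array(mu))
    if m:
        print(mu, m)  # nonzero Fourier coefficients of the index character
```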
In [23], Martelli-Sparks-Yau considered the Dolbeault operator on an orbifold resolution of the non-compact Kahler cone of a Sasaki manifold and showed that the equivariant index is given by a sum over the integral points of the moment cone. This corresponds essentially to the second term of the index of \(\overline{\partial}_{H}\) in Theorem 1.2. A similar limiting argument as in [23] applied to this term would compute the volume of the momentum polytope, which up to a constant equals the volume of the toric Sasaki manifold. Another related recent work is by Lin-Loizides-Sjamaar-Song [20], where they study the equivariant index of the basic Dirac operator on Riemannian foliations whose leaf space is symplectic and establish a quantization-commutes-with-reduction theorem. This includes toric \(K\)-contact manifolds as a special case; however, their setup is complementary to ours, as their operator only acts on basic sections, corresponding to the invariant part of our index.
The paper is structured as follows. In Section 2 we provide a brief review of contact and Sasakian structures, recall some results from [17] including a local normal form for toric contact manifolds and Lerman's construction of a toric contact manifold from a good cone. We also introduce the main object of this paper, the horizontal Dolbeault operator. Section 3 introduces the fundamental concepts in the theory of \(G\)-transversally elliptic operators and our main computational tool, Atiyah's algorithm for localising the index. In Section 4 we introduce a deformation vector field and apply the localisation argument to derive Theorem 1.1. Finally, in Section 5, we prove a cone version of the Lawrence-Varchenko formula and apply it to the index of the horizontal Dolbeault operator to obtain an explicit lattice point formula.
## 2. Toric contact manifolds
Let \(M\) be a smooth compact manifold of dimension \(2n+1\). Recall that a contact structure on \(M\) is a hyperplane distribution \(H\subset TM\) defined globally by \(H=\ker\alpha\)
for a \(1\)-form \(\alpha\) such that \(d\alpha|_{H}\) is non-degenerate. The contact form \(\alpha\) defines a volume form \(\alpha\wedge(d\alpha)^{n}\) on \(M\) and its conformal class determines a co-orientation of the pair \((M,H)\). Associated to every co-oriented contact manifold is a symplectic cone \((C(M),\omega)\) defined by
\[C(M)=M\times\mathbb{R}\;\;\text{and}\;\;\omega=d(e^{r}\alpha),\]
where \(r\) is the coordinate in \(\mathbb{R}^{+}\) and \(2\frac{\partial}{\partial r}\) is the Liouville vector field. The Reeb vector field associated to \(\alpha\) is the unique vector field \(R_{\alpha}\) satisfying \(\iota_{R_{\alpha}}\alpha=1\) and \(\iota_{R_{\alpha}}d\alpha=0\). We let \(V\subset TM\) denote the rank one sub-bundle spanned by \(R_{\alpha}\).
A _contact metric structure_ on \((M,\alpha,R_{\alpha})\) is a reduction of structure of the tangent bundle to \(U(n)\subset GL(2n+1,\mathbb{R})\). Alternatively, it consists of an endomorphism \(\Phi\colon TM\to TM\) and a Riemannian metric \(g\) such that \(\Phi^{2}=-I+\alpha\otimes R_{\alpha}\) and \(g(\Phi X,\Phi Y)=g(X,Y)-\alpha(X)\alpha(Y)\) for all vector fields \(X,Y\). This yields an orthogonal decomposition \(TM=V\oplus H\) together with a unitary structure on \(H\). The restriction \(J=\Phi|_{H}\) of \(\Phi\) to \(H\) defines the complex structure on \(H\) and \(d\alpha(X,Y)=g(X,\Phi Y)\) restricted to \(H\) is the Hermitian \(2\)-form associated to \(J\).
We say that \(M\) is a _\(K\)-contact manifold_ if \(R_{\alpha}\) is a Killing vector field with respect to \(g\). This is equivalent to the characteristic foliation generated by \(R_{\alpha}\) being a Riemannian foliation. Extending \(g\) to the symplectic cone, we obtain a metric \(h=dr^{2}+r^{2}g\) on \(C(M)\) and an associated almost complex structure \(J_{C}\) defined by \(h(X,J_{C}Y)=\omega(X,Y)\). A contact metric structure \((\alpha,R_{\alpha},\Phi,g)\) on \(M\) is called _Sasakian_ if \((h,J_{C},\omega)\) is a Kahler structure on \(C(M)\). Sasaki manifolds constitute the most important class of \(K\)-contact manifolds and are the odd dimensional counterparts to Kahler manifolds.
**Example 2.1**.: Geometric quantisation provides examples of _quasi-regular_\(K\)-contact manifolds, that is when all leaves of the characteristic foliation are circles. Let \((B,\omega)\) be a symplectic manifold such that \([\omega]\in H^{2}(B,\mathbb{Z})\). Let \(M\) denote the principal \(S^{1}\)-bundle over \(B\) with Chern class equal to \([\omega]\). There is a connection form \(\alpha\) on \(M\) such that \(d\alpha=\pi^{*}\omega\). Since \(\omega\) is symplectic, we have \(\alpha\wedge(d\alpha)^{n}=\alpha\wedge\pi^{*}\omega^{n}\neq 0\), so \(\alpha\) is a contact form and its Reeb vector field \(R_{\alpha}\) is the generator of the free \(S^{1}\)-action on \(M\). Such contact manifolds are regular and the projection \(\pi\colon M\to B\) is known as the Boothby-Wang fibration [8]. This construction generalises to symplectic orbifolds \((B,\omega)\) such that \([\omega]\in H^{2}(B,\mathbb{R})\) admits a lift to a class \(c\in H^{2}_{\text{orb}}(B,\mathbb{Z})\), the degree \(2\) orbifold cohomology of \(B\). Then \(c\) defines a Seifert fibration \(\pi\colon M\to B\) carrying a pseudo-free \(S^{1}\)-action and \(M\) admits the structure of a quasi-regular \(K\)-contact manifold. When \(B\) is a Kahler orbifold, \(M\) acquires a Sasakian structure.
Let \(G\) be a torus of dimension \(n+1\), \(\mathfrak{g}\) its Lie algebra and \(\mathfrak{g}^{*}\) its dual Lie algebra. We will denote by \(\mathbb{Z}_{G}=\ker(\exp\colon\mathfrak{g}\to G)\) the integral lattice of \(\mathfrak{g}\). Suppose that \(G\) acts on a manifold \(M\). If \(v\in\mathfrak{g}\), we denote by \(v(p)\in T_{p}M\) the tangent vector induced by the action of \(G\) on \(M\).
**Definition 2.2**.: A contact manifold \((M,H)\) of dimension \(2n+1\) is called _toric_ if there is an effective action by an \((n+1)\)-dimensional torus \(G\) on \(M\) preserving the
contact form. The \(\alpha\)_-moment map_\(\phi_{\alpha}\colon M\to\mathfrak{g}^{*}\) is defined by
\[\langle\phi_{\alpha}(p),v\rangle=\alpha_{p}(v(p))\]
for all \(p\in M\) and \(v\in\mathfrak{g}\). The _moment cone_ associated with \(\phi_{\alpha}\) is defined by
\[C=\left\{t\phi_{\alpha}(p)\ |\ t\geq 0,p\in M\right\},\]
and can be identified with the union of \(\left\{0\right\}\) with the image of the moment map \(e^{r}\phi_{\alpha}\) of the lifted Hamiltonian \(G\)-action on \(C(M)\), where \(G\) acts trivially on \(\mathbb{R}\).
The classification of compact toric contact manifold was completed by Lerman [17]. In dimensions greater than three and when the \(G\)-action is not free, the contact toric manifolds are classified by their moment cones, which are good cones:
**Definition 2.3**.: A cone \(C\subset\mathfrak{g}^{*}\) is _good_ if there exists a minimal set of primitive vectors \(v_{1},\dots,v_{d}\in\mathbb{Z}_{G}\), with \(d\geq n+1\), such that:
1. \(C=\bigcap\limits_{j=1}^{d}\left\{y\in\mathfrak{g}^{*}\ |\ \left\langle y,v_{j} \right\rangle\geq 0\right\}\),
2. Any codimension-\(k\) face of \(C\), \(1\leq k\leq n\), is the intersection of exactly \(k\) facets whose set of normals can be completed to an integral basis of \(\mathbb{Z}_{G}\).
_Remark 2.4_.: Good cones are rational polyhedral, meaning that the normals to the facets are integral vectors.
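Condition (2) of Definition 2.3 is mechanical to check: \(k\) integral vectors extend to a \(\mathbb{Z}\)-basis of the lattice precisely when the gcd of their \(k\times k\) minors equals \(1\) (all invariant factors equal \(1\)). A small sketch with illustrative normals, not data from the paper:

```python
from itertools import combinations
from math import gcd
from sympy import Matrix

def extends_to_integral_basis(rows):
    """True iff the integer vectors `rows` (k vectors in Z^m) can be completed
    to a Z-basis of Z^m, i.e. the gcd of all k x k minors equals 1."""
    A = Matrix(rows)
    k, m = A.shape
    g = 0
    for cols in combinations(range(m), k):
        g = gcd(g, int(A[:, list(cols)].det()))
    return g == 1

# Two facet normals of the orthant cone in R^3 meeting along a face:
print(extends_to_integral_basis([[1, 0, 0], [0, 1, 0]]))  # True: good
# A non-primitive configuration violating the good cone condition:
print(extends_to_integral_basis([[2, 0, 0], [0, 1, 0]]))  # False
```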
Toric contact manifolds can be further divided into Reeb and non-Reeb types. We say that \(M\) is of _Reeb type_ if \(R_{\alpha}\) is generated by an element \(R\in\mathfrak{g}\).
**Theorem 2.5** ([4], [17]).: _If \((M,\alpha)\) is a toric contact manifold of Reeb type, then its moment cone \(C\) is a strictly convex good cone. The image of the \(\alpha\)-moment map \(\phi_{\alpha}\) is a compact convex simple polytope \(P\) given by the intersection of the characteristic hyperplane_
\[\mathcal{H}=\left\{\eta\in\mathfrak{g}^{*}\ |\ \eta(R)=1\right\},\]
_determined by the vector \(R\), with the moment cone \(C\)._
_Remark 2.6_.: Strictly convex means that \(C\) contains no linear subspaces of positive dimension, so it is a cone over a polytope. Toric contact manifolds with good moment cones \(C\) that are not strictly convex are diffeomorphic to \(\mathbb{T}^{k}\times S^{k+2l-1}\), for some \(k>1,l\geq 0\)[19].
Toric contact manifolds with an invariant \(K\)-contact structure must be of Reeb type [18] and they always admit an invariant Sasakian structure [4]. In the sequel, we will therefore assume that our toric contact manifolds are of Reeb type of dimension greater than three and equipped with an invariant Sasakian structure. We will need the following result characterising the Reeb vector fields associated to a Sasakian structure:
**Theorem 2.7** ([22]).: _Let \(v_{1},\dots,v_{d}\in\mathfrak{g}\) be the defining integral normals of the moment cone \(C\in\mathfrak{g}^{*}\) associated with a toric contact manifold of Reeb type \((M,H)\)
_A vector \(R\in\mathfrak{g}\) generates the Reeb vector field of an invariant Sasakian \(1\)-form \(\alpha\) such that \(H=\ker\alpha\) if and only if_
\[R=\sum_{j=1}^{d}a_{j}v_{j},\text{ with }a_{j}\in\mathbb{R}^{+}\text{ for all }j=1,\ldots,d.\]
**Example 2.8**.: Returning to Example 2.1, if \((B,\omega)\) is an integral toric symplectic manifold, then the principal \(S^{1}\)-bundle over \(B\) is a good toric contact manifold with moment cone
\[C=\{r(x,1)\in\mathbb{R}^{n}\times\mathbb{R}\ |\ x\in P,r\geq 0\},\]
where \(P\subset\mathbb{R}^{n}\) is the integral Delzant polytope associated with \(B\).
**Example 2.9**.: For each pair of coprime integers \(p,q\) with \(0<q<p\), the \(Y^{p,q}\) spaces are toric Sasaki-Einstein metrics on \(S^{2}\times S^{3}\). The Sasakian structure is irregular of rank \(2\) whenever \(4p^{2}-3q^{2}\) is not a perfect square [13]. In higher dimensions they generalise to the family of toric contact manifolds \(N^{2n+1}_{k,m}\), \(n\geq 2\), \(k\geq 1\) and \(0\leq m<kn\), associated to the good cones \(C(k,m)\subset(\mathbb{R}^{n+1})^{*}\) defined by the normals
\[v_{i}=e_{i}+e_{n+1},\ i=1,\dots,n-1,\quad v_{n}=-\sum_{i=1}^{n-1}e_{i}+me_{n}+e_{n+1},\quad v_{-}=ke_{n}+e_{n+1},\quad v_{+}=-e_{n}+e_{n+1},\]
where \(e_{i}\in\mathbb{R}^{n+1}\), \(i=1,\ldots,n+1\), are the canonical basis vectors of \(\mathbb{R}^{n+1}\). Unlike the \(Y^{p,q}\) spaces, the \(N^{2n+1}_{k,m}\) are not all diffeomorphic [1].
### Lerman's construction
The classification of toric contact manifolds of Reeb type is analogous to Delzant's classification of toric symplectic manifolds [4], [17]. In this section, we briefly recall the construction of a toric contact manifold from its moment cone and elucidate the relation with the isotropy weights.
Let \(C\subset\mathfrak{g}^{*}\) be a strictly convex good cone given by
\[C=\bigcap_{i=1}^{d}\{\eta\in\mathfrak{g}^{*}\ |\ \eta(v_{i})\geq 0\},\]
where \(v_{i}\in\mathbb{Z}_{G}\), \(i=1,\ldots,d\), are the inward pointing normals of \(C\) and \(\dim\mathfrak{g}^{*}>2\).
Let \(\{e_{1},\ldots,e_{d}\}\) denote the standard basis of \(\mathbb{R}^{d}\) and define the map \(\beta\colon\mathbb{R}^{d}\to\mathfrak{g}\) by \(\beta(e_{i})=v_{i}.\) Denote by \(\mathfrak{k}\) the kernel of \(\beta\). Since \(C\) is strictly convex, \(\beta\) is surjective and we have the short exact sequences
\[0\to\mathfrak{k}\overset{\iota}{\to}\mathbb{R}^{d}\overset{\beta}{\to} \mathfrak{g}\to 0\ \text{ and }\ 0\to\mathfrak{g}^{*}\overset{\beta^{*}}{\longrightarrow}(\mathbb{R}^{d})^{*} \overset{\iota^{*}}{\longrightarrow}\mathfrak{k}^{*}\to 0.\]
Since \(\beta(\mathbb{Z}^{d})\subset\mathbb{Z}_{G}\), \(\beta\) induces a map \(\tilde{\beta}\colon\mathbb{T}^{d}=\mathbb{R}^{d}/\mathbb{Z}^{d}\to\mathfrak{g }/\mathbb{Z}_{G}=G\). Let
\[K=\left\{[t]\in\mathbb{T}^{d}\ |\ \sum_{i=1}^{d}t_{i}v_{i}\in\mathbb{Z}_{G}\right\}\]
denote the kernel of \(\tilde{\beta}\). It is a compact abelian subgroup with Lie algebra \(\mathfrak{k}=\ker(\beta)\). Consider the standard action of \(\mathbb{T}^{d}\) on \((\mathbb{C}^{d},\omega_{st}=i/2\pi\sum_{j=1}^{d}dz_{j}\wedge d\overline{z}_{j})\) given by
\[[t]\cdot(z_{1},\ldots,z_{d})=(e^{2\pi it_{1}}z_{1},\ldots,e^{2\pi it_{d}}z_{d}).\]
The corresponding moment map \(\phi\colon\mathbb{C}^{d}\to(\mathbb{R}^{d})^{*}\) is given by
\[\phi(z_{1},\ldots,z_{d})=\sum_{j=1}^{d}\mid z_{j}\mid^{2}e_{j}^{*},\]
where \(\{e_{j}^{*}\}\) is the basis dual to the canonical basis \(\{e_{j}\}\). Since \(K\) is a subgroup of \(\mathbb{T}^{d}\), it acts on \(\mathbb{C}^{d}\) with moment map
\[\phi_{K}(z_{1},\ldots,z_{d})=\sum_{j=1}^{d}\mid z_{j}\mid^{2}\iota^{*}(e_{j}^{* })\in\mathfrak{k}^{*}.\]
The reduced space \(W_{C}=\frac{(\phi_{K}^{-1}(0)\backslash\{0\})}{K}\) is a toric symplectic cone with symplectic form \(\omega_{C}\) induced by \(\omega_{st}\). It carries an action of \(G=\mathbb{T}^{d}/K\) induced by the \(\mathbb{T}^{d}\)-action and an action of \(\mathbb{R}\) induced by the standard radial \(\mathbb{R}\)-action on \(\mathbb{C}^{d}\).
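For a concrete cone, the Lie algebra \(\mathfrak{k}=\ker\beta\) is a nullspace computation. The sketch below uses the normals of the cone over a square in \((\mathbb{R}^{3})^{*}\) as an illustrative good cone, not one taken from the paper:

```python
from sympy import Matrix

# Inward normals v_1, ..., v_4 of the cone over a square in R^3 (d = 4).
normals = [[1, 0, 1], [-1, 0, 1], [0, 1, 1], [0, -1, 1]]

# beta: R^d -> g maps e_i to v_i, so its matrix has the normals as columns.
beta = Matrix(normals).T
k_basis = beta.nullspace()  # basis of the Lie algebra k = ker(beta)
print([list(v) for v in k_basis])  # [[-1, -1, 1, 1]]: k is one-dimensional

# An integral basis vector of k exponentiates to the circle
# K = {(-s, -s, s, s)} inside T^4, up to finite components.
```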
Let \(\sigma\) be a section of \(\beta\colon\mathbb{R}^{d}\to\mathfrak{g}\), giving a splitting \(\mathbb{R}^{d}\cong\iota(\mathfrak{k})\oplus\sigma(\mathfrak{g})\). Since \(\sigma\) is injective, its image defines an \((n+1)\)-torus \(\sigma(G)\subset\mathbb{T}^{d}\). The action of \(G\) on \((\phi_{K}^{-1}(0)\backslash\{0\})\) via \(\sigma(G)\subset\mathbb{T}^{d}\) is Hamiltonian with moment map \(\tilde{\phi}=\sigma^{*}\circ\phi\colon(\phi_{K}^{-1}(0)\backslash\{0\})\to\mathfrak{g}^{*}\). The \(G\)-action and the moment map \(\tilde{\phi}\) descend to the quotient \(W_{C}\) making it a Hamiltonian \(G\)-space with moment map
\[\phi_{G}\colon W_{C} \to\mathfrak{g}^{*}\] \[[z_{1},\ldots,z_{d}] \mapsto\sigma^{*}(\phi(z_{1},\ldots,z_{d})),\]
where we denote by \([z_{1},\ldots,z_{d}]\in W_{C}\) the class of \((z_{1},\ldots,z_{d})\in(\phi_{K}^{-1}(0)\backslash\{0\})\) in the quotient. The image of \(\phi_{G}\) is the cone \(C\backslash\{0\}\). The sphere \(S^{2d-1}=\{z\in\mathbb{C}^{d}\ |\ |z|=1\}\) is a \(\mathbb{T}^{d}\)-invariant hypersurface of contact type in \(\mathbb{C}^{d}\) and
\[M_{C}=\frac{(\phi_{K}^{-1}(0)\bigcap S^{2d-1})}{K}\]
is a \(G\)-invariant hypersurface of contact type in \(W_{C}\). Therefore it has a toric contact structure induced by the \(G\)-invariant contact form \(\alpha=i_{X_{C}}\omega_{C}\), where \(X_{C}\) is the Liouville vector field induced by the \(\mathbb{R}\)-action on \(W_{C}\). The moment cone of \((M_{C},\alpha)\) is \(C\).
**Lemma 2.10**.: \(\phi_{K}^{-1}(0)=\phi^{-1}(\beta^{*}(C))\)
Proof.: Since \(0\to\mathfrak{g}^{*}\xrightarrow{\beta^{*}}(\mathbb{R}^{d})^{*}\xrightarrow{ \iota^{*}}\mathfrak{k}^{*}\to 0\) is exact, we have \((\iota^{*})^{-1}(0)=\beta^{*}(\mathfrak{g}^{*})\). Therefore \(\phi_{K}^{-1}(0)=(\iota^{*}\circ\phi)^{-1}(0)=\phi^{-1}((\iota^{*})^{-1}(0))= \phi^{-1}(\beta^{*}(\mathfrak{g}^{*}))=\phi^{-1}(\beta^{*}(\mathfrak{g}^{*}) \cap\phi(\mathbb{C}^{d}))\). It follows from
\[\beta^{*}(\mathfrak{g}^{*})\cap\phi(\mathbb{C}^{d}) =\{\beta^{*}(\eta)\mid\eta\in\mathfrak{g}^{*}\text{ and }\langle\beta^{*}(\eta),e_{i}\rangle\geq 0\text{ for all }i\}\] \[=\{\beta^{*}(\eta)\mid\eta\in\mathfrak{g}^{*}\text{ and }\langle\eta,\beta(e_{i})\rangle\geq 0\text{ for all }i\}\] \[=\{\beta^{*}(\eta)\mid\eta\in\mathfrak{g}^{*}\text{ and }\langle\eta,v_{i}\rangle\geq 0\text{ for all }i\}\] \[=\{\beta^{*}(\eta)\mid\eta\in C\}\]
that \(\phi_{K}^{-1}(0)=\phi^{-1}(\beta^{*}(C))\).
The following lemma informs us how to read the isotropy groups from the moment cone.
**Lemma 2.11**.: _Let \((M,\alpha)\) be a toric contact manifold of Reeb type with moment cone \(C\). Let \(p\in M\) and \(\eta=\phi_{\alpha}(p)\) the image of \(p\) under the \(\alpha\)-moment map. If \(\eta(v_{i})=0\), for a subset of indices \(i\in I\subset\{1,\dots,d\}\), then its isotropy Lie algebra \(\mathfrak{g}_{p}\) is generated by the vectors \(v_{i}\), \(i\in I\)._
Proof.: Lerman's construction implies that \(M\cong M_{C}\) and that \(C=\phi_{G}(W_{C})\). Therefore \(\eta(v_{i})=0\) if and only if \(\phi_{G}(p)(v_{i})=0\), where we are considering \(p\) as an element of \(W_{C}\) via the inclusion \(M\cong M_{C}\subset W_{C}\). From Lemma 2.10 we have that \(\phi_{K}^{-1}(0)=\phi^{-1}(\beta^{*}(C))\). Let \(z=(z_{1},\dots,z_{d})\in\phi_{K}^{-1}(0)\) be such that \(\beta^{*}(\eta)=\phi(z)\). Then
\[|z_{j}|^{2}=\langle\phi(z),e_{j}\rangle=\langle\beta^{*}(\eta),e_{j}\rangle= \langle\eta,\beta(e_{j})\rangle=\langle\eta,v_{j}\rangle\,.\]
Therefore \(z_{j}=0\) if and only if \(\eta(v_{j})=0\). The torus \(\mathbb{T}^{d}\) acts on \(\phi_{K}^{-1}(0)\subset\mathbb{C}^{d}\) via the standard action and \(\eta(v_{j})=0\) if and only if \(e_{j}\in\mathfrak{t}_{z}^{d}\). The \(G\)-action on \(\phi_{K}^{-1}(0)\) is given by a section \(\sigma\) of \(\tilde{\beta}\colon\mathbb{T}^{d}\to G\). Since \(K=\ker\tilde{\beta}\), we have \(\sigma_{*}(v_{j})=e_{j}+k\), where \(k\in\mathfrak{k}\) and \(\sigma_{*}\) is the Lie algebra map induced by \(\sigma\). It follows that \(\mathfrak{g}_{p}=\mathfrak{t}_{z}^{d}/\mathfrak{k}\), therefore \([\sigma_{*}(v_{j})]=[e_{j}]\in\mathfrak{t}_{z}^{d}/\mathfrak{k}\). Since the \(G\)-action is given by the section \(\sigma\), we have that \(v_{j}\in\mathfrak{g}_{p}\) is equivalent to \([\sigma_{*}(v_{j})]\in\mathfrak{t}_{z}^{d}/\mathfrak{k}\), and therefore \(v_{j}\in\mathfrak{g}_{p}\) if and only if \(\eta(v_{j})=0\). Since \(C\) is a good cone, the \(v_{j}\) satisfying \(\eta(v_{j})=0\) form an integral basis of \(\mathfrak{g}_{p}\).
Next we consider the weights of the isotropy representations. First we need a lemma from [9]:
**Lemma 2.12**.: _Let \(\rho\colon G\to GL(V)\) be a representation of a torus \(G\) preserving a symplectic form \(\omega\) such that \(\dim V=2\dim G\). If \(\rho\) is faithful, then its weights form a basis of the weight lattice \(\mathbb{Z}_{G}^{*}\) of \(G\)._
We have the following specialisation of Lerman's local normal form for the moment map [17], when restricted to a vertex.
**Theorem 2.13**.: _Let \(p\in M\) be such that \(\phi_{\alpha}(p)\) is a vertex of the convex polytope \(P=\phi_{\alpha}(M)\) and let \(V=H_{p}\) be the fibre of the contact distribution on \(p\). The isotropy group \(G_{p}\) acts on \(V\) preserving the symplectic form \(d\alpha|_{V}\). Then_
\[\mathfrak{g}_{p}^{\circ}=\mathbb{R}\phi_{\alpha}(p)\]
_and we can choose a splitting_
\[\mathfrak{g}^{*}=\mathfrak{g}_{p}^{\circ}\oplus\mathfrak{g}_{p}^{*}=\mathbb{R }\phi_{\alpha}(p)\oplus\mathfrak{g}_{p}^{*}.\]
_Let \(i\colon\mathfrak{g}_{p}^{*}\to\mathfrak{g}^{*}\) be the corresponding embedding. Then there exists a \(G\)-invariant neighbourhood \(U\) of the zero section \(G\cdot[1,0]\) in_
\[N=G\times_{G_{p}}V\]
_and an open \(G\)-equivariant embedding \(\varphi\colon U\to M\) with \(\varphi([1,0])=p\) and a \(G\)-invariant \(1\)-form \(\alpha_{N}\) on \(N\) such that_
1. \(\varphi^{*}\alpha=e^{f}\alpha_{N}\) _for some function_ \(f\in C^{\infty}(U)\)_._
2. _the_ \(\alpha_{N}\)_-moment map_ \(\phi_{\alpha_{N}}\) _is given by_ \[\phi_{\alpha_{N}}([a,v])=\phi_{\alpha}(p)+i(\phi_{V}(v)),\] _where_ \(\phi_{V}\colon V\to\mathfrak{g}^{*}\) _is the moment map for the representation of_ \(G_{p}\) _on_ \(V\)_._
_Consequently,_
\[\phi_{\alpha}\circ\varphi([a,v])=(e^{f}\phi_{\alpha_{N}})([a,v])=e^{f([a,v])}( \phi_{\alpha}(p)+i(\phi_{V}(v)))\]
_for some \(G\)-invariant function \(f\) on \(N\)._
The following corollary shows how the isotropy weights relate to the moment cone.
**Corollary 2.14**.: _Let \(p\in M\) be such that \(\phi_{\alpha}(p)\) is a vertex of the convex polytope \(P=\phi_{\alpha}(M)\). Then the representation \(G_{p}\to GL(V)\) is faithful, and the weights of the action of the isotropy group \(G_{p}\) on \(V=H_{p}\), the fibre of the contact distribution at \(p\), form a basis of the weight lattice \(\mathbb{Z}_{G_{p}}^{*}\) of \(\mathfrak{g}_{p}^{*}\) that is dual to the basis \(\{v_{1}^{p},\ldots,v_{n}^{p}\}\) of \(\mathfrak{g}_{p}\), where \(v_{1}^{p},\ldots,v_{n}^{p}\) are the normals to the faces of the moment cone \(C(\phi_{\alpha})\) meeting at \(\phi_{\alpha}(p)\)._
Proof.: Theorem 2.13 ensures that there is a neighbourhood of \(G\cdot p\) that is \(G\)-equivariantly diffeomorphic to a neighbourhood of the zero section of \(N=G\times_{G_{p}}V\). Since the action of \(G\) on \(N\) is effective, the representation of \(G_{p}\) on \(V\) must be faithful. The image of the moment map \(\phi_{V}\) is
\[\phi_{V}(V)=\left\{\sum_{i=1}^{n}s_{i}w_{p}^{i}\ |\ s_{i}\geq 0\right\}\subset \mathfrak{g}_{p}^{*},\]
where the \(w_{p}^{i}\) are the weights of the isotropy action of \(G_{p}\) on \(V=H_{p}\). Therefore the weights \(w_{p}^{i}\) generate the edges of the cone \(\phi_{V}(V)\). Lemma 2.12 shows that the weights \(w_{p}^{i}\) form a basis of the integral lattice of \(\mathfrak{g}_{p}^{*}\). It follows that the weights \(i(w_{p}^{j})\in\mathfrak{g}^{*}\) satisfy \(i(w_{p}^{j})(v_{k}^{p})=\delta_{jk}\). This implies that \(w_{p}^{j}(v_{k}^{p})=\delta_{jk}\), where we are viewing the normals \(v_{1}^{p},\ldots,v_{n}^{p}\) as elements of \(\mathfrak{g}_{p}\). Thus \(w_{p}^{1},\ldots,w_{p}^{n}\in\mathfrak{g}_{p}^{*}\) is the dual basis to \(v_{1}^{p},\ldots,v_{n}^{p}\).
_Remark 2.15_.: Corollary 2.14 allows us to read the weights of the isotropy representation \(G_{p}\to GL(V)\) from the moment cone; one simply needs to choose a vector \(v_{0}^{p}\) completing the set \(\{v_{1}^{p},\ldots,v_{n}^{p}\}\) to an integral basis of the lattice \(\mathbb{Z}_{G}\subset\mathfrak{g}\).
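Remark 2.15 translates into linear algebra: stacking \(v_{0}^{p},v_{1}^{p},\ldots,v_{n}^{p}\) as the rows of an integral matrix \(V\), the rows of \((V^{-1})^{T}\) form the dual basis, and the isotropy weights are its last \(n\) rows. An illustrative sketch (orthant cone in \(\mathbb{R}^{3}\), with a choice of completing vector):

```python
from sympy import Matrix

# Facet normals meeting at an edge of the orthant cone in R^3, plus a
# completing vector v0 making {v0, v1, v2} an integral basis of Z^3.
v0, v1, v2 = [0, 0, 1], [1, 0, 0], [0, 1, 0]

V = Matrix([v0, v1, v2])   # rows: v0, v1, ..., vn
D = V.inv().T              # rows of D are the dual basis vectors
weights = [list(D.row(j)) for j in range(1, V.rows)]
print(weights)  # [[1, 0, 0], [0, 1, 0]], i.e. w^j(v_k) = delta_jk
```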
### The horizontal Dolbeault operator
Let \(M\) be a toric contact manifold of Reeb type with \(\dim M>3\) endowed with a Sasakian structure \((\alpha,R_{\alpha},\Phi,g)\). The transverse complex structure \(J=\Phi|_{H}\) allows us to introduce a horizontal Dolbeault operator \(\overline{\partial}_{H}\), and more generally \(\overline{\partial}_{H}^{E}\) twisted by a transverse holomorphic bundle \(E\). We briefly recall the definitions.
Let \(\Omega_{H}^{k}(M)=\{\omega\in\Omega^{k}(M)\ |\ {\iota}_{R_{\alpha}}\omega=0\}= \Gamma(M,\bigwedge^{k}H^{*})\) denote the space of _horizontal_\(k\)-forms. The projection operators \(P_{V}=\alpha\wedge{\iota}_{R_{\alpha}}\) and \(P_{H}=1-P_{V}\) determine a splitting
\[\Omega^{*}(M)=\Omega_{V}^{*}(M)\oplus\Omega_{H}^{*}(M)=P_{V}(\Omega^{*}(M)) \oplus P_{H}(\Omega^{*}(M))\]
into horizontal and vertical forms, and \(d_{H}=P_{H}\circ d\) is a differential on \(\Omega^{k}_{H}(M)\). The transverse complex structure gives the usual decomposition of \(\Omega^{*}_{H}(M)\otimes\mathbb{C}\) into horizontal \((p,q)\)-forms and defines the horizontal Dolbeault complex:
\[0\to\Omega^{0,0}_{H}(M)\xrightarrow{\overline{\partial}_{H}}\Omega^{0,1}_{H}(M)\xrightarrow{\overline{\partial}_{H}}\cdots\xrightarrow{\overline{\partial}_{H}}\Omega^{0,n}_{H}(M)\to 0,\]
with the associated symbol complex
\[0\to\pi^{*}(\bigwedge^{0,0}H^{*})\xrightarrow{\sigma(\overline{\partial}_{H})}\pi^{*}(\bigwedge^{0,1}H^{*})\xrightarrow{\sigma(\overline{\partial}_{H})}\cdots\xrightarrow{\sigma(\overline{\partial}_{H})}\pi^{*}(\bigwedge^{0,n}H^{*})\to 0,\]
where \(\pi\colon T^{*}M\to M\) is the projection and \(\sigma(\overline{\partial}_{H})_{(p,\xi)}(\bar{w})=\xi^{0,1}_{H}\wedge\bar{w}\), where \((p,\xi)\in T^{*}M\), \(\bar{w}\in\bigwedge^{0,k}H^{*}|_{p}\) and \(\xi^{0,1}_{H}=P_{H}(\xi)^{0,1}\) is the \((0,1)\)-component of the horizontal projection of \(\xi\).
A \(G\)-equivariant vector bundle \(\pi_{E}\colon E\to M\) is transversally holomorphic if with respect to an open cover \(\{U_{\alpha}\}\) of \(M\), it is determined by transition functions \(g_{\alpha\beta}\) satisfying \(\overline{\partial}_{H}(g_{\alpha\beta})=0\). The twisted horizontal Dolbeault operator \(\overline{\partial}_{H}^{E}\) is given locally by
\[\overline{\partial}_{H}^{E}\colon\Omega^{0,k}_{H}(M,E) \to\Omega^{0,k+1}_{H}(M,E)\] \[w\otimes u_{\alpha} \mapsto\overline{\partial}_{H}(w)\otimes u_{\alpha},\]
and we have the twisted horizontal Dolbeault complex
\[0\to\Omega^{0,0}_{H}(M,E)\xrightarrow{\overline{\partial}_{H}^{E}}\Omega^{0,1}_{H}(M,E)\xrightarrow{\overline{\partial}_{H}^{E}}\cdots\xrightarrow{\overline{\partial}_{H}^{E}}\Omega^{0,n}_{H}(M,E)\to 0,\]
with the symbol \(\sigma(\overline{\partial}_{H}^{E})_{(p,\xi)}(\bar{w}\otimes\bar{e})=\xi^{0,1}_{H}\wedge\bar{w}\otimes\bar{e}\), where \(\bar{w}\otimes\bar{e}\in\bigwedge^{0,k}H^{*}|_{p}\otimes E|_{p}\).
## 3. Equivariant Index
### Transversally elliptic operators
In this section, we collect relevant facts about transversally elliptic operators [3, 24].
Let \(G\) be a compact Lie group and \(M\) a compact \(G\)-manifold. We denote by \(\pi\colon T^{*}M\to M\) the natural projection. A complex of \(G\)-invariant pseudodifferential operators is called transversally elliptic if it is elliptic in the directions transversal to the \(G\)-orbits. More precisely, let \(T^{*}_{G}M\subset T^{*}M\) be the closed subset defined by the union of the conormals to the \(G\)-orbits,
\[T^{*}_{G}M=\{(p,\xi)\in T^{*}M\ |\ \xi(v(p))=0\ \text{for all}\ v\in\mathfrak{g}\}.\]
A complex of \(G\)-invariant pseudodifferential operators \(P\) is a sequence
\[0\to\Gamma(E^{0})\xrightarrow{P_{1}}\Gamma(E^{1})\xrightarrow{P_{2}}\cdots \xrightarrow{P_{n}}\Gamma(E^{n})\to 0,\]
where the \(P_{i}\) are \(G\)-invariant pseudodifferential operators and \(\Gamma(E^{i})\) is the space of sections of the \(G\)-vector bundle \(E^{i}\), \(i=0,\dots,n\). Its symbol \(\sigma_{P}\) on \(T^{*}M\) is given by the complex
\[0\to\pi^{*}E^{0}\xrightarrow{\sigma_{1}}\pi^{*}E^{1}\xrightarrow{\sigma_{2}} \cdots\xrightarrow{\sigma_{n}}\pi^{*}E^{n}\to 0,\]
where \(\sigma_{i}=\sigma(P_{i})\) is the symbol of the pseudodifferential operator \(P_{i}\). Let
\[\operatorname{Char}(\sigma_{P})=\{(p,\xi)\in T^{*}M\ |\ \sigma_{P}(p,\xi)\ \text{is not exact}\}\]
denote the characteristic set of the complex.
**Definition 3.1**.: A complex of \(G\)-invariant pseudodifferential operators \(P\) is \(G\)-_transversally elliptic_ if \(\operatorname{Char}(\sigma_{P})\bigcap T_{G}^{\ast}M\) is compact.
A \(G\)-transversally elliptic symbol \(\sigma_{P}\) defines an element \([\sigma_{P}]\in K_{G}^{\ast}(T_{G}^{\ast}M)\). Conversely, given a symbol class \([\sigma]\in K_{G}^{\ast}(T_{G}^{\ast}M)\), there is a \(G\)-transversally elliptic pseudodifferential operator \(A\colon\Gamma(M,E)\to\Gamma(M,F)\) such that \(\sigma(A)=\sigma\). Let \(\hat{G}\) be the set of isomorphism classes of irreducible complex representations of \(G\) and \(\hat{R}(G)\) be the space of \(\mathbb{Z}\)-valued functions on \(\hat{G}\). The elements \(V\in\hat{R}(G)\) are thus infinite series
\[V=\sum_{\mu\in\hat{G}}m(\mu)\chi_{V_{\mu}}\]
with \(m(\mu)\in\mathbb{Z}\). The kernel of \(A\) is not finite-dimensional, but it is shown in [3] that for every \(\mu\in\hat{G}\) the space \(\operatorname{Hom}_{G}(V_{\mu},\ker(A))\) is a finite dimensional vector space of dimension \(m(V_{\mu},A)\). The integer \(m(V_{\mu},A)-m(V_{\mu},A^{*})\) depends only on the class of the symbol \(\sigma(A)\) in \(K_{G}^{\ast}(T_{G}^{\ast}M)\) and the index of \(\sigma\) is defined by
\[\operatorname{ind}_{G}^{M}(\sigma)=\sum_{\mu\in\hat{G}}(m(V_{\mu},A)-m(V_{\mu},A^{*}))\chi_{V_{\mu}},\]
where the adjoint \(A^{*}\) of \(A\), defined by choosing a \(G\)-invariant metric, is also a \(G\)-transversally elliptic pseudodifferential operator. Atiyah showed in [3] that \(\operatorname{ind}_{G}(\sigma)\) defines a distribution on \(G\). The index depends only on the symbol class \([\sigma]\in K_{G}^{\ast}(T_{G}^{\ast}M)\) and descends to a map
\[\operatorname{ind}_{G}^{M}\colon K_{G}^{\ast}(T_{G}^{\ast}M)\to\hat{R}(G).\]
The following basic example will be important in the sequel.
**Example 3.2**.: Let \(M=S^{1}\) with \(S^{1}\) acting on \(M\) by left translation. Let \(E=M\times\mathbb{C}\) be the trivial line bundle and \(F=M\times\{0\}\). Then \(\Gamma(M,E)=C^{\infty}(S^{1})\) and \(\Gamma(M,F)=0\). The operator \(D=0\colon\Gamma(M,E)\to\Gamma(M,F)\) has symbol
\[\sigma_{D}\colon\pi^{*}E\to\pi^{*}F,\qquad(\xi,e)\mapsto(\xi,0).\]
Since \(T_{S^{1}}^{\ast}M=M\times\{0\}\), the operator \(D\) is \(S^{1}\)-transversally elliptic. The index of \(\sigma_{D}\) is
\[\operatorname{ind}_{S^{1}}^{M}(\sigma_{D})(t)=\sum_{n\in\mathbb{Z}}\chi_{V_{n} }(t)=\sum_{n\in\mathbb{Z}}t^{n},\]
where \(V_{n}=\mathbb{C}\) is the \(S^{1}\)-module with the action \(t\cdot z=t^{n}z\).
Let \(M\) be a \((2n+1)\)-dimensional toric contact manifold of Reeb type, \(n>1\), with an invariant Sasakian structure. Let \(\overline{\partial}_{H}^{E}\) be the twisted horizontal Dolbeault complex on \(M\) as defined in Section 2.2.
**Proposition 3.3**.: \(\sigma(\overline{\partial}_{H}^{E})\) _is a \(G\)-transversally elliptic symbol._
Proof.: Recall that the symbol \(\sigma_{k}(\overline{\partial}_{H}^{E})\colon\pi^{*}(\bigwedge^{0,k}H^{*}\otimes E )\to\pi^{*}(\bigwedge^{0,k+1}H^{*}\otimes E)\) is given by
\[\sigma_{k}(\overline{\partial}_{H}^{E})_{(p,\xi)}(\bar{w}\otimes\bar{e})= \xi_{H}^{0,1}\wedge\bar{w}\otimes\bar{e},\]
where \((p,\xi)\in T^{*}M\), \(\bar{w}\otimes\bar{e}\in\bigwedge^{0,k}H^{*}|_{p}\otimes E|_{p}\). The complex \(\sigma(\overline{\partial}_{H}^{E})_{(p,\xi)}\) is exact so long as \(\xi_{H}^{0,1}\neq 0\), that is, when \(\xi\) is not a multiple of the contact form \(\alpha\). Since \(M\) is of Reeb type, a vector \(R\in\mathfrak{g}\) generates the Reeb vector field. We have
\[(p,\xi)\in T_{G}^{*}M\implies\xi(v(p))=0\text{ for all }v\in\mathfrak{g}.\]
Since the action of \(G\) generates the Reeb vector field, we have \(\xi(R(p))=0\). Since \(\alpha_{p}(R(p))=1\), it follows that \(\xi\) is not a nonzero multiple of \(\alpha_{p}\), and therefore \(\sigma(\overline{\partial}_{H}^{E})_{(p,\xi)}\) is exact whenever \(\xi\neq 0\). Hence \(\operatorname{Char}(\sigma(\overline{\partial}_{H}^{E}))\cap T_{G}^{*}M\) is contained in the zero section, which is compact.
We conclude by recalling the multiplicative and excision properties of the index [3]. Consider a compact Lie group \(G_{2}\) acting on two manifolds \(M_{1}\) and \(M_{2}\) and assume that another compact Lie group \(G_{1}\) acts on \(M_{1}\) commuting with the action of \(G_{2}\). The exterior product of vector bundles induces a multiplication map
\[\boxtimes\colon K_{G_{1}\times G_{2}}(T_{G_{1}}^{*}M_{1})\otimes K_{G_{2}}(T_ {G_{2}}^{*}M_{2})\to K_{G_{1}\times G_{2}}(T_{G_{1}\times G_{2}}^{*}(M_{1} \times M_{2})). \tag{3.1}\]
**Theorem 3.4** (Multiplicative Property).: _For any \(\sigma_{1}\in K_{G_{1}\times G_{2}}(T_{G_{1}}^{*}M_{1})\) and any \(\sigma_{2}\in K_{G_{2}}(T_{G_{2}}^{*}M_{2})\), we have_
\[\text{ind}_{G_{1}\times G_{2}}^{M_{1}\times M_{2}}(\sigma_{1}\boxtimes \sigma_{2})=\text{ind}_{G_{1}\times G_{2}}^{M_{1}}(\sigma_{1})\text{ind}_{G_{ 2}}^{M_{2}}(\sigma_{2}).\]
**Theorem 3.5** (Excision Property).: _Let \(j\colon U\to M\) be an open \(G\)-embedding into a compact \(G\)-manifold \(M\). We have a pushforward map \(j_{*}\colon K_{G}(T_{G}^{*}U)\to K_{G}(T_{G}^{*}M)\) and the composition_
\[K_{G}(T_{G}^{*}U)\xrightarrow{j_{*}}K_{G}(T_{G}^{*}M)\xrightarrow{\text{ind}_{G}^{M}}\hat{R}(G)\]
_is independent of \(j\colon U\to M\)._
The product of a symbol \(\sigma\) by a \(G\)-equivariant vector bundle \(E\) is the symbol given by
\[(\sigma\otimes E)(p,\xi)=\sigma(p,\xi)\otimes\text{Id}_{E}.\]
Note that the symbol \(\sigma(\overline{\partial}_{H}^{E})\) is of this form.
**Proposition 3.6**.: _Let \(\sigma\in K_{G}(T_{G}^{*}M)\), \(E\) a \(G\)-module and \(\underline{E}\) the corresponding trivial \(G\)-equivariant bundle over \(M\), then_
\[\text{ind}_{G}^{M}(\sigma\otimes\underline{E})=\text{ind}_{G}^{M}(\sigma) \otimes\chi_{E}\in\hat{R}(G),\]
_where \(\chi_{E}\) is the character of the \(G\)-module \(E\)._
### Localisation
In this section, we review the \(K\)-theoretic localisation method for computing the index of transversally elliptic operators developed in [3]. The core idea is to choose a filtration of the manifold that allows one to decompose the symbol into contributions from lower dimensional spaces, essentially reducing the problem to a computation on vector spaces. The main ingredient involved in the computations is Atiyah's pushed symbol \(\sigma^{\epsilon}\).
Let \(G\) be an \(n\)-dimensional torus acting on \(\mathbb{C}^{n}\) with no fixed vectors and with weights \(w^{i}\in\mathfrak{g}^{*}\), \(i=1,\ldots,n\). A \(G\)-invariant Riemannian metric \(h\) on \(\mathbb{C}^{n}\) induces an isomorphism
\[T\mathbb{C}^{n} \to T^{*}\mathbb{C}^{n}\] \[v \mapsto\tilde{v}=h(v,\cdot).\]
Given \(\epsilon\in\mathfrak{g}\) we will denote by \(\epsilon(p)\in T_{p}\mathbb{C}^{n}\) the vector generated by \(\epsilon\) at \(p\) and by \(\widetilde{\epsilon(p)}\in T_{p}^{*}\mathbb{C}^{n}\) its image under the isomorphism defined by \(h\). Let \(\sigma(\overline{\partial})\) be the symbol
\[0\to\pi^{*}(\bigwedge^{0,0}T^{*}\mathbb{C}^{n})\xrightarrow{\sigma(\overline{\partial})}\pi^{*}(\bigwedge^{0,1}T^{*}\mathbb{C}^{n})\xrightarrow{\sigma(\overline{\partial})}\cdots\xrightarrow{\sigma(\overline{\partial})}\pi^{*}(\bigwedge^{0,n}T^{*}\mathbb{C}^{n})\to 0,\]
where \(\pi\colon T^{*}\mathbb{C}^{n}\to\mathbb{C}^{n}\) is the projection and \(\sigma(\overline{\partial})_{(p,\xi)}(w)=\xi^{0,1}\wedge w\). The symbol \(\sigma(\overline{\partial})\) is exact away from the zero section of \(T^{*}\mathbb{C}^{n}\); in fact, \(\operatorname{Char}(\sigma(\overline{\partial}))=\mathbb{C}^{n}\times\{0\}\). Since \(\operatorname{Char}(\sigma(\overline{\partial}))\cap T_{G}^{*}\mathbb{C}^{n}=\operatorname{Char}(\sigma(\overline{\partial}))\) is non-compact, it is not a \(G\)-transversally elliptic symbol. Atiyah shows in [3] how to obtain a \(G\)-transversally elliptic symbol \(\sigma^{\epsilon}\) by deforming \(\sigma(\overline{\partial})\) using the \(G\)-action. Namely, let \(H_{i}=\left\{\epsilon\in\mathfrak{g}\ |\ w^{i}(\epsilon)=0\right\}\) be the hyperplane in \(\mathfrak{g}\) determined by the weight \(w^{i}\) and pick a vector \(\epsilon\in\mathfrak{g}\) away from the hyperplanes \(H_{i}\).
**Definition 3.7**.: Atiyah's pushed symbol \(\sigma^{\epsilon}\) is defined by
\[\sigma^{\epsilon}_{(p,\xi)}(w)=\sigma(\overline{\partial})_{(p,\,\xi+g(|\xi|) \widetilde{\epsilon(p)})}(w)=\left(\xi+g(|\xi|)\widetilde{\epsilon(p)}\right)^{0,1}\wedge w,\]
where \(g\) is a bump function supported on a small neighbourhood of the zero section with \(g(0)\neq 0\).
**Definition 3.8**.: Let \(t\in G\) and \(\alpha\in\mathfrak{g}^{*}\), then
\[\left(\frac{1}{1-t^{-\alpha}}\right)^{+}=-\sum_{k=1}^{\infty}t^{k\alpha}\text{ and }\left(\frac{1}{1-t^{-\alpha}}\right)^{-}=\sum_{k=0}^{\infty}t^{-k\alpha},\]
are the expansions in positive and negative powers of \(\alpha\). Alternatively, \((\cdot)^{\pm}\) are the Laurent expansions of a rational function around \(t=\infty\) and \(t=0\), respectively.
**Theorem 3.9**.: _Let \(\sigma^{\epsilon}\) be Atiyah's pushed symbol, then_
\[\text{ind}_{G}^{\mathbb{C}^{n}}(\sigma^{\epsilon})(t)=\prod_{j=1}^{n}\left( \frac{1}{1-t^{-w^{j}}}\right)^{s_{j}}\in\hat{R}(G),\]
_where \(s_{j}=+\) if \(w^{j}(\epsilon)>0\) and \(s_{j}=-\) if \(w^{j}(\epsilon)<0\)._
Proof.: See [3] (Theorem 8.1) or [7].
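To fix the conventions, it is instructive to unwind the theorem in the simplest case \(n=1\) (a worked example of ours, not taken from [3]): let \(G=S^{1}\) act on \(\mathbb{C}\) with a single weight \(w\). By Definition 3.8,
\[\text{ind}_{S^{1}}^{\mathbb{C}}(\sigma^{\epsilon})(t)=\begin{cases}\left( \frac{1}{1-t^{-w}}\right)^{+}=-\sum_{k=1}^{\infty}t^{kw},&w(\epsilon)>0,\\ \left(\frac{1}{1-t^{-w}}\right)^{-}=\sum_{k=0}^{\infty}t^{-kw},&w(\epsilon)<0, \end{cases}\]
and the two chambers for \(\epsilon\) give answers differing by
\[-\sum_{k=1}^{\infty}t^{kw}-\sum_{k=0}^{\infty}t^{-kw}=-\sum_{k\in\mathbb{Z}}t ^{kw}=-\delta(1-t^{w}),\]
so both expansions represent the same rational function, and the ambiguity is concentrated in a \(\delta\)-distribution supported where \(t^{w}=1\).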
Atiyah shows in [3] how to extend the ideas above to evaluate the index for symbol classes in \(K_{G}(T_{G}^{*}M)\), when \(M\) is a compact manifold with an action of an \((n+1)\)-dimensional torus \(G\). The \(G\)-action provides \(M\) with a filtration by closed subsets
\[M=M_{0}\supset M_{1}\supset\dots\supset M_{n+1}\supset M_{n+2}=\emptyset\]
where \(M_{i}=\{p\in M\ |\ \dim G_{p}\geq i\}\). For each \(i\), this filtration determines a split exact sequence in equivariant \(K\)-theory, whose splitting we denote by \(\theta_{i}\), and hence a decomposition
\[K_{G}(T_{G}^{*}M)\cong\bigoplus_{i=0}^{n+1}\theta_{i}K_{G}(T_{G}^{*}M|_{(M_{i }-M_{i+1})}).\]
Note that \(T_{G}^{*}M|_{(M_{i}-M_{i+1})}\) is a complex vector bundle over \(T_{G}^{*}(M_{i}-M_{i+1})\), therefore we can compose the splittings \(\theta_{i}\) with the Thom isomorphism, which we denote by \(\phi_{i}\), and obtain
\[K_{G}(T_{G}^{*}M)\cong\bigoplus_{i=0}^{n+1}\phi_{i}K_{G}(T_{G}^{*}(M_{i}-M_{ i+1})).\]
This decomposition allows us to break up a symbol and evaluate its index on each piece separately. For instance, if \([\sigma]\in K_{G}(T_{G}^{*}M)\) then
\[\text{ind}_{G}^{M}(\sigma)=\sum_{i=0}^{n+1}\text{ind}_{G}^{M}(\phi_{i}(\sigma| _{M_{i}-M_{i+1}})).\]
To compute the index, we need to describe the maps \(\phi_{i}\). The Thom isomorphism is well-understood. Let us recall the definition of the splitting maps \(\theta_{i}\) from [3]. The approach is analogous to the construction of the pushed symbol in Theorem 3.9.
First, note that we can define \(\theta_{i}\) on each connected component \(K\) of \(M_{i}-M_{i+1}\) separately, and each of these components is an open subset of a unique fixed point
set \(M^{\mathbb{T}^{i}}\), where \(\mathbb{T}^{i}\) is a torus of dimension \(i\). The normal bundle \(N\) of \(K\) has an action of \(\mathbb{T}^{i}\) that leaves no non-zero vectors fixed. We can therefore pick a suitable vector \(\epsilon\in\operatorname{Lie}(\mathbb{T}^{i})\) and, on a tubular neighbourhood \(U\subset M\) of \(K\), proceed as in Theorem 3.9 to define the maps \(\theta_{i}\). Note that the splittings \(\theta_{i}\) depend on the choice of vector fields used for the deformation. To use this decomposition, we need to evaluate the index on each level of the filtration. For a given symbol, the following proposition gives a systematic approach for applying the localisation argument.
**Proposition 3.10**.: _Let \([\sigma]\in K_{G}(T_{G}^{*}M)\) and let \(v\) be a vector field on \(M\) such that:_
* \(\sigma(p,\xi+\lambda\tilde{v}(p))\) _is an isomorphism for every_ \(p\in M\backslash M_{j}\)_,_ \(\lambda\in\mathbb{R}\backslash\{0\}\) _and_ \(\xi\in(T_{G}^{*}M)_{p}\)_, for some fixed_ \(j\in\{1,\ldots,n+1\}\)_;_
* \(v(p)\) _is tangent to the orbit_ \(G\cdot p\)_;_
* \(v(p)=0\) _if and only if_ \(p\in M_{j}\)_;_
_where \(\tilde{v}=h(v,\cdot)\) for a \(G\)-invariant Riemannian metric \(h\) on \(M\). Then for \(i<j\), we can use \(v(p)\) to construct the map \(\theta_{i}\) and we have \(\theta_{i}(\sigma)=0\)._
Proof.: If \(i<j\), the vector field \(v\) is non-vanishing on \(M_{i}-M_{i+1}\). Using this vector field to define the splitting \(\theta_{i}\), we have
\[\theta_{i}(\sigma)(p,\xi)=\sigma(p,\xi+g(|\xi|)\tilde{v}(p)),\]
where \(\xi\in(T_{G}^{*}U)_{p}\), \(U\) is a tubular neighbourhood of \(M_{i}-M_{i+1}\) in \(M-M_{i+1}\) and \(g\) is a bump function supported on a small neighbourhood of the zero section of \(T_{G}^{*}U\). Since \(\sigma(p,\xi)\) is an isomorphism for \(\xi\neq 0\), it follows that \(\theta_{i}(\sigma)(p,\xi)\) fails to be an isomorphism exactly when \(\xi+g(|\xi|)\tilde{v}(p)=0\). Since \(v(p)\) is tangent to \(G\cdot p\) and \(\xi\) is orthogonal to \(G\cdot p\) we have
\[\xi+g(|\xi|)\tilde{v}(p)=0\iff\xi=0\ \text{and}\ v(p)=0.\]
If \(p\in M_{i}-M_{i+1}\), then \(v(p)\neq 0\) and therefore \(\theta_{i}(\sigma)(p,\xi)\) is an isomorphism for all \((p,\xi)\in T_{G}^{*}U\), so \(\theta_{i}(\sigma)\) is a representative of the zero class in \(K_{G}(T_{G}^{*}(M-M_{i+1}))\).
_Remark 3.11_.: Proposition 3.10 is particularly useful in the following situation. Suppose that there is a \(k\in\{1,\ldots,n+1\}\) such that \(M_{j}=\emptyset\) for all \(j>k\). If there is a vector field \(v(p)\) satisfying the above hypothesis for \(j=k\), then only the level \(k\) of the filtration contributes to the index and we have
\[\operatorname{ind}_{G}^{M}(\sigma)=\operatorname{ind}_{G}^{M}\phi_{k}(\sigma| _{M_{k}}).\]
## 4. The index of \(\overline{\partial}_{H}^{E}\)
Adopting the localisation technique outlined in Section 3.2, we will compute the index of the twisted Dolbeault operator \(\overline{\partial}_{H}^{E}\) and show that the contributions to the index come from a finite number of closed Reeb orbits.
Let \(M\) be a \((2n+1)\)-dimensional toric contact manifold of Reeb type, \(n>1\), equipped with an invariant Sasakian structure. We proceed by constructing a vector field satisfying the hypothesis of Proposition 3.10 for \(j=n\). Let us fix a determinant \(\det\in\bigwedge^{n+1}\mathfrak{g}^{*}\) and a vector \(R\in\mathfrak{g}\) that generates the Reeb vector field. Given a
closed Reeb orbit \(L\) corresponding to an edge of the moment cone, it follows by the good cone condition that there exists a vector \(v_{0}^{L}\in\mathbb{Z}_{G}\) such that
\[\det(v_{0}^{L},v_{1}^{L},\ldots,v_{n}^{L})=1,\]
where \(v_{i}^{L}\), \(i=1,\ldots,n\), are the cone normals at \(L\). Since \(\{v_{0}^{L},v_{1}^{L},\ldots,v_{n}^{L}\}\) is an integral basis of \(\mathbb{Z}_{G}\), its dual basis \(\{\mu_{L},w_{L}^{1},\ldots,w_{L}^{n}\}\) is an integral basis of the integral weights lattice \(\mathbb{Z}_{G}^{*}\). We expand
\[R=\mu_{L}(R)v_{0}^{L}+w_{L}^{1}(R)v_{1}^{L}+\cdots+w_{L}^{n}(R)v_{n}^{L}.\]
Since \(R\) generates the Reeb vector field, its infinitesimal action is non-zero everywhere. Therefore its \(v_{0}^{L}\) component, \(\mu_{L}(R)\), cannot be zero, since the infinitesimal action of \(v_{i}^{L}\), \(i=1,\ldots,n\), on \(L\) is zero. It follows that
\[\det\big{(}R,v_{1}^{L},\ldots,v_{n}^{L}\big{)}=\mu_{L}(R)\neq 0,\]
and \(\{R,v_{1}^{L},\ldots,v_{n}^{L}\}\) forms a basis of \(\mathfrak{g}\) (not necessarily integral).
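For a concrete instance (our illustration, assuming the standard toric Sasakian structure on \(S^{5}\), whose moment cone is the orthant \(C=\mathbb{R}^{3}_{\geq 0}\) with facet normals \(e_{1},e_{2},e_{3}\) and Reeb vector \(R=(1,1,1)\)): the edge \(L\) along \(e_{1}^{*}\) has cone normals \(v_{1}^{L}=e_{2}\), \(v_{2}^{L}=e_{3}\), and taking \(v_{0}^{L}=e_{1}\) gives \(\det(v_{0}^{L},v_{1}^{L},v_{2}^{L})=1\). The dual basis is \(\{\mu_{L},w_{L}^{1},w_{L}^{2}\}=\{e_{1}^{*},e_{2}^{*},e_{3}^{*}\}\) and
\[R=\mu_{L}(R)v_{0}^{L}+w_{L}^{1}(R)v_{1}^{L}+w_{L}^{2}(R)v_{2}^{L}=1\cdot e_{1} +1\cdot e_{2}+1\cdot e_{3},\]
so \(\mu_{L}(R)=1\neq 0\), as claimed.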
Given a vector \(\epsilon\in\mathfrak{g}\), we will define a vector field \(\epsilon^{\perp}\) by cutting off the Reeb component of the vector field generated by \(\epsilon\) near the closed Reeb orbits. More precisely, let \(\epsilon\in\mathfrak{g}\) and let \(L_{i}\) be a Reeb orbit corresponding to an edge of the moment cone. Write
\[\epsilon=\eta_{L_{i}}(\epsilon)R+\eta_{L_{i}}^{1}(\epsilon)v_{1}^{L_{i}}+ \cdots+\eta_{L_{i}}^{n}(\epsilon)v_{n}^{L_{i}}.\]
Let \(U_{i}\) be an open neighbourhood of \(L_{i}\) and \(V_{i}\) a closed neighbourhood such that \(L_{i}\subset V_{i}\subset U_{i}\). We can assume that the \(U_{i}\)'s are all disjoint. Define
\[\phi^{L_{i}}(p)=\begin{cases}0&\text{if }p\in M\backslash U_{i},\\ -\eta_{L_{i}}(\epsilon)&\text{if }p\in V_{i},\end{cases}\]
and extend it to smoothly interpolate between \(0\) and \(-\eta_{L_{i}}(\epsilon)\) on \(U_{i}\backslash V_{i}\), defining a smooth bump function. Adding \(\phi^{L_{i}}R\) cancels the Reeb component of the vector field generated by \(\epsilon\) near \(L_{i}\). We define \(\epsilon^{\perp}\) as
\[\epsilon^{\perp}(p)=\epsilon(p)+\sum_{i=1}^{N}\phi^{L_{i}}(p)R(p).\]
**Definition 4.1**.: An element \(\epsilon\in\mathfrak{g}\) is called a _polarizing vector_ if \(\eta_{L}^{i}(\epsilon)\neq 0\) for \(i=1,\ldots,n\) and for every \(L\subset M_{n}\). We say that a vector field \(\epsilon^{\perp}\) is a _good deformation vector field_ for \([\sigma]\in K_{G}(T_{G}^{*}M)\) if it satisfies the hypothesis of Proposition 3.10 for \(j=n\).
**Proposition 4.2**.: _If \(\epsilon\in\mathfrak{g}\) is a polarizing vector, then \(\epsilon^{\perp}\) is a good deformation vector field for \(\sigma(\overline{\partial}_{H}^{\,E})\in K_{G}(T_{G}^{*}M)\)._
Proof.: By construction \(\epsilon^{\perp}(p)\) is tangent to the \(G\)-orbits. The symbol \(\sigma(\overline{\partial}_{H})(p,\tilde{v}(p))\) is invertible if \(v(p)\) is not parallel to the Reeb vector field \(R(p)\), so we only need to prove that \(\epsilon^{\perp}(p)=0\) if and only if \(p\in M_{n}\). Let \(p\in U_{i}\) and suppose that \(\epsilon^{\perp}(p)=0\). Let \(L=L_{i}\) be the orbit corresponding to the normals \(v_{1}^{L},\ldots,v_{n}^{L}\) and write
\[\epsilon^{\perp}(p)=(\eta_{L}(\epsilon)+\sum_{j=1}^{N}\phi^{L_{j}}(p))R(p)+ \eta_{L}^{1}(\epsilon)v_{1}^{L}(p)+\cdots+\eta_{L}^{n}(\epsilon)v_{n}^{L}(p)=0.\]
This implies that
\[(\eta_{L}(\epsilon)+\sum_{j=1}^{N}\phi^{L_{j}}(p))R+\eta_{L}^{1}(\epsilon)v_{1}^{ L}+\cdots+\eta_{L}^{n}(\epsilon)v_{n}^{L}\in\mathfrak{g}_{p}.\]
For \(p\in U_{i}\), there is a finite number of possible isotropy algebras \(\mathfrak{g}_{p}\); they are all generated by a subset of \(\{v_{1}^{L},\ldots,v_{n}^{L}\}\). Since \(R\) does not lie in the span of \(\{v_{1}^{L},\ldots,v_{n}^{L}\}\), this implies that
\[(\eta_{L}(\epsilon)+\sum_{j=1}^{N}\phi^{L_{j}}(p))=0,\]
and hence \(\eta_{L}^{1}(\epsilon)v_{1}^{L}+\cdots+\eta_{L}^{n}(\epsilon)v_{n}^{L}\in \mathfrak{g}_{p}\). Since \(\epsilon\) is polarizing, \(\eta_{L}^{j}(\epsilon)\neq 0\) for all \(j\), and since \(\mathfrak{g}_{p}\) is spanned by a subset of the linearly independent set \(\{v_{1}^{L},\ldots,v_{n}^{L}\}\), we must have \(v_{j}^{L}\in\mathfrak{g}_{p}\) for all \(j=1,\ldots,n\). It follows that the image of \(p\) under the moment map lies in the intersection of the faces determined by the normals \(v_{1}^{L},\ldots,v_{n}^{L}\), so \(p\) is a point in the orbit \(L\). If \(p\in M\backslash\bigcup_{j=1}^{N}U_{j}\), then \(\phi^{L_{j}}(p)=0\) for all \(j\), so \(\epsilon^{\perp}(p)=\epsilon(p)\).
Let \(L\) be one of the orbits \(L_{i}\) and write
\[\epsilon=\eta_{L}(\epsilon)R+\eta_{L}^{1}(\epsilon)v_{1}^{L}+\cdots+\eta_{L}^ {n}(\epsilon)v_{n}^{L}.\]
Since \(\{R,v_{1}^{L},\ldots,v_{n}^{L}\}\) is a basis of \(\mathfrak{g}\) and \(\epsilon\) is polarizing, we have \(\eta_{L}^{j}(\epsilon)\neq 0\), so \(\epsilon\neq 0\); and since \(G\) acts freely on \(M\backslash\bigcup_{j=1}^{N}U_{j}\), \(\epsilon(p)\) must be non-zero. Hence, \(\epsilon^{\perp}\) is a good deformation vector field for \(\sigma(\overline{\partial}_{H})\) and also for \(\sigma(\overline{\partial}_{H}^{E})\), as they have the same characteristic sets.
The level \(n\) filtration \(M_{n}\subset M\) is a disjoint union of closed Reeb orbits \(L_{e}\) indexed by the set \(E(C)\) of edges of the moment cone \(C\),
\[M_{n}=\bigsqcup_{e\in E(C)}L_{e}.\]
Since \(M_{n+1}=\emptyset\), it follows by Proposition 4.2 that
\[\operatorname{ind}_{G}^{M}(\sigma(\overline{\partial}_{H}))=\operatorname{ ind}_{G}^{M}\phi_{n}(\sigma(\overline{\partial}_{H})|_{M_{n}})=\sum_{e\in E(C)} \operatorname{ind}_{G}^{M}\phi_{n}(\sigma(\overline{\partial}_{H})|_{L_{e}}).\]
Given an orbit \(L\subset M_{n}\) and a vector \(\epsilon\in\mathfrak{g}\), we denote by \(\epsilon_{L}^{\perp}\) the vector
\[\eta_{L}^{1}(\epsilon)v_{1}^{L}+\cdots+\eta_{L}^{n}(\epsilon)v_{n}^{L}\in \mathfrak{g}.\]
In a neighbourhood of each closed orbit \(L\subset M_{n}\), there is a vector \(\epsilon_{L}^{\perp}\in\mathfrak{g}\) generating the vector field \(\epsilon^{\perp}\).
**Proposition 4.3**.: _Let \(L\subset M_{n}\) be a closed Reeb orbit. For any \(t\in G\), we have_
\[\operatorname{ind}_{G}^{M}\phi_{n}(\sigma(\overline{\partial}_{H})|_{L})(t)= \left(\frac{1}{1-t^{-w_{L}^{1}}}\right)^{s_{L}^{1}}\cdots\left(\frac{1}{1-t^{- w_{L}^{n}}}\right)^{s_{L}^{n}}\delta(1-t^{\mu_{L}}),\]
_where \(\{\mu_{L},w_{L}^{1},\ldots,w_{L}^{n}\}\) is a basis of the weight lattice \(\mathbb{Z}_{G}^{\ast}\) dual to \(\{v_{0}^{L},\ldots,v_{n}^{L}\}\), \(s_{L}^{i}=+\) if \(w_{L}^{i}(\epsilon_{L}^{\perp})>0\) and \(s_{L}^{i}=-\) if \(w_{L}^{i}(\epsilon_{L}^{\perp})<0\)._
Proof.: To evaluate the index, we need to understand the symbols \(\phi_{n}(\sigma(\overline{\partial}_{H})|_{L_{e}})\). The map \(\phi_{n}\) is a composition of the Thom isomorphism with the splitting homomorphism \(\theta_{n}\). Let \(L\) be a connected component of \(M_{n}\) and let \(N\) be its normal bundle in \(M\). Since \(L\subset M\) is an embedded circle, \(N\) is a trivial complex bundle \(N\cong L\times\mathbb{C}^{n}\). Write \(G=G_{L}\times S^{1}_{0}\), where \(G_{L}\) is the isotropy group associated with \(L\) and \(S^{1}_{0}\) the circle generated by the vector \(v^{L}_{0}\in\mathbb{Z}_{G}\). Taking \(G_{1}=G_{L}\), \(G_{2}=S^{1}_{0}\), \(M_{1}=L\) and \(M_{2}=\mathbb{C}^{n}\) in (3.1), we get
\[\boxtimes\colon K_{G}(T^{\ast}_{G}L)\otimes K_{G_{L}}(T^{\ast}_{G_{L}}\mathbb{ C}^{n})\to K_{G_{L}\times S^{1}_{0}}(T^{\ast}_{G_{L}\times S^{1}_{0}}(L \times\mathbb{C}^{n}))\cong K_{G}(T^{\ast}_{G}N).\]
The map \(\phi_{n}\) is given by taking the Bott element \([\sigma^{\varepsilon^{\perp}_{L}}]\in K_{G_{L}}(T^{\ast}_{G_{L}}\mathbb{C}^{n})\) in this product. That is, given \(\sigma\in K_{G}(T^{\ast}_{G}L)\) we have
\[\phi_{n}(\sigma)=\sigma\boxtimes\sigma^{\varepsilon^{\perp}_{L}}\in K_{G}(T^{ \ast}_{G}N).\]
Identifying \(N\) with a tubular neighbourhood \(U\) of \(L\) in \(M\) and using excision to extend the symbol to \(M\), we obtain \(\phi_{n}(\sigma)=\sigma\boxtimes\sigma^{\varepsilon^{\perp}_{L}}\in K_{G}(T^ {\ast}_{G}M)\). It follows from Theorem 3.4 that
\[\operatorname{ind}_{G}^{M}\phi_{n}(\sigma(\overline{\partial}_{H})|_{L})= \operatorname{ind}_{G_{L}}^{\mathbb{C}^{n}}(\sigma^{\varepsilon^{\perp}_{L}}) \operatorname{ind}_{G}^{L}(0),\]
where \(0\) is the zero operator on \(L\cong S^{1}\) discussed in Example 3.2. According to Theorem 3.9,
\[\operatorname{ind}_{G_{L}}^{\mathbb{C}^{n}}(\sigma^{\varepsilon^{\perp}_{L}}) (g)=\left(\frac{1}{1-g^{-\alpha_{1}}}\right)^{s^{1}_{L}}\cdots\left(\frac{1}{1 -g^{-\alpha_{n}}}\right)^{s^{n}_{L}},\]
where \(\alpha_{1},\dots,\alpha_{n}\) are the weights of the \(G_{L}\)-action on \(\mathbb{C}^{n}\), \(g\in G_{L}\), \(s^{i}_{L}=+\) if \(\alpha_{i}(\varepsilon^{\perp}_{L})>0\) and \(s^{i}_{L}=-\) if \(\alpha_{i}(\varepsilon^{\perp}_{L})<0\). By Corollary 2.14, the weights \(\alpha_{1},\dots,\alpha_{n}\) determine a basis of the weight lattice \(\mathbb{Z}_{G_{L}}^{\ast}\subset\mathfrak{g}_{L}^{\ast}\) that is dual to \(\{v^{L}_{1},\dots,v^{L}_{n}\}\subset\mathfrak{g}_{L}\).
Writing \(G=G_{L}\times S^{1}_{0}\), the \(G\)-action on \(L\) is given by
\[t\cdot p=(g,s)\cdot p=s\cdot p,\]
where \(t=(g,s)\in G=G_{L}\times S^{1}_{0}\) and \(p\in L\). We identify the subgroup \(S^{1}_{0}\subset G\) generated by \(v^{L}_{0}\) with \(S^{1}\) via
\[e^{2\pi i\theta v^{L}_{0}}\mapsto e^{2\pi i\theta}\in S^{1}.\]
This identification is determined by the weight \(\mu_{L}\in\mathbb{Z}_{G}^{\ast}\) defined by \(\mu_{L}(v^{L}_{0})=1\) and \(\mu_{L}(v^{L}_{j})=0\), \(j\neq 0\). In fact, let \(s=e^{2\pi i\theta v^{L}_{0}}\in S^{1}_{0}\). Then \(s^{\mu_{L}}=e^{2\pi i\theta\mu_{L}(v^{L}_{0})}=e^{2\pi i\theta}\in S^{1}\) and therefore
\[\operatorname{ind}_{G}^{L}(0)(t)=\operatorname{ind}_{G_{L}\times S^{1}_{0}}^{L} (0)(g,s)=\operatorname{ind}_{S^{1}}^{L}(0)(s^{\mu_{L}}).\]
Since \(S^{1}_{0}\) acts freely and transitively on \(L\) we have
\[\operatorname{ind}_{S^{1}}^{L}(0)(s^{\mu_{L}})=\sum_{k=-\infty}^{\infty}s^{k \mu_{L}}=\delta(1-s^{\mu_{L}}).\]
We extend the weight vectors \(\alpha_{i}\in\mathbb{Z}_{G_{L}}^{\ast}\), \(i=1,\dots,n\) to \(\mathbb{Z}_{G}^{\ast}\) by defining \(\alpha_{i}(v^{L}_{0})=0\). Denote these extensions by \(w^{i}_{L}\in\mathbb{Z}_{G}^{\ast}\), \(i=1,\dots,n\). Note that \(\{\mu_{L},w^{1}_{L},\dots,w^{n}_{L}\}\) is a basis of \(\mathbb{Z}_{G}^{\ast}\) dual to \(\{v^{L}_{0},v^{L}_{1},\dots,v^{L}_{n}\}\) and \(w^{i}_{L}(\epsilon^{\perp}_{L})=\alpha_{i}(\epsilon^{\perp}_{L})\). Let \(t=(g,s)\in G=G_{L}\times S^{1}_{0}\) and \(\eta\in\mathfrak{g}^{\ast}\). We will write \(t^{\eta}=(g,s)^{\eta}=g^{\eta}s^{\eta}.\) Since \(G_{L}\) is generated by
\(\{v_{1}^{L},\ldots,v_{n}^{L}\}\) and \(S_{0}^{1}\) is generated by \(v_{0}^{L}\), we have \(t^{w_{L}^{i}}=(g,s)^{w_{L}^{i}}=g^{w_{L}^{i}}s^{w_{L}^{i}}=g^{\alpha_{i}}\) and \(t^{\mu_{L}}=(g,s)^{\mu_{L}}=g^{\mu_{L}}s^{\mu_{L}}=s^{\mu_{L}}\). Given \(t=(g,s)\in G=G_{L}\times S_{0}^{1}\), we have
\[\operatorname{ind}_{G}^{M}\phi_{n}(\sigma(\overline{\partial}_{H })|_{L})(t) =\operatorname{ind}_{G_{L}\times S_{0}^{1}}^{M}\phi_{n}(\sigma(\overline{ \partial}_{H})|_{L})(g,s)\] \[=\operatorname{ind}_{G_{L}}^{\mathbb{C}^{n}}(\sigma^{\epsilon_{L }^{\perp}})(g)\operatorname{ind}_{G_{L}\times S_{0}^{1}}^{L}(0)(g,s)\] \[=\left(\frac{1}{1-t^{-w_{L}^{1}}}\right)^{s_{L}^{1}}\cdots \left(\frac{1}{1-t^{-w_{L}^{n}}}\right)^{s_{L}^{n}}\delta(1-t^{\mu_{L}}).\]
Next we allow for twistings by an auxiliary bundle and derive the main result of this section, which is a Lefschetz type formula for the index of \(\overline{\partial}_{H}^{E}\). Let \(L\) be a closed orbit corresponding to an edge of the moment cone \(C\) and let \(E\to M\) be a \(G\)-equivariant transversally holomorphic bundle. Since \(L=G/G_{L}\), the restriction \(E|_{L}\) to \(L\) is a vector bundle of the form \(G\times_{G_{L}}F_{L}\) for some \(G_{L}\)-module \(F_{L}\). Since \(G=G_{L}\times S^{1}_{0}\) and \(L\cong S^{1}_{0}\), we have
\[G\times_{G_{L}}F_{L}=(L\times G_{L})\times_{G_{L}}F_{L}=L\times F_{L}.\]
Recall that \(M_{n}\) is the disjoint union of closed Reeb orbits \(L_{e}\) indexed by the edges of the moment cone \(C\).
**Theorem 4.4**.: _Let \(\overline{\partial}_{H}^{E}\) be the horizontal Dolbeault operator on a toric compact Sasaki manifold twisted by a \(G\)-equivariant transversally holomorphic bundle \(E\). For any \(t\in G\),_
\[\operatorname{ind}_{G}^{M}\sigma(\overline{\partial}_{H}^{E})(t)=\sum_{L\subset M _{n}}\chi_{E|_{L}}(t)\prod_{i=1}^{n}\left(\frac{1}{1-t^{-w_{L}^{i}}}\right)^{ s_{L}^{i}}\delta(1-t^{\mu_{L}}),\]
_where \(\chi_{E|_{L}}\) is the character of the \(G_{L}\)-module associated to the restriction \(E|_{L}\), and_
\[s_{L}^{i}=\begin{cases}+&\text{ if }w_{L}^{i}(\epsilon_{L}^{\perp})>0,\\ -&\text{ if }w_{L}^{i}(\epsilon_{L}^{\perp})<0.\end{cases}\]
Proof.: We have that
\[\operatorname{ind}_{G}^{M}\sigma(\overline{\partial}_{H}^{E})=\sum_{L\subset M_{n} }\operatorname{ind}_{G}^{M}\phi_{n}(\sigma(\overline{\partial}_{H}^{E})|_{L}).\]
Restricting \(\sigma(\overline{\partial}_{H}^{E})\) to \(L\subset M_{n}\) we get the symbol \(\sigma(\overline{\partial}_{H})|_{L}\otimes E|_{L}\). As shown above, \(E|_{L}=L\times F_{L}\) for some \(G_{L}\)-module \(F_{L}\), and \(\chi_{E|_{L}}=\chi_{F_{L}}\). By the multiplicative property of the index, Proposition 3.6 and Proposition 4.3, we get
\[\operatorname{ind}_{G}^{M}\phi_{n}(\sigma(\overline{\partial}_{H} ^{E})|_{L})(t) =\operatorname{ind}_{G}^{M}\phi_{n}(\sigma(\overline{\partial}_{H})|_{L})(t)\, \chi_{F_{L}}(t)\] \[=\operatorname{ind}_{G_{L}}^{\mathbb{C}^{n}}(\sigma^{\epsilon_{L}^{ \perp}})(t)\operatorname{ind}_{G}^{L}(0)(t)\chi_{E|_{L}}(t)\] \[=\chi_{E|_{L}}(t)\left(\frac{1}{1-t^{-w_{L}^{1}}}\right)^{s_{L}^ {1}}\cdots\left(\frac{1}{1-t^{-w_{L}^{n}}}\right)^{s_{L}^{n}}\delta(1-t^{\mu_{ L}}).\]
The result follows by summing over all the closed Reeb orbits in \(M_{n}\).
## 5. A lattice point formula
In this section, we relate the index of the horizontal Dolbeault operator \(\overline{\partial}_{H}\) to the lattice points of the moment cone.
### Polar decomposition of polytopes
The Lawrence-Varchenko formula expresses the characteristic function of a polytope as an alternating sum of characteristic functions of certain cones associated to the vertices of the polytope. Extending the formula to rational polyhedral cones will allow us to collect the multiplicities in the expression for the index in Theorem 4.4, once it is expanded into power series.
We begin by presenting the formula and relating the characteristic function of the interior of a polytope to the dual cones. Let \(P\) be a simple convex polytope in an \(n\)-dimensional vector space \(V^{*}\). Let \(F\) be a face of \(P\). The tangent cone to \(P\) at \(F\) is defined by
\[C_{F}=\{y+r(x-y)|r\geq 0,y\in F,x\in P\}.\]
Let \(\sigma_{1},\ldots,\sigma_{d}\) denote the facets of \(P\). Since \(P\) is simple, exactly \(n\) facets intersect at each vertex. We will denote the set of vertices of \(P\) by \(\operatorname{Vert}(P)\). For each face \(F\) of \(P\), let \(I_{F}\subset\{1,\ldots,d\}\) be the set of indices of the facets meeting at \(F\) so that
\[i\in I_{F}\text{ if and only if }F\subset\sigma_{i}.\]
In particular, if \(F=p\in\operatorname{Vert}(P)\) we have \(i\in I_{p}\) if and only if \(p\in\sigma_{i}\).
Let \(p\in\operatorname{Vert}(P)\) and denote by \(w^{i}_{p}\), \(i\in I_{p}\), the edge vector emanating from \(p\) that lies along the unique edge at \(p\) not contained in the facet \(\sigma_{i}\). Notice that the \(w^{i}_{p}\) are only determined up to a positive scalar.
**Definition 5.1**.: A vector \(\xi\in V\) such that all the pairings \(\left\langle w^{i}_{p},\xi\right\rangle\) are non-zero is called a _polarizing vector_ for \(P\).
Let \(H_{1},\ldots,H_{N}\) be the hyperplanes in \(V\) determined by the edges of \(P\) under the pairing between \(V\) and \(V^{*}\). A vector \(\xi\in V\) is a polarizing vector for \(P\) if and only if it belongs to the complement
\[V_{P}=V\backslash(H_{1}\cup\cdots\cup H_{N}).\]
The connected components of \(V_{P}\) are called chambers. The signs of the pairings \(\left\langle w^{i}_{p},\xi\right\rangle\) depend only on the chamber of \(V_{P}\) containing \(\xi\).
**Definition 5.2**.: Let \(\xi\in V_{P}\) be a polarizing vector. For each vertex \(p\in\operatorname{Vert}(P)\) and each edge vector \(w^{i}_{p}\) emanating from \(p\), we define the corresponding polarized edge vector to be
\[{w^{i}_{p}}^{\#}=\begin{cases}w^{i}_{p}&\text{if }\left\langle w^{i}_{p},\xi \right\rangle>0,\\ -w^{i}_{p}&\text{if }\left\langle w^{i}_{p},\xi\right\rangle<0.\end{cases}\]
**Definition 5.3**.: Given a polarizing vector \(\xi\in V_{P}\), the _polarized tangent cone_ at \(p\in\operatorname{Vert}(\mathrm{P})\) is defined by
\[C^{\#}_{p}=p+\sum_{w\in E^{+}_{p}(\xi)}\mathbb{R}_{<0}w+\sum_{w\in E^{-}_{p}( \xi)}\mathbb{R}_{\geq 0}w,\]
where
\[E_{p}^{+}(\xi)=\{w_{p}^{i}\ |\ \left\langle w_{p}^{i},\xi\right\rangle>0\}\text{ and }E_{p}^{-}(\xi)=\{w_{p}^{i}\ |\ \left\langle w_{p}^{i},\xi\right\rangle<0\}.\]
**Theorem 5.4** (Lawrence-Varchenko).: _Let \(P\subset V^{*}\) be a simple convex polytope and \(\xi\in V_{P}\) a polarizing vector for \(P\). Then for any \(x\in V^{*}\), we have_
\[\mathbf{1}_{P}(x)=\sum_{p\in\text{Vert}(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|} \mathbf{1}_{C_{p}^{\#}}(x), \tag{5.1}\]
_where \(\mathbf{1}_{C_{p}^{\#}}\) is the characteristic function of the polarized cone \(C_{p}^{\#}\)._
Proof.: See Theorem 3.2 in [15].
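As a sanity check (a worked example of ours, not from [15]), take \(P=[0,1]\subset\mathbb{R}^{*}\) and \(\xi=1\). The edge vector at the vertex \(0\) is \(w_{0}=1\) and at the vertex \(1\) is \(w_{1}=-1\), so \(E_{0}^{+}(\xi)=\{w_{0}\}\), \(E_{1}^{+}(\xi)=\emptyset\), \(C_{0}^{\#}=(-\infty,0)\) and \(C_{1}^{\#}=(-\infty,1]\). Formula (5.1) then reads
\[\mathbf{1}_{[0,1]}(x)=\mathbf{1}_{(-\infty,1]}(x)-\mathbf{1}_{(-\infty,0)}(x),\]
which indeed holds for every \(x\in\mathbb{R}\).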
Next we show that flipping the cones in (5.1) yields a cone decomposition of the interior of the polytope \(P\).
**Definition 5.5**.: Define the _dual polarized tangent cone_ at \(p\in\text{Vert}(P)\) by
\[\widetilde{C}_{p}^{\#}=p+\sum_{w\in E_{p}^{+}(\xi)}\mathbb{R}_{>0}w+\sum_{w\in E _{p}^{-}(\xi)}\mathbb{R}_{\leq 0}w.\]
Suppose that \(\xi\in V\) lies in one of the walls \(H_{j}\) separating the chambers of \(V_{P}\). Let \(e\) be an edge of \(P\) perpendicular to this wall and let \(p\) be an endpoint of \(e\). The edge vectors at \(p\) are \(w_{p}^{j}\) for \(j\in I_{e}\), and an edge vector that lies along \(e\) is denoted by \(w_{p}^{e}\).
**Definition 5.6**.: The dual polarized tangent cone at the edge \(e\) is defined by
\[\widetilde{C}_{e}^{\#}=p+\mathbb{R}w_{p}^{e}+\sum_{w\in E_{p}^{+}(\xi)} \mathbb{R}_{>0}w+\sum_{w\in E_{p}^{-}(\xi)}\mathbb{R}_{\leq 0}w.\]
One verifies that the cone \(\widetilde{C}_{e}^{\#}\) is independent of the choice of endpoint of the edge \(e\). We also note that if \(x\in\widetilde{C}_{p}^{\#}\), then
\[\left\langle\xi,x\right\rangle\geq\left\langle\xi,p\right\rangle. \tag{5.2}\]
Indeed, if \(x\in\widetilde{C}_{p}^{\#}\) we have
\[x=p+\sum_{w\in E_{p}^{+}(\xi)}a_{w}w+\sum_{w\in E_{p}^{-}(\xi)}b_{w}w,\]
where \(a_{w}>0\), \(b_{w}\leq 0\). Therefore
\[\left\langle\xi,x\right\rangle=\left\langle\xi,p\right\rangle+\sum_{w\in E_{p }^{+}(\xi)}\overbrace{a_{w}\left\langle\xi,w\right\rangle}^{\geq 0}+\sum_{w\in E _{p}^{-}(\xi)}\overbrace{b_{w}\left\langle\xi,w\right\rangle}^{\geq 0}\geq \left\langle\xi,p\right\rangle.\]
**Theorem 5.7**.: _Let \(P\subset V^{*}\) be a simple convex polytope and \(\xi\in V_{P}\) a polarizing vector for \(P\). Then for any \(x\in V^{*}\), we have_
\[(-1)^{n}\mathbf{1}_{P^{\circ}}(x)=\sum_{p\in\text{Vert}(P)}(-1)^{\left|E_{p}^ {+}(\xi)\right|}\mathbf{1}_{\widetilde{C}_{p}^{\#}}(x), \tag{5.3}\]
_where \(n=\dim V\) and \(P^{\circ}\) denotes the interior of \(P\)._
Proof.: The proof proceeds along the same lines as that of Theorem 5.4 in [15] and comes down to verifying the identity (5.3) in three separate cases and proving independence of the choice of polarizing vector \(\xi\).
1. Suppose that \(x\in P^{\circ}\). Pick any polarizing vector \(\xi\in V_{P}\). Let \(p\in\operatorname{Vert}(P)\) be the vertex for which \(\left\langle\xi,p\right\rangle\) is minimal. Then \(E_{p}^{-}(\xi)=\emptyset\) and we have \(P^{\circ}\subset\widetilde{C}_{p}^{\#}\). For any other vertex \(q\in\operatorname{Vert}(P)\), at least one of the \(w_{q}^{j}\)'s is flipped, and so \(\widetilde{C}_{q}^{\#}\cap P^{\circ}=\emptyset\). Hence \(P^{\circ}\) is disjoint from the cones \(\widetilde{C}_{q}^{\#}\) for all other \(q\neq p\) and (5.3), when evaluated at \(x\), reads \((-1)^{n}=(-1)^{n}\).
2. Suppose that \(x\in\partial P\). Let \(\sigma\) be a facet that contains \(x\) and \(p\in\operatorname{Vert}(P)\) be such that \(p\in\sigma\). Assume that given another facet \(\sigma^{\prime}\), we have \(x\notin\sigma^{\prime}\) if \(p\notin\sigma^{\prime}\). Choose a polarizing vector \(\xi\in V_{P}\) such that \[\left\langle\xi,p\right\rangle=\min_{y\in P}\left\langle\xi,y\right\rangle.\] We have \(E_{p}^{-}(\xi)=\emptyset\) and therefore \(x\notin\widetilde{C}_{p}^{\#}\) because \(\widetilde{C}_{p}^{\#}=C_{p}^{\circ}\) and \(x\in\sigma\subset\partial C_{p}\). We show that \(x\notin\widetilde{C}_{q}^{\#}\) for any other \(q\in\operatorname{Vert}(P)\). Suppose that \(q-p\) is not an edge of \(P\). Let \(\sigma_{q}=\sigma\backslash\bigcup_{j\in I_{q}}\sigma_{j}\), then \(\sigma_{q}\subset C_{q}^{\circ}\). Since \(E_{q}^{-}(\xi)\neq\emptyset\) we have that \(\widetilde{C}_{q}^{\#}\cap C_{q}^{\circ}=\emptyset\) and therefore \(\sigma_{q}\cap\widetilde{C}_{q}^{\#}=\emptyset\). Thus if \(x\in\sigma_{q}\) for some \(q\in\operatorname{Vert}(P)\) we have \(x\notin\widetilde{C}_{q}^{\#}\). If \(q-p\) is an edge of \(P\), we have \(\left\langle\xi,q-p\right\rangle>0\). Any element \(y\) of \(\widetilde{C}_{q}^{\#}\) can be written uniquely as (5.4) \[y=q+a(p-q)+r,\] where \(a\leqslant 0\) and \(r\) is a linear combination of the edges \(w_{q}^{j}\) that are not parallel to the edge \(p-q\). Since \(x\in\sigma\), we can write \(x\) uniquely as \[x=q+b(p-q)+s,\] where \(b\geqslant 0\) and \(s\) is a linear combination of the edges \(w_{q}^{j}\) that are not parallel to the edge \(p-q\). Since \(x\) does not belong to any facet that does not contain \(p\), we have that \(b\neq 0\), and it follows from (5.4) that \(x\notin\widetilde{C}_{q}^{\#}\). This proves that \(x\notin\widetilde{C}_{q}^{\#}\) for any vertex \(q\in\operatorname{Vert}(P)\). Therefore (5.3), when evaluated at \(x\), reads \(0=0\).
3. Suppose that \(x\notin P\). Choose a polarizing vector \(\xi\in V_{P}\) satisfying \[\left\langle\xi,x\right\rangle<\min_{y\in P}\left\langle\xi,y\right\rangle.\] It follows from (5.2) that \(x\) is not in \(\widetilde{C}_{p}^{\#}\) for any \(p\in\operatorname{Vert}(P)\). Thus (5.3) for the polarizing vector \(\xi\), when evaluated at \(x\), reads \(0=0\).
The final step is to show that the right-hand side of (5.3) is independent of \(\xi\). More precisely, we prove that the right-hand side of (5.3) does not change when \(\xi\) crosses the walls \(H_{j}\). Suppose \(H_{j}\) is not perpendicular to any edge vectors at \(p\). The signs of \(\left\langle\xi,w_{p}^{j}\right\rangle\) do not change, so the cone \(\widetilde{C}_{p}^{\#}\) does not change as \(\xi\) crosses the wall. The vertices whose contributions to the right-hand side of (5.3) change as \(\xi\) crosses \(H_{j}\) come in pairs because each edge of \(P\) that is perpendicular to \(H_{j}\) has two endpoints. For each such vertex \(p\), denote by \(Q_{p}(x)\) and \(Q_{p}^{\prime}(x)\) its contributions to the right-hand side of (5.3) before and after \(\xi\) crossed \(H_{j}\). Let \(e\) be
an edge perpendicular to \(H_{j}\) and \(p\) an endpoint of \(e\). Let \(Q_{e}(x)\) be the characteristic function of the cone \(\widetilde{C}_{e}^{\#}\) corresponding to the value of \(\xi\) as it crosses \(H_{j}\). We have
\[Q_{p}(x)=(-1)^{|E_{p}^{+}(\xi)|}\mathbf{1}_{\widetilde{C}_{p}^{\#}}(x)\text{ and }Q_{p}^{\prime}(x)=(-1)^{|E_{p}^{+}(\xi)|+1}\mathbf{1}_{\widetilde{C}_{p}^{\# \prime}}(x),\]
where \(\widetilde{C}_{p}^{\#}\) and \(\widetilde{C}_{p}^{\#\prime}\) are the dual polarized tangent cones at \(p\) before and after the crossing. These two cones differ only in the direction of the edge \(e\), and their disjoint union is \(\widetilde{C}_{e}^{\#}\). Therefore
\[Q_{p}(x)-Q_{p}^{\prime}(x)=(-1)^{|E_{p}^{+}(\xi)|}\left(\mathbf{1}_{\widetilde {C}_{p}^{\#}}(x)+\mathbf{1}_{\widetilde{C}_{p}^{\#\prime}}(x)\right)=(-1)^{|E_ {p}^{+}(\xi)|}\mathbf{1}_{\widetilde{C}_{e}^{\#}}(x)=(-1)^{|E_{p}^{+}(\xi)|}Q_ {e}(x).\]
If \(q\) is the other endpoint of \(e\), then \(|E_{p}^{+}(\xi)|=|E_{q}^{+}(\xi)|\pm 1\). Hence
\[Q_{q}(x)-Q_{q}^{\prime}(x)=(-1)^{|E_{q}^{+}(\xi)|}Q_{e}(x)=(-1)^{|E_{p}^{+}( \xi)|+1}Q_{e}(x)\]
and
\[(Q_{p}(x)+Q_{q}(x))-(Q_{p}^{\prime}(x)+Q_{q}^{\prime}(x))=(-1)^{|E_{p}^{+}(\xi )|}Q_{e}(x)+(-1)^{|E_{p}^{+}(\xi)|+1}Q_{e}(x)=0.\]
Thus crossing \(H_{j}\) does not change the right-hand side of (5.3).
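Continuing the one-dimensional example \(P=[0,1]\), \(\xi=1\) used after Theorem 5.4 (again our own check): here \(\widetilde{C}_{0}^{\#}=0+\mathbb{R}_{>0}\cdot 1=(0,\infty)\) and \(\widetilde{C}_{1}^{\#}=1+\mathbb{R}_{\leq 0}\cdot(-1)=[1,\infty)\), so (5.3) reads
\[-\mathbf{1}_{(0,1)}(x)=-\mathbf{1}_{(0,\infty)}(x)+\mathbf{1}_{[1,\infty)}(x),\]
in agreement with \((-1)^{n}\mathbf{1}_{P^{\circ}}\) for \(n=1\).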
Formulas (5.1) and (5.3) also have an expression in terms of generating series.
**Definition 5.8**.: Let \(V\) be a vector space with basis \(e_{1},\dots,e_{n}\) and \(\mathbb{Z}_{V}\) its integral lattice. If \(A\subset V^{*}\) is a subset, we denote the generating series of \(A\) by
\[A(x)=\sum_{\mu\in A\cap\mathbb{Z}_{V}^{*}}x^{\mu},\]
where \(\mathbb{Z}_{V}^{*}\) is the dual of the integral lattice \(\mathbb{Z}_{V}\), \(x=(x_{1},\dots,x_{n})\) and \(x^{\mu}=x_{1}^{\mu_{1}}\cdots x_{n}^{\mu_{n}}\).
**Theorem 5.9**.: _Let \(P\subset V^{*}\) be a simple convex polytope and \(\xi\in V_{P}\) a polarizing vector for \(P\). Then,_
\[P(x)=\sum_{p\in\text{Vert}(P)}(-1)^{|E_{p}^{+}(\xi)|}C_{p}^{\#}(x)\]
Proof.: The proof follows directly from Theorem 5.4. We have
\[\sum_{p\in\text{Vert}(P)}(-1)^{|E_{p}^{+}(\xi)|}C_{p}^{\#}(x) =\sum_{p\in\text{Vert}(P)}(-1)^{|E_{p}^{+}(\xi)|}\sum_{\mu\in C_{ p}^{\#}\cap\mathbb{Z}_{V}^{*}}x^{\mu}\] \[=\sum_{p\in\text{Vert}(P)}(-1)^{|E_{p}^{+}(\xi)|}\sum_{\mu\in \mathbb{Z}_{V}^{*}}\mathbf{1}_{C_{p}^{\#}}(\mu)x^{\mu}\] \[=\sum_{\mu\in\mathbb{Z}_{V}^{*}}\left(\sum_{p\in\text{Vert}(P)}(- 1)^{|E_{p}^{+}(\xi)|}\mathbf{1}_{C_{p}^{\#}}(\mu)\right)x^{\mu}\] \[=\sum_{\mu\in\mathbb{Z}_{V}^{*}}\mathbf{1}_{P}(\mu)x^{\mu}=\sum_ {\mu\in P\cap\mathbb{Z}_{V}^{*}}x^{\mu}=P(x).\]
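For instance (our check), for \(P=[0,N]\subset\mathbb{R}^{*}\) with \(\xi=1\) the polarized cones are \(C_{N}^{\#}=(-\infty,N]\) and \(C_{0}^{\#}=(-\infty,0)\), and Theorem 5.9 becomes the formal identity
\[\sum_{k=0}^{N}x^{k}=\sum_{k\leq N}x^{k}-\sum_{k\leq-1}x^{k},\]
where both series on the right are regarded as formal sums over \(\mathbb{Z}\).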
Similarly, for the dual Lawrence-Varchenko formula, we have:
**Theorem 5.10**.: \[(-1)^{n}P^{\circ}(x)=\sum_{p\in\mathit{Vert}(P)}(-1)^{\left|E_{p}^{+}(\xi) \right|}\widetilde{C}_{p}^{\#}(x).\]
### Polar decomposition of cones
In this section we explain how to adapt the Lawrence-Varchenko formula (5.1) to produce a polar decomposition of a rational polyhedral cone. More precisely, let \(P\subset V^{*}\) be a simple polytope and let \(C\subset V^{*}\times\mathbb{R}^{*}\) be the cone over \(P\), i.e.
\[C=\{r(\eta,1^{*})\in V^{*}\times\mathbb{R}^{*}\ |\ \eta\in P,r\geq 0\}.\]
The cone \(C\) is the lift of the left-hand side of (5.1) from \(V^{*}\) to \(V^{*}\times\mathbb{R}^{*}\). Lifting the right-hand side of (5.1), we can expect to obtain a polar decomposition of \(C\). We will see that this is almost true; one must introduce an error term to obtain an identity.
Let \(p\in P\) be a vertex of the polytope. We will denote by \(\mu_{p}\in\mathbb{Z}_{V}^{*}\times\mathbb{Z}^{*}\) the primitive edge vector of \(C\) going through \(p\). Given a polarizing vector \(\xi\in V_{P}\) for \(P\), let \(C_{p}^{\#}\) be the polarized tangent cone of \(P\) at \(p\) and define
\[K_{p}^{\#}=C_{p}^{\#}+\mathbb{R}\mu_{p}.\]
**Definition 5.11**.: Let \(S(x)\) be the function defined as
\[S(x)=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_{K_{p}^{ \#}}(x).\]
The function \(S(x)\) is the lift of the right-hand side of (5.1). Let \(\mathcal{H}=\{(\eta,1^{*})\in V^{*}\times\mathbb{R}^{*}\ |\ \eta\in V^{*}\}\) be the characteristic hyperplane and let
\[\mathcal{H}_{\lambda}=\{(\eta,\lambda^{*})\in V^{*}\times\mathbb{R}^{*}\ |\ \eta\in V^{*}\},\qquad\lambda\in\mathbb{R},\]
be its parallel shifts. The polytope \(P\) is the intersection of \(C\) with \(\mathcal{H}\) and for \(\lambda\geq 0\) the intersection of \(\mathcal{H}_{\lambda}\) with \(C\) will be denoted \(P_{\lambda}\). The projection \(\pi\colon V^{*}\times\mathbb{R}^{*}\to V^{*}\) identifies the hyperplanes \(\mathcal{H}_{\lambda}\) with the vector space \(V^{*}\) and the polytopes \(P_{\lambda}\) with \(\lambda P\subset V^{*}\). When \(\lambda<0\), we define \(P_{\lambda}\) as the intersection \(P_{\lambda}=\mathcal{H}_{\lambda}\cap(-C)\). In this case, \(\pi\) also identifies \(P_{\lambda}\) with \(\lambda P\subset V^{*}\). If \(\xi\in V_{P}\) is a polarizing vector for \(P\), then \(\xi\) is a polarizing vector for every \(P_{\lambda}\).
Let \(p\in P\) be a vertex; then \(\lambda p\) is a vertex of the polytope \(\lambda P\). The intersection of \(\mathcal{H}_{\lambda}\) with \(K_{p}^{\#}\) is equal to \(C_{\lambda p}^{\#}\) for \(\lambda\geq 0\), where \(C_{\lambda p}^{\#}\) is the polarized tangent cone at \(\lambda p\). When \(\lambda<0\) the intersection becomes \(K_{p}^{\#}\cap\mathcal{H}_{\lambda}=\widetilde{C}_{\lambda p}^{\#}\), the dual polarized tangent cone at \(\lambda p\). Therefore, restricting \(S\) to \(\mathcal{H}_{\lambda}\) we get
\[S|_{\mathcal{H}_{\lambda}}(x) =\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1} _{K_{p}^{\#}}|_{\mathcal{H}_{\lambda}}(x)=\sum_{p\in Vert(P)}(-1)^{\left|E_{p} ^{+}(\xi)\right|}\mathbf{1}_{K_{p}^{\#}\cap\mathcal{H}_{\lambda}}(x)\] \[=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1} _{C_{\lambda p}^{\#}}(x)=P_{\lambda}(x),\]
if \(\lambda\geq 0\). Similarly, if \(\lambda<0\) we have
\[S|_{\mathcal{H}_{\lambda}}(x) =\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_{K _{p}^{\#}}|_{\mathcal{H}_{\lambda}}(x)=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}( \xi)\right|}\mathbf{1}_{K_{p}^{\#}\cap\mathcal{H}_{\lambda}}(x)\] \[=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_ {\widetilde{C}_{\lambda p}^{\#}}(x)=(-1)^{n}P_{\lambda}^{\circ}(x).\]
It follows that
\[S(x)=\mathbf{1}_{C}(x)+(-1)^{n}\mathbf{1}_{-C^{\circ}}(x). \tag{5.5}\]
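Identity (5.5) can be verified by hand in the lowest-dimensional case (our example): take \(V^{*}=\mathbb{R}^{*}\), \(P=[0,1]\) and \(\xi=1\), so that \(C\subset\mathbb{R}^{*}\times\mathbb{R}^{*}\) is the cone spanned by \(\mu_{0}=(0,1^{*})\) and \(\mu_{1}=(1,1^{*})\). Then
\[K_{0}^{\#}=C_{0}^{\#}+\mathbb{R}\mu_{0}=\{(x,t)\ |\ x<0\},\qquad K_{1}^{\#}=C_{1 }^{\#}+\mathbb{R}\mu_{1}=\{(x,t)\ |\ x\leq t\},\]
and
\[S=\mathbf{1}_{K_{1}^{\#}}-\mathbf{1}_{K_{0}^{\#}}=\mathbf{1}_{C}-\mathbf{1}_{- C^{\circ}},\]
since on the half-space \(\{t\geq 0\}\) the difference cuts out \(\{0\leq x\leq t\}\), which is \(C\), while on \(\{t<0\}\) it equals \(-1\) precisely on \(\{t<x<0\}\), the interior of \(-C\).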
In a similar manner for the dual polarized tangent cones, let
\[\widetilde{K}_{p}^{\#}=\widetilde{C}_{p}^{\#}+\mathbb{R}\mu_{p}.\]
We have \(\widetilde{K}_{p}^{\#}\cap\mathcal{H}_{\lambda}=\widetilde{C}_{\lambda p}^{\#}\) for \(\lambda\geq 0\) and \(\widetilde{K}_{p}^{\#}\cap\mathcal{H}_{\lambda}=C_{\lambda p}^{\#}\) for \(\lambda<0\).
**Definition 5.12**.: Let \(\widetilde{S}(x)\) be the function defined by
\[\widetilde{S}(x)=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{ 1}_{\widetilde{K}_{p}^{\#}}(x).\]
This is the lift of the right-hand side of (5.3).
Restricting to the hyperplanes \(\mathcal{H}_{\lambda}\) we get
\[\widetilde{S}|_{\mathcal{H}_{\lambda}}(x) =\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_ {\widetilde{K}_{p}^{\#}}|_{\mathcal{H}_{\lambda}}(x)=\sum_{p\in Vert(P)}(-1)^ {\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_{\widetilde{K}_{p}^{\#}\cap\mathcal{H} _{\lambda}}(x)\] \[=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_ {\widetilde{C}_{\lambda p}^{\#}}(x)=(-1)^{n}P_{\lambda}^{\circ}(x),\]
when \(\lambda\geq 0\), and
\[\widetilde{S}|_{\mathcal{H}_{\lambda}}(x) =\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_ {\widetilde{K}_{p}^{\#}}|_{\mathcal{H}_{\lambda}}(x)=\sum_{p\in Vert(P)}(-1)^ {\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_{\widetilde{K}_{p}^{\#}\cap\mathcal{H} _{\lambda}}(x)\] \[=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_ {C_{\lambda p}^{\#}}(x)=P_{\lambda}(x),\]
when \(\lambda<0\). Therefore
\[\widetilde{S}(x)=(-1)^{n}\mathbf{1}_{C^{\circ}}(x)+\mathbf{1}_{-C}(x). \tag{5.6}\]
Formulas (5.5) and (5.6) can again be expressed in terms of generating series.
**Proposition 5.13**.: \[\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}K_{p}^{\#}(x) =C(x)+(-1)^{n}(-C^{\circ})(x)\] \[=\sum_{\mu\in C\cap(\mathbb{Z}_{V}^{*}\times\mathbb{Z}^{*})}x^{ \mu}+(-1)^{n}\sum_{\mu\in(-C^{\circ})\cap(\mathbb{Z}_{V}^{*}\times\mathbb{Z} ^{*})}x^{\mu}.\]
Proof.: The proof is a straightforward application of (5.5).
\[\sum_{p\in Vert(P)}(-1)^{|E^{+}_{p}(\xi)|}K^{\#}_{p}(x) =\sum_{p\in Vert(P)}(-1)^{|E^{+}_{p}(\xi)|}\sum_{\mu\in K^{\#}_{p} \cap(\mathbb{Z}^{*}_{V}\times\mathbb{Z}^{*})}x^{\mu}\] \[=\sum_{p\in Vert(P)}(-1)^{|E^{+}_{p}(\xi)|}\sum_{\mu\in(\mathbb{Z} ^{*}_{V}\times\mathbb{Z}^{*})}\mathbf{1}_{K^{\#}_{p}}(\mu)x^{\mu}\] \[=\sum_{\mu\in(\mathbb{Z}^{*}_{V}\times\mathbb{Z}^{*})} \left(\sum_{p\in Vert(P)}(-1)^{|E^{+}_{p}(\xi)|}\mathbf{1}_{K^{\#}_{p}}(\mu) \right)x^{\mu}\] \[=\sum_{\mu\in(\mathbb{Z}^{*}_{V}\times\mathbb{Z}^{*})} \left(\mathbf{1}_{C}(\mu)+(-1)^{n}\mathbf{1}_{(-C^{\circ})}(\mu)\right)x^{\mu}\] \[=\sum_{\mu\in C\cap(\mathbb{Z}^{*}_{V}\times\mathbb{Z}^{*} )}x^{\mu}+(-1)^{n}\sum_{\mu\in(-C^{\circ})\cap(\mathbb{Z}^{*}_{V}\times \mathbb{Z}^{*})}x^{\mu}.\]
Similarly, we have:
**Proposition 5.14**.: \[\sum_{p\in Vert(P)}(-1)^{|E^{+}_{p}(\xi)|}\widetilde{K}^{\#}_{p}(x) =(-1)^{n}C^{\circ}(x)+(-C)(x)\] \[=(-1)^{n}\sum_{\mu\in C^{\circ}\cap(\mathbb{Z}^{*}_{V}\times \mathbb{Z}^{*})}x^{\mu}+\sum_{\mu\in(-C)\cap(\mathbb{Z}^{*}_{V}\times\mathbb{Z} ^{*})}x^{\mu}.\]
These results can be slightly generalised as follows. Let \(W^{*}\) be a vector space and let \(P\subset W^{*}\) be a simple convex polytope sitting on a hyperplane
\[\mathcal{H}=\{\eta\in W^{*}\ |\ \left<\eta,R\right>=1\}\]
determined by a vector \(R\in W\). Let
\[C=\{r\eta\in W^{*}\ |\ \eta\in P,r\geq 0\}\]
be the cone over \(P\) and for each vertex \(p\in P\) denote by \(\mu_{p}\) the primitive edge vector of \(C\) going through \(p\). Let \(\xi\in\mathcal{H}_{P}^{*}\) be a polarizing vector for \(P\). As above, for each vertex \(p\in P\) denote by \(C^{\#}_{p}\subset\mathcal{H}\) the polarized tangent cone of \(P\) at \(p\) and by \(\widetilde{C}^{\#}_{p}\subset\mathcal{H}\) the dual polarized tangent cone of \(P\) at \(p\).
**Definition 5.15**.: Define the cones \(K^{\#}_{p}\) and \(\widetilde{K}^{\#}_{p}\) by
\[K^{\#}_{p}=C^{\#}_{p}+\mathbb{R}\mu_{p}\ \text{ and }\ \widetilde{K}^{\#}_{p}= \widetilde{C}^{\#}_{p}+\mathbb{R}\mu_{p}.\]
Let \(\{e_{1},\dots,e_{n+1}\}\) be a basis of \(W\) such that \(e_{n+1}=R\). Then \(\{e^{*}_{1},\dots,e^{*}_{n}\}\) is a basis of \(\mathcal{H}\) and we have a linear isomorphism
\[T\colon W^{*} \to\mathcal{H}\times\mathbb{R}^{*}\] \[e^{*}_{i} \mapsto(e^{*}_{i},0)\] \[e^{*}_{n+1} \mapsto(0,1^{*})\,.\]
The map \(T\) takes \(\mathcal{H}\) to the hyperplane
\[T(\mathcal{H})=\{(\eta,1^{*})\in\mathcal{H}\times\mathbb{R}^{*}\ |\ \eta\in \mathcal{H}\}\]
and \(P\) to a polytope \(T(P)\subset T(\mathcal{H})\). Restricting \(T\) to \(\mathcal{H}\), we get a linear automorphism \(T\colon\mathcal{H}\to\mathcal{H}\) that we will also denote by \(T\). Let \(T^{-1}\) be the inverse of \(T\) and \((T^{-1})^{*}\) its adjoint. Let \(v\in\mathcal{H}^{*}\) and \(\eta\in\mathcal{H}\), then
\[\left\langle\eta,v\right\rangle=\left\langle T(\eta),(T^{-1})^{*}(v)\right\rangle.\]
Since the edges of \(P\) are taken to the edges of \(T(P)\), the vector \((T^{-1})^{*}(\xi)\) induces a polarization of \(T(P)\) such that \(T(C_{p}^{\#})=C_{T(p)}^{\#}\) for every vertex \(p\in P\). The identity (5.5) implies that
\[\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_{K_{T(p)}^{\# }}(x)=\mathbf{1}_{T(C)}(x)+(-1)^{n}\mathbf{1}_{(-T(C)^{\circ})}(x).\]
Let \(S(x)=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_{K_{p}^{ \#}}(x).\) Since \(K_{T(p)}^{\#}=T(K_{p}^{\#})\), we have \(\mathbf{1}_{K_{p}^{\#}}(x)=\mathbf{1}_{T(K_{p}^{\#})}(Tx)\) and therefore
\[S(x) =\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_ {K_{p}^{\#}}(x)=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1 }_{T(K_{p}^{\#})}(Tx)\] \[=\mathbf{1}_{T(C)}(Tx)+(-1)^{n}\mathbf{1}_{(-T(C)^{\circ})}(Tx)\] \[=\mathbf{1}_{C}(x)+(-1)^{n}\mathbf{1}_{(-C^{\circ})}(x),\]
In summary, we have:
**Proposition 5.16**.: _Let \(W^{*}\) be a vector space and \(P\subset W^{*}\) a simple convex polytope on a hyperplane \(\mathcal{H}\) determined by a vector \(R\in W\). Then, given a polarizing vector \(\xi\in\mathcal{H}_{P}^{*}\), we have_
\[S(x)=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|}\mathbf{1}_{K_{p}^{ \#}}(x)=\mathbf{1}_{C}(x)+(-1)^{n}\mathbf{1}_{(-C^{\circ})}(x).\]
A similar argument applied to the dual polarized tangent cones \(\widetilde{K}_{p}^{\#}=\widetilde{C}_{p}^{\#}+\mathbb{R}\mu_{p}\) gives:
**Proposition 5.17**.: \[\widetilde{S}(x)=\sum_{p\in Vert(P)}(-1)^{\left|E_{p}^{+}(\xi)\right|} \mathbf{1}_{\widetilde{K}_{p}^{\#}}(x)=(-1)^{n}\mathbf{1}_{C^{\circ}}(x)+ \mathbf{1}_{-C}(x).\]
The identities involving generating series in Proposition 5.13 and 5.14 continue to hold, with the lattice \(\mathbb{Z}_{W}^{*}\) in place of \(\mathbb{Z}_{V}^{*}\).
### A formula for \(\operatorname{ind}_{G}^{M}(\sigma(\overline{\partial}_{H}))\)
Applying the results in the previous section, we show next how to obtain explicitly the multiplicities \(m(\mu)\) associated to the weights \(\mu\in\mathbb{Z}_{G}^{*}\) appearing in the index
\[\operatorname{ind}_{G}^{M}(\sigma(\overline{\partial}_{H}))(t)=\sum_{\mu\in \mathbb{Z}_{G}^{*}}m(\mu)t^{\mu}.\]
Let \(R\in\mathfrak{g}\) be the generator of the Reeb vector field, \(\mathcal{H}\) the characteristic hyperplane determined by \(R\) and \(C\) the moment cone. The polytope \(P=\mathcal{H}\cap C\) is the
image of the \(\alpha\)-moment map \(\phi_{\alpha}\). Each vertex of \(P\) corresponds to an edge of \(C\), and hence to a connected component \(L\) of \(M_{n}\).
Given a vertex \(p\in P\), let \(L\subset M_{n}\) be the closed Reeb orbit corresponding to \(p\). Since \(C\) is a good cone, there is a vector \(v_{0}^{L}\in\mathfrak{g}\) such that \(\{v_{0}^{L},v_{1}^{L},\ldots,v_{n}^{L}\}\) is an integral basis of \(\mathbb{Z}_{G}\), where \(\{v_{1}^{L},\ldots,v_{n}^{L}\}\) is the set of normals to the faces meeting at \(p\). Let \(\{\mu_{L},w_{L}^{1},\ldots,w_{L}^{n}\}\) be the dual basis of \(\{v_{0}^{L},v_{1}^{L},\ldots,v_{n}^{L}\}\). Theorem 4.4 tells us that if \(\epsilon\in\mathfrak{g}\) is a polarizing vector, as in Definition 4.1, then the index \(\operatorname{ind}_{G}^{M}(\sigma(\overline{\partial}_{H}))\) is given by
\[\operatorname{ind}_{G}^{M}\sigma(\overline{\partial}_{H})(t)=\sum_{L\subset M _{n}}\left(\frac{1}{1-t^{-w_{L}^{1}}}\right)^{s_{L}^{1}}\cdots\left(\frac{1}{ 1-t^{-w_{L}^{n}}}\right)^{s_{L}^{n}}\delta(1-t^{\mu_{L}}),\]
where \(s_{L}^{i}=+\) if \(w_{L}^{i}(\epsilon_{L}^{\perp})>0\) and \(s_{L}^{i}=-\) if \(w_{L}^{i}(\epsilon_{L}^{\perp})<0\). Define the index sets
\[W_{L}^{+}(\epsilon_{L}^{\perp})=\{i\in\{1,\ldots,n\}\ |\ w_{L}^{i}(\epsilon_{L}^{ \perp})>0\}\]
and
\[W_{L}^{-}(\epsilon_{L}^{\perp})=\{i\in\{1,\ldots,n\}\ |\ w_{L}^{i}(\epsilon_{L}^{ \perp})<0\}.\]
We can write
\[\left(\frac{1}{1-t^{-w_{L}^{1}}}\right)^{s_{L}^{1}}\cdots\left(\frac{1}{1-t^{- w_{L}^{n}}}\right)^{s_{L}^{n}}\delta(1-t^{\mu_{L}})=(-1)^{|W_{L}^{+}(\epsilon_{L}^{ \perp})|}\sum_{\mu\in\mathbb{Z}_{G}^{*}\cap K_{L}^{\#}(\epsilon^{\perp})}t^{ \mu},\]
since \(\{\mu_{L},w_{L}^{1},\ldots,w_{L}^{n}\}\) is an integral basis of \(\mathbb{Z}_{G}^{*}\), and \(K_{L}^{\#}(\epsilon^{\perp})\) is the cone defined by
\[K_{L}^{\#}(\epsilon^{\perp})=\mathbb{R}\mu_{L}+\sum_{i\in W_{L}^{+}(\epsilon_ {L}^{\perp})}\mathbb{R}_{>0}w_{L}^{i}+\sum_{i\in W_{L}^{-}(\epsilon_{L}^{ \perp})}\mathbb{R}_{\leqslant 0}w_{L}^{i}.\]
Since \(\{\mu_{L},w_{L}^{1},\ldots,w_{L}^{n}\}\) is the dual basis of \(\{v_{0}^{L},\ldots,v_{n}^{L}\}\), the cone \(K_{L}^{\#}(\epsilon^{\perp})\) can also be written as
\[K_{L}^{\#}(\epsilon^{\perp})=\bigcap_{i\in W_{L}^{+}(\epsilon_{L}^{\perp})} \{w\in\mathfrak{g}^{\ast}\ |\ w(v_{i}^{L})>0\}\bigcap_{i\in W_{L}^{-}(\epsilon_{L}^{\perp})}\{w\in \mathfrak{g}^{\ast}\ |\ w(v_{i}^{L})\leqslant 0\}.\]
The following lemma gives yet another description of the cone \(K_{L}^{\#}(\epsilon^{\perp})\).
**Lemma 5.18**.: _Let \(\{\eta^{1},\ldots,\eta^{n}\}\subset\mathfrak{g}^{\ast}\) be a set of vectors satisfying \(\eta^{i}(v_{j}^{L})=\delta_{ij}\), \(i,j=1,\ldots,n\) and let \(K\) be the cone_
\[K=\mathbb{R}\mu_{L}+\sum_{i\in W_{L}^{+}(\epsilon_{L}^{\perp})}\mathbb{R}_{>0 }\eta^{i}+\sum_{i\in W_{L}^{-}(\epsilon_{L}^{\perp})}\mathbb{R}_{\leqslant 0 }\eta^{i}.\]
_Then \(K=K_{L}^{\#}(\epsilon^{\perp})\)._
Proof.: Let \(w\in K\) and write
\[w=r\mu_{L}+\sum_{i\in W_{L}^{+}(\epsilon_{L}^{\perp})}a_{i}\eta^{i}+\sum_{i\in W_{L}^{-}(\epsilon_{L}^{\perp})}b_{i}\eta^{i},\]
where \(r,a_{i},b_{i}\in\mathbb{R}\), with \(a_{i}>0\) for \(i\in W_{L}^{+}(\epsilon_{L}^{\perp})\) and \(b_{i}\leq 0\) for \(i\in W_{L}^{-}(\epsilon_{L}^{\perp})\). Computing \(w(v_{i}^{L})\) for \(i=1,\ldots,n\) we get
\[w(v_{i}^{L})=a_{i}>0,\ \text{for}\ i\in W_{L}^{+}(\epsilon_{L}^{\perp})\ \text{and}\ w(v_{i}^{L})=b_{i}\leqslant 0\ \text{for}\ i\in W_{L}^{-}(\epsilon_{L}^{\perp}).\]
Therefore
\[K\subset\bigcap_{i\in W_{L}^{+}(\epsilon_{L}^{\perp})}\{w\in\mathfrak{g}^{*}\ |\ w(v_{i}^{L})>0\}\bigcap_{i\in W_{L}^{-}(\epsilon_{L}^{\perp})}\{w\in \mathfrak{g}^{*}\ |\ w(v_{i}^{L})\leq 0\}=K_{L}^{\#}(\epsilon^{\perp}).\]
Let \(w\in K_{L}^{\#}(\epsilon^{\perp})\); since \(\{\mu_{L},\eta^{1},\ldots,\eta^{n}\}\) forms a basis for \(\mathfrak{g}^{*}\) we can write
\[w=r\mu_{L}+\sum_{i\in W_{L}^{+}(\epsilon_{L}^{\perp})}a_{i}\eta^{i}+\sum_{i\in W_{L}^{-}(\epsilon_{L}^{\perp})}b_{i}\eta^{i}.\]
Since \(w\in K_{L}^{\#}(\epsilon^{\perp})\), computing \(w(v_{i}^{L})\), \(i=1,\ldots,n\), we find that \(a_{i}>0\) and \(b_{i}\leq 0\) which implies that \(w\in K\). Therefore \(K=K_{L}^{\#}(\epsilon^{\perp})\).
Let \(\{\eta_{L},\eta_{L}^{1},\ldots,\eta_{L}^{n}\}\) be the dual basis of \(\{R,v_{1}^{L},\ldots,v_{n}^{L}\}\), then \(\eta_{L}^{i}(v_{j}^{L})=\delta_{ij}\) and Lemma 5.18 implies that
\[K_{L}^{\#}(\epsilon^{\perp})=\mathbb{R}\mu_{L}+\sum_{i\in W_{L}^{+}(\epsilon_ {L}^{\perp})}\mathbb{R}_{>0}\eta_{L}^{i}+\sum_{i\in W_{L}^{-}(\epsilon_{L}^{ \perp})}\mathbb{R}_{\leq 0}\eta_{L}^{i}.\]
The index sets \(W_{L}^{+}(\epsilon_{L}^{\perp})\) and \(W_{L}^{-}(\epsilon_{L}^{\perp})\) can also be expressed in terms of the basis \(\{\eta_{L},\eta_{L}^{1},\ldots,\eta_{L}^{n}\}\).
**Lemma 5.19**.: \[W_{L}^{+}(\epsilon_{L}^{\perp})=\{i\in\{1,\ldots,n\}\ |\ w_{L}^{i}(\epsilon_{L}^{\perp})>0\}=\{i\in\{1,\ldots,n\}\ |\ \eta_{L}^{i}(\epsilon)>0\}\]
_and_
\[W_{L}^{-}(\epsilon_{L}^{\perp})=\{i\in\{1,\ldots,n\}\ |\ w_{L}^{i}(\epsilon_{L}^{ \perp})<0\}=\{i\in\{1,\ldots,n\}\ |\ \eta_{L}^{i}(\epsilon)<0\}.\]
Proof.: Since the weights \(\{\alpha_{1},\ldots,\alpha_{n}\}\) of the \(G_{L}\)-action on \(\mathbb{C}^{n}\) form a dual basis to \(\{v_{1}^{L},\ldots,v_{n}^{L}\}\) in \(\mathfrak{g}_{L}^{*}\) and
\[\epsilon_{L}^{\perp}=\eta_{L}^{1}(\epsilon)v_{1}^{L}+\cdots+\eta_{L}^{n}( \epsilon)v_{n}^{L},\]
we have \(\alpha_{i}(\epsilon_{L}^{\perp})=w_{L}^{i}(\epsilon_{L}^{\perp})=\eta_{L}^{i} (\epsilon)\), for \(i=1,\ldots,n\).
**Theorem 5.20**.: _The index of the horizontal Dolbeault operator \(\overline{\partial}_{H}\) is given by_
\[\text{ind}_{G}^{M}(\sigma(\overline{\partial}_{H}))(t)=(-1)^{n}\sum_{\mu\in C ^{\circ}\cap\mathbb{Z}_{G}^{*}}t^{\mu}+\sum_{\mu\in(-C)\cap\mathbb{Z}_{G}^{*}} t^{\mu}.\]
Proof.: We note that \(\{\eta_{L}^{1},\ldots,\eta_{L}^{n}\}\) are primitive vectors determining the edge directions of \(P\) at \(p\), since \(\eta_{L}^{i}(R)=0\) for \(i=1,\ldots,n\) and \(\eta_{L}^{i}(v_{j}^{L})=\delta_{ij}\). Let \(\epsilon\in\mathfrak{g}\) be a polarizing vector, as in Definition 4.1, that is, \(\epsilon\) satisfies \(\eta_{L}^{i}(\epsilon)\neq 0\) for \(i=1,\ldots,n\) and for all \(L\subset M_{n}\). A polarizing vector for the polytope \(P\) is a vector in the dual vector space \(\mathcal{H}^{*}\). The vector \(\epsilon\in\mathfrak{g}\) determines a polarizing vector \(\epsilon_{H}\) for the polytope \(P\) by
\[\epsilon_{H}(\eta)=\eta(\epsilon),\ \text{for all}\ \eta\in\mathcal{H}.\]
Since the edge vectors \(\{\eta_{L}^{1},\ldots,\eta_{L}^{n}\}\) satisfy \(\epsilon_{H}(\eta_{L}^{i})=\eta_{L}^{i}(\epsilon)\neq 0\), \(i=1,\ldots,n\), \(\epsilon_{H}\) is a polarizing vector for the polytope \(P\). Since \(\eta_{L}^{i}(\epsilon)=\epsilon_{H}(\eta_{L}^{i})\), Lemma 5.19 implies that
\[W_{L}^{-}(\epsilon_{L}^{\perp})=\{i\in\{1,\ldots,n\}\ |\ \eta_{L}^{i}(\epsilon)<0\}=E_{p}^{-}( \epsilon_{H})\]
and
\[W_{L}^{+}(\epsilon_{L}^{\perp})=\{i\in\{1,\ldots,n\}\ |\ \eta_{L}^{i}(\epsilon)>0\}=E_{p }^{+}(\epsilon_{H}),\]
where \(E_{p}^{-}(\epsilon_{H})\) and \(E_{p}^{+}(\epsilon_{H})\) correspond to the edges \(\eta_{L}^{i}\) of \(P\) such that \(\eta_{L}^{i}(\epsilon_{H})<0\) and \(\eta_{L}^{i}(\epsilon_{H})>0\), respectively. The vector \(\epsilon_{H}\) determines a cone \(\widetilde{K}_{p}^{\#}\), as in Definition 5.15. Since the vectors \(\eta_{L}^{i}\) are the edge vectors of \(P\) meeting at \(p\), the identity \(\widetilde{K}_{p}^{\#}=K_{L}^{\#}(\epsilon^{\perp})\) holds. It follows that
\[\operatorname{ind}_{G}^{M}\sigma(\overline{\partial}_{H})(t) =\sum_{L\subset M_{n}}\left(\frac{1}{1-t^{-w_{L}^{1}}}\right)^{s _{L}^{1}}\cdots\left(\frac{1}{1-t^{-w_{L}^{n}}}\right)^{s_{L}^{n}}\delta(1-t^ {\mu_{L}})\] \[=\sum_{L\subset M_{n}}(-1)^{\left|W_{L}^{+}(\epsilon_{L}^{\perp} )\right|}\sum_{\mu\in\mathbb{Z}_{G}^{*}\cap K_{L}^{\#}(\epsilon^{\perp})} t^{\mu}\] \[=\sum_{p\in\operatorname{Vert}(P)}(-1)^{\left|E_ {p}^{+}(\epsilon_{H})\right|}\sum_{\mu\in\mathbb{Z}_{G}^{*}\cap \widetilde{K}_{p}^{\#}}t^{\mu}\]
and the result follows by applying Proposition 5.14.
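As an illustration of Theorem 5.20 (our specialization, under the assumption that the moment cone of the standard toric Sasakian \(S^{5}\) is the orthant \(C=\mathbb{R}^{3}_{\geq 0}\)): with \(n=2\) the formula gives
\[\operatorname{ind}_{T^{3}}^{S^{5}}(\sigma(\overline{\partial}_{H}))(t)=\sum_{ \mu\in\mathbb{Z}^{3}_{>0}}t^{\mu}+\sum_{\mu\in\mathbb{Z}^{3}_{\leq 0}}t^{\mu},\]
so the multiplicity \(m(\mu)\) equals \(1\) on the interior lattice points of \(C\) and on the lattice points of \(-C\), and \(0\) elsewhere.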
|
2308.06348
|
Developing machine-learned potentials to simultaneously capture the
dynamics of excess protons and hydroxide ions in classical and path integral
simulations
|
The transport of excess protons and hydroxide ions in water underlies
numerous important chemical and biological processes. Accurately simulating the
associated transport mechanisms ideally requires utilizing ab initio molecular
dynamics simulations to model the bond breaking and formation involved in
proton transfer and path-integral simulations to model the nuclear quantum
effects relevant to light hydrogen atoms. These requirements result in a
prohibitive computational cost, especially at the time and length scales needed
to converge proton transport properties. Here, we present machine-learned
potentials (MLPs) that can model both excess protons and hydroxide ions at the
generalized gradient approximation and hybrid density functional theory levels
of accuracy and use them to perform multiple nanoseconds of both classical and
path-integral proton defect simulations at a fraction of the cost of the
corresponding ab initio simulations. We show that the MLPs are able to
reproduce ab initio trends and converge properties such as the diffusion
coefficients of both excess protons and hydroxide ions. We use our
multi-nanosecond simulations, which allow us to monitor large numbers of proton
transfer events, to analyze the role of hypercoordination in the transport
mechanism of the hydroxide ion and provide further evidence for the asymmetry
in diffusion between excess protons and hydroxide ions.
|
Austin O. Atsango, Tobias Morawietz, Ondrej Marsalek, Thomas E. Markland
|
2023-08-11T19:01:18Z
|
http://arxiv.org/abs/2308.06348v1
|
Developing machine-learned potentials to simultaneously capture the dynamics of excess protons and hydroxide ions in classical and path integral simulations
###### Abstract
The transport of excess protons and hydroxide ions in water underlies numerous important chemical and biological processes. Accurately simulating the associated transport mechanisms ideally requires utilizing _ab initio_ molecular dynamics simulations to model the bond breaking and formation involved in proton transfer and path-integral simulations to model the nuclear quantum effects relevant to light hydrogen atoms. These requirements result in a prohibitive computational cost, especially at the time and length scales needed to converge proton transport properties. Here, we present machine-learned potentials (MLPs) that can model both excess protons and hydroxide ions at the generalized gradient approximation and hybrid density functional theory levels of accuracy and use them to perform multiple nanoseconds of both classical and path-integral proton defect simulations at a fraction of the cost of the corresponding _ab initio_ simulations. We show that the MLPs are able to reproduce _ab initio_ trends and converge properties such as the diffusion coefficients of both excess protons and hydroxide ions. We use our multi-nanosecond simulations, which allow us to monitor large numbers of proton transfer events, to analyze the role of hypercoordination in the transport mechanism of the hydroxide ion and provide further evidence for the asymmetry in diffusion between excess protons and hydroxide ions.
## I Introduction
Water's ability to autoionize and efficiently transport its ionization products--excess protons and hydroxide ions--through its hydrogen bond network is a fundamental characteristic that underlies multiple processes ranging from acid-base chemistry to the operation of proton exchange membrane fuel cells [1] and voltage-gated proton channels in biological cell membranes. [2] Excess protons and hydroxide ions are known to diffuse via structural (Grotthuss-like) mechanisms, [3] which involve the making and breaking of chemical bonds through a series of proton transfer reactions between neighboring water molecules. These structural diffusion mechanisms allow both species to diffuse much faster than water itself and are intricately linked to the structure and dynamics of the hydrogen bonds that solvate proton defects in water. [4] Excess protons and hydroxide ions thus exhibit different diffusion rates in water due to their different solvation motifs, and nuclear magnetic resonance (NMR) [5; 6] and conductivity [7; 5] measurements have shown that excess protons diffuse \(\sim\)1.8 times faster than hydroxide ions at room temperature. The need for a deeper understanding of the complex molecular structures and motions that lead to these diffusion mechanisms and the differences between the diffusion rates of excess protons and hydroxide ions has led to extensive theoretical studies. [9; 10; 11; 12; 13; 14; 15; 16; 17] However, due to the reactive nature of the defects, which necessitates a quantum mechanical treatment of the electrons to allow chemical bonds to be made and broken during the simulation, and their low mass, which requires consideration of nuclear quantum effects, resolving the interplay of these physical effects and how they are engendered in the diffusion mechanism has remained a subject of significant debate.
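Converged diffusion coefficients of this kind are typically extracted from long trajectories via the Einstein relation \(D=\lim_{t\to\infty}\langle|\mathbf{r}(t)-\mathbf{r}(0)|^{2}\rangle/6t\). The sketch below is a minimal illustration of that bookkeeping on synthetic random-walk data; it is not the authors' analysis code, and the array shapes, fit window and function name are our own assumptions.

```python
import numpy as np

def diffusion_coefficient(positions, dt, fit_start=0.2, fit_end=0.8):
    """Estimate D via the Einstein relation from a linear fit to the MSD.

    positions : array of shape (n_frames, n_particles, 3), unwrapped coordinates
    dt        : time between stored frames
    """
    disp = positions - positions[0]             # displacement from the t = 0 frame
    msd = (disp ** 2).sum(axis=2).mean(axis=1)  # average over particles
    t = np.arange(len(msd)) * dt
    i0, i1 = int(fit_start * len(msd)), int(fit_end * len(msd))
    slope, _ = np.polyfit(t[i0:i1], msd[i0:i1], 1)
    return slope / 6.0                          # MSD ~ 6 D t in three dimensions

# Synthetic sanity check: a Gaussian random walk with a known diffusion constant.
rng = np.random.default_rng(0)
dt, sigma = 0.001, 0.05                         # arbitrary units
steps = rng.normal(0.0, sigma, size=(20000, 64, 3))
traj = np.cumsum(steps, axis=0)
print(diffusion_coefficient(traj, dt))          # expect ~ sigma**2/(2*dt) = 1.25
```

In production analyses one would additionally average over time origins and verify the linearity of the MSD before fitting, which is precisely why the nanosecond trajectories discussed below are needed.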
Early studies of proton transport in water invoked symmetry between the hydronium (H\({}_{3}\)O\({}^{+}\)) and hydroxide (OH\({}^{-}\)) ions [18; 19; 20; 21; 22; 23; 24; 25] to suggest a framework where the structural transport mechanism of hydroxide ions could be inferred directly as the inverse of the corresponding mechanism for excess protons. However, it has since become clear that the two ions follow distinct proton transfer pathways, a phenomenon that is commonly attributed to differences in their solvation patterns. [18; 19; 20; 21; 22; 23; 24; 25] In particular, OH\({}^{-}\) can exhibit a hypercoordinated configuration where it accepts four hydrogen bonds, [16] whereas the H\({}_{3}\)O\({}^{+}\) ion can only donate three. This effect underscores the importance of water's complex hydrogen bond network in facilitating and ultimately influencing the rate of proton transport.
One of the most commonly invoked approaches to simulate the bond making and breaking that accompanies proton transfer in the structural diffusion of proton defects has been to perform computationally costly _ab initio_ molecular dynamics (AIMD) simulations, where forces are obtained on the fly from electronic structure calculations. In addition, due to the light hydrogen nuclei involved, capturing a complete picture of the transport of proton defects requires including nuclear quantum effects (NQE) such as tunnelling and zero-point energy. _Ab initio_ path-integral simulations include both of these effects and have been shown to be vital for the accurate description of the structure and transport of both H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\). [26; 25; 16; 3] While imaginary-time path-integral simulations exactly include NQEs for structural properties, path-integral-based methods such as centroid molecular dynamics (CMD) [27; 28] and ring polymer molecular dynamics (RPMD) [29; 30] have been shown to provide reliable approximate quantum dynamics for condensed phase systems. However, 30-100 replicas of a classical system are usually needed for path-integral simulations of aqueous systems at room temperature when using the most commonly employed second-order path integral discretization approach,[31] increasing the computational cost by at least 30 times compared to AIMD simulations with classical nuclei. As such, _ab initio_ path-integral molecular dynamics (AI-PIMD) simulations of the lengths required to sample many proton transfer events (on the order of nanoseconds) and hence reliably converge proton transport properties have traditionally been prohibitively costly. Recent path integral acceleration approaches[32] such as those that combine multiple time scale molecular dynamics[33] and ring polymer contraction[34; 35; 36; 37; 38] have made these timescales accessible for AI-PIMD simulations of hundreds of picoseconds for systems of 300-500 atoms,[39; 26] albeit still at a formidable computational cost.
The recent ability to perform condensed-phase simulations that combine electronic structure methods--most commonly density functional theory (DFT)--with path-integral methods has led to the identification of failures of the electronic structure treatments that were previously obfuscated when the nuclei were treated classically. Since the zero-point energy in the OH stretch provides additional energy equivalent to raising the temperature of that coordinate by \(\sim\)2000 K, performing AI-PIMD simulations of liquid water explores much higher-energy regions of the potential energy surface, such as long chemical bond extensions, which causes significant issues when lower-tier generalized gradient approximation (GGA) exchange-correlation functionals are employed.[31; 37; 39] For example, when the nuclei are treated classically, spurious self-interaction in the revPBE-D3 GGA functional, which leads to an overly weak OH covalent bond, is fortuitously largely canceled out by the exclusion of NQEs, and hence the reintroduction of NQEs worsens the GGA functional's description of water.[39] While it has been shown that this deficiency can be alleviated by combining PIMD calculations with more costly hybrid functionals such as revPBE0-D3,[39; 41] the charged nature of proton defects is likely to exacerbate these issues further. Given the number of vital chemical processes that involve proton defects in nanoconfinement or at interfaces, which typically require system sizes of more than 500 atoms and even longer (multi-nanosecond) timescales to average over the heterogeneity of the environment, performing converged AI-PIMD simulations of proton defects in these systems is likely to be impractical for the foreseeable future.
Machine-learned potentials (MLPs) have recently emerged as a compelling alternative to _ab initio_ simulations.[42; 43; 44] By training MLPs on the energies and/or forces obtained from _ab initio_ calculations on a small number of suitably selected configurations (typically on the order of 1000s), MLPs have been shown to be able to interpolate, and in certain cases extrapolate, the _ab initio_ potential energy surface over a wide range of conditions. While MLPs that can successfully model the reactive dynamics of protonated water clusters[45; 46] and NaOH solutions[47; 48] have previously been developed, they have not aimed to capture the behavior of both H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) concurrently. Here, we develop and introduce a training set sampled from _ab initio_ simulations of excess protons, hydroxide ions, and proton-hydroxide recombination events and use it to train MLPs at the GGA (revPBE-D3) and hybrid (revPBE0-D3) levels of theory. We show that these MLPs can be used to simultaneously capture the properties of both types of proton defects in water, thus allowing the study of excess proton diffusion, hydroxide ion diffusion, water autoionization, and defect recombination processes. We utilize these MLPs to run both classical and path-integral AIMD simulations, allowing us to assess the role of different tiers of treatment of the electronic structure and NQEs in determining the mechanism of proton transport.
## II Building the machine-learned potential
### Training Set Creation
We utilized a training set of 37102 configurations, prepared by combining 4594 bulk water configurations randomly sampled from a previously reported dataset[49] with 32508 new configurations containing proton defects. The added proton defect configurations consist entirely of neutral frames of water molecules that contain a proton defect pair (both an excess proton and hydroxide ion). We do not include frames of water molecules containing the excess proton or hydroxide ion in isolation because such configurations require an opposite homogeneous background charge to neutralize the simulation box, which leads to box energies that vary depending on the box volume and Ewald summation parameters. The resulting variation in the energies complicates the fitting of an MLP, and hence we concentrate on fitting the MLP to the more physical neutral configurations where both the excess proton and hydroxide ion are present.
To prepare the training configurations that incorporate both excess protons and hydroxide ions, we selected frames from a revPBE-D3 AIMD trajectory of the hydroxide ion in water, identified the farthest water molecule from the OH\({}^{-}\), and added an excess proton to it, thereby neutralizing the frame. We used these frames to initialize classical and quantum revPBE-D3 AIMD trajectories with the aim of simulating proton defect recombination. From the resulting trajectories, we sampled configurations from the subset of frames where the proton defects had not recombined. Configurations were separately sampled to obtain a training set with near uniform distributions of the proton sharing coordinate (\(\delta\), see section III.2 for definition) for the excess proton and hydroxide ion. Finally, similar to the configurations in the starting water dataset,[49]\(\sim\)67% of the defect-separated frames were augmented by randomly displacing atoms to create configurations with higher forces, which served to improve model stability. The details of the training set are summarized in SI Table 3.
Once the training and validation set configurations were obtained, their energies and forces were re-evaluated at the revPBE[50; 51] and revPBE0[52; 53] levels of DFT with D3 dispersion[54] using the CP2K program.[55; 56] Atomic cores were represented via the Goedecker-Teter-Hutter pseudopotentials.[57] We employed the hybrid Gaussian and plane wave density functional scheme,[58] where the Kohn-Sham orbitals were expanded in the larger molecularly optimized (MOLOPT)[59] TZV2P basis set, and an auxiliary plane-wave basis was used to represent the density with a cutoff of 400 Ry for the revPBE-D3 calculation and 900 Ry for revPBE0-D3. Due to the relatively compact size of our training set, we are able to evaluate all of the configurations using a more accurate basis set that would be exceptionally computationally costly to use in AIMD simulations.
### Architecture and Training of the Machine-Learned Potential
Our revPBE-D3 MLP was fit using the RuNNer package,[60] while the revPBE0-D3 MLP was fit using the n2p2 package.[61] We employed Behler-Parrinello neural networks[42; 62] with two hidden layers containing 25 nodes each and an input layer containing 56 and 46 nodes for the H and O neural networks respectively. Chemical environments were described by radial and angular atom-centered symmetry function descriptors[43] with a cutoff of 6.35 Å. We employed a 90/10 train/validation split, with 33425 configurations in the training set and 3677 configurations in the validation set. Training was done over 20 epochs, and MLP weights were fit to forces and energies. The final energy and force validation errors were 0.433 meV/atom and 65.8 meV/Å for the GGA MLP and 0.485 meV/atom and 39.4 meV/Å for the hybrid MLP respectively.
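To make the architecture concrete, the following is a minimal Behler-Parrinello-style sketch with the layer sizes quoted above; it stands in for, and is not, the actual RuNNer/n2p2 implementations, and it assumes the symmetry-function descriptors have already been evaluated:

```python
# Minimal sketch of a Behler-Parrinello atomic network (PyTorch); the layer
# sizes follow the text, everything else is our own illustrative naming.
import torch
import torch.nn as nn

class AtomicNet(nn.Module):
    """Per-element network: symmetry functions -> atomic energy term."""
    def __init__(self, n_descriptors: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_descriptors, 25), nn.Tanh(),  # hidden layer 1
            nn.Linear(25, 25), nn.Tanh(),             # hidden layer 2
            nn.Linear(25, 1),                         # atomic energy
        )

    def forward(self, g: torch.Tensor) -> torch.Tensor:
        return self.net(g)

# Input sizes from the text: 56 descriptors for H, 46 for O.
nets = {"H": AtomicNet(56), "O": AtomicNet(46)}

def total_energy(descriptors):
    """descriptors: element -> (n_atoms, n_descriptors) tensor."""
    return sum(nets[el](g).sum() for el, g in descriptors.items())

# Stand-in random descriptors for 2 H atoms and 1 O atom:
E = total_energy({"H": torch.randn(2, 56), "O": torch.randn(1, 46)})
```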
## III Simulation Details
### MD Simulations
We performed classical and path integral simulations of both the excess proton and hydroxide ion in water under NVT conditions at T=300 K. The potential energy surfaces were described by MLPs trained on configurations from revPBE[51; 50] (GGA) and revPBE0[52; 53] (hybrid) AIMD simulations with D3 dispersion[54] (see subsections II.1 and II.2), yielding four simulation protocols: classical GGA, classical hybrid, quantum GGA, and quantum hybrid. For all simulation protocols, we used a cubic box of length 15.66 Å with periodic boundary conditions. The simulation box contained 128 water molecules, from which one proton was removed (creating a hydroxide ion) or to which one proton was added (yielding an excess proton), resulting in a proton defect concentration of 0.43 M in both cases. To investigate finite-size effects, two sets of additional classical GGA trajectories were run in cubic boxes of lengths 19.73 Å and 24.86 Å. In these, one proton was added (excess proton) or removed (hydroxide ion) from simulations consisting of 256 and 512 water molecules, yielding proton defect concentrations of 0.22 M and 0.11 M respectively.
Classical MLP simulations were run with a 0.5 fs time step using the LAMMPS package,[63] which employed n2p2[64] to incorporate the MLP. A stochastic velocity rescaling (SVR) thermostat[65] with a time constant of 1 ps was used to sample the canonical ensemble. For both the classical GGA and classical hybrid simulation protocols, we ran 107\(\times\) and 70\(\times\) 200-ps trajectories of the excess proton and hydroxide ion respectively at a box size of 15.66 Å. This resulted in 21.4 ns of acid trajectory and 14 ns of base trajectory for each of the classical GGA and classical hybrid simulation protocols, with frames that were recorded every 2 fs. For the bigger cell sizes (boxes of lengths 19.73 Å and 24.86 Å), we ran \(\sim\)100\(\times\) and \(\sim\)80\(\times\) 200-ps classical GGA trajectories of both the acid and base, resulting in 20 ns and 16 ns of trajectory respectively.
Path-integral MLP simulations were run with a 0.25 fs time step by employing the i-PI program,[66; 67] which used the LAMMPS package[63] (with n2p2[64] used for the MLP) to compute energies and forces. The quantum path-integral simulations were performed via thermostatted ring polymer dynamics (TRPMD)[68; 29; 30] using 32 beads that were thermostatted with the path integral Langevin equation (PILE).[69] Under this scheme, ring polymer internal modes with frequency \(\omega_{k}\) were subjected to a Langevin thermostat with friction \(\gamma_{k}=2\lambda\omega_{k}\) and \(\lambda=0.5\). For the quantum GGA and quantum hybrid simulation protocols, we ran 10 \(\times\) 200-ps trajectories of both the excess proton and hydroxide ion, resulting in 2 ns of quantum trajectories for each combination of quantum simulation protocol and proton defect. Like in the classical case, frames were recorded every 2 fs.
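For concreteness, the PILE frictions quoted above can be evaluated directly from the stated relation \(\gamma_{k}=2\lambda\omega_{k}\); the short sketch below assumes the standard free ring polymer normal-mode frequencies \(\omega_{k}=2\omega_{n}\sin(k\pi/n)\) with \(\omega_{n}=nk_{\mathrm{B}}T/\hbar\) (the text gives only the friction relation, so the frequency formula is our assumption):

```python
# Sketch: PILE friction coefficients gamma_k = 2*lambda*omega_k for the
# internal modes of a 32-bead ring polymer at 300 K (lambda = 0.5). In
# TRPMD the centroid mode (k = 0) is left unthermostatted.
import numpy as np

kB = 1.380649e-23       # J/K
hbar = 1.054571817e-34  # J*s
T, n, lam = 300.0, 32, 0.5

omega_n = n * kB * T / hbar                      # 1/(beta_n * hbar), in 1/s
k = np.arange(1, n)                              # internal modes only
omega_k = 2.0 * omega_n * np.sin(k * np.pi / n)  # normal-mode frequencies
gamma_k = 2.0 * lam * omega_k                    # Langevin frictions, 1/s
```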
We also performed classical GGA AIMD simulations of the excess proton and hydroxide ion in water under NVT conditions at T=300 K to serve as a benchmark for our MLP simulations. The AIMD simulations were run under periodic boundary conditions in cubic boxes of length 12.42 Å containing 64 water molecules from which a proton was removed or to which a proton was added. We employed the i-PI program[66; 67] and its MTS[33] implementation[38] to propagate the nuclei. The full and reference forces were evaluated using the CP2K program.[55; 56] Full forces were computed at the revPBE[51; 50] level of DFT with D3 dispersion.[54] Atomic cores were represented via the Goedecker-Teter-Hutter pseudopotentials.[57] We employed the hybrid Gaussian and plane wave density functional scheme,[58] where the Kohn-Sham orbitals were expanded in the TZV2P basis set, and an auxiliary plane-wave basis with a cutoff of 400 Ry was used to represent the density. The self-consistent field cycle was converged to an electronic gradient tolerance of \(\epsilon_{\text{SCF}}=5\times 10^{-7}\) using the orbital transformation method[70] with the initial guess provided by the always stable predictor-corrector extrapolation method[71; 72] at each AIMD step. Reference forces for the MTS were evaluated at the SCC-DFTB level in periodic boundary conditions using Ewald summation for electrostatics. The parametrizations for H and O atoms provided by CP2K were used and the D3 dispersion correction was added. The AIMD simulations were performed using an MTS propagator with the full force evaluated with a time step of 2.0 fs and the reference force with a time step of 0.5 fs. The SVR thermostat was employed with a time constant of 1 ps. We obtained total simulation times of 718 ps divided over 2 trajectories for the acid and 800 ps divided over 4 trajectories for the base.
### Trajectory Analysis
Many of the results presented here arise from tracking the proton defect, which was identified at each frame by assigning every H atom to its nearest O and picking out the triply coordinated O atom (H\({}_{3}\)O\({}^{+}\)) in the excess proton trajectories and the singly coordinated O atom (OH\({}^{-}\)) in the hydroxide ion trajectories. The atoms that make up the proton defect are referred to as O\({}^{*}\) and H\({}^{*}\) throughout this manuscript. Occasionally, highly transient water autoionization events (2H\({}_{2}\)O \(\rightarrow\) H\({}_{3}\)O\({}^{+}\) + OH\({}^{-}\)) would occur, transiently creating more than one proton defect in the affected frames. These events were rare, ranging from a maximum of 0.044% of all frames for the quantum GGA excess proton trajectories to a minimum of 6.5\(\times 10^{-5}\)% of all frames for the classical hybrid excess proton trajectories. The ions resulting from these events were disregarded in our analysis, which instead focused on tracking the movement of the proton defect present at t=0. We defined the hydrogen bond geometrically as an atomic triplet O\({}_{d}\)-H\({}_{d}\)...O\({}_{a}\) (where the donor atoms O\({}_{d}\) and H\({}_{d}\) are covalently bonded to each other and O\({}_{a}\) is an acceptor atom) with \(|\)O\({}_{d}\)O\({}_{a}|\)\(\leq\) 3.5 Å and \(\angle\) O\({}_{a}\)O\({}_{d}\)H\({}_{d}\)\(\leq\) 30\({}^{\circ}\).
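The defect-tracking rule just described is straightforward to express in code; the following sketch (our own naming, not the authors' analysis script; distances are assumed to be minimum-image corrected) returns the defect oxygen index for one frame:

```python
# Sketch of the defect-tracking rule: assign each H to its nearest O, then
# locate the triply coordinated O (H3O+) or singly coordinated O (OH-).
import numpy as np

def find_defect(dist_OH: np.ndarray, kind: str) -> np.ndarray:
    """dist_OH: (n_O, n_H) O-H distance matrix for one frame.
    kind: 'acid' -> triply coordinated O (H3O+);
          'base' -> singly coordinated O (OH-)."""
    nearest_O = np.argmin(dist_OH, axis=0)                     # owner O per H
    coord = np.bincount(nearest_O, minlength=dist_OH.shape[0])
    target = 3 if kind == "acid" else 1
    # May transiently return 0 or >1 indices during autoionization events;
    # such frames are handled as described in the text.
    return np.flatnonzero(coord == target)
```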
We computed mean square displacements (MSDs) for both the proton defect and water (O atoms) in our trajectories via the formula:
\[\text{MSD}(\Delta t)=\langle|\mathbf{r}(t_{0}+\Delta t)-\mathbf{r}(t_{0})|^{2}\rangle, \tag{1}\]
where \(\langle\rangle\) is an ensemble average computed over all time origins \(t_{0}\) and all relevant atoms. Diffusion coefficients were obtained by performing a linear fit to the MSD in the range 4 ps \(\leq\Delta t\leq\) 20 ps and dividing the fitted slope by \(2d=6\), where \(d=3\) is the dimensionality of the simulation. We performed finite-size corrections for the water diffusion coefficient according to:[74]
\[D(\infty)=D(L)+\frac{\xi k_{\text{B}}T}{6\pi\eta L}, \tag{2}\]
where \(L\) is the length of the simulation cell, \(k_{\text{B}}\) is the Boltzmann constant, \(T\) is the temperature, \(\xi=2.837297\) is a constant determined by the cubic geometry of the simulation cell, and \(\eta=0.8925\times 10^{-3}\) Pa s is the experimental shear viscosity of water.
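As a concrete illustration of Eqs. (1) and (2), a minimal NumPy sketch of the MSD, the 4-20 ps diffusion fit, and the finite-size correction might look as follows (function and variable names are ours, not the authors' analysis code; units follow the input positions):

```python
# Minimal sketch (not the authors' code) of Eq. (1), the 4-20 ps linear fit
# for D, and the finite-size correction of Eq. (2).
import numpy as np

def msd(traj: np.ndarray) -> np.ndarray:
    """traj: (n_frames, n_atoms, 3) unwrapped positions. MSD vs lag time."""
    n = traj.shape[0]
    out = np.zeros(n)
    for dt in range(1, n):
        disp = traj[dt:] - traj[:-dt]       # every time origin t0
        out[dt] = np.mean(np.sum(disp**2, axis=-1))
    return out

def diffusion_coefficient(msd_vals, dt_ps=0.002, lo=4.0, hi=20.0):
    """Linear fit of the MSD on lo <= t <= hi (ps), divided by 2d = 6."""
    t = np.arange(len(msd_vals)) * dt_ps    # frames recorded every 2 fs
    mask = (t >= lo) & (t <= hi)
    return np.polyfit(t[mask], msd_vals[mask], 1)[0] / 6.0

def finite_size_correction(D_L, L, T=300.0, eta=0.8925e-3, xi=2.837297):
    """Eq. (2); D_L in m^2/s, box edge L in m, eta in Pa*s."""
    kB = 1.380649e-23
    return D_L + xi * kB * T / (6.0 * np.pi * eta * L)
```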
For the TRPMD simulations, we used the positions of the centroids to compute the MSD for the diffusion coefficient (which is a property of the long-time slope and hence gives the same result as using the beads); all other observables were calculated from the positions of the individual beads.
## IV Validation of the Machine-Learned Potential
We begin by evaluating how well classical molecular dynamics performed with our GGA-trained MLP reproduces observables from classical GGA _ab initio_ molecular dynamics (AIMD) simulations. We benchmark on classical GGA since it has the lowest computational cost of the electronic structure and dynamics approaches presented in this study, and hence we can generate relatively long (718 ps for the acid and 800 ps for the base) AIMD trajectories with minimal statistical noise. Thus the discrepancies between the MLP and AIMD simulations in the properties discussed below, with the exception of the diffusion coefficients, do not arise from statistical sampling errors but rather from either errors in the MLP or the fact that the MLP was fit to the larger and more accurate MOLOPT basis set, which was too computationally expensive to use for our long AIMD simulations. As shown in Fig. 5, even nanosecond-long trajectories lead to significant statistical error bars in the diffusion coefficients, so benchmarking the MLP on this property using AIMD is extremely challenging. This is one of the main motivations for the development of the MLP, which allows for the generation of trajectories that are long enough to converge this important property.
Figure 1: Comparison of the H\({}^{*}\) VDOS for revPBE-D3 AIMD and the revPBE-D3-trained MLP trajectories. The top panel shows this comparison for acid trajectories, while the bottom panel shows the comparison for base trajectories.
Figure 2: Comparison of the O\({}^{*}\) VDOS for revPBE-D3 AIMD and the revPBE-D3-trained MLP trajectories. The top panel shows this comparison for acid trajectories, while the bottom panel shows the comparison for base trajectories.
We consider the vibrational density of states (VDOS) for H\({}^{*}\) and O\({}^{*}\)--i.e., the H and O atoms that make up the H\({}_{3}\)O\({}^{+}\) and OH\({}^{-}\) defects in simulations with an excess proton and a hydroxide ion respectively (see Sec. III.2)--shown in Figs. 1 and 2 respectively. These VDOS focus on the reactive defects and thus provide a stricter test of the MLP than the VDOS of all H and O atoms in the system (see SI Sec. II), most of which are bound to water molecules. For the excess proton, the MLP manages to accurately reproduce all features of the H\({}^{*}\) VDOS that are characteristic of the reactive defects.[17; 75] These include the blue-shifted librational band at \(\sim\)800 cm\({}^{-1}\), the broadened bending mode peak at \(\sim\)1600 cm\({}^{-1}\), the broad absorption band between the bend and the OH stretch region (2000-3600 cm\({}^{-1}\)), and a peak around \(\sim\)1250 cm\({}^{-1}\) which has previously been assigned to a Zundel-like proton shuttling motion.[76; 77; 78] The H\({}^{*}\) VDOS for the hydroxide ion has fewer features: a librational band at \(\sim\)600 cm\({}^{-1}\), a comparatively smaller bending mode peak at \(\sim\)1600 cm\({}^{-1}\), and an OH stretch peak at \(\sim\)3600 cm\({}^{-1}\) that is sharper than that of pure liquid water.[17; 79] While the MLP manages to accurately reproduce the frequencies of the features, it overestimates the amplitude of the librational band (\(\sim\)600 cm\({}^{-1}\)) for the hydroxide ion. Figure 2 shows quantitative agreement in the O\({}^{*}\) VDOS between the MLP and AIMD trajectories, with the much weaker bending mode peaks (\(\sim\)1600 cm\({}^{-1}\) for both the excess proton and hydroxide ion), the absorption band (2000-3600 cm\({}^{-1}\) for the excess proton), the stretch signal (\(\sim\)3600 cm\({}^{-1}\) for the hydroxide ion) and low-frequency features (\(<\) 500 cm\({}^{-1}\)) being faithfully reproduced by the MLP.
In Figures 3 and 4, we evaluate how well the GGA-trained MLP reproduces the AIMD free energy along the proton sharing coordinate. As illustrated in Figs. 3 and 4, \(\delta=d_{\mathrm{O^{\prime}H^{*}}}-d_{\mathrm{O^{*}H^{*}}}\) for the excess proton and \(\delta=d_{\mathrm{O^{*}H^{\prime}}}-d_{\mathrm{O^{\prime}H^{\prime}}}\) for the hydroxide ion, where \(d\) denotes the distance between the respective atoms. O\({}^{*}\) and H\({}^{*}\) are the defect atoms defined above, while O\({}^{\prime}\) and H\({}^{\prime}\) are atoms in the first solvation shell of the charge defect. For the excess proton simulations, of the \(\delta\) values from the three H\({}^{*}\) atoms connected to O\({}^{*}\), only the lowest was used, and for the hydroxide ion simulations, the \(\delta\) values were calculated based only on the H\({}^{\prime}\) closest to O\({}^{*}\). The free energy along the \(\delta\) coordinate, \(\Delta F(\delta)=-k_{\mathrm{B}}T\ln P(\delta)\), was calculated from the resulting \(\delta\) probability distribution, \(P(\delta)\). The two free energy minima along the proton sharing coordinate \(\delta\) thus correspond to the covalent bonding of the hydrogen atom to one or the other oxygen atom, and the top of the free energy barrier between the two minima is located at \(\delta=0\) due to the symmetry of the coordinate.
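A short sketch of how \(\Delta F(\delta)\) can be obtained from sampled \(\delta\) values via the quoted relation \(\Delta F(\delta)=-k_{\mathrm{B}}T\ln P(\delta)\) (binning choices and naming are ours):

```python
# Sketch: free energy profile along the proton sharing coordinate from
# sampled delta values, Delta F = -kB*T*ln P(delta).
import numpy as np

def free_energy_profile(delta, T=300.0, bins=200):
    """delta: 1D array of sampled proton sharing coordinate values (A)."""
    kB = 0.0019872041                          # kcal/(mol K)
    P, edges = np.histogram(delta, bins=bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    with np.errstate(divide="ignore"):
        F = -kB * T * np.log(P)                # +inf where P = 0
    F -= F[np.isfinite(F)].min()               # set the minima to zero
    return centers, F                          # barrier: F near delta = 0
```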
Figures 3 and 4 show that the MLP simulations accurately reproduce both the positions of the free energy minima and the height of the free energy barrier at \(\delta=0\), \(\Delta F(\delta=0)\), obtained from AIMD, with the MLP overestimating it by 0.02 kcal/mol for the acid and underestimating it by 0.12 kcal/mol for the base. At 300 K, these errors correspond to 0.03 and 0.2 \(k_{\mathrm{B}}T\) respectively; they are much smaller than the thermal energy in the system and are completely dwarfed by the zero-point energy along these coordinates, as discussed further in Sec. V.
To further validate that the structural and dynamical properties of the excess proton and hydroxide ion in liquid water are captured by our GGA MLP, we show in SI Sec. I that the MLP also quantitatively reproduces the AIMD O\({}^{*}\)-O, O\({}^{*}\)-H, and H\({}^{*}\)-H radial distribution functions for both simulations. Finally, the MLP trajectories yield O\({}^{*}\) diffusion coefficients of 8.04 \(\times\) 10\({}^{-9}\) and 4.95 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s for the excess proton and hydroxide ion respectively, which compare favorably to corresponding AIMD values of 9.87 \(\times\) 10\({}^{-9}\) and 3.13 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s. As discussed in Sec. V, the slow convergence of the proton defect diffusion coefficient with simulation time is such that the difference between the MLP and AIMD diffusion coefficients can be accounted for by statistical uncertainty. This is further illustrated in SI Fig. 7.
## V Results and Discussion
We now use our MLPs to simulate proton defects in water at the GGA and hybrid levels of DFT with and without nuclear quantum effects. Due to the low computational cost of evaluating MLPs, we investigate properties (such as the proton defect diffusion coefficients) that require multiple nanoseconds of simulation to reliably converge. The importance of using such long simulations is illustrated in Fig. 5, which shows the distribution of diffusion coefficients obtained from 50 ps, 100 ps, and 200 ps trajectory segments derived from subdividing our total of 20 ns of revPBE-D3 (GGA) MLP trajectory. A single simulation performed on a 50 ps, 100 ps, or 200 ps timescale would thus correspond to picking a single realization from these distributions (with each diffusion coefficient corresponding to the linear fit to one of the MSD curves shown in SI Figures 9 and 10, as detailed in Section III.2). As one can see, even with a 200 ps simulation, a time longer than that in many previous AIMD studies of proton defects, the wide distribution of diffusion coefficients that can be obtained does not allow one to reliably distinguish between the higher expected diffusion coefficient of an excess proton and the lower one expected for the hydroxide ion. This emphasizes the need for multiple-nanosecond simulations to comment on the relative transport rates of the excess proton and hydroxide ion in liquid water.
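To make the segment analysis behind Fig. 5 explicit, one can split a long O\({}^{*}\) trajectory into fixed-length chunks and fit D on each; a sketch (our own naming, reusing the msd() and diffusion_coefficient() functions from the Sec. III.2 sketch above):

```python
# Sketch: distribution of D_O* values from equal-length trajectory
# segments, as in Fig. 5. Assumes msd() and diffusion_coefficient() from
# the earlier sketch are already defined.
import numpy as np

def d_distribution(traj, seg_frames):
    """traj: (n_frames, 1, 3) unwrapped O* positions; seg_frames: segment
    length in frames (100000 frames = 200 ps at the 2 fs interval)."""
    n_seg = traj.shape[0] // seg_frames
    return np.array([
        diffusion_coefficient(msd(traj[i * seg_frames:(i + 1) * seg_frames]))
        for i in range(n_seg)
    ])
```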
Figure 6 summarizes the water molecule and proton defect diffusion coefficients calculated from classical and quantum (TRPMD) simulations of MLPs trained on revPBE-D3 (GGA) and revPBE0-D3 (hybrid) AIMD simulations. For each combination of dynamics approach (classical or quantum) and electronic structure approach (GGA or hybrid), we performed separate simulations of an excess proton in water and a hydroxide ion in water, i.e., simulations of a water box that initially contained 128 water molecules to which one proton has been added or from which one proton has been removed. Each set of four bars shown in Fig. 6 corresponds to a particular simulation protocol, i.e., a combination of dynamics and electronic structure approaches, and within each set, the colored bars correspond to the diffusion coefficients of water molecules in the basic solution (dark green), water molecules in the acidic solution (light green), and the proton defect diffusion coefficients for the hydroxide ion (blue/navy) and the excess proton (yellow/red) respectively.
From Fig. 6, one can see that in all cases, proton defect diffusion coefficients are significantly higher than those of water molecules and that the excess proton diffuses faster than the hydroxide ion, which is in line with the experimentally observed trend.[8] The solid horizontal lines in Fig. 6 show the experimentally observed diffusion coefficients of the relevant species.[8; 80] Similar to previous studies, the water molecule diffusion coefficients obtained from revPBE-D3 and revPBE0-D3 AIMD simulations of pure water are in good agreement with the experimentally observed value when finite-size corrections[74] are applied.[39] In Fig. 6, finite-size corrections are shown as the hatched regions of the green bars and were computed using the experimental shear viscosity. For both the excess proton and the hydroxide ion simulations, the diffusion coefficients of the water molecules are within 0.1 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s of each other, indicating that at this low proton defect concentration (0.43 M), the defect has only a minor effect on the diffusion of the molecules in the liquid.
For the water molecules, the hybrid functional exhibits faster diffusion than the GGA, and within a given choice of functional, the quantum simulations show slightly slower diffusion than the classical ones. The former observation can be rationalized as due to the hybrid functional's partial taming of the delocalization error inherent in (GGA) DFT, which alleviates the stronger hydrogen bonds[81] and slower diffusion observed under GGA. The latter observation of slower diffusion upon including NQEs arises from the subtle balance of competing quantum effects in liquid water[82; 83; 84] and other hydrogen-bonded systems,[85; 86; 87] which in the case of DFT water[88; 89; 90; 91] and the revPBE-D3 and revPBE0-D3 functionals[39] has generally led to a slight structuring of the liquid and corresponding lowering of the diffusion coefficient. We note that due to the subtle cancellation of NQEs in liquid water at 300 K, NNPs[92] and other potentials[93; 94; 95; 96; 97] fit to higher-level electronic structure methods such as CCSD(T) and AFQMC have shown a slight increase in water's diffusion coefficient upon treating the nuclei quantum mechanically. Of all the MLPs, the quantum hybrid trajectory most closely reproduces the experimental water diffusion coefficient, with computed system size-corrected diffusion coefficients of 2.33 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s and 2.38 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s for the acid and base respectively, compared to the experimental value of 2.41 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s.[80]
The diffusion coefficients of the excess proton (yellow/red) and hydroxide ion (blue/navy) are shown in Fig. 6, with the horizontal red and blue lines showing the experimental diffusion coefficients. For both acid and base trajectories, the
Figure 5: Proton defect diffusion coefficient distributions (D\({}_{\mathrm{O^{*}}}\)) of the MLP GGA simulations computed at different trajectory lengths, with the corresponding means displayed as dashed lines. The degree of separation between the D\({}_{\mathrm{O^{*}}}\) distributions for the acid and base increases with the trajectory length, underscoring the importance of computing D\({}_{\mathrm{O^{*}}}\) values from trajectories that are at least hundreds of picoseconds long.
charge defect diffusion coefficient follows the trend: quantum GGA \(>\) quantum hybrid \(>\) classical GGA \(>\) classical hybrid. These trends for the defect diffusion are roughly the opposite of what is seen for water molecules, with the GGA giving rise to faster defect diffusion than the hybrid and nuclear quantum effects also increasing the diffusion rate. The white bars in Fig. 6 show the vehicular component of the diffusion obtained by decomposing the total diffusion coefficients into their structural components, which arise entirely from intermolecular proton transfer events, and their vehicular components, which arise from the molecular motion of the proton defects (see SI Sec. III). We observe that in all cases, the vehicular component is a small and nearly constant part of proton defect diffusion and hence the changes in the total diffusion coefficients arise from variations in the dominant structural diffusion mechanism upon changing the exchange-correlation functional or including NQEs.
To understand the origins of the trends in the rate of diffusion of proton defects in water, we begin by analyzing the free energy barrier for proton transfer under the different simulation protocols. Fig. 7 shows the free energy profile along the proton sharing coordinate, \(\Delta F(\delta)\), defined for the excess proton and hydroxide ion in Sec. IV. The height of the free energy barrier, \(\Delta F(\delta=0)\), is larger for the hydroxide ion than for the excess proton, which is consistent with the slower diffusion of hydroxide ions compared to excess protons. For classical nuclei, which exhibit the most pronounced barrier, the difference in the barrier height between the two types of proton defect is 0.92 kcal/mol for the GGA and 1.23 kcal/mol for the hybrid functional. The proton transfer barrier obtained from the hybrid functional is higher than that of the GGA functional for both the excess proton and hydroxide ion by 0.28 kcal/mol and 0.60 kcal/mol respectively when a classical description of the nuclei is used. This behavior follows from the fact that the top of the barrier at \(\delta\)=0 corresponds to a scenario where the proton is equidistant from two O atoms and is thus a state of large charge separation. Due to the delocalization error in DFT, charge-separated states under GGA are spuriously lowered in energy relative to charge-localized states, which decreases the free energy barrier. The hybrid functional somewhat alleviates this issue and, in turn, raises the free energy barrier along \(\delta\). For both the excess proton and hydroxide ion, the minima in \(\Delta F(\delta)\) obtained from the classical simulations are shifted closer to \(\delta=0\) for GGA than the hybrid, indicating a more shared equilibrium position of the proton for the former.
Nuclear quantum effects are expected to play a major role in determining the free energy profile along the proton-sharing coordinate, which describes the movement of a light (hydrogen) atom across an energy barrier. The zero-point energy of an O-H stretch in water (\(\hbar\omega/2\), \(\omega\)=3600 cm\({}^{-1}\)) is 5.15 kcal/mol, which, for the excess proton, is larger than the free energy barriers of 0.46 kcal/mol and 0.75 kcal/mol along the proton sharing coordinate obtained from the classical GGA and hybrid simulations respectively. Upon including NQEs, the free energy barrier for the excess proton is whittled down to 0 kcal/mol for the GGA simulation and reduced to 0.03 kcal/mol for the hybrid simulation. In the case of the hydroxide ion, the classical free energy barriers of 1.38 kcal/mol and 1.98 kcal/mol for the GGA and hybrid simulations are substantially
Figure 6: Acid and base diffusion coefficients calculated from the MLP trajectories. We report six values for each MLP run: the molecular diffusion coefficient of water (measured by tracking O atoms) in the acid and base, the proton defect diffusion coefficient (measured by tracking O\({}^{*}\)) in the acid and base, and the vehicular component of the proton defect diffusion coefficient for the acid and base. Experimental values for the diffusion coefficients of water,[80] H\({}_{3}\)O\({}^{+}\),[8] and OH\({}^{-}\)[8] are shown for comparison.
reduced by 1.32 kcal/mol and 1.60 kcal/mol respectively upon including NQEs. We note that these reductions in the free energy barrier are considerably smaller than the reduction that might be estimated by considering the ZPE along that coordinate, emphasizing that the mechanism of proton transport in solution is not fully captured by motion along this single coordinate.
It is instructive to investigate how the variation observed in the height of the free energy barrier along the proton sharing coordinate (\(\Delta F(\delta=0)\)) under different simulation protocols correlates with the proton defect diffusion coefficients, D\({}_{\mathrm{O^{*}}}\). Due to the commonly observed exponential dependence of rates of processes on their associated free energy barrier, Fig. 8 plots the natural logarithm of the defect diffusion coefficients log[D\({}_{\mathrm{O^{*}}}\)] against the free energy barrier along the proton sharing coordinate \(\Delta F(\delta=0)\), which should give a linear relationship. As expected, there is an inverse linear relationship between them, i.e., an increase in the free energy barrier along the proton sharing coordinate decreases the likelihood of intermolecular proton hops and thus inhibits the transport of the proton defect. This inverse relationship is much stronger for the base than for the acid, which suggests that supramolecular factors beyond the intermolecular proton transfer barrier play a bigger role in the transport of H\({}_{3}\)O\({}^{+}\).
Comparing the simulated diffusion coefficients to their experimental values obtained from conductivity data at 301 K for H\({}_{3}\)O\({}^{+}\) (9.4 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s) and OH\({}^{-}\) (5.2 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s),[8] both the classical GGA and quantum hybrid H\({}_{3}\)O\({}^{+}\) simulations (8.0 \(\times\) 10\({}^{-9}\) and 10.9 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s) most closely reproduce the experimental H\({}_{3}\)O\({}^{+}\) diffusion coefficient, while the classical GGA OH\({}^{-}\) simulations (4.9 \(\times\) 10\({}^{-9}\) m\({}^{2}\)/s) most closely reproduce the experimental OH\({}^{-}\) diffusion coefficient. The strong performance of the classical GGA trajectories can likely be attributed to the cancellation of error between proton delocalization due to overestimated hydrogen bond strengths and proton localization due to the classical treatment of nuclei. Quantum hybrid trajectories also perform relatively well because they incorporate NQEs, and the revPBE0-D3 functional less severely overestimates hydrogen bond strengths. Notably, when NQEs are included for both the GGA and hybrid functionals, the ratios of the excess proton diffusion coefficient to that of the hydroxide ion diffusion coefficient, 1.1 and 1.2 respectively, are lower than the experimentally observed value of 1.8, and are in worse agreement for this property than when the nuclei are treated classically (1.6 and 2.2 for the GGA and hybrid simulations respectively). This arises from the much more pronounced NQEs on the hydroxide diffusion coefficient than the excess proton, with a factor of 2.5 and 2.6 increase for the GGA and hybrid upon going from classical to quantum for the hydroxide ion but only 1.7 and 1.4 for the excess proton.
To further analyze trends in the hydroxide ion diffusion coefficient and the impact of NQEs, we now explore the relationship between the rate of diffusion of OH\({}^{-}\) and the proton transfer free energy barrier, \(\Delta F(\delta=0)\). Previous studies have suggested that OH\({}^{-}\) predominantly exists in an inert hypercoordinated state where it accepts four hydrogen bonds and transiently donates one.[98; 99] In this picture, proton transfer occurs after thermal fluctuations break one of the accepted hydrogen bonds, converting the inert hypercoordinated OH\({}^{-}\) to a tetrahedral active state that accepts three hydrogen bonds and donates one.
Figure 8: Diffusion coefficients as a function of the proton sharing coordinate free energy barrier, \(\Delta F(\delta=0)\), for the excess proton (top) and hydroxide ion (bottom) obtained from our MLP simulations. Our data show a much stronger correlation for the base than for the acid.
Figure 7: Free energy, \(\Delta F\), along the proton sharing coordinate \(\delta\), for the acid (top) and base (bottom) obtained from MLP simulations at 300 K. The dashed line shows \(k_{\mathrm{B}}T\) at 300 K.
The OH- is then ready to accept a proton from a neighboring molecule because it has assumed the tetrahedral geometry typical of neutral water molecules, a concept known as presolvation.[98] The hypercoordination of OH- is the primary reason why the mechanism of transport of OH- cannot simply be inferred from that of H\({}_{3}\)O+, which typically donates three hydrogen bonds.
Our MLP simulations support the hypercoordination picture, with the less active state where OH- accepts four hydrogen bonds making up the majority of frames for all of the trajectories, i.e. 63%, 56%, 50%, and 58% of all frames in the classical GGA, classical hybrid, quantum GGA, and quantum hybrid trajectories respectively. In all cases, the percentage of all proton hops where the recipient is a triply-coordinated OH- is higher than the percentage of all OH- configurations that are triply coordinated. In particular, for the classical simulations, 51% and 44% (GGA and hybrid) of all proton hops are to triply-coordinated OH- while only 13% and 7% of all OH- configurations are triply coordinated. A similar trend is observed for the quantum simulations, where 62.6% and 51.5% (GGA and hybrid) of all proton hops are to triply-coordinated OH-, while only 41% and 20% of all OH- configurations are triply coordinated. Further analysis shows that the hydroxide ion diffusion coefficient is positively correlated with the percentage of proton hops to a triply-coordinated OH-. Additionally, there is a clear inverse correlation between the free energy barrier along the proton sharing coordinate and the OH- coordination number. Figure 9 shows the computed free energy barriers for OH- at different numbers of accepted hydrogen bonds (n={3,4,5}). OH- ions that accept three hydrogen bonds have the lowest \(\Delta\)F across all trajectories, further suggesting that the n=3 state is indeed the active proton transfer state, in line with previous studies.[98; 99]
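A minimal sketch of the coordination analysis used above, applying the geometric hydrogen bond criterion of Sec. III.2 (function names and array layout are our own assumptions):

```python
# Sketch: count hydrogen bonds accepted by the OH- oxygen under the
# geometric criterion of Sec. III.2 (|OdOa| <= 3.5 A, angle OaOdHd <= 30
# deg); vectors are assumed minimum-image corrected.
import numpy as np

def accepted_hbonds(o_star, donor_O, donor_H):
    """o_star: (3,) OH- oxygen; donor_O: (n, 3) candidate donor oxygens;
    donor_H: (n, 3) the H covalently bonded to each donor O."""
    count = 0
    for Od, Hd in zip(donor_O, donor_H):
        r_da = o_star - Od
        d_da = np.linalg.norm(r_da)
        if d_da > 3.5:
            continue
        r_dh = Hd - Od
        cos_ang = r_da @ r_dh / (d_da * np.linalg.norm(r_dh))
        if np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))) <= 30.0:
            count += 1
    return count  # n = 4 -> hypercoordinated; n = 3 -> active state
```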
We observe that the impact of NQEs on the proton transfer barrier is two-fold: the direct effect of lowering the barrier along the proton sharing coordinate and the indirect effect of shifting the distribution of hypercoordinated states towards the more favorable states for proton transfer to occur. Specifically, in Fig. 9 one sees that \(\Delta F(\delta=0)\) is lower for the quantum trajectories for any given value of the OH- coordination (e.g. n=3, n=4, etc.). This direct effect lowers the barrier by as much as 1.92 kcal/mol in the n=5 state and by 0.83 kcal/mol in the n=3 state. Conversely, NQEs provide an indirect effect by markedly increasing the incidence of triply coordinated OH- configurations: 12.6% and 6.7% of all frames for classical GGA and hybrid trajectories respectively, compared to 41% and 20% of all frames in the quantum GGA and hybrid trajectories respectively.
## VI Conclusion
We have presented two MLPs--trained on revPBE-D3 (GGA) and revPBE0-D3 (hybrid) AIMD energies and forces--that simultaneously capture the properties of excess protons and hydroxide ions in water. To test the validity of our MLPs, we benchmarked the GGA MLP against independent GGA AIMD trajectories of an excess proton in water and a hydroxide ion in water. Overall, the GGA MLP faithfully reproduced several of the most challenging _ab initio_ properties relevant to proton defects, namely the H\({}^{*}\) and O\({}^{*}\) VDOS, the free energy barrier along \(\delta\), and the RDFs (O\({}^{*}\)-O, O\({}^{*}\)-H, and H\({}^{*}\)-H).
Our hybrid and GGA MLPs were then used to perform multi-nanosecond classical and TRPMD trajectories of the excess proton and hydroxide ion, enabling us to obtain diffusion properties of the proton defects with minimal statistical noise. By analyzing these simulations, we elucidated how the choice of DFT functional (GGA vs hybrid) and nuclear representation (classical vs quantum) affects the rate of both molecular and proton defect diffusion. By comparing the proton defect diffusion coefficient to the free energy barrier along the proton sharing coordinate (\(\delta\)), we showed that a higher free energy barrier is correlated with a low rate of proton transfer, although the correlation is stronger for the hydroxide ion than for the excess proton. Additionally, by calculating the free energy barrier along \(\delta\) for different coordination states of the OH- ion, we showed that our data agree with prior studies[98; 99] that posit a predominantly inert quadruply hydrogen-bonded OH- that occasionally undergoes thermal fluctuations to lose one of its accepted hydrogen bonds in order to enter a tetrahedral state that is conducive to proton transfer.
The MLP models we introduce here provide a means to run multi-nanosecond molecular dynamics simulations of proton defects in water at DFT-level accuracy and low computational cost, thus enabling one to study rare events with unprecedented statistical accuracy.
Figure 9: Values for the free energy barrier along the proton sharing coordinate \(\delta\) at 3, 4, and 5 accepted hydrogen bonds at the OH- for all MLP trajectories. The \(\Delta F(\delta=0)\) over all OH- configurations is also shown for each trajectory.
In addition, the training set constructed in this work provides a starting point for training MLPs that are able to treat both proton and hydroxide defects, and hence processes such as autoionization, at higher levels of electronic structure theory or for more diverse chemical environments. This lays the groundwork for improving our understanding of the finer details of the proton transfer mechanism in water, as well as the mechanics of autoionization and proton-hydroxide recombination events.
###### Acknowledgements.
This work was supported by National Science Foundation Grant No. CHE-2154291 (to T.E.M.). A.O.A. acknowledges support from the Stanford Diversifying Academia, Recruiting Excellence Fellowship. O.M. acknowledges support from the Czech Science Foundation, project No. 21-27987S. T.M. is grateful for financial support by the DFG (MO 3177/1-1).
## Data availability
The data that supports the findings of this study are available within the article and its supplementary material.
|
2307.09826
|
On Rota-Baxter vertex operator algebras
|
Derivations play a fundamental role in the definition of vertex (operator)
algebras, sometimes regarded as a generalization of differential commutative
algebras. This paper studies the role played by the integral counterpart of the
derivations, namely Rota-Baxter operators, in vertex (operator) algebras. The
closely related notion of dendriform algebras is also defined for vertex
operator algebras. It is shown that the classical relations among dendriform
algebras, associative algebras, and Rota-Baxter algebras are preserved for
their vertex algebra analogs.
|
Chengming Bai, Li Guo, Jianqi Liu, Xiaoyan Wang
|
2023-07-19T08:30:47Z
|
http://arxiv.org/abs/2307.09826v1
|
# On Rota-Baxter vertex operator algebras
###### Abstract.
Derivations play a fundamental role in the definition of vertex (operator) algebras, sometimes regarded as a generalization of differential commutative algebras. This paper studies the role played by the integral counterpart of the derivations, namely Rota-Baxter operators, in vertex (operator) algebras. The closely related notion of dendriform algebras is also defined for vertex operator algebras. It is shown that the classical relations among dendriform algebras, associative algebras and Rota-Baxter algebras are preserved for their vertex algebra analogs.
Key words and phrases: Vertex algebra, vertex operator algebra, derivation, Rota-Baxter algebra, field algebra, dendriform algebra. 2010 Mathematics Subject Classification: 17B69, 17B38, 17B10, 81R10, 17B68, 17B65, 81R12
###### Contents
* 1 Introduction
* 1.1 Vertex operator algebras and derivations
* 1.2 Vertex operator algebras and Rota-Baxter operators
* 1.3 Layout of the paper
* 2 Differential operators on VOAs
* 2.1 The definition of vertex (operator) algebras
* 2.2 The \(\lambda\)-derivations on VOAs
* 3 Rota-Baxter operators on vertex algebras
* 3.1 Definition and first properties of Rota-Baxter operators on vertex algebras
* 3.2 The \(\lambda\)-derivations and the weak local Rota-Baxter operators
* 3.3 Properties and further examples of RBVAs
* 4 Dendriform vertex algebras
* 4.1 Dendriform field and vertex algebras
* 4.2 Characterizations of the dendriform vertex (Leibniz) algebras
* 4.3 The modules structures induced by dendriform vertex (Leibniz) algebras
## 1. Introduction
This paper studies Rota-Baxter operators on vertex operator algebras (VOAs), as an integral counterpart of the derivations which play an essential role in VOAs. The closely related dendriform algebras for VOAs are also studied.
### Vertex operator algebras and derivations
Vertex algebras and VOAs, introduced by Borcherds in [10] and Frenkel, Lepowsky and Meurman in [20] respectively, were developed in conjunction with conformal field theory, "monstrous moonshine", string theory and infinite-dimensional Lie algebras, with applications in the geometric Langlands program. See [20, 28, 21, 22, 42].
On the other hand, the study of differential algebras, defined to be associative algebras equipped with a derivation, has its origin in Ritt's algebraic approach to differential equations [39] and was further developed by Kolchin [29]. Since then, the subject has evolved into a vast area involving differential aspects of Galois theory, algebraic geometry, computational algebra and logic (see for example [38]). More generally, differential algebras with weights were introduced as an abstraction of the difference quotients whose limits define the derivation in analysis [24].
A vertex algebra in general can be viewed as a generalization of a commutative differential algebra. In fact, any commutative differential unital algebra \((A,D,1)\) is naturally a vertex algebra (cf. [10]); the explicit formula is recalled below. More generally, the notion of vertex algebras, as well as the related notion of vertex algebras without vacuum (cf. [27]), can be equivalently formulated in terms of a "weakly commutative vertex operator" equipped with a special derivation. See [28, 27, 31, 35] and Theorem 2.3 and Proposition 2.6. On the other hand, the study of derivations of vertex (operator) algebras has drawn much attention. The derivations of VOAs naturally give rise to automorphisms of VOAs (cf. [19, 13]), an important notion that relates to many areas such as the moonshine conjecture, orbifold theory, and quantum Galois theory; see [20, 17, 14]. Huang observed in [25] that the set of derivations of a VOA with coefficients in an ordinary module can be identified with the first-order cohomology of VOAs, similar to the case of Lie algebras and associative algebras. The structure of the derivation algebras of strongly rational VOAs was determined by Dong and Griess in [13]. They also proved in [13] that the derivation algebra of a strongly rational VOA \(V\) generates the connected component of the automorphism group of \(V\).
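For the reader's convenience, we sketch the formula from the standard construction of [10]: for a commutative differential unital algebra \((A,D,1)\), setting
\[Y(a,z)b=\big(e^{zD}a\big)b=\sum_{n\geq 0}\frac{z^{n}}{n!}(D^{n}a)b,\qquad a,b\in A,\]
with vacuum \(\mathbf{1}=1\) defines a vertex algebra structure on \(A\); in terms of modes, \(a_{-n-1}b=\frac{1}{n!}(D^{n}a)b\) and \(a_{n}b=0\) for all \(n\geq 0\).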
### Vertex operator algebras and Rota-Baxter operators
Given the close relationship between derivations and vertex (operator) algebras, it is natural to study the role played by integration in vertex (operator) algebras. The integral counterpart of the differential algebra (with a weight) is the Rota-Baxter algebra, whose study originated from the 1960 work of G. Baxter [9] in probability and can be traced back further under the disguise of a linear transformation [12, 41]. For Lie algebras, Rota-Baxter operators were rediscovered in the 1980s as the operator forms of the classical Yang-Baxter equation (CYBE) and the modified Yang-Baxter equation. To further develop the operator forms of the CYBE [40], the more general notion of a relative Rota-Baxter operator was introduced by Kupershmidt, who called it an \(\mathcal{O}\)-operator [30].
Rota-Baxter operators have been defined on a very wide range of algebraic structures, and indeed on algebraic operads [37]. Given the importance of VOAs and Rota-Baxter algebras, it is a natural question to investigate the possibility of combining the theories of Rota-Baxter operators and VOAs. However, due to the complexity of VOAs, this has not been carried out.
The purpose of this paper is to define, on VOAs, the closely related notions of differential operators, Rota-Baxter operators and dendriform algebras. As it turns out, in a special case, the axiom of Rota-Baxter operators on VOAs coincides with X. Xu's definition of \(R\)-matrices for VOAs [43], as a vertex operator algebra analog of the \(r\)-matrices for Lie algebras as solutions to the CYBE. In a separate work [4], we introduced the tensor form of the CYBE for VOAs.
A closely related notion to the (relative) Rota-Baxter associative algebra is the dendriform algebra, introduced by Loday in his study of the periodicity of \(K\)-theory, which plays an essential role in the splitting of associativity. In fact, the (tri)dendriform algebra can be naturally derived from a Rota-Baxter algebra. It can further be characterized by (relative) Rota-Baxter algebras in terms of representations (see [6] for example). Since a vertex algebra can be viewed as a generalization of both the commutative differential algebra and the Lie algebra, it is natural to investigate the analogs of these structures and their relations with (relative) Rota-Baxter operators in the context of vertex (operator) algebras.
### Layout of the paper
In Section 2, we first recall the basic notions of vertex (operator) algebras with emphasis on their derivational characterizations. We then introduce a notion of \(\lambda\)-derivations on VOAs, as a preparation for the next section. We will prove that the set of 1-differentials on a simple VOA \(V\) can be identified with the automorphism group of \(V\).
In Section 3, we define Rota-Baxter operators (RBOs) on VOAs, provide examples and study their relation with differential operators. To allow more flexibility for examples and applications, we also introduce variations of RBOs such as weak local RBOs, weak global RBOs, and ordinary local RBOs. Homogeneous RBOs on certain CFT-type VOAs are classified. Xu's theorem on \(R\)-matrices and the modified Yang-Baxter equation for VOAs [43] is reinterpreted as a special case of a general theorem on Rota-Baxter vertex algebras (Theorem 3.26).
Section 4 is devoted to extending the notion of dendriform algebras to VOAs. We define dendriform field and vertex (Leibniz) algebras and show that they fulfill the splitting property of the operations in VOAs (Theorem 4.7). We also obtain some new kinds of Jacobi identities from our definition of the dendriform vertex algebras (Theorem 4.11). Furthermore, the characterization of dendriform algebras in terms of module structures and relative Rota-Baxter operators is transported to VOAs (see Proposition 4.16, Corollary 4.18 and Proposition 4.19), further justifying our notion of dendriform vertex algebras.
### Conventions
Throughout this paper, all vector spaces are defined over \(\mathbb{C}\). \(\mathbb{N}\) denotes the set of nonnegative integers, \(\mathbb{Q}\) denotes the set of rational numbers.
## 2. Differential operators on VOAs
In this section, we first recall the notions of vertex (operator) algebras and some related definitions. Then we introduce the notion of \(\lambda\)-differential operators on VOAs and discuss some of its properties. The \(\lambda\)-differential operators are closely related to the Rota-Baxter operators to be studied in the next section.
### The definition of vertex (operator) algebras
We briefly recall the background on VOAs that will be needed in the sequel and refer the readers to [10, 19, 20, 26, 28, 31] for further details.
**Definition 2.1**.: A **vertex algebra** is a triple \((V,Y,\mathbf{1})\) consisting of a vector space \(V\), a linear map \(Y\) called the **vertex operator** or the state-field correspondence:
\[Y:\!V\to(\operatorname{End}V)[[z,z^{-1}]],\quad\ a\mapsto Y(a,z)=\sum_{n\in \mathbb{Z}}a_{n}z^{-n-1}\quad(a_{n}\in\operatorname{End}V),\]
and a distinguished element \(\mathbf{1}\in V\) called the **vacuum vector**, satisfying the following conditions:
1. (Truncation property) For any given \(a,b\in V\), \(a_{n}b=0\) when \(n\) is sufficiently large.
2. (Vacuum property) \(Y(\mathbf{1},z)=\operatorname{Id}_{V}\).
3. (Creation property) For \(a\in V\), we have \(Y(a,z)\mathbf{1}\in V[[z]]\) and \(\lim_{z\to 0}Y(a,z)\mathbf{1}=a\).
4. (The Jacobi identity) For \(a,b\in V\), \[z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)Y(a,z_{1})Y(b,z_{2})-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}}{z_{0}}\right)Y(b,z_{2})Y(a,z_{1})=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y(Y(a,z_{0})b,z_{2}).\] Here \(\delta(x):=\sum_{n\in\mathbb{Z}}x^{n}\) is the formal delta function.
The following equivalent characterization of the Jacobi identity is useful in our later discussions (cf. [31]; see also [19]).
**Theorem 2.2**.: _A vertex algebra \((V,Y,\mathbf{1})\) satisfies the following properties._
1. (_weak commutativity_) _For_ \(a,b\in V\)_, there exists_ \(k\in\mathbb{N}\) _such that_ \[(z_{1}-z_{2})^{k}Y(a,z_{1})Y(b,z_{2})=(z_{1}-z_{2})^{k}Y(b,z_{2})Y(a,z_{1}). \tag{1}\]
2. (_weak associativity_) _For_ \(a,b,c\in V\)_, there exists_ \(k\in\mathbb{N}\) (_depending on_ \(a\) _and_ \(c\)_) _such that_ \[(z_{0}+z_{2})^{k}Y(Y(a,z_{0})b,z_{2})c=(z_{0}+z_{2})^{k}Y(a,z_{0}+z_{2})Y(b,z_{2})c. \tag{2}\]
_Moreover, if \(Y:V\to(\operatorname{End}V)[[z,z^{-1}]]\) is a linear map that satisfies the truncation property, then the Jacobi identity of \(Y\) in the definition of a vertex algebra is equivalent to the weak commutativity together with the weak associativity._
Let \((V,Y,\mathbf{1})\) be a vertex algebra. Define a translation operator \(D:V\to V\) by letting \(Da:=a_{-2}\mathbf{1}\), for all \(a\in V\). Then \((V,Y,D,\mathbf{1})\) satisfies the \(D\)**-derivative property**:
\[Y(Da,z)=\frac{d}{dz}Y(a,z), \tag{3}\]
the \(D\)**-bracket derivative property**:
\[[D,Y(a,z)]=\frac{d}{dz}Y(a,z), \tag{4}\]
and the **skew-symmetry** formula:
\[Y(a,z)b=e^{zD}Y(b,-z)a,\quad a,b\in V. \tag{5}\]
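As an illustrative consequence (a standard fact, sketched here for reference), combining the creation property with the \(D\)-bracket derivative property (4) and \(D\mathbf{1}=0\) gives
\[Y(a,z)\mathbf{1}=e^{zD}a,\qquad\text{i.e.}\quad a_{-n-1}\mathbf{1}=\frac{1}{n!}D^{n}a\ \text{ and }\ a_{n}\mathbf{1}=0\quad\text{for }n\geq 0,\]
which in particular recovers the definition \(Da=a_{-2}\mathbf{1}\).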
Equalities (3) and (4) together are called the \(D\)**-translation invariance property**. On the other hand, a vertex algebra also has the following equivalent condition (cf. [31]).
**Theorem 2.3**.: _Consider a triple \((V,Y,\mathbf{1})\) consisting of a vector space \(V\), a linear map \(Y:V\to(\operatorname{End}V)[[z,z^{-1}]]\) and a distinguished vector \(\mathbf{1}\). The triple is a vertex algebra if and only if it satisfies the truncation, vacuum and creation properties_ (i)-(iii)_, the weak commutativity_ (1)_, and allows a derivation \(D\) that satisfies the \(D\)-bracket derivative property_ (4)_._
Our later discussion needs the following weaker notions of vertex algebras introduced in [35].
**Definition 2.4**.: A **vertex Leibniz algebra**\((V,Y)\) is a vector space \(V\) equipped with a linear map \(Y:V\to(\operatorname{End}V)[[z,z^{-1}]]\), satisfying the truncation property and the Jacobi identity.
As a special case, a subspace \(U\) of a vertex algebra \((V,Y,\mathbf{1})\) is a vertex Leibniz subalgebra with respect to the restricted vertex operator \(Y|_{U}\) if it satisfies \(a_{n}b\in U\), for all \(a,b\in U\), and \(n\in\mathbb{Z}\). A related notion is the vertex algebra without vacuum (see [27]):
**Definition 2.5**.: A **vertex algebra without vacuum** is a vector space \(V\), equipped with a linear map \(Y:V\to(\operatorname{End}V)[[z,z^{-1}]]\) and a linear operator \(D:V\to V\) satisfying the truncation property, the Jacobi identity, the \(D\)-derivative property (3), and the skew-symmetry (5). We denote a vertex algebra without vacuum by \((V,Y,D)\).
The following fact is proved by Huang and Lepowsky in [27]; see also [35]:
**Proposition 2.6**.: _Let \(V\) be a vector space, equipped with a linear map \(Y:V\to(\operatorname{End}V)[[z,z^{-1}]]\), satisfying the truncation property. If \(D:V\to V\) is another linear map that satisfies the \(D\)-bracket derivative property_ (4) _and skew-symmetry_ (5)_, then the weak commutativity_ (1) _of \(Y\) follows from the weak associativity_ (2)_.
**Definition 2.7**.: A **vertex operator algebra (VOA)** is a quadruple \((V,Y,\mathbf{1},\omega)\), where
* \((V,Y(\cdot,z),\mathbf{1})\) is a \(\mathbb{Z}\)-graded vertex algebra: \(V=\bigoplus_{n\in\mathbb{Z}}V_{n}\), such that \(\mathbf{1}\in V_{0}\), \(\dim V_{n}<\infty\) for each \(n\in\mathbb{Z}\), and \(V_{n}=0\) for \(n\) sufficiently small,
* \(\omega\in V_{2}\) is another distinguished element, called the **Virasoro element** and for which we denote \(Y(\omega,z)=\sum_{n\in\mathbb{Z}}L(n)z^{-n-2}\), so \(L(n)=\omega_{n+1}\), \(n\in\mathbb{Z}\). It satisfies the following additional conditions.
5. (The Virasoro relation) \([L(m),L(n)]=(m-n)L(m+n)+\frac{1}{12}(m^{3}-m)\delta_{m+n,0}c\), where \(c\in\mathbb{C}\) is called the central charge (or rank) of \(V\).
6. \((L(-1)\)-derivation property) \(D=L(-1)\), and \[\frac{d}{dz}Y(v,z)=Y(L(-1)v,z)=[L(-1),Y(v,z)].\]
7. \((L(0)\)-eigenspace property) \(L(0)v=nv\), for all \(v\in V_{n}\) and \(n\in\mathbb{Z}\).
A VOA \(V\) is said to be of **CFT-type** if \(V=V_{0}\oplus V_{+}\), where \(V_{0}=\mathbb{C}\mathbf{1}\) and \(V_{+}=\bigoplus_{n=1}^{\infty}V_{n}\).
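For instance, specializing the Virasoro relation to \(m=2\) and \(n=-2\) gives
\[[L(2),L(-2)]=4L(0)+\frac{c}{2},\]
since \(\frac{1}{12}(m^{3}-m)=\frac{1}{2}\) for \(m=2\); the central charge \(c\) enters only through brackets with \(m+n=0\).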
We also need the notion of modules over VOAs (cf. [19, 20, 31]).
**Definition 2.8**.: Let \((V,Y,\mathbf{1},\omega)\) be a VOA. A **weak \(V\)-module**\((W,Y_{W})\) is a vector space \(W\) equipped with a linear map
\[Y_{W}:V\to(\operatorname{End}W)[[z,z^{-1}]],\quad a\mapsto Y_{W}(a,z)=\sum_{n \in\mathbb{Z}}a_{n}z^{-n-1},\]
satisfying the following axioms.
1. (Truncation property) For \(a\in V\) and \(v\in W\), \(a_{n}v=0\) for \(n\) sufficiently large.
2. (Vacuum property) \(Y_{W}(\mathbf{1},z)=\operatorname{Id}_{W}\).
3. (The Jacobi identity) For \(a,b\in V\), and \(u\in W\) \[z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)Y_{W}(a,z_{1})Y_{W}(b,z _{2})u-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}}{z_{0}}\right)Y_{W}(b,z_{2})Y_ {W}(a,z_{1})u\] \[=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y_{W}(Y(a,z_{0})b, z_{2})u.\]
A weak \(V\)-module \(W\) is called **admissible (or \(\mathbb{N}\)-gradable)** if \(W=\bigoplus_{n\in\mathbb{N}}W(n)\), with \(\dim W(n)<\infty\) for each \(n\in\mathbb{N}\), and \(a_{m}W(n)\subset W(\operatorname{wt}(a)-m-1+n)\) for homogeneous \(a\in V\), \(m\in\mathbb{Z}\), and \(n\in\mathbb{N}\).
An **ordinary \(V\)-module** is an admissible \(V\)-module \(W\) such that each \(W(n)\) is an eigenspace of the operator \(L(0)=\operatorname{Res}_{z}\,zY_{W}(\omega,z)\) of eigenvalue \(\lambda+n\), where \(\lambda\in\mathbb{Q}\) is a fixed number called the conformal weight of \(W\).
Let \((W,Y_{W})\) be a weak module over a VOA \(V\). Write \(Y_{W}(\omega,z)=\sum_{n\in\mathbb{Z}}L(n)z^{-n-2}\). It is proved in [15] that \(Y_{W}\) also satisfies the \(L(-1)\)-derivative property (in Definition 2.7) and the \(L(-1)\)-bracket derivative property in (4): \(Y_{W}(L(-1)a,z)=\frac{d}{dz}Y_{W}(a,z)=[L(-1),Y_{W}(a,z)]\).
A VOA \((V,Y,\mathbf{1},\omega)\) is obviously an ordinary module over \(V\) itself, with \(Y_{W}=Y\). \((V,Y)\) is called the adjoint \(V\)-module (see Section 2 in [19]). \(V\) is called **simple** if the adjoint module \(V\) has no proper submodules.
### The \(\lambda\)-derivations on VOAs
Recall that a derivation on a VOA \((V,Y,\mathbf{1},\omega)\) is a linear map \(d:V\to V\) satisfying \(d\mathbf{1}=0\), \(d\omega=0\), and \(d(a_{n}b)=(da)_{n}b+a_{n}(db)\) for all \(a,b\in V\) and \(n\in\mathbb{Z}\) (cf. [13, 25]). We introduce the following generalized notion of derivations with weights for VOAs.
**Definition 2.9**.: Let \((V,Y,\mathbf{1})\) be a vertex algebra, and \(\lambda\in\mathbb{C}\) be a fixed complex number. A linear map \(d:V\to V\) is called a **weak \(\lambda\)-derivation** of \(V\) if it satisfies
\[d(Y(a,z)b)=Y(da,z)b+Y(a,z)db+\lambda Y(da,z)db,\quad a,b\in V. \tag{6}\]
In other words, \(d(a_{m}b)=(da)_{m}b+a_{m}(db)+\lambda(da)_{m}(db)\), for all \(a,b\in V,m\in\mathbb{Z}\).
Let \((V,Y,\mathbf{1},\omega)\) be a VOA. A \(\lambda\)**-derivation** on \(V\) is a weak \(\lambda\)-derivation \(d:V\to V\) such that \(d\mathbf{1}=0\) and \(d\omega=0\). The space of \(\lambda\)-derivations on \(V\) is denoted by \(\operatorname{Diff}_{\lambda}(V)\).
In particular, for a \(\lambda\)-derivation \(d:V\to V\) on a VOA \((V,Y,\mathbf{1},\omega)\), since \(d\omega=0\), by (6) we have \(d(Y(\omega,z)b)=Y(d\omega,z)b+Y(\omega,z)db+\lambda Y(d\omega,z)db\), which implies \(d(Y(\omega,z)b)=Y(\omega,z)db\) and hence \(dL(-1)b=L(-1)db\) for \(b\in V\). Thus, the \(\lambda\)-derivation \(d\) is automatically compatible with the derivation \(D=L(-1)\) on the VOA: \(dL(-1)=L(-1)d\).
By Definition 2.9, it is easy to see that a \(0\)-derivation is just a derivation on \(V\), i.e., \(\operatorname{Diff}_{0}(V)=\operatorname{Der}(V)\). The \(1\)-derivations have a nice correspondence with the automorphisms of \(V\). Recall that an endomorphism of \(V\) is a linear map \(\phi:V\to V\) such that
\[\phi(Y(a,z)b)=Y(\phi(a),z)\phi(b), \tag{7}\]
\(\phi(\mathbf{1})=\mathbf{1}\), and \(\phi(\omega)=\omega\) (cf. [19]). The space of endomorphisms on \(V\) is denoted by \(\operatorname{End}(V)\).
**Lemma 2.10**.: _If \((V,Y,\mathbf{1},\omega)\) is a simple VOA, then \(\operatorname{End}(V)\) is a division algebra over \(\mathbb{C}\), with the unit group \(\operatorname{Aut}(V)\)._
Proof.: Let \(\phi\in\operatorname{End}(V)\) and \(\phi\neq 0\). It suffices to show that \(\phi\) is an automorphism. First we note that \(\ker\phi\) is an ideal of \(V\): for \(u\in\ker\phi\), \(a\in V\), and \(m\in\mathbb{Z}\), we have \(\phi(a_{m}u)=\phi(a)_{m}\phi(u)=0\) by (7). Since \(\phi\neq 0\) and \(V\) is simple, we have \(\ker\phi=0\), and \(\phi\) is injective. Moreover, for \(a\in V_{n}\), since \(\phi(\omega)=\omega\), we have \(L(0)\phi(a)=\phi(\omega)_{1}\phi(a)=\phi(L(0)a)=n\phi(a)\). It follows that \(\phi(V_{n})\subseteq V_{n}\) for \(n\in\mathbb{Z}\). Since \(\dim V_{n}<\infty\), \(\phi|_{V_{n}}:V_{n}\to V_{n}\) is a linear isomorphism. Thus, \(\phi\) is an automorphism.
**Proposition 2.11**.: _Let \((V,Y,\mathbf{1},\omega)\) be a simple VOA. Then the map \(\alpha:\operatorname{Diff}_{1}(V)\to\operatorname{Aut}(V):d\mapsto d+ \operatorname{Id}_{V}\) is a bijection._
Proof.: Since \(d\mathbf{1}=0\) and \(d\omega=0\), we have \((d+\operatorname{Id}_{V})(\mathbf{1})=\mathbf{1}\) and \((d+\operatorname{Id}_{V})(\omega)=\omega\). Moreover,
\[(d+\operatorname{Id})(Y(a,z)b) =Y(da,z)b+Y(a,z)db+Y(da,z)db+Y(a,z)b\] \[=Y(da+a,z)b+Y(a+da,z)db\] \[=Y((d+\operatorname{Id})(a),z)(d+\operatorname{Id})(b).\]
Thus \(d+\operatorname{Id}_{V}\in\operatorname{End}(V)\). But \(d+\operatorname{Id}_{V}\neq 0\), since otherwise \(d=-\operatorname{Id}_{V}\) would not satisfy \(d\mathbf{1}=0\). Hence \(\alpha(d)=d+\operatorname{Id}_{V}\) is in \((\operatorname{End}(V))^{\times}=\operatorname{Aut}(V)\), in view of Lemma 2.10.
On the other hand, for \(\phi\in\operatorname{Aut}(V)\), we have \((\phi-\operatorname{Id})(\mathbf{1})=0\) and \((\phi-\operatorname{Id})(\omega)=0\), and
\[\begin{split}&Y((\phi-\operatorname{Id})(a),z)b+Y(a,z)(\phi-\operatorname{Id})(b)+Y((\phi-\operatorname{Id})(a),z)(\phi-\operatorname{Id})(b)\\ &=Y(\phi(a),z)b-Y(a,z)b+Y(a,z)\phi(b)-Y(a,z)b\\ &\quad+Y(\phi(a),z)\phi(b)-Y(\phi(a),z)b-Y(a,z)\phi(b)+Y(a,z)b\\ &=Y(\phi(a),z)\phi(b)-Y(a,z)b=\phi(Y(a,z)b)-Y(a,z)b=(\phi-\operatorname{Id})(Y(a,z)b).\end{split}\]
Thus, \(\phi-\operatorname{Id}\) is in \(\operatorname{Diff}_{1}(V)\). Clearly, \(\phi\mapsto\phi-\operatorname{Id}_{V}\) is the inverse of \(\alpha\). Hence \(\alpha\) is a bijection.
## 3. Rota-Baxter operators on vertex algebras
In this section, we introduce various types of Rota-Baxter operators on vertex algebras and the notion of Rota-Baxter vertex (operator) algebras. We will give a few examples of Rota-Baxter vertex algebras, some of which generalize the classical Rota-Baxter associative algebras and the Rota-Baxter Lie algebras.
### Definition and first properties of Rota-Baxter operators on vertex algebras
Recall that a **Rota-Baxter (associative) algebra** of weight \(\lambda\in\mathbb{C}\) (cf. [23]) is an associative algebra \((R,\cdot)\), equipped with a linear map \(P:R\to R\), called the **Rota-Baxter operator (RBO)**, satisfying
\[P(a)\cdot P(b)=P(P(a)\cdot b)+P(a\cdot P(b))+\lambda P(a\cdot b),\quad a,b\in R.\]
Similarly, a **Rota-Baxter Lie algebra** of weight \(\lambda\) is a Lie algebra \((\mathfrak{g},[\cdot,\cdot])\) equipped with a linear map \(P:\mathfrak{g}\to\mathfrak{g}\) such that
\[[P(a),P(b)]=P([P(a),b])+P([a,P(b)])+\lambda P([a,b]),\quad a,b\in\mathfrak{g}.\]
The Rota-Baxter operator can be defined over various other algebraic structures [3, 7].
The vertex operator \(Y\) can be viewed as a generating series of the infinitely many binary component operations \(a_{m}b\), \(m\in\mathbb{Z}\). We introduce the notion of Rota-Baxter operators on vertex algebras first for the component operations and then for the vertex operators.
**Definition 3.1**.: Let \((V,Y,\mathbf{1})\) be a vertex algebra, \(\lambda\in\mathbb{C}\) be a fixed complex number, and \(m\in\mathbb{Z}\).
1. An \(m\)**-(ordinary) Rota-Baxter operator (\(m\)-(ordinary) RBO) on \(V\) of weight \(\lambda\)** is a linear map \(P:V\to V\), satisfying the following condition: (8) \[(Pa)_{m}(Pb)=P(a_{m}(Pb))+P((Pa)_{m}b)+\lambda P(a_{m}b),\quad a,b\in V.\] We denote the set of \(m\)-RBOs by \(\operatorname{RBO}(V)(m)\).
2. An **(ordinary) RBO on \(V\) of weight \(\lambda\)** is a linear map \(P:V\to V\) satisfying (8) for every \(m\in\mathbb{Z}\). In other words, \(P\) satisfies the following condition: (9) \[Y(Pa,z)Pb=P(Y(Pa,z)b)+P(Y(a,z)Pb)+\lambda P(Y(a,z)b),\quad a,b\in V.\] We denote the set of RBOs on \(V\) by \(\operatorname{RBO}(V)=\bigcap_{m\in\mathbb{Z}}\operatorname{RBO}(V)(m)\).
3. An \((m\)-)ordinary RBO \(P\) on \(V\) is called **translation invariant** if \(PD=DP\), where \(D\) is the translation operator: \(Da=a_{-2}\mathbf{1}\).
4. Let \(V\) be a VOA, and let \(P\) be an \((m\)-)ordinary RBO. \(P\) is called **homogeneous of degree \(N\)** if \(P(V_{n})\subseteq V_{n+N}\) for all \(n\in\mathbb{N}\). Degree zero RBOs are called **level preserving**.
A **Rota-Baxter vertex algebra (RBVA)** is a vertex algebra \((V,Y,\mathbf{1})\), equipped with an ordinary RBO \(P:V\to V\) of weight \(\lambda\). We denote such an algebra by \((V,Y,\mathbf{1},P)\). We can similarly define a Rota-Baxter vertex operator algebra (RBVOA) \((V,Y,\mathbf{1},\omega,P)\).
**Remark 3.2**.: The notions of \(m\)-RBOs and RBOs on VOAs are closely related to the tensor form of the classical Yang-Baxter equations for VOAs; see [4] for more details.
It is clear that for any vertex algebra \(V\), \(P=-\lambda\mathrm{Id}_{V}\) satisfies (9). Hence any vertex algebra can be viewed as an RBVA trivially in this way.
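Indeed, for \(P=-\lambda\mathrm{Id}_{V}\) both sides of (9) equal \(\lambda^{2}Y(a,z)b\):
\[Y(Pa,z)Pb=\lambda^{2}Y(a,z)b,\qquad P(Y(Pa,z)b)+P(Y(a,z)Pb)+\lambda P(Y(a,z)b)=(\lambda^{2}+\lambda^{2}-\lambda^{2})Y(a,z)b.\]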
Let \((V,Y,\mathbf{1},\omega,P)\) be an RBVOA of weight \(\lambda\). Recall that (cf. [10]) the first level \(\mathfrak{g}=V_{1}\) is a Lie algebra, with the Lie bracket \([a,b]=a_{0}b\), for all \(a,b\in\mathfrak{g}\). Then it follows from (8) (with \(m=0\)) that \((\mathfrak{g},P|_{\mathfrak{g}})\) is a Rota-Baxter Lie algebra. Conversely, if \(p:\mathfrak{g}\to\mathfrak{g}\) is an RBO of the Lie algebra \(\mathfrak{g}\), and \(\mathfrak{g}\) is the first level \(V_{1}\) of a VOA \(V\), then \(p\) can be easily extended to a \(0\)-ordinary RBO \(P:V\to V\) by letting \(P|_{V_{1}}=p\) and \(P(V_{n})=0\), for all \(n\neq 1\); see Example 2.10 in [4].
Our definition of the Rota-Baxter operators for vertex algebras is similar to the \(R\)-matrix for VOAs in [43] where an \(R\)**-matrix** for a VOA \((V,Y,\mathbf{1},\omega)\) is defined to be a linear map \(R:V\to V\) such that \([R,L(-1)]=0\), and \(Y_{R}:V\to(\operatorname{End}V)[[z,z^{-1}]]\) defined by
\[Y_{R}(a,z)b:=Y(Ra,z)b+Y(a,z)Rb \tag{10}\]
satisfies the Jacobi identity. The following conclusion is proved by Xu in [43].
**Proposition 3.3**.: _Let \((V,Y,\mathbf{1},\omega)\) be a VOA. If a linear map \(R:V\to V\) satisfies \([R,L(-1)]=0\) and the so called "modified Yang-Baxter equation"_
\[Y(Ra,z)Rb-R(Y_{R}(a,z)b)=\lambda Y(a,z)b, \tag{11}\]
_where \(\lambda=0\) or \(-1\), then \(R\) is an \(R\)-matrix for \(V\)._
Note that, in view of (10), an operator \(R\) satisfying (11) with \(\lambda=0\) is precisely a Rota-Baxter operator of weight \(0\) on \(V\) in the sense of (9). Hence a linear map \(P:V\to V\) on a VOA \(V\) is a translation invariant RBO of weight \(0\) if and only if it is an \(R\)-matrix of \(V\) in the sense of [43].
On the other hand, the identity (11) with \(\lambda=-1\) for \(P\) is equivalent to the Rota-Baxter identity (9) with \(\lambda=-1\) for the operator \(Q=(\operatorname{Id}-P)/2\), by a direct calculation, as in the classical cases of associative algebras and Lie algebras (see [23]).
Let \((A,d,1_{A})\) be a commutative unital differential algebra. Recall that \(A\) is a vertex algebra (cf. [10]) with the vertex operator \(Y\) given by
\[Y(a,z)b:=(e^{zd}a)\cdot b,\quad a,b\in A, \tag{12}\]
and \(\mathbf{1}:=1_{A}\). The differential operator \(d\) of \(A\) is the translation operator \(D\) in (4). In particular, let \(V=\mathbb{C}[t]\) be the polynomial algebra with variable \(t\). Define
\[Y(t^{m},z)t^{n}:=(e^{z\frac{d}{dt}}t^{m})\cdot t^{n}=\sum_{j\geq 0}\binom{m}{j}t^{m+n-j}z^{j},\quad m,n\in\mathbb{N}. \tag{13}\]
Then \((\mathbb{C}[t],Y,1)\) is a vertex algebra.
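For instance, for the generator \(t\), the formula (13) gives
\[Y(t,z)t^{n}=(e^{z\frac{d}{dt}}t)\cdot t^{n}=(t+z)\cdot t^{n}=t^{n+1}+t^{n}z,\]
so the only nonzero modes of \(t\) acting on \(t^{n}\) are \(t_{-1}t^{n}=t^{n+1}\) and \(t_{-2}t^{n}=t^{n}\).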
**Proposition 3.4**.: _Let \(P:\mathbb{C}[t]\to\mathbb{C}[t]\) be the usual (integration) Rota-Baxter operator on \(\mathbb{C}[t]\):_
\[P(t^{m})=\int_{0}^{t}s^{m}ds=\frac{t^{m+1}}{m+1}\quad m\in\mathbb{N}.\]
_Then \((\mathbb{C}[t],Y,1,P)\) is an RBVA of weight \(0\)._
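As a direct check of Proposition 3.4 in the smallest nontrivial case, take \(a=b=t\), so that \(P(t)=t^{2}/2\). Then
\[Y(Pt,z)Pt=\frac{(t+z)^{2}}{2}\cdot\frac{t^{2}}{2}=\frac{t^{4}}{4}+\frac{t^{3}}{2}z+\frac{t^{2}}{4}z^{2},\]
while
\[P(Y(Pt,z)t)+P(Y(t,z)Pt)=P\Big{(}\frac{t^{3}}{2}+t^{2}z+\frac{t}{2}z^{2}\Big{)}+P\Big{(}\frac{t^{3}}{2}+\frac{t^{2}}{2}z\Big{)}=\frac{t^{4}}{4}+\Big{(}\frac{1}{3}+\frac{1}{6}\Big{)}t^{3}z+\frac{t^{2}}{4}z^{2},\]
and the two sides agree since \(\frac{1}{3}+\frac{1}{6}=\frac{1}{2}\).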
We give this result as a special case of a more general setting. The divided power algebra is the vector space \(A=\bigoplus_{m=0}^{\infty}\mathbb{C}t_{m}\), equipped with the product \(t_{m}\cdot t_{n}:=\binom{m+n}{n}t_{m+n},\quad m,n\in\mathbb{N}.\) Then \(\cdot\) is commutative and associative with unit \(1_{A}=t_{0}\). Note that \(d:A\to A,\ d(t_{m})=t_{m-1}\) (with \(t_{-1}:=0\)) is a derivation on \(A\), and so \((A,Y,1_{A})\) is a vertex algebra, with
\[Y(t_{m},z)t_{n}:=(e^{zd}t_{m})\cdot t_{n}=\sum_{j\geq 0}\frac{(m+n-j)!}{(m-j)!n!j!}t_{m+n-j}z^{j},\quad m,n\in\mathbb{N}. \tag{14}\]
**Proposition 3.5**.: _Define_
\[P:A\to A,\quad P(t_{m}):=t_{m+1},m\in\mathbb{N}.\]
_Then \((A,Y,1_{A},P)\) is an RBVA of weight \(0\)._
As is well known, the polynomial algebra \(\mathbb{C}[t]\), with the basis \(t_{n}:=t^{n}/n!\), is a (realization of the) divided power algebra. Thus Proposition 3.4 is a consequence of Proposition 3.5.
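Explicitly, under the identification \(t_{n}=t^{n}/n!\) we have
\[t_{m}\cdot t_{n}=\frac{t^{m}}{m!}\cdot\frac{t^{n}}{n!}=\binom{m+n}{n}\frac{t^{m+n}}{(m+n)!}=\binom{m+n}{n}t_{m+n},\qquad P(t_{m})=\int_{0}^{t}\frac{s^{m}}{m!}\,ds=\frac{t^{m+1}}{(m+1)!}=t_{m+1},\]
and \(d(t_{m})=\frac{d}{dt}\,\frac{t^{m}}{m!}=t_{m-1}\), matching the divided power structure above.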
Proof of Proposition 3.5.: For \(m,n\in\mathbb{N}\), by (14) we have
\[Y(Pt_{m},z)Pt_{n} =Y(t_{m+1},z)t_{n+1}=\sum_{j\geq 0}\frac{(m+n+2-j)!}{(m+1-j)!(n+1)! j!}t_{m+n+2-j}z^{j},\] \[P(Y(Pt_{m},z)t_{n}) =P(Y(t_{m+1},z)t_{n})=P\bigg{(}\sum_{j\geq 0}\frac{(m+n+1-j)!}{(m+ 1-j)!n!j!}t_{m+n+1-j}z^{j}\bigg{)}\] \[=\sum_{j\geq 0}\frac{(m+n+1-j)!}{(m+1-j)!n!j!}t_{m+n+2-j}z^{j},\] \[P(Y(t_{m},z)Pt_{n}) =P(Y(t_{m},z)t_{n+1})=P\bigg{(}\sum_{j\geq 0}\frac{(m+n+1-j)!}{(m -j)!(n+1)!j!}t_{m+n+1-j}z^{j}\bigg{)}\] \[=\sum_{j\geq 0}\frac{(m+n+1-j)!}{(m-j)!(n+1)!j!}t_{m+n+2-j}z^{j}.\]
Since we have
\[\sum_{j\geq 0}\frac{(m+n+1-j)!}{(m+1-j)!n!j!}+\sum_{j\geq 0}\frac{(m +n+1-j)!}{(m-j)!(n+1)!j!}\] \[=\sum_{j\geq 0}\frac{(m+n+1-j)!(n+1)+(m+n+1-j)!(m+1-j)}{(m+1-j)!(n +1)!j!}\] \[=\sum_{j\geq 0}\frac{(m+n+1-j)!(m+n+2-j)}{(m+1-j)!(n+1)!j!}= \sum_{j\geq 0}\frac{(m+n+2-j)!}{(m+1-j)!(n+1)!j!},\]
it follows that \(Y(Pt_{m},z)Pt_{n}=P(Y(Pt_{m},z)t_{n})+P(Y(t_{m},z)Pt_{n})\). Hence \((A,Y,1_{A},P)\) is an RBVA of weight \(0\).
Both \((\mathbb{C}[t],\frac{d}{dt},P,1_{\mathbb{C}[t]})\) and \((A=\bigoplus_{m=0}^{\infty}\mathbb{C}t_{m},d,P,1_{A})\) are special cases of commutative unital differential Rota-Baxter algebras \((A,d,P,1_{A})\). The latter means that \(d\) is a derivation on \(A\), \(P\) is an RBO of weight \(0\) on \(A\), and \(d\circ P=\operatorname{Id}_{A}\); see [24] for more details. We have the following property. See Corollary 3.12 for another related result.
**Proposition 3.6**.: _Let \((A,d,P,1_{A})\) be a commutative unital differential Rota-Baxter algebra, and let \(Y(a,z)b=(e^{zd}a)\cdot b\). Then we have:_
\[Y(Pa,z)Pb-P(Y(Pa,z)b)-P(Y(a,z)Pb)\in(\ker d)[[z]],\quad a,b\in V.\]
_In particular, \((A,Y,1_{A},P)\) is an RBVA of weight \(0\) if \(\ker d=0\)._
Proof.: First we note that \(P(a)_{-1}P(b)=P(P(a)_{-1}b)+P(a_{-1}P(b))\) for all \(a,b\in A\), since the product of \(A\) is given by \(x\cdot y=x_{-1}y\) for all \(x,y\in A\).
Now assume that \(n\geq 1\). By (3) and (4) we have \(d(a_{-n}b)=(da)_{-n}b+a_{-n}db\) and \((da)_{-n}=na_{-n-1}\). Moreover, \(a-Pd(a)\in\ker d\) for all \(a\in A\), as \(d\circ P=\operatorname{Id}_{A}\); hence we have:
\[nP(a)_{-n-1}P(b)-nP(P(a)_{-n-1}b)-nP(a_{-n-1}P(b))\] \[=(dP(a))_{-n}P(b)-P((dP(a))_{-n}b)-P((da)_{-n}P(b))\] \[=a_{-n}P(b)-Pd(a_{-n}P(b))\] \[\equiv 0\pmod{\ker d}.\]
This finishes the proof because \(Y(a,z)b=\sum_{n\geq 0}(a_{-n-1}b)z^{n}\).
We also have the notion of relative Rota-Baxter operators introduced in [4], as the operator form of the classical Yang-Baxter equation for VOAs.
**Definition 3.7**.: Let \((V,Y,\mathbf{1},\omega)\) be a VOA and \((W,Y_{W})\) be a weak \(V\)-module. A **relative Rota-Baxter operator (relative RBO)** is a linear map \(T:W\to V\) such that
\[Y(Tu,z)Tv=T(Y_{W}(Tu,z)v)+T(Y_{WV}^{W}(u,z)Tv),\quad u,v\in W. \tag{15}\]
In Section 4, we will use the dendriform vertex algebra structure to give examples of relative Rota-Baxter operators on vertex algebras without vacuum.
### The \(\lambda\)-derivations and the weak local Rota-Baxter operators
Propositions 3.4 and 3.5 indicate that a right inverse \(P\) of the translation operator \(D\) on certain commutative vertex algebras can give rise to an ordinary RBO of weight \(0\).
However, in the case of non-commutative VOAs, the translation operator \(D=L(-1)\) and most of the derivations are _not_ invertible globally--they only admit local inverses. On the other hand, by its defining identity (9), if \(P:V\to V\) is an RBO, then we must have \(P(a)_{m}P(b)\in P(V)\) for
all \(a,b\in V\) and \(m\in\mathbb{Z}\); i.e., \(P(V)\subseteq V\) is a vertex Leibniz subalgebra (see Definition 2.4). This is also a strong condition imposed on \(P\). If we weaken these conditions, we can expect to construct more examples of Rota-Baxter type operators from the "right inverses" of \(\lambda\)-derivations on a vertex algebra \(V\), defined on a suitable domain.
**Definition 3.8**.: Let \((V,Y,\mathbf{1})\) be a vertex algebra, \(\lambda\in\mathbb{C}\) be a fixed complex number, and \(U\subset V\) be a linear subspace.
1. A **weak local Rota-Baxter operator (RBO) on \(U\) of weight \(\lambda\)** is a linear map \(P:U\to V\), satisfying the following condition. Whenever \(a,b\in U\) and \(m\in\mathbb{Z}\) such that \(P(a)_{m}P(b)\in P(U)\), one has \(a_{m}(Pb)+(Pa)_{m}b+\lambda a_{m}b\in U\), and (16) \[(Pa)_{m}(Pb)=P\Big{(}a_{m}(Pb)+(Pa)_{m}b+\lambda a_{m}b\Big{)}.\] A **weak global RBO of weight \(\lambda\)** is a weak local RBO on \(V\).
2. An **ordinary local RBO on \(U\) of weight \(\lambda\)** is a weak local RBO \(P:U\to V\) of weight \(\lambda\) such that \(P(U)\) is a vertex Leibniz subalgebra of \(V\). In other words, \(P:U\to V\) is a linear map satisfying: (17) \[Y(Pa,z)Pb=P\Big{(}Y(Pa,z)b+Y(a,z)Pb+\lambda Y(a,z)b\Big{)},\quad a,b\in U.\] An **ordinary global RBO of weight \(\lambda\)** is an ordinary local RBO on \(V\). In particular, by equation (17), the notion of ordinary global RBOs is the same as the notion of ordinary RBOs in Definition 3.1.
A local RBO (weak or ordinary) \(P:U\to V\) is called **translation invariant**, if \(DU\subseteq U\) and \(PD=DP\) on \(U\). Let \(V\) be a VOA, and let \(P:U\to V\) be a local RBO. Then \(P\) is called **homogeneous of degree \(N\)**, if \(U\subset V\) is a homogeneous subspace: \(U=\bigoplus_{n=0}^{\infty}U_{n}\), and \(P(U_{n})\subseteq V_{n+N}\) for all \(n\in\mathbb{N}\).
**Remark 3.9**.: There are key distinctions between a weak local RBO \(P\) and an ordinary RBO. One is that here \(P\) is defined locally on a subspace \(U\) of \(V\). The other one is that in equations (16) and (17), we do not require \(P(a)_{m}b\) or \(a_{m}P(b)\) to be individually contained in the domain \(U\) of \(P\). So their right hand sides cannot be separated into three terms, as in (8) and (9). The following diagram illustrates the relations between these concepts:
\[\begin{array}{ccc}\{\text{ordinary global RBOs}\}&\subseteq&\{\text{weak global RBOs}\}\\ \cap&&\cap\\ \{\text{ordinary local RBOs}\}&\subseteq&\{\text{weak local RBOs}\}\end{array}\]
We extend the following properties on RBOs to weak and ordinary local RBOs for later use.
**Proposition 3.10**.: _Let \((V,Y,\mathbf{1})\) be a vertex algebra, and \(U\subset V\) be a linear subspace._
1. _If_ \(P:U\to V\) _is a weak (resp. ordinary) local RBO on_ \(U\) _of weight_ \(\lambda\neq 0\)_, then_ \(-P/\lambda\) _is a weak (resp. ordinary) local RBO on_ \(U\) _of weight_ \(-1\)_. If_ \(P:U\to V\) _is a weak (resp. ordinary) local RBO of weight_ \(1\)_, then_ \(\lambda P\) _is a weak (resp. ordinary) local RBO of weight_ \(\lambda\)_._
2. _Let_ \(P\) _be an ordinary local RBO on_ \(U\) _of weight_ \(\lambda\)_. Then_ \(\tilde{P}=-\lambda\mathrm{Id}_{V}-P\) _is an ordinary local RBO on_ \(U\) _of weight_ \(\lambda\)_._
Proof.: (i) Let \(P:U\to V\) be a weak local RBO of weight \(\lambda\neq 0\). Let \(a,b\in U\) and \(n\in\mathbb{Z}\) satisfy \((-P/\lambda)(a)_{n}(-P/\lambda)(b)\in(-P/\lambda)(U)=P(U)\). Then \(P(a)_{n}P(b)\in P(U)\), and by (16), we
have \(a_{n}P(b)+(Pa)_{n}b+\lambda a_{n}b\in U\), and \((Pa)_{n}(Pb)=P(a_{n}(Pb)+(Pa)_{n}b+\lambda a_{n}b)\). It follows that \(a_{n}(-P/\lambda)(b)+((-P/\lambda)(a))_{n}b-a_{n}b\in U\) and
\[((-P/\lambda)(a))_{n}((-P/\lambda)(b))=(-P/\lambda)(a_{n}(-P/\lambda)(b)+((-P/ \lambda)(a))_{n}b-a_{n}b).\]
Thus, \(-P/\lambda:U\to V\) is a weak local RBO of weight \(-1\). The proof of the rest for (i) and (ii) is similar.
An immediate advantage of the local RBOs is that a local inverse of a weak \(\lambda\)-derivation (see Definition 2.9) of a vertex algebra gives rise to a weak local RBO of weight \(\lambda\).
**Proposition 3.11**.: _Let \((V,Y,\mathbf{1})\) be a vertex algebra, and let \(d:V\to V\) be a weak \(\lambda\)-derivation. Suppose that there exists a linear map \(P:U(:=dV)\to V\) such that \(d\circ P=\mathrm{Id}_{U}\). Then \(P:U\to V\) is a weak local RBO on \(U\) of weight \(\lambda\)._
Proof.: Let \(a,b\in U\) and \(n\in\mathbb{Z}\) satisfy \((Pa)_{n}(Pb)=P(c)\in P(U)\). Then we have
\[dP(c) =d((Pa)_{n}(Pb))=(dP)(a)_{n}(Pb)+(Pa)_{n}(dPb)+\lambda(dPa)_{n}( dPb)\] \[=a_{n}(Pb)+(Pa)_{n}b+\lambda a_{n}b,\]
and \(dP(c)=c\) since \(dP=\mathrm{Id}_{U}\). Thus, \(a_{n}(Pb)+(Pa)_{n}b+\lambda a_{n}b=c\in U\), and
\[(Pa)_{n}(Pb)=PdP(c)=P(a_{n}(Pb)+(Pa)_{n}b+\lambda a_{n}b).\]
Hence \(P:U\to V\) satisfies (16), and so \(P\) is a weak local RBO on \(U\) of weight \(\lambda\).
**Corollary 3.12**.: _Let \((A,d,P,1_{A})\) be a commutative unital differential Rota-Baxter algebra. Then the vertex algebra \((A,Y,1_{A},P)\) with \(Y\) given by (12) is an RBVA of weight \(0\), if \(P\) satisfies \(P(a)\cdot P(b)\in P(A)\) and \((d^{n}a)\cdot P(b)\in P(A)\), for all \(a,b\in A\) and \(n\in\mathbb{N}\)._
Proof.: By (3), (4) and Definition 2.9, \(d=D:A\to A\) is a weak \(0\)-derivation of the vertex algebra \((A,Y,1_{A})\). Since \(d\circ P=\mathrm{Id}_{A}\) by the definition of a differential RBA, it follows that \(dA=A\), and \(P:A(=dA)\to A\) is a weak global RBO of weight \(0\) on the vertex algebra \(A\) by Proposition 3.11. If \(P\) satisfies the last condition, then \(Y(P(a),z)P(b)=P(a)\cdot P(b)+\sum_{j\geq 1}\frac{z^{j}}{j!}(d^{j-1}a)\cdot P(b)\in P(A)((z))\), and so \(P:A\to A\) is an ordinary RBO of weight \(0\).
By (13) and (14), it is easy to check that the conditions in Corollary 3.12 are satisfied by \((\mathbb{C}[t],\frac{d}{dt},P,1_{\mathbb{C}[t]})\) and \((A=\bigoplus_{m=0}^{\infty}\mathbb{C}t_{m},d,P,1_{A})\). This provides us with another proof of Propositions 3.4 and 3.5.
There are many examples of weak \(0\)-derivations on VOAs. We can use them to construct examples of weak local RBOs on general VOAs by Proposition 3.11.
**Example 3.13**.: Let \((V,Y,\mathbf{1},\omega)\) be a CFT-type VOA. By the main Theorem in [16], the operator \(L(-1):V\to V\) is injective on \(V_{+}\). Moreover, we have \(L(-1)\mathbf{1}=\mathbf{1}_{-2}\mathbf{1}=0\), and \(L(-1)\) is a weak \(0\)-derivation by (3) and (4).
Let \(U=L(-1)V=L(-1)V_{+}\). Define \(P:U\to V\) by setting
\[P(u):=L(-1)^{-1}u, \tag{18}\]
for all \(u\in L(-1)V_{+}\). Clearly, \(P\) is well defined and \(L(-1)P=\mathrm{Id}_{U}\). Then by Proposition 3.11, \(P:U\to V\) given by (18) is a weak local RBO on \(U=L(-1)V_{+}\) of weight \(0\), and it is homogeneous of degree \(-1\) and translation invariant. Note that \(P\) is not ordinary since \(P(U)=V_{+}\) is not a vertex Leibniz subalgebra of \(V\).
**Example 3.14**.: Let \(V=M_{\hat{\mathfrak{h}}}(k,0)\) be the level \(k\neq 0\) Heisenberg VOA of rank \(r\) (cf. [20], see also [22]). Recall that \(\mathfrak{h}\) is an \(r\)-dimensional vector space, equipped with a nondegenerate symmetric bilinear form \((\cdot|\cdot)\), and \(M_{\hat{\mathfrak{h}}}(k,0)\) is the Verma module over the infinite-dimensional Heisenberg Lie algebra \(\hat{\mathfrak{h}}=\mathfrak{h}\otimes\mathbb{C}[t,t^{-1}]\oplus\mathbb{C}K\), with
\[[\alpha(m),\beta(n)]=m(\alpha|\beta)\delta_{m+n,0}K,\quad m,n\in\mathbb{Z}, \tag{19}\]
where \(\alpha(m)=\alpha\otimes t^{m}\). We have \(\hat{\mathfrak{h}}=\hat{\mathfrak{h}}_{\geq 0}\oplus\hat{\mathfrak{h}}_{<0}\), where \(\hat{\mathfrak{h}}_{\geq 0}=\mathfrak{h}\otimes\mathbb{C}[t]\oplus\mathbb{C}K\) and \(\hat{\mathfrak{h}}_{<0}=\mathfrak{h}\otimes t^{-1}\mathbb{C}[t^{-1}]\), and \(M_{\hat{\mathfrak{h}}}(k,0)=U(\hat{\mathfrak{h}})\otimes_{U(\hat{\mathfrak{h}}_{\geq 0})}\mathbb{C}\mathbf{1}\), with \(\alpha(n)\mathbf{1}=0\) for all \(n\geq 0\) and \(\alpha\in\mathfrak{h}\), and \(K\mathbf{1}=k\mathbf{1}\).
In particular, \(\alpha(0)\in\hat{\mathfrak{h}}\) is a central element by (19), and \(\alpha(0)u=0\) for all \(u\in M_{\hat{\mathfrak{h}}}(k,0)\). Fix a nonzero element \(\alpha\in\mathfrak{h}\), and consider the operator \(d=\alpha(1):V\to V\). For \(u,v\in V=M_{\hat{\mathfrak{h}}}(k,0)\) and \(n\in\mathbb{Z}\), we have
\[\alpha(1)(u_{n}v)=u_{n}(\alpha(1)v)+[\alpha(1),u_{n}]v=u_{n}(\alpha(1)v)+\sum _{j\geq 0}\binom{1}{j}(\alpha(j)u)_{1+n-j}v=u_{n}(\alpha(1)v)+(\alpha(1)u)_{n}v,\]
since \(\alpha(0)u=0\). Thus, \(d=\alpha(1)\) is a weak \(0\)-derivation on \(M_{\hat{\mathfrak{h}}}(k,0)\). By (19), it is also easy to see that \(d=\alpha(1)\) acts as \(k\frac{\partial}{\partial\alpha(-1)}\) on \(M_{\hat{\mathfrak{h}}}(k,0)\). Hence \(\alpha(1)M_{\hat{\mathfrak{h}}}(k,0)=M_{\hat{\mathfrak{h}}}(k,0)\). Define a linear map \(P:\alpha(1)V=V\to V\) as follows.
\[\begin{split} P:=\frac{1}{k}\int(\cdot)\,d\alpha(-1):M_{\hat{\mathfrak{h}}}(k,0)&\to M_{\hat{\mathfrak{h}}}(k,0),\\ h^{1}(-n_{1})\cdots h^{l}(-n_{l})\alpha(-1)^{m}\mathbf{1}&\mapsto\frac{1}{k(m+1)}h^{1}(-n_{1})\cdots h^{l}(-n_{l})\alpha(-1)^{m+1}\mathbf{1},\end{split} \tag{20}\]
where \(S=\{\alpha=\alpha_{1},\alpha_{2},\dots,\alpha_{r}\}\) is a basis of \(\mathfrak{h}\), and \(h^{1},\dots,h^{l}\in S\) are not equal to \(\alpha\). Clearly, we have \(dP=\operatorname{Id}_{V}\), and so \(P:V\to V\) is a weak global RBO on \(M_{\hat{\mathfrak{h}}}(k,0)\) of weight \(0\) by Proposition 3.11. The operator \(P\) is also homogeneous of degree \(1\). However, it is not an ordinary RBO since \(P(V)=\alpha(-1)M_{\hat{\mathfrak{h}}}(k,0)\) is not a vertex Leibniz subalgebra.
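To spell out the claim \(dP=\operatorname{Id}_{V}\) (assuming, as the description of \(d\) implicitly does, that \(S\) is an orthogonal basis with \((\alpha|\alpha)=1\)): by (19), \([\alpha(1),\alpha(-1)]=K\) acts as \(k\) on \(M_{\hat{\mathfrak{h}}}(k,0)\), while \(\alpha(1)\) commutes with each \(h^{i}(-n_{i})\) for \(h^{i}\neq\alpha\) and kills \(\mathbf{1}\). Hence
\[\alpha(1)\big{(}h^{1}(-n_{1})\cdots h^{l}(-n_{l})\alpha(-1)^{m+1}\mathbf{1}\big{)}=k(m+1)\,h^{1}(-n_{1})\cdots h^{l}(-n_{l})\alpha(-1)^{m}\mathbf{1},\]
which cancels the factor \(\frac{1}{k(m+1)}\) in (20).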
**Example 3.15**.: Let \(V=V_{\hat{\mathfrak{g}}}(k,0)\) be the level \(k\) vacuum module VOA associated with \(\mathfrak{g}=\mathfrak{sl}(2,\mathbb{C})=\mathbb{C}e\oplus\mathbb{C}h\oplus\mathbb{C}f\) (cf. [22]). Let \(V_{\hat{\mathfrak{g}}}(k,0)=U(\hat{\mathfrak{g}})\otimes_{U(\hat{\mathfrak{g}}_{\geq 0})}\mathbb{C}\mathbf{1}\) be the Weyl vacuum module over the affine Lie algebra \(\hat{\mathfrak{g}}=\mathfrak{g}\otimes\mathbb{C}[t,t^{-1}]\oplus\mathbb{C}K\).
Since \(h=h(-1)\mathbf{1}\in V_{1}\), the map \(d=o(h)=h(0):V\to V\) is a \(0\)-derivation of \(V\) (cf. [13]). Moreover, \(V_{\hat{\mathfrak{g}}}(k,0)\) is a sum of \(h(0)\)-eigenspaces ([18]): \(V_{\hat{\mathfrak{g}}}(k,0)=\bigoplus_{\lambda\in 2\mathbb{Z}}V_{\hat{\mathfrak{g}}}(k,0)(\lambda)\), where \(V_{\hat{\mathfrak{g}}}(k,0)(\lambda)=\{v\in V_{\hat{\mathfrak{g}}}(k,0)\,|\,h(0)v=\lambda v\}\) for all \(\lambda\in 2\mathbb{Z}\).
Let \(U\) be the sum of the nonzero eigenspaces of \(h(0)\): \(U=\bigoplus_{\lambda\in 2\mathbb{Z}\setminus\{0\}}V_{\hat{\mathfrak{g}}}(k,0)(\lambda)\), and let \(P:U\to V\) be given by \(P(u)=\frac{1}{\lambda}u\) for all \(u\in V_{\hat{\mathfrak{g}}}(k,0)(\lambda)\), with \(\lambda\neq 0\). Then \(dP=\operatorname{Id}_{U}\), and so \(P:U\to V\) is a weak local RBO on \(U\) of weight \(0\). Moreover, \(P\) is homogeneous of degree \(0\). However, it is not ordinary since \(P(U)=U\) is not a vertex Leibniz subalgebra.
Let \(d_{1}:=e^{h(0)}-1:V\to V\). Then \(d_{1}\) is a \(1\)-derivation by Proposition 2.11. Let \(P_{1}:U\to V\) be given by \(P_{1}(u)=\frac{1}{e^{\lambda}-1}u\), for all \(u\in V_{\hat{\mathfrak{g}}}(k,0)(\lambda)\), with \(\lambda\neq 0\). Then \(d_{1}P_{1}=\operatorname{Id}_{U}\), and by Proposition 3.11, \(P_{1}:U\to V\) is a weak local RBO of weight \(1\).
### Properties and further examples of RBVAs
The next theorem generalizes a basic property of RBOs (see [23, Theorem 1.1.13]) and provides a systematic way to produce examples of RBVAs.
**Theorem 3.16**.: _Let \((V,Y,\mathbf{1})\) be a vertex algebra and \(P:V\to V\) a linear map. Then \(P\) is an idempotent RBO of weight \(-1\) if and only if \(V\) admits a decomposition \(V=V^{1}\oplus V^{2}\) into a direct
sum of vertex Leibniz subalgebras \(V^{1}\) and \(V^{2}\), and \(P:V\to V^{1}\) is the projection map onto \(V^{1}\):_
\[P(a^{1}+a^{2})=a^{1},\quad a^{1}\in V^{1},a^{2}\in V^{2}.\]
_In particular, \(V^{1}=P(V)\) and \(V^{2}=(\operatorname{Id}-P)(V)\)._
Proof.: Let \(P:V\to V\) be an idempotent RBO of weight \(-1\). Then the idempotency gives the direct sum \(V=P(V)\oplus(\operatorname{Id}-P)(V)\). Further, \(V^{1}=P(V)\subseteq V\) is closed under the vertex operator \(Y\), since by (9), we have
\[(Pa)_{n}(Pb)=P(a_{n}P(b)+P(a)_{n}b-a_{n}b)\in P(V),\quad a,b\in V.\]
Hence \((V^{1},Y|_{V^{1}})\) is a vertex Leibniz subalgebra of \(V\). By Proposition 3.10, \(V^{2}=(\operatorname{Id}-P)(V)\) is also a vertex Leibniz subalgebra.
Conversely, suppose that \(V\) has a decomposition \(V=V^{1}\oplus V^{2}\) into vertex Leibniz subalgebras, and \(P:V\to V^{1}\) is the projection. Then \(P\) is idempotent. Further, for \(a=a^{1}+a^{2}\) and \(b=b^{1}+b^{2}\) in \(V\), with \(a^{i},b^{i}\in V^{i}\) for \(i=1,2\), we have
\[P(a)_{n}P(b) =a_{n}^{1}b^{1},\quad P((Pa)_{n}b)=P(a_{n}^{1}b^{1}+a_{n}^{1}b^{2 })=a_{n}^{1}b^{1}+P(a_{n}^{1}b^{2}),\] \[P(a_{n}P(b)) =P(a_{n}^{1}b^{1}+a_{n}^{2}b^{1})=a_{n}^{1}b^{1}+P(a_{n}^{2}b^{1}),\] \[P(a_{n}b) =P(a_{n}^{1}b^{1}+a_{n}^{1}b^{2}+a_{n}^{2}b^{1}+a_{n}^{2}b^{2})=a_{ n}^{1}b^{1}+P(a_{n}^{1}b^{2})+P(a_{n}^{2}b^{1}).\]
It follows that \(P(a)_{n}P(b)=P((Pa)_{n}b)+P(a_{n}P(b))-P(a_{n}b)\), for all \(a,b\in V\). Thus \(P:V\to V\) is an idempotent RBO of weight \(-1\).
**Example 3.17**.: Let \(V=V_{L}\) be the lattice VOA (cf. [20]) associated with the rank one positive-definite even lattice \(L=\mathbb{Z}\alpha\), with \((\alpha|\alpha)=2N\) for some \(N\in\mathbb{Z}_{>0}\). Recall that \(V_{L}=M_{\hat{\mathfrak{h}}}(1,0)\otimes\mathbb{C}_{\epsilon}[L]\), where \(\mathfrak{h}=\mathbb{C}\alpha\), and \(\epsilon:L\times L\to\{\pm 1\}\) is a \(2\)-cocycle. The vertex operators are given by
\[Y(\alpha(-1)\mathbf{1},z)=\alpha(z)=\sum_{n\in\mathbb{Z}}\alpha( n)z^{-n-1},\] \[Y(e^{m\alpha},z)=E^{-}(-m\alpha,z)E^{+}(-m\alpha,z)e_{m\alpha}z^ {m\alpha},\] \[Y(\alpha(-n_{1}-1)\ldots\alpha(-n_{k}-1)e^{m\alpha},z)={}_{ \circ}^{\circ}(\partial_{z}^{(n_{1})}\alpha(z))\ldots(\partial_{z}^{(n_{k})} \alpha(z))Y(e^{m\alpha},z){}_{\circ}^{\circ},\]
where \(E^{\pm}(\alpha,z)=\exp\left(\sum_{n\in\mathbb{Z}_{\pm}}\frac{\alpha(n)}{n}z^{-n}\right)\), and \(\partial_{z}^{(n)}=\frac{1}{n!}\frac{d^{n}}{dz^{n}}.\) Also recall that \(V_{L}\) has the following decomposition as a module over the Heisenberg VOA \(M_{\hat{\mathfrak{h}}}(1,0)\):
\[V_{\mathbb{Z}\alpha}=\bigoplus_{m\in\mathbb{Z}}M_{\hat{\mathfrak{h}}}(1,m\alpha). \tag{21}\]
Note that for \(m,n\in\mathbb{Z}_{<0}\), we have
\[Y(e^{m\alpha},z)e^{n\alpha} =E^{-}(-m\alpha,z)E^{+}(-m\alpha,z)e_{m\alpha}z^{m\alpha}(e^{n \alpha})\] \[=E^{-}(-m\alpha,z)E^{+}(-m\alpha,z)\epsilon(m\alpha,n\alpha)z^{2 Nmn}e^{(m+n)\alpha},\]
which is contained in \(M_{\hat{\mathfrak{h}}}(1,(m+n)\alpha)((z))\), with \(m+n\in\mathbb{Z}_{<0}\), in view of the decomposition (21). Then it follows that
\[V^{1}:=\bigoplus_{m\in\mathbb{Z}_{\geq 0}}M_{\hat{\mathfrak{h}}}(1,m\alpha)\quad\text{and}\quad V^{2}:=\bigoplus_{m\in\mathbb{Z}_{<0}}M_{\hat{\mathfrak{h}}}(1,m\alpha)\]
are vertex Leibniz subalgebras of \(V_{\mathbb{Z}\alpha}\), and \(V_{\mathbb{Z}\alpha}=V^{1}\oplus V^{2}\). Note that \(\mathbf{1}\in M_{\hat{\mathfrak{h}}}(1,0)\subset V^{1}\). Then by Theorem 3.16, the projection \(P:V_{\mathbb{Z}\alpha}\to V^{1}\) is a level-preserving idempotent RBO of weight \(-1\). This construction can also be generalized to the higher rank case. Let
\(L=\mathbb{Z}\alpha_{1}\oplus\cdots\oplus\mathbb{Z}\alpha_{n}\) be a positive-definite even lattice of rank \(n\), with an orthogonal basis \(\{\alpha_{1},\ldots,\alpha_{n}\}\). Consider the following subsets of \(L\), each closed under addition:
\[L^{1}:=\mathbb{Z}\alpha_{1}\oplus\ldots\mathbb{Z}\alpha_{n-1}\oplus\mathbb{Z}_{ \geq 0}\alpha_{n}\quad\text{and}\quad L^{2}:=\mathbb{Z}\alpha_{1}\oplus\ldots \mathbb{Z}\alpha_{n-1}\oplus\mathbb{Z}_{<0}\alpha_{n}. \tag{22}\]
Then we have \(L=L^{1}\cup L^{2}\) and \(L^{1}\cap L^{2}=\emptyset\). Let
\[V^{1}:=\bigoplus_{\alpha\in L^{1}}M_{\hat{\mathfrak{h}}}(1,\alpha)\quad\text{and}\quad V^{2}:=\bigoplus_{\beta\in L^{2}}M_{\hat{\mathfrak{h}}}(1,\beta). \tag{23}\]
Then, for the same reason as in the rank one case, \(V^{1}\) and \(V^{2}\) are vertex Leibniz subalgebras of \(V_{L}\), and so the projection map \(P:V_{L}\to V^{1}\) along \(V^{2}\) is a level-preserving RBO of weight \(-1\). Furthermore, recall that the Virasoro element of \(V_{L}\) is \(\omega=\frac{1}{2}\sum_{i=1}^{n}\frac{1}{(\alpha_{i}|\alpha_{i})}\alpha_{i}(-1)^{2}\mathbf{1}\). In particular, we have \(L(-1)M_{\hat{\mathfrak{h}}}(1,\alpha)\subseteq M_{\hat{\mathfrak{h}}}(1,\alpha)\) for \(\alpha\in L\). Thus \(L(-1)V^{i}\subseteq V^{i}\) for \(i=1,2\), and so the RBO \(P:V_{L}\to V^{1}\) is translation invariant: \(PL(-1)=L(-1)P\).
There is another general construction of ordinary RBOs on vertex algebras. First, we have the following lemma whose proof is straightforward.
**Lemma 3.18**.: _Let \((V,Y,\mathbf{1})\) be a vertex algebra, and \((A,\cdot,1)\) be a commutative unital associative algebra. Let \(\hat{V}:=V\otimes_{\mathbb{C}}A\). Extend \(Y:V\to\operatorname{End}(V)[[z,z^{-1}]]\) to_
\[\hat{Y}:\hat{V}\to\operatorname{End}(\hat{V})[[z,z^{-1}]],\quad\hat{Y}(a \otimes f,z)(b\otimes g):=Y(a,z)b\otimes f\cdot g,\quad a,b\in V,f,g\in A. \tag{24}\]
_Then \((\hat{V},\hat{Y},\mathbf{1}\otimes 1)\) is a vertex algebra, and the translation operator \(\hat{D}:\hat{V}\to\hat{V}\) is given by \(\hat{D}(a\otimes f)=a_{-2}\mathbf{1}\otimes f=D(a)\otimes f\)._
**Proposition 3.19**.: _With the notations as above, let \((V,Y,\mathbf{1})\) be a vertex algebra, and \((A,\cdot,1,P)\) be a commutative unital Rota-Baxter algebra of weight \(\lambda\). Let_
\[\hat{P}:\hat{V}\to\hat{V},\quad\hat{P}(a\otimes f):=a\otimes P(f),\quad a \otimes f\in\hat{V}. \tag{25}\]
_Then \((\hat{V},\hat{Y},\mathbf{1}\otimes 1,\hat{P})\) is a Rota-Baxter vertex algebra of weight \(\lambda\)._
Proof.: By (24) and (25), we have
\[\hat{P}\left(\hat{Y}(\hat{P}(a\otimes f),z)b\otimes g\right) =\hat{P}\left(Y(a,z)b\otimes P(f)\cdot g\right)=Y(a,z)b\otimes P( P(f)\cdot g),\] \[\hat{P}\left(\hat{Y}(a\otimes f,z)\hat{P}(b\otimes g)\right) =\hat{P}\left(Y(a,z)b\otimes f\cdot P(g)\right)=Y(a,z)b\otimes P (f\cdot P(g)),\] \[\lambda\hat{P}\left(\hat{Y}(a\otimes f,z)b\otimes g\right) =\lambda\hat{P}\left(Y(a,z)b\otimes f\cdot g\right)=Y(a,z)b \otimes\lambda P(f\cdot g),\] \[\hat{Y}\left(\hat{P}(a\otimes f),z\right)\hat{P}(b\otimes g) =\hat{Y}\left(a\otimes P(f),z\right)b\otimes P(g)=Y(a,z)b \otimes P(f)\cdot P(g),\]
for all \(a,b\in V\), and \(f,g\in A\). Since \(P(f)\cdot P(g)=P(P(f)\cdot g)+P(f\cdot P(g))+\lambda P(f\cdot g)\), it follows that \(\hat{P}:\hat{V}\to\hat{V}\) is an RBO of weight \(\lambda\) on the vertex algebra \(\hat{V}\).
**Example 3.20**.: Recall that the field of formal Laurent series \(\mathbb{C}((t))\) has a decomposition into subalgebras: \(\mathbb{C}((t))=t^{-1}\mathbb{C}[t^{-1}]\oplus\mathbb{C}[[t]]\), and the projection operator \(P:\mathbb{C}((t))\to t^{-1}\mathbb{C}[t^{-1}]\) is an RBO of weight \(-1\) (see [23, Example 1.1.10]).
Then by Proposition 3.19, for any vertex algebra \((V,Y,\mathbf{1})\), the projection map
\[\hat{P}:V\otimes\mathbb{C}((t))\to V\otimes t^{-1}\mathbb{C}[t^{-1}],\quad\hat{ P}(a\otimes f(t))=a\otimes P(f(t)),\quad a\in V,f(t)\in\mathbb{C}((t)),\]
is an RBO of weight \(-1\) on the vertex algebra \((\hat{V}=V\otimes\mathbb{C}((t)),\hat{Y},\mathbf{1}\otimes 1)\) given by Lemma 3.18, and \(\hat{P}\) is translation invariant. In fact, this conclusion also follows from Theorem 3.16, since it is clear that \(V\otimes t^{-1}\mathbb{C}[t^{-1}]\) and \(V\otimes\mathbb{C}[[t]]\) are vertex Leibniz subalgebras of \(V\otimes\mathbb{C}((t))\).
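At the level of the underlying Rota-Baxter algebra \(\mathbb{C}((t))\), the weight \(-1\) identity is easy to verify directly. For instance, for \(f=g=t^{-1}\) we have \(P(f)=f\) and \(P(g)=g\), so
\[P(f)\cdot P(g)=t^{-2}=P(P(f)\cdot g)+P(f\cdot P(g))-P(f\cdot g),\]
while for \(f=t^{-1}\) and \(g=1\) both sides vanish, since \(P(g)=0\) and \(P(P(f)\cdot g)=P(f\cdot g)=t^{-1}\).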
We can similarly build examples of RBVAs from other commutative unital Rota-Baxter algebras, such as those in [23].
Although it is generally not easy to classify all the Rota-Baxter operators on a given VOA, we can determine all the homogeneous Rota-Baxter operators of non-positive degree on certain CFT-type VOAs (as defined in Definition 2.7).
**Lemma 3.21**.: _Let \((V,Y,\mathbf{1},\omega)\) be a VOA of CFT-type, and \(P:V\to V\) be a homogeneous RBO of weight \(\lambda\neq 0\) and degree \(N\leq 0\). Then we have \(P(\mathbf{1})=0\) or \(P(\mathbf{1})=-\lambda\mathbf{1}\). Furthermore, \(P^{2}+\lambda P=0\)._
Proof.: Since \(V_{n}=0\) for \(n<0\), \(V_{0}=\mathbb{C}\mathbf{1}\), and \(PV_{0}\subseteq V_{N}\) for some \(N\leq 0\), we have \(P(\mathbf{1})=\mu\mathbf{1}\) for some \(\mu\in\mathbb{C}\). Recall that \(\mathbf{1}_{-1}\mathbf{1}=\mathbf{1}\). Then by (9), we have
\[P(\mathbf{1})_{-1}P(\mathbf{1})=P(P(\mathbf{1})_{-1}\mathbf{1})+P(\mathbf{1}_ {-1}P(\mathbf{1}))+\lambda P(\mathbf{1}_{-1}\mathbf{1}),\]
yielding \(\mu^{2}\mathbf{1}=\mu^{2}\mathbf{1}+\mu^{2}\mathbf{1}+\lambda\mu\mathbf{1}\), i.e., \(\mu^{2}+\lambda\mu=0\). Hence \(\mu\) is either \(0\) or \(-\lambda\), i.e., \(P(\mathbf{1})=0\) or \(-\lambda\mathbf{1}\). Furthermore, again by (9) we have
\[P(a)_{-1}P(\mathbf{1})=P(P(a)_{-1}\mathbf{1})+P(a_{-1}P(\mathbf{1}))+\lambda P (a_{-1}\mathbf{1}),\quad a\in V. \tag{26}\]
If \(P(\mathbf{1})=0\), then (26) becomes \(0=P(P(a))+P(a_{-1}0)+\lambda P(a)\), and so \(P^{2}(a)+\lambda P(a)=0\) for all \(a\in V\). On the other hand, if \(P(\mathbf{1})=-\lambda\mathbf{1}\), then \(-\lambda P(a)=P(P(a))-\lambda P(a)+\lambda P(a)\), which also implies \(P^{2}(a)+\lambda P(a)=0\), for all \(a\in V\). Therefore, \(P^{2}+\lambda P=0\).
**Proposition 3.22**.: _Let \((V,Y,\mathbf{1},\omega)\) be a VOA of CFT-type, and \(P:V\to V\) be a homogeneous RBO of weight \(\lambda\neq 0\) and degree \(N\leq 0\). Then \(V=V^{1}\oplus V^{2}\), where \(V^{1}\) and \(V^{2}\) are graded vertex Leibniz subalgebras of \(V\), with \(V_{n}=V_{n}^{1}\oplus V_{n}^{2}\) for each \(n\in\mathbb{N}\), and_
\[P:V\to V^{1},a^{1}+a^{2}\mapsto-\lambda a^{1},\quad a^{i}\in V^{i},i=1,2.\]
_Moreover, we have \(P(\mathbf{1})=0\) if and only if \(V_{0}^{1}=0\), and \(P(\mathbf{1})=-\lambda\mathbf{1}\) if and only if \(V_{0}^{2}=0\)._
Proof.: Since \(P^{2}+\lambda P=0\) by Lemma 3.21, and \(\lambda\neq 0\) by assumption, the operator \(-P/\lambda\) is idempotent, and by Proposition 3.10 it is an RBO on \(V\) of weight \(-1\). By Theorem 3.16, we have \(V=V^{1}\oplus V^{2}\), where \(V^{1}=(-P/\lambda)(V)=P(V)\) and \(V^{2}=\ker(-P/\lambda)=\ker P\) are vertex Leibniz subalgebras, and \(-P/\lambda\) is the projection
\[-\frac{P}{\lambda}:V\to V^{1},a^{1}+a^{2}\mapsto a^{1}.\]
Hence \(P(a^{1}+a^{2})=-\lambda a^{1}\). Moreover, since \(P(V_{n})\subset V_{n+N}\) for all \(n\in\mathbb{N}\), we have \(V^{1}=PV=\bigoplus_{m=-N}^{\infty}P(V_{m})=\bigoplus_{n=0}^{\infty}V_{n}^{1}\), and \(V^{2}=\bigoplus_{m=-N}^{\infty}\ker(P|_{V_{m}})=\bigoplus_{n=0}^{\infty}V_{n}^ {2}\), where \(V_{n}^{i},i=1,2\), are eigenspaces of \(L(0)\) of eigenvalue \(n\), and \(V_{n}=V_{n}^{1}\oplus V_{n}^{2}\) for each \(n\in\mathbb{N}\). Now the last statement is also clear as \(V_{0}^{1}\oplus V_{0}^{2}=V_{0}=\mathbb{C}\mathbf{1}\).
**Corollary 3.23**.: _Let \(V\) be the level one Heisenberg VOA \(M_{\hat{\mathfrak{h}}}(1,0)\) associated with \(\mathfrak{h}=\mathbb{C}\alpha\) or the Virasoro VOA \(L(c,0)\) (see [22]), and let \(P:V\to V\) be a homogeneous RBO of degree \(N\leq 0\) and weight \(\lambda\neq 0\). Then \(P\) is either \(0\) or \(-\lambda\mathrm{Id}_{V}\)._
Proof.: Let \(V=M_{\hat{\mathfrak{h}}}(1,0)\) or \(L(c,0)\). By Proposition 3.22, \(V=V^{1}\oplus V^{2}\) for some graded vertex Leibniz subalgebras \(V^{1},V^{2}\subset V\). But \(V\) is generated by a single homogeneous element: \(V=M_{\hat{\mathfrak{h}}}(1,0)\) is generated as a vertex algebra by \(\alpha(-1)\mathbf{1}\in V_{1}\) and \(V_{1}=\mathbb{C}\alpha(-1)\mathbf{1}=V_{1}^{1}\oplus V_{1}^{2}\), and \(V=L(c,0)\) is generated by \(\omega\in V_{2}=\mathbb{C}\omega=V_{2}^{1}\oplus V_{2}^{2}\). Then the single generator \(u\) of \(V\) is contained in either \(V^{1}\) or \(V^{2}\) in both cases. If \(u\in V^{1}\), then \(V=V^{1}\) and \(P=-\lambda\mathrm{Id}_{V}\); if \(u\in V^{2}\), then \(V=V^{2}\) and \(P=0\).
**Definition 3.24**.: Let \((V,Y,\mathbf{1},P)\) be an RBVA of weight \(\lambda\). Define a new linear operator \(Y^{\star_{P}}:V\rightarrow(\operatorname{End}V)[[z,z^{-1}]]\) as follows:
\[Y^{\star_{P}}(a,z)b:=Y(a,z)Pb+Y(Pa,z)b+\lambda Y(a,z)b,\quad a,b\in V. \tag{27}\]
Note that \(Y^{\star_{P}}\) is a generalization of \(Y_{R}\) in (10).
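In terms of components, (27) reads
\[a^{\star_{P}}_{m}b=a_{m}P(b)+P(a)_{m}b+\lambda a_{m}b,\quad a,b\in V,\ m\in\mathbb{Z},\]
which is the vertex analog of the classical derived product \(x\star_{P}y=xP(y)+P(x)y+\lambda xy\) on a Rota-Baxter algebra.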
**Lemma 3.25**.: _The operator \(Y^{\star_{P}}\) satisfies the truncation property and the skew-symmetry (5). If, in addition, \(P\) is translation invariant (\(DP=PD\)), then \(Y^{\star_{P}}\) also satisfies the \(D\)-derivative property (3) and the \(D\)-bracket derivative property (4)._
Proof.: Given \(a,b\in V\), since \(Y(a,z)Pb\), \(Y(Pa,z)b\) and \(\lambda Y(a,z)b\) are all truncated from below, we have \(Y^{\star_{P}}(a,z)b\in V((z))\). Moreover, by (27) and the skew-symmetry of \(Y\), we have
\[Y^{\star_{P}}(a,z)b=e^{zD}Y(Pb,-z)a+e^{zD}Y(b,-z)Pa+\lambda e^{zD}Y(b,-z)a=e^{ zD}Y^{\star_{P}}(b,-z)a.\]
Hence \(Y^{\star_{P}}\) also satisfies the skew-symmetry. Now assume that \(DP=PD\). By (27), (3) and (4) for \(Y\), we have
\[\begin{split}Y^{\star_{P}}(Da,z)b&=Y(Da,z)Pb+Y(PDa,z)b+\lambda Y(Da,z)b\\ &=\frac{d}{dz}Y(a,z)Pb+\frac{d}{dz}Y(Pa,z)b+\lambda\frac{d}{dz}Y(a,z)b=\frac{d}{dz}Y^{\star_{P}}(a,z)b,\\ [D,Y^{\star_{P}}(a,z)]b&=DY(a,z)Pb-Y(a,z)PDb+[D,Y(Pa,z)]b+\lambda[D,Y(a,z)]b\\ &=[D,Y(a,z)]Pb+[D,Y(Pa,z)]b+\lambda[D,Y(a,z)]b=\frac{d}{dz}Y^{\star_{P}}(a,z)b.\end{split}\]
Hence \(Y^{\star_{P}}\) satisfies the \(D\)-derivative and \(D\)-bracket derivative properties.
The next theorem is the vertex algebra analog of the derived product structure of RBOs, first discovered for Lie algebras by Semenov-Tian-Shansky [40]. See [23, Theorem 1.1.17] for associative algebras. For vertex algebras, it shows that \(Y^{\star_{P}}\) gives a new structure of a vertex Leibniz algebra (see Definition 2.4) or a vertex algebra without vacuum (see Definition 2.5) on an RBVA \((V,Y,\mathbf{1},P)\).
**Theorem 3.26**.: _Let \((V,Y,\mathbf{1},P)\) be an RBVA of weight \(\lambda\), and \(Y^{\star_{P}}\) be given by (27). Then we have_
1. \(P(Y^{\star_{P}}(a,z)b)=Y(Pa,z)Pb\)_, for all_ \(a,b\in V\)_._
2. \((V,Y^{\star_{P}})\) _is a vertex Leibniz algebra. If, furthermore,_ \(P\) _is translation invariant, then_ \((V,Y^{\star_{P}},D)\) _is a vertex algebra without vacuum._
3. \(P\) _is an RBO of weight_ \(\lambda\) _on the vertex Leibniz algebra_ \((V,Y^{\star_{P}})\)_._
Proof.: (i) By (9) and (27), we have
\[Y(Pa,z)Pb=P(Y(a,z)Pb)+P(Y(Pa,z)b)+\lambda P(Y(a,z)b)\] \[=P(Y(a,z)Pb+Y(Pa,z)b+\lambda Y(a,z)b)\] \[=P(Y^{\star_{P}}(a,z)b).\]
(ii) By Lemma 3.25, to show that \((V,Y^{\star_{P}})\) is a vertex Leibniz algebra, we only need to show that \(Y^{\star_{P}}\) satisfies the Jacobi identity, or equivalently, the weak commutativity and weak associativity, in view of Theorem 2.3.
We only prove the weak commutativity. The proof of the weak associativity is similar. Given \(a,b,c\in V\), we need to find some \(N\in\mathbb{N}\) (depending on \(a\) and \(b\)) such that
\[(z_{1}-z_{2})^{N}Y^{\star_{P}}(a,z_{1})Y^{\star_{P}}(b,z_{2})c=(z_{1}-z_{2})^{N }Y^{\star_{P}}(b,z_{2})Y^{\star_{P}}(a,z_{1})c. \tag{28}\]
We again apply (27) to expand \(Y^{\star_{P}}(a,z_{1})Y^{\star_{P}}(b,z_{2})c\) and \(Y^{\star_{P}}(b,z_{2})Y^{\star_{P}}(a,z_{1})c\) as follows.
\[Y^{\star_{P}}(a,z_{1})Y^{\star_{P}}(b,z_{2})c\] \[=Y(a,z_{1})P(Y(b,z_{2})Pc)+Y(Pa,z_{1})Y(b,z_{2})Pc+\lambda Y(a,z_{1})Y(b,z_{2})Pc\] \[\quad+Y(a,z_{1})P(Y(Pb,z_{2})c)+Y(Pa,z_{1})Y(Pb,z_{2})c+\lambda Y(a,z_{1})Y(Pb,z_{2})c\] \[\quad+\lambda Y(a,z_{1})P(Y(b,z_{2})c)+\lambda Y(Pa,z_{1})Y(b,z_{2})c+\lambda^{2}Y(a,z_{1})Y(b,z_{2})c;\] \[Y^{\star_{P}}(b,z_{2})Y^{\star_{P}}(a,z_{1})c\] \[=Y(b,z_{2})P(Y(a,z_{1})Pc)+Y(Pb,z_{2})Y(a,z_{1})Pc+\lambda Y(b,z_{2})Y(a,z_{1})Pc\] \[\quad+Y(b,z_{2})P(Y(Pa,z_{1})c)+Y(Pb,z_{2})Y(Pa,z_{1})c+\lambda Y(b,z_{2})Y(Pa,z_{1})c\] \[\quad+\lambda Y(b,z_{2})P(Y(a,z_{1})c)+\lambda Y(Pb,z_{2})Y(a,z_{1})c+\lambda^{2}Y(b,z_{2})Y(a,z_{1})c.\]
By (9) again, we have
\[Y(a,z_{1})P(Y(b,z_{2})Pc)+Y(a,z_{1})P(Y(Pb,z_{2})c)+\lambda Y(a,z_{1})P(Y(b,z_{2})c)\] \[=Y(a,z_{1})Y(Pb,z_{2})Pc,\] \[Y(b,z_{2})P(Y(a,z_{1})Pc)+Y(b,z_{2})P(Y(Pa,z_{1})c)+\lambda Y(b,z_{2})P(Y(a,z_{1})c)\] \[=Y(b,z_{2})Y(Pa,z_{1})Pc.\]
We can find a common natural number \(N\in\mathbb{N}\), depending on \(a\) and \(b\), that ensures all the following weak commutativities of \(Y\):
\[(z_{1}-z_{2})^{N}Y(a,z_{1})Y(Pb,z_{2})Pc =(z_{1}-z_{2})^{N}Y(Pb,z_{2})Y(a,z_{1})Pc,\] \[(z_{1}-z_{2})^{N}Y(b,z_{2})Y(Pa,z_{1})Pc =(z_{1}-z_{2})^{N}Y(P(a),z_{1})Y(b,z_{2})Pc,\] \[(z_{1}-z_{2})^{N}Y(a,z_{1})Y(b,z_{2})Pc =(z_{1}-z_{2})^{N}Y(b,z_{2})Y(a,z_{1})Pc,\] \[(z_{1}-z_{2})^{N}Y(Pa,z_{1})Y(Pb,z_{2})c =(z_{1}-z_{2})^{N}Y(Pb,z_{2})Y(Pa,z_{1})c,\] \[(z_{1}-z_{2})^{N}Y(a,z_{1})Y(Pb,z_{2})c =(z_{1}-z_{2})^{N}Y(Pb,z_{2})Y(a,z_{1})c,\] \[(z_{1}-z_{2})^{N}Y(Pa,z_{1})Y(b,z_{2})c =(z_{1}-z_{2})^{N}Y(b,z_{2})Y(Pa,z_{1})c,\] \[(z_{1}-z_{2})^{N}Y(a,z_{1})Y(b,z_{2})c =(z_{1}-z_{2})^{N}Y(b,z_{2})Y(a,z_{1})c.\]
This shows (28) by comparing the expansions. Thus, \((V,Y^{\star_{P}})\) is a vertex Leibniz algebra. If the RBO \(P\) is also translation invariant, then by Lemma 3.25, \((V,Y^{\star_{P}},D)\) is a vertex algebra without vacuum.
(iii) By (27) and (i), we have
\[Y^{\star_{P}}(Pa,z)Pb =Y(Pa,z)P(Pb)+Y(P(Pa),z)P(b)+\lambda Y(Pa,z)Pb\] \[=P(Y^{\star_{P}}(a,z)Pb)+P(Y^{\star_{P}}(Pa,z)b)+\lambda P(Y^{ \star_{P}}(a,z)b).\]
Thus \(P\) satisfies the identity (9) with respect to \(Y^{\star_{P}}\); that is, \(P\) is an RBO of weight \(\lambda\) on the vertex Leibniz algebra \((V,Y^{\star_{P}})\).
**Remark 3.27**.: If \(\lambda=0\), then Theorem 3.26 gives an alternative proof to Proposition 3.3. In particular, if \(R=P:V\to V\) is an RBO of weight \(0\), or equivalently, satisfies the "modified Yang-Baxter equation", then \(Y_{R}(a,z)b=Y(Ra,z)b+Y(a,z)Rb=Y^{\star_{P}}(a,z)b\) satisfies the Jacobi identity of VOAs, or equivalently, \(R=P\) is an \(R\)-matrix for \(V\).
**Example 3.28**.: Let \(V=V_{L}\) be a lattice VOA, and let \(P:V_{L}\to V^{1}\) be the projection RBO in Example 3.17, where \(L=L^{1}\sqcup L^{2}\) and \(V_{L}=V^{1}\oplus V^{2}\) as in (22) and (23). For \(a=a^{1}+a^{2}\)
and \(b=b^{1}+b^{2}\) in \(V_{L}\), where \(a^{i},b^{i}\in V^{i}\) for \(i=1,2\), we have \(P(a^{1})=a^{1},P(b^{1})=b^{1}\), and \(P(a^{2})=P(b^{2})=0\). Then by (27), with \(\lambda=-1\), we have
\[Y^{\star_{P}}(a,z)b =Y(a,z)Pb+Y(Pa,z)b-Y(a,z)b\] \[=Y(a^{1}+a^{2},z)b^{1}+Y(a^{1},z)(b^{1}+b^{2})-Y(a^{1}+a^{2},z)(b^{1}+b^{2})\] \[=Y(a^{1},z)b^{1}-Y(a^{2},z)b^{2}.\]
Since \(P\) is translation invariant, by Theorem 3.26, \((V_{L},Y^{\star_{P}},L(-1))\) is a vertex algebra without vacuum, with \(Y^{\star_{P}}\) given by
\[Y^{\star_{P}}(a,z)b=Y(a^{1},z)b^{1}-Y(a^{2},z)b^{2}. \tag{29}\]
Note that the vacuum element \(\mathbf{1}\) of \(V_{L}\) is contained in \(M_{\hat{\mathfrak{h}}}(1,0)\subset V^{1}\), and it cannot be the vacuum element of \((V_{L},Y^{\star_{P}},L(-1))\), since \(Y^{\star_{P}}(\mathbf{1},z)a^{2}=0\) for all \(a^{2}\in V^{2}\) by (29).
## 4. Dendriform vertex algebras
In this section, we study the dendriform structure under the framework of vertex algebras, which has an intimate relation with the Rota-Baxter operators on vertex algebras. Since the usual dendriform axioms are the splitting axioms of associativity, while the key axiom of a vertex algebra is the Jacobi identity, we take the viewpoint that the Jacobi identity is the combination of weak associativity together with the \(D\)-bracket derivative property and skew-symmetry (cf. [27, 33]), and define our dendriform structures gradually. In particular, we show that our definitions preserve the usual properties of a dendriform algebra, and they give rise to some new kinds of Jacobi identities (see Theorem 4.11).
### Dendriform field and vertex algebras
A **dendriform algebra** is a vector space \(A\) over a field \(k\), equipped with two binary operators \(\prec\) and \(\succ\), satisfying
\[(x\prec y)\prec z =x\prec(y\prec z+y\succ z), \tag{30}\] \[(x\succ y)\prec z =x\succ(y\prec z), \tag{31}\] \[(x\prec y+x\succ y)\succ z =x\succ(y\succ z),\quad x,y,z\in A. \tag{32}\]
Given a dendriform algebra \((A,\prec,\succ)\), the product
\[x\star y:=x\prec y+x\succ y,\quad x,y\in A. \tag{33}\]
is associative. Furthermore, Rota-Baxter algebras give rise to dendriform algebras. For instance, a Rota-Baxter algebra \((R,P)\) of weight \(0\) defines a dendriform algebra \((R,\prec_{P},\succ_{P})\)[1], where
\[x\prec_{P}y=xP(y),\qquad x\succ_{P}y=P(x)y,\quad x,y\in R. \tag{34}\]
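For the reader's convenience, here is the verification of the axiom (30) for the operations (34): using the associativity of \(R\) and the weight-\(0\) Rota-Baxter identity \(P(x)P(y)=P(P(x)y)+P(xP(y))\),
\[(x\prec_{P}y)\prec_{P}z=(xP(y))P(z)=x\big{(}P(y)P(z)\big{)}=xP\big{(}P(y)z+yP(z)\big{)}=x\prec_{P}(y\succ_{P}z+y\prec_{P}z);\]
the axioms (31) and (32) follow similarly. Summing the three axioms also recovers the associativity of \(\star\) in (33).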
The associative analog of vertex algebras is the notion of field algebras [8], or the nonlocal vertex algebras [33]. A **field algebra**\((V,Y,\mathbf{1},D)\) is a vector space \(V\), equipped with a linear map \(Y:V\to(\mathrm{End}V)[[z,z^{-1}]]\), a distinguished vector \(\mathbf{1}\), and a linear map \(D:V\to V\), satisfying the truncation property, the vacuum and creation properties, the \(D\)-derivative and \(D\)-bracket derivative properties (3) and (4), and the weak associativity (2).
Since the axioms of a dendriform algebra are extracted from the associativity axiom, the weak associativity axiom (2) of a field algebra is enough for our purpose. We introduce the following notion, which can be viewed as a weaker version of both field algebras and vertex Leibniz algebras.
**Definition 4.1**.: A **field Leibniz algebra** is a vector space \(V\), equipped with a linear map \(Y:V\to(\operatorname{End}V)[[z,z^{-1}]]\), satisfying the truncation property and the weak associativity (2). We denote a field Leibniz algebra by \((V,Y)\).
An **ordinary Rota-Baxter operator (ordinary RBO)** on a field Leibniz algebra \((V,Y)\) of weight \(\lambda\in\mathbb{C}\) is a linear map \(P:V\to V\), satisfying the compatibility (9).
By a similar argument as the proof of Proposition 2.6 (cf. [27]), together with Lemma 2.7 in [35], we can derive the following relation between the field Leibniz algebras and the vertex algebras without vacuum.
**Proposition 4.2**.: _Let \((V,Y)\) be a field Leibniz algebra. Suppose that there exists a linear map \(D:V\to V\) satisfying the D-bracket derivative property (4) and the skew-symmetry (5). Then \((V,Y,D)\) is a vertex algebra without vacuum._
Recall the notions of vertex Leibniz algebras and vertex algebras without vacuum in Definitions 2.4 and 2.5, respectively. It is clear that we have the following embedding of categories.
\[\text{vertex alg. without vacuum}\subset\text{ vertex Leibniz alg.}\subset\text{ field Leibniz alg.} \tag{35}\]
Inspired by the axioms (30)-(32) of dendriform algebras and the weak associativity (2), we expect to decompose the vertex operator \(Y(\cdot,z)\) into a sum of two operators \(Y(\cdot,z)=Y_{<}(\cdot,z)+Y_{>}(\cdot,z)\), whose properties are consistent with both the Rota-Baxter type axiom and the weak associativity axiom. Furthermore, we want the embedding of categories (35) to be reflected as well. This leads us to the following definition.
**Definition 4.3**.: Let \(V\) be a vector space, and
\[Y_{<}(\cdot,z):V\to\operatorname{Hom}(V,V((z))),\ a\mapsto Y_{<}(a,z),\] \[Y_{>}(\cdot,z):V\to\operatorname{Hom}(V,V((z))),\ a\mapsto Y_{> }(a,z)\]
be two linear operators associated with a formal variable \(z\). For simplicity, we denote \(Y_{<}(\cdot,z)\) by \(\cdot\prec_{z}\cdot\) and \(Y_{>}(\cdot,z)\) by \(\cdot\succ_{z}\cdot\), respectively, and write
\[Y_{<}(a,z)b=a\prec_{z}b,\quad Y_{>}(a,z)b=a\succ_{z}b,\quad a,b\in V.\]
1. A triple \((V,\prec_{z},\succ_{z})\) is called a **dendriform field algebra** if for \(a,b,c\in V\), there exists some \(N\in\mathbb{N}\) depending on \(a\) and \(c\), satisfying (36) \[(z_{0}+z_{2})^{N}(a\prec_{z_{0}}b)\prec_{z_{2}}c =(z_{0}+z_{2})^{N}a\prec_{z_{0}+z_{2}}(b\succ_{z_{2}}c+b\prec_{z _{2}}c),\] (37) \[(z_{0}+z_{2})^{N}(a\succ_{z_{0}}b)\prec_{z_{2}}c =(z_{0}+z_{2})^{N}a\succ_{z_{0}+z_{2}}(b\prec_{z_{2}}c),\] (38) \[(z_{0}+z_{2})^{N}(a\succ_{z_{0}}b+a\prec_{z_{0}}b)\succ_{z_{2}}c =(z_{0}+z_{2})^{N}a\succ_{z_{0}+z_{2}}(b\succ_{z_{2}}c).\]
2. A triple \((V,\prec_{z},\succ_{z})\) is called a **dendriform vertex Leibniz algebra** if it is a dendriform field algebra and satisfies the following additional conditions: for \(a,b,c\in V\), there exists some \(N\in\mathbb{N}\) depending on \(a\) and \(b\), such that (39) \[(z_{1}-z_{2})^{N}a\succ_{z_{1}}(b\prec_{z_{2}}c) =(z_{1}-z_{2})^{N}b\prec_{z_{2}}(a\succ_{z_{1}}c+a\prec_{z_{1}}c),\] (40) \[(z_{1}-z_{2})^{N}a\succ_{z_{1}}(b\succ_{z_{2}}c) =(z_{1}-z_{2})^{N}b\succ_{z_{2}}(a\succ_{z_{1}}c).\]
3. Let \(D:V\to V\) be a linear map. A quadruple \((V,\prec_{z},\succ_{z},D)\) is called a **dendriform vertex algebra (without vacuum)** if \((V,\prec_{z},\succ_{z})\) is a dendriform field algebra, and \(D\), \(\prec_{z}\), and \(\succ_{z}\) satisfy the following compatibility properties: (41) \[a\prec_{z}b=e^{zD}(b\succ_{-z}a)\quad\text{and}\quad a\succ_{z}b=e^{zD}(b\prec_{-z}a);\]
\[D(a\prec_{z}b)-a\prec_{z}(Db)=\frac{d}{dz}(a\prec_{z}b)\quad\text{and}\quad D(a \succ_{z}b)-a\succ_{z}(Db)=\frac{d}{dz}(a\succ_{z}b). \tag{42}\]
**Remark 4.4**.: By the definition above, any dendriform vertex Leibniz algebra is automatically a dendriform field algebra. Later, we will see that the equations (36)-(38) are the underlying axioms of the weak associativity (2), while the equations (39) and (40) are the underlying axioms of the weak commutativity (1).
We will also show that the equations (36)-(38) are equivalent to equations (39) and (40) if there exists \(D:V\to V\) satisfying (41) and (42) (Proposition 4.8). Thus any dendriform vertex algebra is also a dendriform vertex Leibniz algebra (Corollary 4.9), and we have a similar embedding of categories as in (35):
\[\text{dendriform vertex alg.}\subset\text{dendriform vertex Leibniz alg.} \subset\text{dendriform field alg.} \tag{43}\]
We can regard the equality (41) as the analog of skew-symmetry (5) satisfied by the partial operators \(\prec_{z}\) and \(\succ_{z}\). The equality (42) can be viewed as the \(D\)-bracket derivative property (4) for the partial operators. Similar to Lemma 2.7 in [35], we have the following result.
**Lemma 4.5**.: _Let \(V\) be a vector space, equipped with two operators \(\prec_{z},\succ_{z}\): \(V\times V\to V((z))\) and a linear operator \(D:V\to V\), satisfying the skew-symmetry (41). Then the \(D\)-bracket derivative property (42) is equivalent to the following \(D\)-derivative property._
\[(Da)\prec_{z}b=\frac{d}{dz}(a\prec_{z}b),\qquad(Da)\succ_{z}b=\frac{d}{dz}(a \succ_{z}b),\quad a,b\in V. \tag{44}\]
_In particular, a dendriform vertex algebra \((V,\prec_{z},\succ_{z},D)\) can be defined as a dendriform field algebra \((V,\prec_{z},\succ_{z})\) that satisfies (41) and (44)._
Proof.: Similar to the proof of Lemma 2.7 in [35], for \(a,b\in V\), we have
\[(Da)\prec_{z}b-\frac{d}{dz}(a\prec_{z}b) =e^{zD}b\succ_{-z}Da-De^{zD}(b\succ_{-z}a)-e^{zD}\frac{d}{dz}(b\succ_{-z}a)\] \[=e^{zD}\left(b\succ_{-z}Da-D(b\succ_{-z}a)-\frac{d}{dz}(b\succ_{-z}a)\right),\] \[(Da)\succ_{z}b-\frac{d}{dz}(a\succ_{z}b) =e^{zD}b\prec_{-z}Da-De^{zD}(b\prec_{-z}a)-e^{zD}\frac{d}{dz}(b\prec_{-z}a)\] \[=e^{zD}\left(b\prec_{-z}Da-D(b\prec_{-z}a)-\frac{d}{dz}(b\prec_{-z}a)\right).\]
Thus, (42) is equivalent to (44).
The axioms of the dendriform algebra are closely related to the properties of Rota-Baxter operators. As their vertex algebra analogs, the following theorem shows that an RBVA of weight \(\lambda\) gives rise to a dendriform field algebra, and an RBVA of weight \(0\) gives rise to a dendriform vertex algebra.
**Theorem 4.6**.: _Let \((V,Y,\mathbf{1},P)\) be an RBVA of weight \(\lambda\)._
1. _Let_ \(\lambda\) _be arbitrary. Then_ \((V,Y,\mathbf{1},P)\) _defines a dendriform field algebra_ \((V,\prec_{z}^{\prime},\succ_{z}^{\prime})\)_, where_ (45) \[a\prec_{z}^{\prime}b=Y(a,z)P(b)+\lambda Y(a,z)b,\qquad a\succ_{z}^{\prime}b= Y(P(a),z)b,\quad a,b\in V.\]
2. _If_ \(\lambda=0\)_, then_ \((V,Y,\mathbf{1},P)\) _defines a dendriform vertex Leibniz algebra_ \((V,\prec_{z},\succ_{z})\)_, where_ (46) \[a\prec_{z}b:=Y(a,z)P(b),\qquad a\succ_{z}b:=Y(P(a),z)b,\quad a,b\in V.\] _If_ \(P\) _is also translation invariant, then_ \((V,\prec_{z},\succ_{z},D)\) _is a dendriform vertex algebra._
Proof.: (i). To verify equation (36), we have
\[(z_{0}+z_{2})^{N}(a\prec_{z_{0}}^{\prime}b)\prec_{z_{2}}^{\prime}c =(z_{0}+z_{2})^{N}(Y(Y(a,z_{0})P(b)+\lambda Y(a,z_{0})b,z_{2})P(c)\] \[+\lambda Y(Y(a,z_{0})P(b)+\lambda Y(a,z_{0})b,z_{2})c).\]
On the other hand,
\[(z_{0}+z_{2})^{N}a\prec_{z_{0}+z_{2}}^{\prime}(b\succ_{z_{2}}^{ \prime}c+b\prec_{z_{2}}^{\prime}c)\] \[=(z_{0}+z_{2})^{N}(Y(a,z_{0}+z_{2})P(Y(P(b),z_{2})c+Y(b,z_{2})P(c )+\lambda Y(b,z_{2})c)\] \[+\lambda Y(a,z_{0}+z_{2})(Y(P(b),z_{2})c+Y(b,z_{2})P(c)+\lambda Y (b,z_{2})c))\] \[=(z_{0}+z_{2})^{N}(Y(a,z_{0}+z_{2})Y(P(b),z_{2})P(c)\] \[+\lambda Y(a,z_{0}+z_{2})(Y(P(b),z_{2})c+Y(b,z_{2})P(c)+\lambda Y (b,z_{2})c)).\]
Take a common \(N\) that ensures the weak associativity (2) for \((a,P(b),P(c))\), \((a,b,P(c))\), \((a,P(b),c)\), and \((a,b,c)\) at the same time. Then equation (36) holds. Equations (37) and (38) can be proved similarly; we omit the details.
(ii). By taking \(\lambda=0\), we see that \((V,\prec_{z},\succ_{z})\) given by (46) is a dendriform field algebra. Furthermore, by (46) and (9), we have
\[a\succ_{z_{1}}(b\prec_{z_{2}}c) =a\succ_{z_{1}}(Y(b,z_{2})P(c))=Y(P(a),z_{1})Y(b,z_{2})P(c),\] \[b\prec_{z_{2}}(a\succ_{z_{1}}c+a\prec_{z_{1}}c) =Y(b,z_{2})P(Y(P(a),z_{1})c+Y(a,z_{1})P(c))=Y(b,z_{2})Y(P(a),z_{1 })P(c).\]
Choose a common \(N\in\mathbb{N}\) so that the weak commutativity (1) holds for \((P(a),b,P(c))\). Then we have (39). By (46) and (9) again,
\[a\succ_{z_{1}}(b\succ_{z_{2}}c) =a\succ_{z_{1}}(Y(P(b),z_{2})c)=Y(P(a),z_{1})Y(P(b),z_{2})c,\] \[b\succ_{z_{2}}(a\succ_{z_{1}}c) =b\succ_{z_{2}}(Y(P(a),z_{1})c)=Y(P(b),z_{2})Y(P(a),z_{1})c.\]
Choose an \(N\in\mathbb{N}\) so that the weak commutativity (1) holds for \((P(a),P(b),c)\). Then we have (40). Note that the \(N\) we have chosen depends only on \(a\) and \(b\) since \(P\) is fixed. Thus, \((V,\prec_{z},\succ_{z})\) is a dendriform vertex Leibniz algebra.
Finally, if \(P\) is translation invariant \((PD=DP)\), then by (46) we have
\[e^{zD}(a\prec_{-z}b) =e^{zD}Y(a,-z)P(b)=Y(P(b),z)a=b\succ_{z}a,\] \[e^{zD}(a\succ_{-z}b) =e^{zD}Y(P(a),-z)b=Y(b,z)P(a)=b\prec_{z}a.\]
Moreover, the \(D\)-derivative and bracket derivative properties (3) and (4) yield
\[D(a\prec_{z}b)-a\prec_{z}(Db) =DY(a,z)P(b)-Y(a,z)P(Db)=[D,Y(a,z)]P(b)\] \[=\frac{d}{dz}Y(a,z)P(b)=\frac{d}{dz}(a\prec_{z}b),\] \[D(a\succ_{z}b)-a\succ_{z}(Db) =DY(P(a),z)b-Y(P(a),z)Db=[D,Y(P(a),z)]b\] \[=\frac{d}{dz}(Y(P(a),z)b)=\frac{d}{dz}(a\succ_{z}b).\]
Hence \((V,\prec_{z},\succ_{z},D)\) is a dendriform vertex algebra, in view of (41) and (42).
Dendriform field algebras, dendriform vertex Leibniz algebras, and dendriform vertex algebras also give rise to field Leibniz algebras (see Definition 4.1), vertex Leibniz algebras (see Definition 2.4), and vertex algebras without vacuum (see Definition 2.5), respectively.
**Theorem 4.7**.: _Let \(V\) be a vector space, equipped with two linear maps \(\prec_{z},\succ_{z}\): \(V\to\operatorname{Hom}(V,V((z)))\) as in Definition 4.3. Define \(Y:V\to(\operatorname{End}V)[[z,z^{-1}]]\) by_
\[Y(a,z)b:=a\prec_{z}b+a\succ_{z}b,\quad a,b\in V. \tag{47}\]
_Then we have the following properties, with \(Y\) given by (47):_
1. _If_ \((V,\prec_{z},\succ_{z})\) _is a dendriform field algebra, then_ \((V,Y)\) _is a field Leibniz algebra._
2. _If_ \((V,\prec_{z},\succ_{z})\) _is a dendriform vertex Leibniz algebra, then_ \((V,Y)\) _is a vertex Leibniz algebra._
3. _If there exists a linear map_ \(D:V\to V\) _such that_ \((V,\prec_{z},\succ_{z},D)\) _is a dendriform vertex algebra, then_ \((V,Y,D)\) _is a vertex algebra without vacuum._
Proof.: Clearly, \(Y\) defined by (47) satisfies the truncation property, in view of Definition 4.3. Let \((V,\prec_{z},\succ_{z})\) be a dendriform field algebra. We claim that \(Y\) satisfies the weak associativity (2). Indeed, for all \(a,b,c\in V\), we have
\[(z_{0}+z_{2})^{N}Y(Y(a,z_{0})b,z_{2})c=(z_{0}+z_{2})^{N}Y(a\prec_{z_{0}}b+a\succ_{z_{0}}b,z_{2})c\\ =(z_{0}+z_{2})^{N}\big((a\prec_{z_{0}}b+a\succ_{z_{0}}b)\prec_{z_{2}}c+(a\prec_{z_{0}}b+a\succ_{z_{0}}b)\succ_{z_{2}}c\big)\\ =(z_{0}+z_{2})^{N}\big((a\prec_{z_{0}}b)\prec_{z_{2}}c+(a\succ_{z_{0}}b)\prec_{z_{2}}c+(a\prec_{z_{0}}b+a\succ_{z_{0}}b)\succ_{z_{2}}c\big),\\ (z_{0}+z_{2})^{N}Y(a,z_{0}+z_{2})Y(b,z_{2})c=(z_{0}+z_{2})^{N}Y(a,z_{0}+z_{2})(b\prec_{z_{2}}c+b\succ_{z_{2}}c)\\ =(z_{0}+z_{2})^{N}\big(a\prec_{z_{0}+z_{2}}(b\prec_{z_{2}}c+b\succ_{z_{2}}c)+a\succ_{z_{0}+z_{2}}(b\prec_{z_{2}}c+b\succ_{z_{2}}c)\big)\\ =(z_{0}+z_{2})^{N}\big(a\prec_{z_{0}+z_{2}}(b\prec_{z_{2}}c+b\succ_{z_{2}}c)+a\succ_{z_{0}+z_{2}}(b\prec_{z_{2}}c)+a\succ_{z_{0}+z_{2}}(b\succ_{z_{2}}c)\big).\]
We take a common \(N>0\) depending on \(a\) and \(c\), such that equations (36), (37), and (38) are satisfied at the same time. Then we have the weak associativity (2). Hence \((V,Y)\) is a field Leibniz algebra.
Let \((V,\prec_{z},\succ_{z})\) be a dendriform vertex Leibniz algebra. Choose an \(N\in\mathbb{N}\) depending on \(a\) and \(b\) such that (39) and (40) hold simultaneously. Then
\[(z_{1}-z_{2})^{N}Y(a,z_{1})Y(b,z_{2})c\\ =(z_{1}-z_{2})^{N}(a\prec_{z_{1}}(b\prec_{z_{2}}c+b\succ_{z_{2}}c )+a\succ_{z_{1}}(b\prec_{z_{2}}c)+a\succ_{z_{1}}(b\succ_{z_{2}}c))\\ =(z_{1}-z_{2})^{N}(b\succ_{z_{2}}(a\prec_{z_{1}}c))+(z_{1}-z_{2} )^{N}(b\prec_{z_{2}}(a\succ_{z_{1}}c+a\prec_{z_{1}}c))+(z_{1}-z_{2})^{N}b \succ_{z_{2}}(a\succ_{z_{1}}c)\\ =(z_{1}-z_{2})^{N}Y(b,z_{2})Y(a,z_{1})c.\]
Thus, \((V,Y)\) satisfies the weak commutativity (1) and weak associativity (2). Then by Theorem 2.2, \((V,Y)\) is a vertex Leibniz algebra.
Finally, if there exists a linear map \(D:V\to V\) such that \((V,\prec_{z},\succ_{z},D)\) is a dendriform vertex algebra, then by (41) and (42), we have
\[e^{zD}Y(a,-z)b=e^{zD}(a\prec_{-z}b)+e^{zD}(a\succ_{-z}b)=b\succ_{z}a+b\prec_{z }a=Y(b,z)a,\quad a,b\in V.\]
Hence \((V,Y,D)\) satisfies the skew-symmetry (5). Moreover,
\[D(Y(a,z)b)-Y(a,z)Db =D(a\prec_{z}b)+D(a\succ_{z}b)-a\prec_{z}(Db)-a\succ_{z}(Db)\] \[=\frac{d}{dz}(a\prec_{z}b)+\frac{d}{dz}(a\succ_{z}b)=\frac{d}{dz}Y(a,z)b,\quad a,b\in V.\]
Hence \((V,Y,D)\) satisfies the \(D\)-bracket derivative property (4). Then by Proposition 2.6, \(Y\) also satisfies the weak commutativity, and by Theorem 2.2, \((V,Y,D)\) satisfies the Jacobi identity. This shows that \((V,Y,D)\) is a vertex algebra without vacuum.
### Characterizations of the dendriform vertex (Leibniz) algebras
Theorem 4.7 shows that the equations (36)-(38) are the axioms underlying the weak associativity (2) of the vertex operator \(Y\) given by \(Y(a,z)b=a\prec_{z}b+a\succ_{z}b\), while equations (39) and (40) are the axioms underlying the weak commutativity (1). On the other hand, by Proposition 2.6, for any vertex operator map \(Y\) on a vector space \(V\), the weak associativity and weak commutativity are equivalent if there exists \(D:V\to V\) satisfying suitable properties. So it is natural to expect the same kind of equivalence for the dendriform structures as well. In fact, we have the following conclusion.
**Proposition 4.8**.: _Consider a quadruple \((V,\prec_{z},\succ_{z},D)\), where \(V\) is a vector space, \(\prec_{z},\succ_{z}:V\to\operatorname{Hom}(V,V((z)))\) are two linear maps, and \(D:V\to V\) is a linear map. Assume that \((V,\prec_{z},\succ_{z},D)\) satisfies (41) and (42). Then \((V,\prec_{z},\succ_{z},D)\) is a dendriform vertex algebra if and only if it satisfies (39) and (40). In other words, in this case, equations (36)-(38) are equivalent to equations (39) and (40)._
Proof.: It follows from (42) that
\[e^{z_{0}D}a\prec_{z}e^{-z_{0}D}b=a\prec_{z+z_{0}}b,\quad e^{z_{0}D}a\succ_{z}e ^{-z_{0}D}b=a\succ_{z+z_{0}}b,\quad a,b\in V. \tag{48}\]
The proof of (48) is similar to the proof of the conjugation formula \(e^{z_{0}D}Y(a,z)e^{-z_{0}D}=Y(a,z+z_{0})\) for the vertex operator, which follows by applying the \(D\)-bracket derivative property (4); we refer the reader to [19, 31] for details.
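For completeness, here is a sketch of that standard argument for \(\prec_{z}\) (the case of \(\succ_{z}\) is identical); this is only an outline of the proof referenced above, not a new statement.

```latex
% Sketch of (48) for \prec_z, assuming the D-bracket derivative property (42).
% Set F(z_0) := e^{z_0 D}\big(a \prec_z e^{-z_0 D} b\big). Since e^{z_0 D} and D commute,
\frac{\partial F}{\partial z_0}
  = e^{z_0 D}\Big(D\big(a \prec_z e^{-z_0 D} b\big) - a \prec_z \big(D e^{-z_0 D} b\big)\Big)
  \overset{(42)}{=} e^{z_0 D}\,\frac{d}{dz}\big(a \prec_z e^{-z_0 D} b\big)
  = \frac{\partial F}{\partial z}.
% The series G(z_0) := a \prec_{z+z_0} b satisfies the same formal differential
% equation \partial_{z_0} G = \partial_z G, with F(0) = G(0) = a \prec_z b.
% Expanding both sides as formal Taylor series in z_0 forces F = G, which is (48).
```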
By (48) and (41), we can express the two sides of (36) as
\[(z_{0}+z_{2})^{N}a\prec_{z_{0}+z_{2}}(b\succ_{z_{2}}c+b\prec_{z_{ 2}}c) =(z_{0}+z_{2})^{N}e^{z_{2}D}a\prec_{z_{0}}e^{-z_{2}D}(b\succ_{z_{2} }c+b\prec_{z_{2}}c)\] \[=(z_{0}+z_{2})^{N}e^{z_{2}D}a\prec_{z_{0}}(c\prec_{-z_{2}}b+c \succ_{-z_{2}}b),\] \[(z_{0}+z_{2})^{N}(a\prec_{z_{0}}b)\prec_{z_{2}}c =(z_{0}+z_{2})^{N}e^{z_{2}D}c\succ_{-z_{2}}(a\prec_{z_{0}}b),\]
where \(N\in\mathbb{N}\) depends on \(a\) and \(c\). Hence \((z_{0}+z_{2})^{N}a\prec_{z_{0}}(c\prec_{-z_{2}}b+c\succ_{-z_{2}}b)=(z_{0}+z_{ 2})^{N}c\succ_{-z_{2}}(a\prec_{z_{0}}b).\) By replacing \((z_{0},z_{2})\) with \((z_{2},-z_{1})\), and replacing \((c,a,b)\) with the ordered triple \((a,b,c)\) in this equation, we obtain
\[(z_{2}-z_{1})^{N}b\prec_{z_{2}}(a\prec_{z_{1}}c+a\succ_{z_{1}}c)=(z_{2}-z_{1 })^{N}a\succ_{z_{1}}(b\prec_{z_{2}}c),\]
where \(N\) depends on \(a\) and \(b\). This equation is equivalent to (39) since \(N\geq 0\). Similarly, we can express the two sides of (37) as
\[(z_{0}+z_{2})^{N}a\succ_{z_{0}+z_{2}}(b\prec_{z_{2}}c) =(z_{0}+z_{2})^{N}e^{z_{2}D}a\succ_{z_{0}}e^{-z_{2}D}(b\prec_{z_ {2}}c)\] \[=(z_{0}+z_{2})^{N}e^{z_{2}D}a\succ_{z_{0}}(c\succ_{-z_{2}}b),\] \[(z_{0}+z_{2})^{N}(a\succ_{z_{0}}b)\prec_{z_{2}}c =(z_{0}+z_{2})^{N}e^{z_{2}D}c\succ_{-z_{2}}(a\succ_{z_{0}}b),\]
where \(N\) depends on \(a\) and \(c\). Then \((z_{0}+z_{2})^{N}a\succ_{z_{0}}(c\succ_{-z_{2}}b)=(z_{0}+z_{2})^{N}c\succ_{-z_{2}}(a\succ_{z_{0}}b)\), and by replacing \((z_{0},z_{2})\) with \((z_{1},-z_{2})\), and \((a,c,b)\) with \((a,b,c)\), we obtain, with \(N\) now depending on \(a\) and \(b\),
\[(z_{1}-z_{2})^{N}a\succ_{z_{1}}(b\succ_{z_{2}}c)=(z_{1}-z_{2})^{N}b\succ_{z_{2 }}(a\succ_{z_{1}}c),\]
which is (40). Finally, we can express each side of (38) as
\[(z_{0}+z_{2})^{N}a\succ_{z_{0}+z_{2}}(b\succ_{z_{2}}c) =(z_{0}+z_{2})^{N}e^{z_{2}D}a\succ_{z_{0}}e^{-z_{2}D}(b\succ_{z_ {2}}c)\] \[=(z_{0}+z_{2})^{N}e^{z_{2}D}a\succ_{z_{0}}(c\prec_{-z_{2}}b),\]
\[(z_{0}+z_{2})^{N}(a\succ_{z_{0}}b+a\prec_{z_{0}}b)\succ_{z_{2}}c=(z_{0}+z_{2})^{N}e^{z_{2}D}c\prec_{-z_{2}}(a\succ_{z_{0}}b+a\prec_{z_{0}}b),\]
where \(N\) depends on \(a\) and \(c\). Then \((z_{0}+z_{2})^{N}a\succ_{z_{0}}(c\prec_{-z_{2}}b)=(z_{0}+z_{2})^{N}c\prec_{-z_{2}}(a\succ_{z_{0}}b+a\prec_{z_{0}}b)\), and by replacing \((z_{0},z_{2})\) with \((z_{1},-z_{2})\), and \((a,c,b)\) with \((a,b,c)\), we obtain, with \(N\) now depending on \(a\) and \(b\),
\[(z_{1}-z_{2})^{N}a\succ_{z_{1}}(b\prec_{z_{2}}c)=(z_{1}-z_{2})^{N}b\prec_{z_{2}}(a\succ_{z_{1}}c+a\prec_{z_{1}}c),\]
which is (39). Conversely, if \((V,\prec_{z},\succ_{z},D)\) satisfies (39) and (40), by reversing the argument above, we can show that \((V,\prec_{z},\succ_{z},D)\) also satisfies (36)-(38).
By Proposition 4.8, any dendriform vertex algebra \((V,\prec_{z},\succ_{z},D)\) also satisfies (39) and (40). Hence we have the following consequence.
**Corollary 4.9**.: _Let \((V,\prec_{z},\succ_{z},D)\) be a dendriform vertex algebra. Then \((V,\prec_{z},\succ_{z})\) is a dendriform vertex Leibniz algebra._
Therefore, we have the following diagram that illustrates the relations between our notions of dendriform algebras in Definition 4.3 and the original (vertex) algebras:
\[\begin{CD}\text{vertex alg. without vacuum}@>{\text{\rm subcat.}}>{}>\text{ vertex Leibniz alg.}@>{\text{\rm full subcat.}}>{}>\text{field Leibniz alg.}\\ @V{\text{\rm induce}}V{}V@V{\text{\rm induce}}V{}V\\ \text{\rm dendriform vertex alg.}@>{\text{\rm subcat.}}>{}>\text{\rm dendriform vertex Leibniz alg.}@>{\text{\rm full subcat.}}>{}>\text{\rm dendriform field alg.}\end{CD}\]
We can also obtain an analog of the Jacobi identity for the operators \(\prec_{z}\) and \(\succ_{z}\). We recall the following Lemma 2.1 in [34].
**Lemma 4.10**.: _Let \(U\) be a vector space, and let \(A(z_{1},z_{2})\in U((z_{1}))((z_{2}))\), \(B(z_{1},z_{2})\in U((z_{2}))((z_{1}))\), and \(C(z_{0},z_{2})\in U((z_{2}))((z_{0}))\). Then_
\[z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)A(z_{1},z_{2})-z_{0}^{-1 }\delta\left(\frac{-z_{2}+z_{1}}{z_{0}}\right)B(z_{1},z_{2})=z_{2}^{-1}\delta \left(\frac{z_{1}-z_{0}}{z_{2}}\right)C(z_{0},z_{2}) \tag{49}\]
_holds if and only if there exist \(k,l\in\mathbb{N}\) such that_
\[(z_{1}-z_{2})^{k}A(z_{1},z_{2}) =(z_{1}-z_{2})^{k}B(z_{1},z_{2}), \tag{50}\] \[(z_{0}+z_{2})^{l}A(z_{0}+z_{2},z_{2}) =(z_{0}+z_{2})^{l}C(z_{0},z_{2}). \tag{51}\]
**Theorem 4.11**.: _Let \((V,\prec_{z},\succ_{z})\) be a dendriform vertex Leibniz algebra. Then we have the following three Jacobi identities involving the operators \(\prec_{z}\) and \(\succ_{z}\)._
\[\begin{split}& z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)a\succ_{z_{1}}(b\prec_{z_{2}}c)-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}}{z_{0}}\right)b\prec_{z_{2}}(a\succ_{z_{1}}c+a\prec_{z_{1}}c)\\ &=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)(a\succ_{z_{0}}b)\prec_{z_{2}}c,\end{split} \tag{52}\]
\[\begin{split}& z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)a\succ_{z_{1}}(b\succ_{z_{2}}c)-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}}{z_{0}}\right)b\succ_{z_{2}}(a\succ_{z_{1}}c)\\ &=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)(a\succ_{z_{0}}b+a\prec_{z_{0}}b)\succ_{z_{2}}c,\end{split} \tag{53}\]
\[\begin{split}& z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)a \prec_{z_{1}}(b\prec_{z_{2}}c+b\succ_{z_{2}}c)-z_{0}^{-1}\delta\left(\frac{-z_ {2}+z_{1}}{z_{0}}\right)b\succ_{z_{2}}(a\prec_{z_{1}}c)\\ &=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)(a \prec_{z_{0}}b)\prec_{z_{2}}c,\end{split} \tag{54}\]
_where \(a,b,c\in V\), and \(z_{0},z_{1},z_{2}\) are formal variables._
_Furthermore, (52), (53) and (54) for a dendriform vertex algebra \((V,\prec_{z},\succ_{z},D)\) are mutually equivalent. We call (53) the Jacobi identity for the dendriform vertex algebra \((V,\prec_{z},\succ_{z},D)\)._
Proof.: By Proposition 4.8 and the formulas (36)-(38), we have
\[\begin{split}&(z_{0}+z_{2})^{k}a\succ_{z_{0}+z_{2}}(b\prec_{z_{ 2}}c)=(z_{0}+z_{2})^{k}(a\succ_{z_{0}}b)\prec_{z_{2}}c,\\ &(z_{1}-z_{2})^{l}a\succ_{z_{1}}(b\prec_{z_{2}}c)=(z_{1}-z_{2})^ {l}b\prec_{z_{2}}(a\succ_{z_{1}}c+a\prec_{z_{1}}c),\end{split}\]
for some \(k,l\in\mathbb{N}\). Then \(A(z_{1},z_{2})=a\succ_{z_{1}}(b\prec_{z_{2}}c)\), \(B(z_{1},z_{2})=b\prec_{z_{2}}(a\succ_{z_{1}}c+a\prec_{z_{1}}c)\), and \(C(z_{0},z_{2})=(a\succ_{z_{0}}b)\prec_{z_{2}}c\) satisfy the conditions (50) and (51) in Lemma 4.10. Then the Jacobi identity (52) follows from (49).
Similarly, the Jacobi identity (53) follows from Lemma 4.10 and
\[\begin{split}&(z_{0}+z_{2})^{k}a\succ_{z_{0}+z_{2}}(b\succ_{z_{ 2}}c)=(z_{0}+z_{2})^{k}(a\succ_{z_{0}}b+a\prec_{z_{0}}b)\succ_{z_{2}}c,\\ &(z_{1}-z_{2})^{l}a\succ_{z_{1}}(b\succ_{z_{2}}c)=(z_{1}-z_{2})^ {l}b\succ_{z_{2}}(a\succ_{z_{1}}c),\end{split}\]
for some \(k,l\in\mathbb{N}\). The Jacobi identity (54) follows from Lemma 4.10 and
\[\begin{split}&(z_{0}+z_{2})^{k}a\prec_{z_{0}+z_{2}}(b\succ_{z_{ 2}}c+b\prec_{z_{2}}c)=(z_{0}+z_{2})^{k}(a\prec_{z_{0}}b)\prec_{z_{2}}c,\\ &(z_{1}-z_{2})^{l}a\prec_{z_{1}}(b\prec_{z_{2}}c+b\succ_{z_{2}}c )=(z_{1}-z_{2})^{l}b\succ_{z_{2}}(a\prec_{z_{1}}c),\end{split}\]
for some \(k,l\in\mathbb{N}\).
Now let \((V,\prec_{z},\succ_{z},D)\) be a dendriform vertex algebra. The equivalence of these Jacobi identities essentially corresponds to the \(S_{3}\)-symmetry of the Jacobi identity; see Section 2.7 in [19]. The proof proceeds along similar lines, as follows.
Assume that (52) holds. By the skew-symmetry (41), we have
\[\begin{split}& z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}} \right)a\succ_{z_{1}}e^{z_{2}D}(c\succ_{-z_{2}}b)-z_{0}^{-1}\delta\left(\frac {-z_{2}+z_{1}}{z_{0}}\right)e^{z_{2}D}(a\succ_{z_{1}}c+a\prec_{z_{1}}c) \succ_{-z_{2}}b\\ &=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)(a \succ_{z_{0}}b)\prec_{z_{2}}c=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}} \right)e^{z_{2}D}c\succ_{-z_{2}}(a\succ_{z_{0}}b).\end{split}\]
Then by (48) and properties of the formal \(\delta\)-functions (see Section 2.1 in [19]), we have
\[\begin{split}& z_{1}^{-1}\delta\left(\frac{z_{2}+z_{0}}{z_{1}} \right)c\succ_{-z_{2}}(a\succ_{z_{0}}b)=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{ 0}}{z_{2}}\right)c\succ_{-z_{2}}(a\succ_{z_{0}}b)\\ &=z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)a\succ_{z _{1}-z_{2}}(c\succ_{-z_{2}}b)-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}}{z_{0}} \right)(a\succ_{z_{1}}c+a\prec_{z_{1}}c)\succ_{-z_{2}}b\\ &=z_{1}^{-1}\delta\left(\frac{z_{0}+z_{2}}{z_{1}}\right)a\succ_{z _{0}}(c\succ_{-z_{2}}b)-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}}{z_{0}} \right)(a\succ_{z_{1}}c+a\prec_{z_{1}}c)\succ_{-z_{2}}b.\end{split}\]
Changing the formal variables \((z_{0},z_{1},z_{2})\mapsto(w_{1},w_{0},-w_{2})\) in the equations above, we obtain
\[\begin{split}& w_{0}^{-1}\delta\left(\frac{-w_{2}+w_{1}}{w_{0}} \right)c\succ_{w_{2}}(a\succ_{w_{1}}b)\\ &=w_{0}^{-1}\delta\left(\frac{w_{1}-w_{2}}{w_{0}}\right)a\succ_{w _{1}}(c\succ_{w_{2}}b)-w_{1}^{-1}\delta\left(\frac{w_{2}+w_{0}}{w_{1}}\right)(a \succ_{w_{0}}c+a\prec_{w_{0}}c)\succ_{w_{2}}b.\end{split}\]
This equation is the same as (53) under the change of variables \((c,b)\) to \((b,c)\). This shows the equivalence of (52) and (53). The equivalence of (54) and (53) can be proved by a similar method. Thus (52), (53) and (54) are mutually equivalent.
**Remark 4.12**.: By adding the three Jacobi identities (52)-(54), we derive the Jacobi identity for the vertex operator \(Y(a,z)b=a\prec_{z}b+a\succ_{z}b\), giving an alternative proof of Theorem 4.7.
On the other hand, since the Jacobi identity (49) can also give rise to (50) and (51), we have the following characterization of a dendriform vertex Leibniz algebra:
**Corollary 4.13**.: _A dendriform vertex Leibniz algebra is a vector space \(V\), equipped with two linear operators \(\prec_{z},\succ_{z}:V\to\operatorname{Hom}(V,V((z)))\), satisfying the Jacobi identities (52), (53), and (54)._
By Theorem 4.11, we also obtain a second equivalent condition for dendriform vertex algebras, in addition to Proposition 4.8.
**Corollary 4.14**.: _A dendriform vertex algebra is a vector space \(V\), equipped with a linear map \(D:V\to V\) and two linear operators \(\prec_{z},\succ_{z}:V\to\operatorname{Hom}(V,V((z)))\), satisfying (41), (42), and the Jacobi identity (53)._
### The modules structures induced by dendriform vertex (Leibniz) algebras
A usual dendriform associative algebra \((A,\prec,\succ)\) defined by (30)-(32) gives rise to a bi-module structure \((A,L_{\succ},R_{\prec})\) of \((A,\cdot)\) on \(A\) itself, where \(a\cdot b:=a\prec b+a\succ b\), \(L_{\succ}(a)(b):=a\succ b\), and \(R_{\prec}(a)(b):=a\prec b\), for all \(a,b\in A\). See [2, 5] for more details.
It is natural to expect a similar result to be true for our definition of the dendriform vertex (Leibniz) algebras in Definition 4.3. However, unlike the associative algebra case, a vertex algebra also satisfies the weak commutativity, and a module over a vertex algebra is more like a module over a Lie algebra. In fact, with Theorem 4.11, we indeed have a module structure induced by a dendriform vertex algebra. First, we recall the following definition (see Definition 2.9 in [35]).
**Definition 4.15**.: Let \((V,Y,D)\) be a vertex algebra without vacuum. A \(V\)**-module**\((W,Y_{W})\) is a vector space \(W\), equipped with a linear map \(Y_{W}:V\to(\operatorname{End}W)[[z,z^{-1}]]\), satisfying the truncation property, the Jacobi identity for \(Y_{W}\) in Definition 2.8, and
\[Y_{W}(Da,z)=\frac{d}{dz}Y_{W}(a,z),\quad a\in V. \tag{55}\]
**Proposition 4.16**.: _Let \((V,\prec_{z},\succ_{z},D)\) be a dendriform vertex algebra, and let \((V,Y,D)\) be the associated vertex algebra without vacuum, where \(Y\) is given by (47): \(Y(a,z)b=a\prec_{z}b+a\succ_{z}b\). Let \(W=V\), and define_
\[Y_{W}:V\to(\operatorname{End}W)[[z,z^{-1}]],\quad Y_{W}(a,z)b:=a\succ_{z}b, \quad a\in V,b\in W. \tag{56}\]
_Then \((W,Y_{W})\) is a module over \((V,Y,D)\)._
Proof.: By Definition 4.3, clearly \(Y_{W}\) satisfies the truncation property. By the Jacobi identity (53) of \((V,\prec_{z},\succ_{z},D)\), (56) and (47), we have
\[z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)Y_{W}(a,z_{1})Y_{W}(b,z _{2})c-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}}{z_{0}}\right)Y_{W}(b,z_{2}) Y_{W}(a,z_{1})c=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y_{W}(Y(a,z_{0})b,z_{2})c,\]
for \(a,b\in V,c\in W=V\). Finally, by Lemma 4.5, we have
\[Y_{W}(Da,z)b=(Da)\succ_{z}b=\frac{d}{dz}a\succ_{z}b,\quad a\in V,b\in W.\]
Thus, \((W,Y_{W})\) is a module over the vertex algebra without vacuum \((V,Y,D)\), according to Definition 4.15.
**Remark 4.17**.: Since we define \(Y_{W}\) by one of the partial operator \(\succ_{z}\) in (56), it is natural to consider the vertex operator \(\mathcal{Y}\) defined by the other partial operator \(\mathcal{Y}(a,z)b=a\prec_{z}b\). By the skew-symmetry (41), we have
\[\mathcal{Y}(a,z)b=a\prec_{z}b=e^{zD}(b\succ_{-z}a)=e^{zD}Y_{W}(b,-z)a=Y_{WV}^{W} (a,z)b, \tag{57}\]
where \(Y_{WV}^{W}\) is defined by the skew-symmetry formula (see Section 5 in [19]); i.e., \(\mathcal{Y}=Y_{WV}^{W}\). It is easy to see that the Jacobi identities (52) and (54) correspond to the following equation:
\[z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)Y_{W}(a,z_{1})Y_{WV}^{W}(b,z_{2})c-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}}{z_{0}}\right)Y_{WV}^{W}(b,z_{2})Y(a,z_{1})c\] \[=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y_{WV}^{W}(Y_{W}(a,z_{0})b,z_{2})c,\quad a,b,c\in V=W.\]
Moreover, \(Y_{WV}^{W}(Da,z)b=(Da)\prec_{z}b=\frac{d}{dz}a\prec_{z}b\) by Lemma 4.5. Thus, if the vertex algebra without vacuum \((V,Y,D)\) is an underlying structure of some VOA \((V,Y,\mathbf{1},\omega)\), with \(D=L(-1)\), then \(Y_{WV}^{W}(a,z)b=a\prec_{z}b\) is an intertwining operator of type \(\left(\begin{smallmatrix}W\\ WV\end{smallmatrix}\right)\).
**Corollary 4.18**.: _Let \((V,Y,\mathbf{1},\omega)\) be a VOA. Assume that the underlying vertex algebra without vacuum structure \((V,Y,D=L(-1))\) is induced from a dendriform vertex algebra structure \((V,\prec_{z},\succ_{z},D)\) by Theorem 4.7. Let \((W,Y_{W})\) be the weak \(V\)-module given by Proposition 4.16. Then the identity map \(T=\operatorname{Id}:W\to V\) is a relative RBO._
Proof.: By (47), (56), (57), and the assumption that \(T=\operatorname{Id}\), we have
\[Y(Tu,z)Tv =u\prec_{z}v+u\succ_{z}v=Y_{W}(u,z)v+Y_{WV}^{W}(u,z)v\] \[=T(Y_{W}(Tu,z)v)+T(Y_{WV}^{W}(u,z)Tv),\quad u,v\in W.\]
So \(T=\operatorname{Id}\) is a relative RBO as defined in (15).
Conversely, we have the following relation between dendriform vertex algebras and vertex algebras without vacuum, which gives a characterization of the dendriform vertex algebras.
**Proposition 4.19**.: _Let \(V\) be a vector space, equipped with linear maps \(\prec_{z},\succ_{z}\): \(V\to\operatorname{Hom}(V,V((z)))\), and \(D:V\to V\). Define \(Y(a,z)b:=a\prec_{z}b+a\succ_{z}b\) for all \(a,b\in V\) as in (47). Then \((V,\prec_{z},\succ_{z},D)\) is a dendriform vertex algebra if and only if the following conditions are satisfied._
1. \((V,Y,D)\) _is a vertex algebra without vacuum._
2. \(\prec_{z}\)_,_ \(\succ_{z}\)_, and_ \(D\) _satisfy the equations (_41_) and (_42_)._
3. \((V,\succ_{z},D)\) _defines a module structure of_ \((V,Y,D)\) _on_ \(V\) _itself._
Proof.: If \((V,\prec_{z},\succ_{z},D)\) forms a dendriform vertex algebra, then (i), (ii), and (iii) follow from Theorem 4.7, Definition 4.3, and Proposition 4.16, respectively. Conversely, since \(\succ_{z}\): \(V\to\operatorname{End}(V)[[z,z^{-1}]]\) defines a module structure, we have the Jacobi identity:
\[z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}}\right)a\succ_{z_{1}}(b\succ_{z_{2}}c)-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}}{z_{0}}\right)b\succ_{z_{2}}(a\succ_{z_{1}}c)\] \[=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)(Y(a,z_{0})b)\succ_{z_{2}}c\] \[=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)(a\succ_{z_{0}}b+a\prec_{z_{0}}b)\succ_{z_{2}}c,\quad a,b,c\in V.\]
Hence \((V,\prec_{z},\succ_{z},D)\) is a quadruple satisfying (41), (42), and (53). Then by Corollary 4.14, \((V,\prec_{z},\succ_{z},D)\) is a dendriform vertex algebra.
However, for dendriform vertex Leibniz algebras, since we do not have the skew-symmetry property, the left and right module actions cannot be combined into one action. The notion of left module over a vertex Leibniz algebra was introduced in [35] (see Definition 2.15 there). Inspired by the Jacobi identities in Theorem 4.11 and Proposition 4.16, we introduce the notion of bi-modules over a vertex Leibniz algebra as follows.
**Definition 4.20**.: Let \((V,Y)\) be a vertex Leibniz algebra. A **bi-module over \((V,Y)\)** is a triple \((W,Y_{W},Y_{WV}^{W})\), where \(W\) is a vector space, and
\[Y_{W}:V\to\operatorname{Hom}(W,W((z))),\quad Y_{WV}^{W}:W\to\operatorname{ Hom}(V,W((z)))\]
are linear operators, satisfying the following axioms:
\[\begin{split}& z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}} \right)Y_{W}(a,z_{1})Y_{WV}^{W}(b,z_{2})c-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_ {1}}{z_{0}}\right)Y_{WV}^{W}(b,z_{2})Y(a,z_{1})c\\ &=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y_{WV}^{ W}(Y_{W}(a,z_{0})b,z_{2})c,\quad a,c\in V,b\in W,\end{split} \tag{58}\]
\[\begin{split}& z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}} \right)Y_{W}(a,z_{1})Y_{W}(b,z_{2})c-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1}} {z_{0}}\right)Y_{W}(b,z_{2})Y_{W}(a,z_{1})c\\ &=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y_{W}(Y(a,z_{0})b,z_{2})c,\quad a,b\in V,c\in W,\end{split} \tag{59}\]
\[\begin{split}& z_{0}^{-1}\delta\left(\frac{z_{1}-z_{2}}{z_{0}} \right)Y_{WV}^{W}(a,z_{1})Y(b,z_{2})c-z_{0}^{-1}\delta\left(\frac{-z_{2}+z_{1 }}{z_{0}}\right)Y_{W}(b,z_{2})Y_{WV}^{W}(a,z_{1})c\\ &=z_{2}^{-1}\delta\left(\frac{z_{1}-z_{0}}{z_{2}}\right)Y_{WV}^{ W}(Y_{WV}^{W}(a,z_{0})b,z_{2})c,\quad b,c\in V,a\in W.\end{split} \tag{60}\]
In particular, \((W,Y_{W})\) is a (left) module over the vertex Leibniz algebra \((V,Y)\) as in [35], in view of (59).
**Proposition 4.21**.: _Let \((V,\prec_{z},\succ_{z})\) be a dendriform vertex Leibniz algebra, and let \((V,Y)\) be the associated vertex Leibniz algebra, where \(Y\) is given by (47): \(Y(a,z)b=a\prec_{z}b+a\succ_{z}b\). Let \(W=V\), and define \(Y_{W}\) and \(Y_{WV}^{W}\) by_
\[Y_{W}(a,z)b:=a\succ_{z}b,\quad Y_{WV}^{W}(b,z)a:=b\prec_{z}a,\quad\forall a\in V,b\in W. \tag{61}\]
_Then \((W,Y_{W},Y_{WV}^{W})\) is a bi-module over \((V,Y)\)._
Proof.: We use the equivalent definition of dendriform vertex Leibniz algebra in Corollary 4.13. By our definition (61), it is clear that (58), (59), and (60) are equivalent to (52), (53), and (54), respectively. Therefore, \((W,Y_{W},Y_{WV}^{W})\) is a bi-module over \((V,Y)\) by Definition 4.20.
Similar to Proposition 4.19, we have the following characterization of the dendriform vertex Leibniz algebras.
**Proposition 4.22**.: _Let \(V\) be a vector space, equipped with linear maps \(\prec_{z},\succ_{z}\): \(V\to\operatorname{Hom}(V,V((z)))\). Let \(Y(a,z)b=a\prec_{z}b+a\succ_{z}b\) for all \(a,b\in V\). Then \((V,\prec_{z},\succ_{z})\) forms a dendriform vertex Leibniz algebra if and only if_
* \((V,Y)\) _forms a vertex Leibniz algebra, and_
* \((V,\succ_{z},\prec_{z})\) _forms a bi-module of the vertex Leibniz algebra_ \((V,Y)\)_._
Proof.: If \((V,\prec_{z},\succ_{z})\) forms a dendriform vertex Leibniz algebra, then (i) follows from Theorem 4.7 and (ii) follows from Proposition 4.21. Conversely, by our assumption and Definition 4.20, equations (58)-(60) translate into equations (52)-(54). Then by Corollary 4.13, \((V,\prec_{z},\succ_{z})\) is a dendriform vertex Leibniz algebra.
**Remark 4.23**.: For dendriform field algebras, we can follow the same routine and introduce a notion of bi-modules over a field Leibniz algebra, which is coherent with equations (36)-(38). Then the axioms of dendriform field algebras can also be characterized by the bi-module axioms.
**Acknowledgments.** This research is supported by NSFC (11931009, 12271265, 12261131498), the Fundamental Research Funds for the Central Universities and Nankai Zhide Foundation.
**Declaration of interests.** The authors have no conflicts of interest to disclose.
**Data availability.** No new data were created or analyzed in this study.
|
2304.10827
|
Users volatility on Reddit and Voat
|
Social media platforms are like giant arenas where users can rely on
different content and express their opinions through likes, comments, and
shares. However, do users welcome different perspectives or only listen to
their preferred narratives? This paper examines how users explore the digital
space and allocate their attention among communities on two social networks,
Voat and Reddit. By analysing a massive dataset of about 215 million comments
posted by about 16 million users on Voat and Reddit in 2019 we find that most
users tend to explore new communities at a decreasing rate, meaning they have a
limited set of preferred groups they visit regularly. Moreover, we provide
evidence that preferred communities of users tend to cover similar topics
throughout the year. We also find that communities have a high turnover of
users, meaning that users come and go frequently showing a high volatility that
strongly departs from a null model simulating users' behaviour.
|
Niccolò Di Marco, Matteo Cinelli, Shayan Alipour, Walter Quattrociocchi
|
2023-04-21T09:11:42Z
|
http://arxiv.org/abs/2304.10827v1
|
# Users volatility on Reddit and Voat
###### Abstract
Social media platforms are like giant arenas where users can rely on different content and express their opinions through likes, comments, and shares. However, do users welcome different perspectives or only listen to their preferred narratives? This paper examines how users explore the digital space and allocate their attention among communities on two social networks, Voat and Reddit. By analysing a massive dataset of about 215 million comments posted by about 16 million users on Voat and Reddit in 2019 we find that most users tend to explore new communities at a decreasing rate, meaning they have a limited set of preferred groups they visit regularly. Moreover, we provide evidence that preferred communities of users tend to cover similar topics throughout the year. We also find that communities have a high turnover of users, meaning that users come and go frequently showing a high volatility that strongly departs from a null model simulating users' behaviour.
## I Introduction
Social media exposes us to vast information, creating an excess of choices for content consumption. For instance, the volume of videos uploaded to platforms like YouTube or TikTok exceeds the time available for human viewing. To cope with this information overload, in which providers vie for our attention [1], we employ cognitive and cultural/attitudinal mechanisms that emerge in various contexts, such as social interactions [2, 3, 4], communication [5, 6, 7], and mobility [8]. These mechanisms, although helpful, can also have detrimental results. For example, confirmation bias and selective exposure [9, 10], in the case of controversial topics such as politics or vaccines, are causes of polarization and echo chambers [11, 12, 13, 4].
Besides the debate on controversial topics, our attention is also driven by our interests, which reflect our personality and culture [15, 16, 17, 18]. In the information foraging process [19, 20], platforms may influence us; they use recommendation algorithms and other features [21, 22, 23] to suggest new content and potentially shape our interests [24].
Before the advent of personalized recommendation platforms, users organized themselves into topical groups that still exist and can take various forms: pages, channels, lists, and communities of interest. The latter form seems especially relevant to capture the dynamics of users' interests online and is the basis of several social media platforms (and originally online fora) such as Reddit.
In such a framework, where users consume information and content in an interest-driven manner, we aim to investigate the following research questions:
* Do users tend to explore large portions of the digital space (i.e. social media platform) that they use for content consumption?
* Do users tend to be volatile, i.e. characterised by continuous shifts over time, with respect to the communities they interact with?
* Do users tend to be active within communities concerning the same topics over time?
This paper addresses these research questions by analyzing two datasets of comments from Reddit and Voat (a former alternative to Reddit with no content moderation policy) in 2019. Our data includes \(\approx 2B\) comments from Reddit and \(\approx 2M\) comments from Voat. Both platforms have an akin concept of community: Subreddits and Subverses, respectively. We present our results from both a user-centric and a community-centric perspective. This allows us to cross-check our findings using consistent data with different levels of detail.
From the users' perspective, our analysis reveals that the number of communities explored by users during the year grows sublinearly in time, indicating a continuous exploration of the digital space. However, most comments are concentrated in a small set of communities. We operationalise the set of preferred communities and we find that its size stays constant during the observed time span, reflecting the limited attention of users. Furthermore, by defining a vector space that represents communities' topics, we also find that, even though the preferred communities of users change over time, the topics they interact with remain roughly similar.
From the communities' point of view, we find a high turnover of users, as shown by the comments distribution of each group and by Correspondence Analysis. To describe this framework, we propose a mathematical expression that captures the similarity between commenting users in a community during each month of \(2019\). Interestingly, the similarity follows a power-law distribution, which implies that most users leave quickly, but a few stay for a long time. Finally, using Information Theory tools, we quantify the level of volatility that a community can achieve.
Our results show that users' interests and attention are influenced by both their personal preferences and the platforms' features. Users explore new communities over time but tend to focus on a few topics and (a small set of) communities that change over time and match their interests. Communities experience a high turnover of users, with only a few loyal ones. This implies that communities need to balance attracting new users and retaining existing ones and that platforms need to consider the trade-offs between diversity and relevance in their recommendations.
The paper is structured as follows. In the second section, we present some related works. In the third, we explain how we collect the data and how we decide to filter the initial dataset to avoid noisy results. Then, in the fourth section, we present our analysis.
## II Related works
Many works have focused on different aspects of Reddit and Voat. Regarding users' behaviour, some studies have highlighted how it is possible to predict involvement in conspiracy communities by examining users' interaction networks and language [25, 26]. In these cases, the results show that dyadic interactions with members of conspiracy groups are the most important precursor to joining these communities. Other efforts have been made to understand the migration of users [27] using macro- and micro-scale analyses that reveal sub-network structures indicating overlapping user communities and consequences of platform moderation. The special case of user behaviour after Subreddit bans has also been studied [28, 29, 30]. The results of these works suggest that bans have been useful in reducing the number of active users and the usage of toxic language. Other studies focused on how migration happens between conflicting communities [31], and in [27] the authors show a relation between migration and the controversy of the topics covered in a certain group.
Efforts have also been made to understand the flow of users' attention between subreddits using Graph Theory tools [32]. Finally, some research studies the concept of loyalty in Subreddits: in [33, 34], the authors find that users' propensity to become loyal is apparent from certain patterns of their first interactions with a community.
## III Materials and Methods
In this section we introduce the two data sources, namely _Reddit_ and _Voat_. In particular, we explain how we obtained the data and describe the preprocessing phase that led to the final dataset.
### _Data collection_
**Reddit:** Reddit is a social content aggregation website, organized in communities constructed around a certain topic, named _subreddits_. Each user has an account corresponding to a user name used to post submissions or to comment on other submissions and other comments. In addition, users can also upvote or downvote a submission in order to show their appreciation or criticism of it. Unlike other social media, Reddit's homepage is organized around subreddits and not around user-to-user relationships. Therefore, the subreddits chosen by users are likely to represent their preferred topics and the main source of information consumed on the website. We collect public data from Reddit using the Pushshift collection [35].
**Voat:** Voat.co was a news aggregator website, open until 25 December 2020. It became famous as a place of migration for communities banned from Reddit [36]. Its form was very similar to Reddit's: discussions occurred in specific groups of interest called _subverses_. Users could subscribe to these communities and interact using comments, upvotes and downvotes, in a similar fashion to Reddit. In this paper, we use the dataset collected in [36].
### _Preprocessing_
We start with all the comments published on Reddit and Voat in \(2019\). Initially, we removed 8Chan, Anon and QRV because they provide identity anonymization to their users, i.e. users post with a code different from their usual username. For each community, we define its _size_ as the number of unique users that commented in it throughout \(2019\). Then, we select only subreddits and subverses that have a size of at least \(30\) users and that are commented on at least once in each month of \(2019\).
Table I shows a data breakdown of our final datasets.
### _Embeddings_
We utilized the all-mpnet-base-v2 model to generate embedding vectors for each subreddit and subverse, using their respective names and descriptions as input. The all-mpnet-base-v2 model is a sentence-transformer model that encodes short paragraphs into a 768-dimensional vector space, capturing semantic information [37]. This model has been shown to outperform other sentence transformers based on evaluations done on 14 different datasets that measure the quality of embedded sentences [38]. Prior to generating vectors, we conducted text preprocessing by splitting CamelCase naming formats, removing special characters, numbers, punctuation, and converting the text to lowercase.
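A minimal sketch of this embedding step, assuming the `sentence-transformers` Python library; the `communities` dictionary and the `preprocess` helper are illustrative placeholders, not the authors' actual code or data.

```python
import re
from sentence_transformers import SentenceTransformer

# Hypothetical input: {community_name: description}; not the authors' data.
communities = {"AskScience": "Ask a science question, get a science answer."}

def preprocess(name: str, description: str) -> str:
    # Split CamelCase names (e.g. "AskScience" -> "Ask Science").
    name = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", name)
    text = f"{name} {description}"
    # Remove special characters, numbers and punctuation; lowercase.
    text = re.sub(r"[^A-Za-z\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip().lower()

# all-mpnet-base-v2 encodes short paragraphs into 768-dimensional vectors.
model = SentenceTransformer("all-mpnet-base-v2")
vectors = model.encode([preprocess(n, d) for n, d in communities.items()])
print(vectors.shape)  # (number_of_communities, 768)
```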
## IV Results and Discussions
### _Users point of view_
A central question for the understanding of users' behaviour online regards their exploration of new communities. We denote with \(C_{i}(t)\) the number of known communities of user \(i\) at time \(t\), i.e. communities in which \(i\) commented at least once. For each user with at least 50 comments, we compute \(C_{i}(t)\)
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & Users & Comments & Subverses \\ \hline Voat & 23189 & 2206665 & 237 \\ \hline \end{tabular}
\end{table} TABLE I: Data breakdown of our dataset after the preprocessing phase.
throughout the year. Interestingly, we find a time series that can be well approximated by a function of the form \(C_{i}(t)\approx t^{\alpha_{i}}\). We then apply a linear regression on the logged version of the data to obtain the exponent value and study its distribution. Figure 1 shows the results for Voat (a) and Reddit (b).
For both social media, the exponent distribution is homogeneous (i.e. it resembles a Gaussian distribution) across the studied population, with the \(\alpha_{i}\) peaked at \(\bar{\alpha}=0.43\) for Voat and \(\bar{\alpha}=0.85\) for Reddit. The insets show the \(R^{2}\) distribution of the fittings, indicating that for the majority of users the fit explains most of the variance in the data.
It is worth noticing that, in the case of Reddit, there is a lower percentage of users for which the fitting returns an exponent equal to \(0\), meaning that no new communities are discovered during the year. Exploration is instead less prominent on Voat, probably because of the nature of the platform, which hosts communities banned from Reddit [30, 36]. In fact, it is possible that users subscribe to Voat only to join the same community banned on Reddit, with no further interest in others [39]. What is displayed in Figure 1 indicates that users tend to explore new communities following a sublinear behaviour, confirmed by the values of \(\alpha\) mostly below one. This first analysis outlines the tendency of users to explore just a small portion of the available space, but does not provide indications about how they allocate their attention/activity in this digital environment.
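The per-user exponent \(\alpha_{i}\) can be estimated exactly as described above, with a linear regression in log-log space; the sketch below, on hypothetical discovery times, illustrates the procedure.

```python
import numpy as np
from scipy import stats

def exploration_exponent(first_visit_times):
    """Fit C_i(t) ~ t**alpha_i for one user, where first_visit_times are the
    (sorted, positive) times at which the user first commented in a community
    never visited before, e.g. in days since the start of the year."""
    t = np.asarray(first_visit_times, dtype=float)
    c = np.arange(1, len(t) + 1)          # C_i(t): communities known so far
    slope, intercept, r, p, se = stats.linregress(np.log(t), np.log(c))
    return slope, r**2                    # alpha_i and the fit's R^2

# Hypothetical user discovering communities at a decreasing rate.
alpha, r2 = exploration_exponent([1, 3, 8, 20, 45, 90, 180, 330])
print(f"alpha = {alpha:.2f}, R^2 = {r2:.2f}")
```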
To investigate this aspect, we define \(PC_{i}(t)=\{c_{1},\ldots,c_{n}\}\) as the set of communities preferred by user \(i\) at time \(t\). In particular, we say that a community \(c_{j}\in PC_{i}(t)\) if user \(i\) allocates at least \(20\) comments to \(c_{j}\) in month \(t\). This threshold is justified by the power-law behaviour of users' commenting activity across communities, for which \(20\) comments is a relatively high value (approximately \(70\%\) of users left fewer comments than this threshold).
We are interested in characterising how this set evolves over time for each user. In particular, we define \(A_{i}(t)\) as the number of communities added to \(PC_{i}(t)\) and with \(D_{i}(t)\) the number of communities removed from \(PC_{i}(t-1)\). We compute for each month the difference \(U_{i}(t)=A_{i}(t)-D_{i}(t)\) for all the users that have at least one preferred community per month and we denote with \(\bar{U}(t)\) its mean value. Then, we employ a linear regression model \(\bar{U}(t)=mt+q\) to understand how \(\bar{U}(t)\) evolves over the year. Figure 2 shows the results.
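Concretely, the preferred sets and the monthly balance \(U_{i}(t)\) can be computed as in the following sketch; the input format and the \(20\)-comment threshold mirror the definitions above, while the helper names are ours.

```python
from collections import Counter

def preferred_communities(comments_by_month, threshold=20):
    """comments_by_month: {month: [community, ...]} for one user.
    Returns {month: set of communities with >= threshold comments}."""
    return {m: {c for c, n in Counter(cs).items() if n >= threshold}
            for m, cs in comments_by_month.items()}

def update_balance(pc, months=range(1, 13)):
    """U_i(t) = A_i(t) - D_i(t): communities added to minus removed from
    the preferred set between consecutive months (empty set if absent)."""
    months = list(months)
    sets = {m: pc.get(m, set()) for m in months}
    return {cur: len(sets[cur] - sets[prev]) - len(sets[prev] - sets[cur])
            for prev, cur in zip(months, months[1:])}
```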
For both Voat and Reddit, we find that the slope is not significantly different from \(0\) (\(p=0.82\) for Voat and \(p=0.45\) for Reddit).
This result, joined with the previous one, provides a response to **RQ1** by corroborating the hypothesis that users have limited attention and tend to interact with a fixed number of preferred communities and information sources. Interestingly, if we interpret a community as a point of interest in the virtual space, our result resonates with some findings about the limitations observed during the exploration of the physical space for human mobility [40, 41, 42, 43, 8].
To deepen the investigation of users' attention and provide a different point of view, we divide users into classes of activity. In particular, for a fixed community, we assign to each user a class indicating their activity in it. To define classes we use a non-parametric method for partitioning heavy-tailed distributions developed in [44] and recently employed in different domains [42, 45]. Briefly, we compute the mean number of comments \(\bar{x}\), and users that have left fewer than \(\bar{x}\) comments are assigned to class _low_. Then we delete these users from the distribution and recursively repeat the procedure, obtaining four classes of activity \(\{low,mid,high,very\ high\}\); a sketch of this procedure is given below.
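A possible implementation of the recursive mean-based partition (the input values being the users' comment counts); the function name and example data are illustrative.

```python
import numpy as np

def activity_classes(values, labels=("low", "mid", "high", "very high")):
    """Recursive mean-based partition of a heavy-tailed distribution:
    users below the mean of the remaining values get the current class,
    and the rest are re-partitioned, until four classes are produced."""
    values = np.asarray(values, dtype=float)
    classes = np.empty(len(values), dtype=object)
    remaining = np.arange(len(values))
    for label in labels[:-1]:
        if len(remaining) == 0:
            break
        mean = values[remaining].mean()
        classes[remaining[values[remaining] < mean]] = label
        remaining = remaining[values[remaining] >= mean]
    classes[remaining] = labels[-1]
    return classes

# Hypothetical comment counts of the users of one community.
print(activity_classes([1, 2, 2, 3, 5, 8, 40, 300]))
```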
Figure 2 shows the distribution of the Gini index [46], representing the degree of inequality of a distribution per user class. For both Social Networks, users in higher activity classes tend to have a higher Gini Index. Therefore we find that higher volumes of interactions are less likely to be distributed homogeneously across communities.
### _Communities point of view_
In the previous section we showed that users have a limited attention span and that they tend to interact with a fixed (small)
Fig. 1: Distribution of exponents for \((a)\) Voat and \((b)\) Reddit. The insets show the distribution of \(R^{2}\).
Fig. 2: \((a)\) fitting of \(\bar{U}\) for Voat and Reddit. \(U_{i}(t)\) is an unbounded quantity; thus, observing \(\bar{U}\) ranging around the interval [-1,1] already signals a tendency of the users to interact with a relatively stable number of communities. \((b)\) and \((c)\) show the Gini index computed for each user on Voat and Reddit.
number of preferred communities. In this section, our focus shifts to how this heterogeneity manifests from the communities' point of view. Examining the users' exploration process in the digital space from the communities' perspective may offer further evidence in favour of what we obtained by analysing the activity from a user-centric perspective.
We initially consider a fixed community \(c\) and the distribution \(D_{c}\) of comments per user, i.e. the number of comments left by each user in \(c\). Using the package _poweRlaw_ in R [47], we fit \(D_{c}\) with a power-law distribution \(p(x)\sim x^{-\alpha}\). For each community, we obtain an exponent \(\alpha\) and a \(p\)-value for the statistical test \(H_{0}\): _data comes from a power-law distribution_. Thus, a \(p\)-value \(>0.05\) indicates that we cannot reject the hypothesis that \(D_{c}\) is a power law. In Figure 3 we compare the exponents with the size of each community.
The results show that for Reddit the exponents tend to be \(\approx 3\), while for Voat they tend to be \(\approx 2.5\), meaning that a minority of users contributes most of the comments within communities. Although the \(p\)-values distribution is right-skewed, for most of the communities it is not possible to reject \(H_{0}\) at a significance level of \(0.05\) (see Figure 1 in SI). In more detail, approximately \(72\%\) of subreddits have \(p>0.05\), and approximately \(89\%\) of subverses satisfy the same property. Therefore, most users leave a community early, while only a minority stays longer, even though this analysis provides no information about their time of permanence.
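The paper performs these fits with the R package _poweRlaw_; an equivalent check can be sketched in Python with the `powerlaw` package (our illustrative substitute, with hypothetical data). Note that the bootstrap goodness-of-fit \(p\)-value used above is a separate procedure; `distribution_compare` only returns a likelihood-ratio test.

```python
import powerlaw

# Hypothetical data: number of comments left by each user in one community.
comments_per_user = [1, 1, 1, 2, 2, 3, 3, 5, 8, 13, 40, 120]

fit = powerlaw.Fit(comments_per_user, discrete=True)
print(fit.power_law.alpha, fit.power_law.xmin)  # estimated exponent and cutoff
# Likelihood-ratio comparison against another heavy-tailed candidate.
R, p = fit.distribution_compare("power_law", "lognormal")
print(R, p)
```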
In Figure 1 and Figure 1 of SI we provide anecdotal evidence, derived from an exploratory analysis of users' activity, that the (few) preferred communities of users are constantly changing. In fact, consistently with previous works [33, 34] but from a community point of view, we observe that the great majority of users leave communities quickly and that their engagement is one of the determining factors for remaining. Together with the results presented in the previous section, this provides evidence that users maintain a stable, fixed number of communities, but that these communities are constantly changing.
To generalise and extend the results suggested by Figure 1 and Figure 1 of SI, we compare the number of users that remain in each community considering January as a reference month. Namely, we compare the set of users commenting in January with those commenting in the other eleven months with _Jaccard Index_, which can be used to measure the similarity between the set of users that comment in different pairs of months. In particular,
\[J(t)=\frac{|\{\text{users in January}\}\cap\{\text{users in month }t\}|}{|\{\text{users in January}\}\cup\{\text{users in month }t\}|},\]
where \(t\) is the month being considered. The obtained values suggest that the number of common users decays in time for both Voat and Reddit with a behaviour similar to a power law. The exponent of this power law would then characterize how long users give attention to a certain community. Figure 4 shows the results of the fittings and compares \(\alpha\) with the size of the community. In particular, we perform a linear regression on the logged version of the data and, to avoid noisy inputs, we employ only subverses of size greater than 200 and subreddits of size greater than \(10^{3}\).
We note that subreddits of greater size tend to have exponents of approximately \(1\), while on Voat the exponents seem to decrease with the size of the subverses. The insets show the distributions of \(R^{2}\), which confirm that the majority of the fittings explain most of the variance in the data. The results of Figure 4, regarding the similarity of the user sets, indicate that for most subreddits the decay is superlinear, i.e. the users populating them change fast, with the exception of the biggest ones, which seem to attract users for a longer time. On the other hand, most of the subverses have a sublinear behaviour, i.e. users tend to stay longer in the same communities. It is worth noticing that, in this last case, the bigger the subverse (in terms of size), the lower the exponent, meaning that (as on Reddit) big subverses tend to attract users for a longer time.
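A sketch of the Jaccard-decay computation and its log-log fit, with January as the reference month; helper names and the input format are ours.

```python
import numpy as np
from scipy import stats

def jaccard_decay(users_by_month):
    """users_by_month: {1: set of users active in January, ..., 12: ...}
    for one community. Returns J(t) for t = 2..12 (January as reference)."""
    ref = users_by_month[1]
    return {t: len(ref & users_by_month[t]) / len(ref | users_by_month[t])
            for t in range(2, 13)}

def decay_exponent(j):
    """Fit J(t) ~ t**(-alpha) by linear regression in log-log space
    (assumes all J(t) > 0)."""
    t = np.array(sorted(j), dtype=float)
    y = np.array([j[m] for m in sorted(j)])
    slope, *_ = stats.linregress(np.log(t), np.log(y))
    return -slope
```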
We are now interested in giving another characterization of the users' turnover from a community point of view. In this context, we use Correspondence Analysis (CA), which, compared to the previous analyses, is able to highlight the patterns realized by users in each community over time. For a fixed community \(k\), we consider the contingency table \(B_{k}\) of dimension \(n\times 12\), where \(n\) is the number of unique users that have commented in \(k\). The entry \(B_{k}(i,j)\) is either \(0\) or \(1\) depending on whether user \(i\) commented in \(k\) during month \(j\). CA is used for visualizing the rows and columns of \(B_{k}\) as points
Fig. 4: \((a),(b)\) distribution of exponents’ values for Subreddits and Subverses. The insets show histograms of \(R^{2}\). Only a sample of \(5000\) points is shown for Reddit.
Fig. 3: \((a)\) distribution of \(\alpha\) on Reddit compared to the size of the corresponding community. \((b)\) shows the same but for Voat. For both \((a)\) and \((b)\), only significant fittings (i.e. \(p>0.05\)) are considered. Only a sample of \(5000\) points is shown for Reddit.
in a low-dimensional space (two in our case), such that the positions of the row and column points are consistent with their associations in the table. Since we are interested in turnover, we plot only the points relative to months.
We apply CA using the _FactoMineR_ package in R [48] to each \(B_{k}\). Partial results are shown in Figure 5 (see SI for the coordinates obtained for subverses of size at least 1500 and subreddits of size at least \(10^{5}\)).
Panel \((a)\) shows CA applied to _Whatever_, a popular subverse. The quadratic shape indicates that different months tend to be commented on by different users. In particular, months that are distant in time are approximately farther apart, indicating that users have a limited lifetime in the same community. Had the same set of users commented constantly in the same community, we would instead have obtained a higher concentration of points in the two-dimensional space. Panel \((b)\) shows the results of CA applied to _Memes_, a famous subreddit. A similar shape emerges, indicating a high turnover of users. In general, CA provides a technique to highlight turnover in communities, as shown in the tables presented in SI; a minimal sketch of the computation is given below.
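The paper runs CA with _FactoMineR_ in R; the following NumPy sketch computes the month (column) principal coordinates of \(B_{k}\) directly from the SVD of the standardized residuals, which is the core of CA. It assumes every user in \(B_{k}\) has at least one comment (no all-zero rows).

```python
import numpy as np

def ca_month_coordinates(B, n_dims=2):
    """Correspondence Analysis of a users-by-months 0/1 table B_k; returns
    the principal coordinates of the 12 month columns."""
    B = np.asarray(B, dtype=float)
    P = B / B.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses (users)
    c = P.sum(axis=0)                    # column masses (months)
    # Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    # Column principal coordinates: D_c^{-1/2} V Sigma
    return Vt.T[:, :n_dims] * s[:n_dims] / np.sqrt(c)[:, None]
```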
Finally, this confirms, from a community perspective, what we obtained in Section IV-A.
### _Volatility over communities_
In this section we aim to answer **RQ2**. In particular, we use Information Theory tools to provide a mathematical formulation of how volatile users are from a community point of view.
Consider a community \(k\) and a user \(i\): we count the number of months in which \(i\) comments in \(k\) and denote this quantity as \(m(i,k)\). Obviously, \(1\leq m(i,k)\leq 12\). We then consider the distribution of \(m(i,k)\) for each community \(k\).
We measure volatility using the following idea: if a community is composed of loyal users, then the distribution of \(m(i,k)\) should be close to uniform, i.e. the users are constantly active in the community. On the other hand, if a community is largely composed of volatile users, then \(m(i,k)\) should be concentrated at one point, like a delta distribution. These two behaviours can be captured by the concept of (normalized) Shannon entropy \(H(x)\). In particular, a community of non-volatile users should show an entropy near \(1\), while a volatile community should have a value near \(0\). Since \(H(x)\) can be interpreted, in this context, as a measure of stability, we summarise the volatility of users within a community using the complement of Shannon's entropy, which we indicate as \(V(x)=1-H(x)\), where \(x\) is the community under consideration.
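A direct transcription of \(V(x)\) for one community, assuming `months_active` collects the values \(m(i,k)\) over its users.

```python
import numpy as np

def volatility(months_active):
    """V(x) = 1 - normalized Shannon entropy of the distribution of
    m(i, k) over the 12 months, for one community x.
    months_active: list of m(i, k) (integers in 1..12), one per user."""
    counts = np.bincount(months_active, minlength=13)[1:13]
    p = counts / counts.sum()
    p = p[p > 0]
    H = -(p * np.log(p)).sum() / np.log(12)   # normalized entropy in [0, 1]
    return 1.0 - H
```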
Figure 6 shows the distributions of \(V(x)\) for each subreddit and subverse.
Panel \((a)\) of Figure 6 displays the distribution of the volatility for each subreddit. On the right, the distribution of volatility is shown for a null model in which the interactions between users and communities are randomised. Panel \((b)\) shows the same analysis for Voat.
It is worthwhile to notice that the volatility distribution has a left-skewed shape for Voat, while it looks more centred in the case of Reddit. This may be due to size effects, given Voat's much smaller volume compared to Reddit.
It is also interesting to note that both null models show very left-skewed distributions, essentially different from the empirical ones. This suggests that in a random model users would show higher volatility in the communities in which they comment. Users' behaviour therefore follows a non-random pattern and, despite being heterogeneous, their activity may be driven by factors that we investigate in the next section. One plausible way to build such a null model is sketched below.
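The paper does not spell out the details of the randomisation; one plausible reading, which preserves per-user and per-community comment volumes while breaking their association, is the following shuffle (the record format is our assumption).

```python
import random

def randomized_interactions(records, seed=0):
    """records: list of (user, community, month) comment tuples.
    Shuffles the community column, preserving each user's and each
    community's total comment volume while breaking who comments where."""
    rng = random.Random(seed)
    communities = [c for _, c, _ in records]
    rng.shuffle(communities)
    return [(u, c, m) for (u, _, m), c in zip(records, communities)]
```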
### _Evolution of user interest_
In the previous sections we have shown evidence that users continuously explore new communities while maintaining a small set of preferred ones. Here, we deepen the analysis by studying how their interests evolve over time, considering a semantic representation of each community. In other words, we aim to understand whether the observed volatility also implies a shift in the set of users' interests or whether users tend to make only small variations. For this analysis, we embed each community in a vector space in which the coordinates indicate the topics discussed in it (see Section III-C for further details).
We consider the same set of users as in Section IV-A and their preferred communities. Let \(u\) be a user and \(PC_{u}(t)\) the set of \(u\)'s preferred communities at month \(t\).
Fig. 5: Results of \(CA\) applied to a subverse \((a)\) and a subreddit \((b)\).
Fig. 6: Comparison of \(V(x)\) distributions for real data and a random null model. \((a)\) is obtained with Reddit Data while \((b)\) is obtained with Voat Data.
Moreover, denote with \(\mathbf{v_{c}}\) the embedding vector of community \(c\). We define the interest \(s_{u}(t)\) of users \(u\) at month \(t\) as the centroid of the vectors associated with \(u\)'s preferred communities, namely:
\[s_{u}(t)=\sum_{i\in PC_{u}(t)}\frac{\mathbf{v}_{i}}{768}. \tag{1}\]
After computing these quantities, we compare them between consecutive pairs of months for each user using cosine similarity. We employ a linear regression model to fit their evolution throughout the year. Therefore, for each user, we obtain the values of the slope and intercept of the fit, together with their p-values.
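A sketch of this per-user fit follows, assuming `monthly_vectors` holds, for each of the twelve months, the matrix of embedding vectors of the user's preferred communities; note that the constant normalization in Eq. (1) does not affect cosine similarity, so a plain mean is used for the centroid:

```python
import numpy as np
from scipy.stats import linregress

def interest_trend(monthly_vectors):
    """Fit the month-to-month cosine similarity of a user's interest
    centroid s_u(t) with a linear model; returns the slope, the intercept,
    and the p-value associated with the slope."""
    centroids = [np.mean(v, axis=0) for v in monthly_vectors]  # s_u(t)
    sims = [float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
            for a, b in zip(centroids, centroids[1:])]         # 11 values
    fit = linregress(np.arange(len(sims)), sims)
    return fit.slope, fit.intercept, fit.pvalue
```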
Figure 7 shows the bivariate distributions of the slope and intercept, colored by the \(p\)-value associated with the former, for each platform.
The results are similar for both Voat and Reddit. In particular, a high percentage of fits shows a cosine similarity that is constant throughout the year, with relatively high values of the intercept. Moreover, when this is not the case, the values of the slope are not significantly different from zero, again indicating approximately constant (high) values of cosine similarity.
This analysis, together with the results of the previous sections, corroborates the idea that, despite the continuous exploration of new spaces, users mostly visit communities about the same topics. Thus, we can provide a positive answer to **RQ3**.
## V Conclusions
In this paper we analyze the behaviour of a large set of users populating communities on Reddit and Voat. First of all, we find a negative answer to **RQ1**: users tend to continually explore new communities, but at a sublinear rate. However, despite the new communities discovered, they tend to remain active only in a small subset of them, which is constantly updated throughout the year.
Instead, we obtain a positive answer to **RQ2**. In fact, both regressions and CA indicate that the majority of users tend to leave communities very soon. Moreover, using information theory tools, we also find that users tend to follow a behaviour that cannot be reproduced by a random null model.
Using embedding vectors of the community space, we also find evidence for a positive answer to **RQ3**, since the communities explored by users tend to cover essentially the same topics.
It is worth noticing that all the obtained results are very similar for Voat and Reddit. In particular, it seems that, although Voat was born as a place of migration for communities banned from Reddit (mostly questionable ones), this does not play a role in users' behaviour; instead, what matters are the similarities in the two platforms' organization.
Finally, our results suggest that interests drive the exploration of new communities, but the need for social feedback also leads users to stay in those communities that are more active, despite the presence of moderation policies.
|
2308.08365
|
DeepContrast: Deep Tissue Contrast Enhancement using Synthetic Data
Degradations and OOD Model Predictions
|
Microscopy images are crucial for life science research, allowing detailed
inspection and characterization of cellular and tissue-level structures and
functions. However, microscopy data are unavoidably affected by image
degradations, such as noise, blur, or others. Many such degradations also
contribute to a loss of image contrast, which becomes especially pronounced in
deeper regions of thick samples. Today, best performing methods to increase the
quality of images are based on Deep Learning approaches, which typically
require ground truth (GT) data during training. Our inability to counteract
blurring and contrast loss when imaging deep into samples prevents the
acquisition of such clean GT data. The fact that the forward process of
blurring and contrast loss deep into tissue can be modeled, allowed us to
propose a new method that can circumvent the problem of unobtainable GT data.
To this end, we first synthetically degraded the quality of microscopy images
even further by using an approximate forward model for deep tissue image
degradations. Then we trained a neural network that learned the inverse of this
degradation function from our generated pairs of raw and degraded images. We
demonstrated that networks trained in this way can be used out-of-distribution
(OOD) to improve the quality of less severely degraded images, e.g. the raw
data imaged in a microscope. Since the absolute level of degradation in such
microscopy images can be stronger than the additional degradation introduced by
our forward model, we also explored the effect of iterative predictions. Here,
we observed that in each iteration the measured image contrast kept improving
while detailed structures in the images got increasingly removed. Therefore,
dependent on the desired downstream analysis, a balance between contrast
improvement and retention of image details has to be found.
|
Nuno Pimpão Martins, Yannis Kalaidzidis, Marino Zerial, Florian Jug
|
2023-08-16T13:40:01Z
|
http://arxiv.org/abs/2308.08365v1
|
# DeepContrast: Deep Tissue Contrast Enhancement using Synthetic Data
###### Abstract
Microscopy images are crucial for life science research, allowing detailed inspection and characterization of cellular and tissue-level structures and functions. However, microscopy data are unavoidably affected by image degradations, such as noise, blur, or others. Many such degradations also contribute to a loss of image contrast, which becomes especially pronounced in deeper regions of thick samples. Today, best performing methods to increase the quality of images are based on Deep Learning approaches, which typically require ground truth (GT) data during training. Our inability to counteract blurring and contrast loss when imaging deep into samples prevents the acquisition of such clean GT data. The fact that the forward process of blurring and contrast loss deep into tissue can be modeled, allowed us to propose a new method that can circumvent the problem of unobtainable GT data. To this end, we first synthetically degraded the quality of microscopy images even further by using an approximate forward model for deep tissue image degradations. Then we trained a neural network that learned the inverse of this degradation function from our generated pairs of raw and degraded images. We demonstrated that networks trained in this way can be used out-of-distribution (OOD) to improve the quality of less severely degraded images, e.g. the raw data imaged in a microscope. Since the absolute level of degradation in such microscopy images can be stronger than the additional degradation introduced by our forward model, we also explored the effect of iterative predictions. Here, we observed that in each iteration the measured image contrast kept improving while detailed structures in the images got increasingly removed. Therefore, dependent on the desired downstream analysis, a balance between contrast improvement and retention of image details has to be found.
## 1 Biological Motivation
In this work we applied our method (DeepContrast) to microscopy images of liver tissue. The liver is a frequently studied system in biomedical research, due to its vital functions in the human body, _e.g._ blood detoxification and bile production. Liver tissue is dense and compact, composed of many different cell types that display an intricate three-dimensional architecture. Still, many aspects of this structure are not fully understood, which drives biomedical research to use modern microscopy techniques to image large 3D sections of liver tissue at the highest achievable quality and resolution.
Data presented in this work was obtained using a Laser Scanning Confocal Microscope (LSM). This modality allowed us to obtain highly detailed image data in large and thick samples with sub-cellular resolution in all three spatial dimensions. Unfortunately, the image quality inevitably degrades in deeper layers of the imaged liver tissue, mostly due to light scattering. This poses a challenge to better our understanding of mesoscale structures that shape the liver in its full 3D complexity. Therefore, methods that facilitate
Figure 1: Proposed scheme to improve deep tissue contrast. **(1)** Pairs of data for supervised training are generated by degrading raw microscopy images using a suitable degradation function \(d(x)\) composed of a blurring and a noising step. **(2)** During supervised network training, synthetically degraded images are used as inputs and the original images as targets. **(3)** During inference, we feed the original raw microscopy images once or iteratively into the trained network (see Section 3).
the downstream analysis of large 3D image data are much sought after.
## 2 Related Work
Classical algorithms to enhance contrast in images often rely on the intensity histogram, typically altering the overall histogram landscape with a set of predefined rules to either obtain a more uniformly sampled distribution or to match a histogram obtained from a desired reference. Examples of such algorithms are histogram matching [8] and histogram equalization [20]. These approaches are not content-aware, _i.e._, they follow specific rules independent of the structures visible in the image to be modified. Contrast Limited Adaptive Histogram Equalization (CLAHE) [34] is a widely used histogram equalization method. We use it as one of our baseline methods due to its popularity and widespread use in scientific image processing protocols.
Another popular family of algorithms used to improve image quality and contrast are feed-forward deconvolution methods such as the one by Richardson-Lucy [23, 14], or the popular Huygens software from Scientific Volume Imaging. These are iterative approaches that attempt to undo the blurring induced by the point spread function (PSF) of the microscope. The main drawback of such approaches is the assumption of spatial invariance of the PSF, which does not hold in thick dense tissue microscopy.
Deep Learning (DL) based applications have proven to perform especially well on several image restoration tasks like denoising [29, 11, 3, 7, 22, 21], deconvolution [4, 9, 13], and super-resolution [19, 18, 26, 31, 32].
Content Aware Image Restoration (CARE) [29] uses supervised DL methods to restore microscopy image quality in various ways. However, in order to use CARE, it is necessary to obtain low and high quality versions of the same objects and structures, which is not possible in many real-world scenarios, such as the one presented in this work. One interesting insight with respect to out-of-distribution (OOD) denoising is presented in [15]. In it, the authors show that a network without trainable bias terms is more robust when applied to inputs that contain levels of noise that are inconsistent (OOD) with respect to the training data.
One popular way to solve the problem of GT data being required is to synthetically generate the required training pairs [28]. In [6], the authors used "crapified" images, as they call it, to obtain said training pairs for training super-resolution networks and networks that increase the temporal consistency in time-lapse movies. Others have used synthetic data generation for object detection [30] or segmentation [5].
Work specifically concerned with enhancing image contrast is less common. The FCE-Net [33] proposes a network architecture specifically designed to enhance image contrast in biological image data. Since this makes FCE-Net our closest competitor, despite technically being a quite different approach, we will always also compare our own results to the ones obtainable with the FCE-Net.
## 3 Methods
Inspired by [29] and [6], we also set out to use the machinery of supervised learning in deep neural networks. In our case, for the sake of improving image contrast in microscopy data of large tissue samples. To this end, we synthetically generate appropriate training data, pairs of images that are of lower and higher contrast. Naturally, we cannot synthetically remove scattered light and noise from raw microscopy data, otherwise the very task we are seeking a solution for would be solved already. Instead, we can add additional light scattering and noise to the available raw data, making it even worse (see Figure 1).
More specifically, our degradation function is
\[\text{d}(x)=\alpha\cdot x+(1-\alpha)\cdot\text{n}(\text{b}(x)), \tag{1}\]
where \(x\) is a raw input image, \(\alpha\) a hyperparameter that controls the blending between \(x\) and \(\text{n}(\text{b}(x))\), b is a blurring function that models light scattering in biological tissue, and n a function adding noise. In line with the most dominant noise in low-light fluorescent microscopy, n is adding Poisson noise to the blurred data.
Light scattering depends on refractive index transitions throughout the sample and a precise forward model is not easy to compute. For our purposes, the simple approximation introduced in Equation 1 leads to contrast enhancement results that outperform existing methods like the FCE-Net (see Section 4).
Once a body of input data \(X=(x_{1},x_{2},\ldots,x_{k})\) is further degraded to \(D=(d(x_{1}),\ldots,d(x_{k}))\), we use image pairs \((d_{i},x_{i})\) for supervised training of a contrast enhancement network (Model A). The network was trained as a bias-free [15] U-Net [24]. The reason for _not_ training the bias terms of the network nodes is that, once trained, the network is intended to be applied to images \(x_{i}\) or similar, which are less severely affected by degradations and therefore OOD with regard to \(d_{i}\).
### Iterative Predictions
Since the absolute level of degradation in such microscopy images can be stronger than the additional degradation introduced by our forward model, we also explored the effect of iterative predictions. Iterative predictions with a trained DeepContrast network (DC) are simply multiple applications of DC to the input \(x\). For example, the final DeepContrast (\(3\times\)) prediction \(y\) is computed by
\[y=\text{DC}(\text{DC}(\text{DC}(x))). \tag{2}\]
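A minimal sketch of this iterative application, assuming a Keras-style `predict` method on the trained model (the method name is an assumption):

```python
def iterative_enhance(model, x, n_iter=3):
    """Apply a trained DeepContrast network n_iter times,
    i.e. DC(...DC(x)...) as in Eq. (2)."""
    y = x
    for _ in range(n_iter):
        y = model.predict(y)  # assumed Keras-style inference call
    return y
```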
The experiments we describe below and the results we show in Figure 3 indicate that iterative predictions indeed keep increasing image contrast. It must be noted that better contrast does not necessarily mean that the predicted image is better for downstream analysis. Faint image details might get lost at the same time, and the importance of such details depends on the downstream analysis to be conducted. We present one set of experiments on how to achieve a good balance between enhancing contrast and preserving image details in the following sections.
## 4 Experiments
### Data
The imaged samples were cleared liver tissue sections, as described in [16], stained with Phalloidin 488 antibody to label the Actin cortex at all cell borders. Images were acquired in a Zeiss LSM 780 confocal microscope, with a Zeiss LCI Plan-Neofluar 63x 1.3 NA Gly/Water objective, a 488nm wavelength excitation laser, an emission window range of \(489-551\)nm, and a pinhole size of \(1AU\). Images were acquired with an isotropic voxel size of \(0.3\mu m\). The maximum imaging depth was \(100\mu m\).
### Image Degradation Model
To compute synthetically degraded images, as described in Eq. 1, we first blur and noise each 2D slice (focal plane) using a Gaussian filter (\(\sigma=20\) pixels) and Poisson noise at an estimated magnitude as described in [10] (using the image analysis software MotionTracking [17]). Synthetic images were then merged with the original images using \(\alpha\) values ranging in linear steps from \(0.5\) to \(0.3\), with \(0.5\) being used for the most superficial slice.
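For illustration, a sketch of this degradation applied to a single 2D slice; the `photons` scale used to parameterize the Poisson noise is a simplifying assumption standing in for the per-dataset magnitude estimation described above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def degrade(x, alpha, sigma=20.0, photons=50.0):
    """d(x) = alpha * x + (1 - alpha) * n(b(x)), Eq. (1): Gaussian blur
    (sigma in pixels) followed by Poisson noise, blended with the raw slice."""
    blurred = gaussian_filter(x, sigma=sigma)                 # b(x)
    lam = np.clip(blurred, 0, None) * photons                 # Poisson rates
    noisy = rng.poisson(lam) / photons                        # n(b(x))
    return alpha * x + (1.0 - alpha) * noisy

# depth-dependent blending, 0.5 at the most superficial slice down to 0.3:
# alphas = np.linspace(0.5, 0.3, n_slices)
```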
### Network Architecture and Hyperparameters
Our network is a U-Net [24], using a depth of five, \(32\) initial feature channels, an MAE loss, and a linear function as the last activation layer. Our models were trained until convergence with an initial learning rate of \(4\times 10^{-4}\) for a total of \(450\) epochs with \(200\) steps per epoch. A step uses a batch size of \(16\), of which each patch is a \(128\times 128\) pixels crop from the body of training data. Networks were built using the CSBDeep toolbox [29] using Tensorflow 2.2.1. By default, and if not otherwise stated, we would not train the bias terms of each network node (-bias) to improve OOD predictions [15], as also described in Section 3.
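The actual models were built with the CSBDeep toolbox; the following is only a rough, self-contained Keras sketch of a comparable bias-free U-Net, where the exact layer arrangement and patch shape are our assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, n_filters):
    # bias-free (-bias) convolutions for more robust OOD prediction [15]
    x = layers.Conv2D(n_filters, 3, padding="same", use_bias=False,
                      activation="relu")(x)
    x = layers.Conv2D(n_filters, 3, padding="same", use_bias=False,
                      activation="relu")(x)
    return x

def bias_free_unet(depth=5, base_filters=32, patch=128):
    inp = layers.Input(shape=(patch, patch, 1))
    x, skips = inp, []
    for d in range(depth):                           # encoder path
        x = conv_block(x, base_filters * 2 ** d)
        skips.append(x)
        x = layers.MaxPool2D()(x)
    x = conv_block(x, base_filters * 2 ** depth)     # bottleneck
    for d in reversed(range(depth)):                 # decoder path
        x = layers.Conv2DTranspose(base_filters * 2 ** d, 2, strides=2,
                                   use_bias=False)(x)
        x = layers.Concatenate()([x, skips[d]])
        x = conv_block(x, base_filters * 2 ** d)
    out = layers.Conv2D(1, 1, use_bias=False, activation="linear")(x)
    return tf.keras.Model(inp, out)

model = bias_free_unet()
model.compile(optimizer=tf.keras.optimizers.Adam(4e-4), loss="mae")
```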
Figure 2: Qualitative results. Images of liver tissue sections stained with Phalloidin as a proxy for cell borders, used to compare our results (DeepContrast Model A) to several baseline methods. Rows show image planes at different depths in the liver tissue. Columns show the raw input, results obtained with CLAHE [34], Huygens deconvolution (see Section 4.4), best FCE-Net [33] results (3\(\times\)), and our best DeepContrast results (3\(\times\)), respectively. The three rightmost columns shows the inset areas marked by dashed boxes and line plots of raw intensities, the FCE-Net, and DeepContrast (along the green line in the respective images). _Scale bars_: \(20\mu m\) in full size images, \(10\mu m\) in insets.
### Baselines
Baseline methods, which we used to compare DeepContrast with, are: \((i)\) classical methods, CLAHE and deconvolution using Huygens, and \((ii)\) DL based methods, _i.e_. the FCE-Net [33] trained on our liver dataset.
CLAHE images were obtained using Fiji [25], where the _Enhanced Local Contrast_ plugin is an implementation of the original CLAHE [34] method.
Deconvolved images were obtained using the software Huygens Professional (version 22.10.0p6) from Scientific Volume Imaging (SVI, The Netherlands), following provided pipelines with a theoretical PSF. Internally, Huygens is using the CMLE algorithm, with SNR set to 5, for 60 iterations, and the background value set to 100. This setup led to the best results on the data at hand.
For the FCE-Net [33] we used the code as it is provided by the authors of the original paper. It is worth mentioning that the provided pre-trained FCE network led to inferior results; hence, we trained the FCE-Net from scratch until convergence using our own data. Please note that we also applied the FCE-Net iteratively, as described above in Equation 2, and observed that FCE-Net results also keep improving. For a fair comparison we therefore report iterative FCE-Net results whenever they are better than single predictions.
### Double Degradation Experiments
Since DeepContrast, by definition, is applied OOD, we wondered whether iterative contrast enhancement would make consistent steps.
Figure 4: Quantitative results. Contrast quantification using the Percentile Contrast Index (see Section 4.7) and the Wavelet Contrast Index [1] (higher values are better) represented as average and \(95\%\) Confidence Intervals at each depth (\(N=18\)). Dashed vertical grey lines depict depths shown in Figure 2. Multiple iterations of FCE-Net [33] and our DeepContrast (Model A) approach show image contrast is further improved when these networks are iteratively applied.
Figure 3: Qualitative results of iterative OOD model application. Contrast of the image data of Figure 2 iteratively enhanced using a trained DeepContrast network (Model A). Rows show, analogous to Figure 2, image planes at different depth into the imaged tissue. Columns show the raw input data, and the results of applying DeepContrast a single time, two, three (same as in Figure 2), and six consecutive times. The three rightmost columns shows the inset areas marked by dashed boxes and line plots along the green lines in the raw data, and along the \(1\times\), \(3\times\), and \(6\times\) enhanced outputs. Note that, while contrast is continuously enhanced, too many iterative applications cause a notable loss of image details. _Scale bars_: \(20\ \mu m\) in full size images, 10 \(\mu m\) in insets.
Figure 5: Qualitative and quantitative results of segmentation masks created at multiple iterations of contrast enhancement. Left side column shows Raw input and segmentation mask (Section 4.8). Each further column shows different iterations of contrast enhancement and corresponding segmentation of cell borders. Top row shows inference results with DeepContrast and bottom row shows inference results with FCE-Net. Yellow arrow heads in images highlight lost or degraded structures when comparing DeepContrast and FCE-Net. Violin plots shows distribution of IoU values between the different contrast enhancement methods and raw segmentation masks used as reference at multiple iterations (\(N=2682\)), showing faster decrease of IoU values and more abundant mistakes in segmentation with FCE-Net.
Figure 6: Qualitative double degradation results and predictions with Model B (see Section 4.5). Each row shows different depths in the sample, as in previous figures. Columns depict, from left to right, (\(i\)) double degraded images used as inputs during training, (\(ii\)) single degraded images used as targets during training, (\(iii\)) predictions of Model B when applied to data as the one in column two, (\(iv\)) the raw image data to compare Model B (\(1\times\)) outputs against, (\(v\)) Model B (\(3\times\)) outputs, and (\(vi\)) Model A (\(2\times\)) outputs to compare Model B (\(3\times\)) outputs against. Note that these two predictions should be and are similar since Model B starts with inputs that are once more synthetically degraded. The smaller panels in the rightmost columns show the insets (marked by dashed lines) from the other columns. Structures in all enhanced images are consistent with structures in the raw data, which is encouraging. _Scale bars_: 20 \(\mu m\) in full size images, 10 \(\mu m\) in insets.
As a first verification of our approach we degraded the raw image data twice, obtaining data triplets \((e_{i},d_{i},x_{i})\), with \(x_{i}\in X\), \(d_{i}=\text{d}(x_{i})\), and \(e_{i}=\text{d}(\text{d}(x_{i}))\). Then we trained a DeepContrast network, Model B, on pairs \((e_{i},d_{i})\) and applied the trained network, in line with the initially proposed procedure, to \(d_{i}\) to increase its contrast, yielding \(y_{i}=\text{DC}(d_{i})\).
Since we started by double degrading the original \(x_{i}\), we can now compare the prediction \(y_{i}\) with \(x_{i}\), and further iterations of Model B with corresponding predictions obtained with Model A (see Section 3). If the trained DeepContrast network is indeed a good approximation of the inverse of our degradation function d, predictions \(y_{i}\) should be similar to the original images \(x_{i}\) at the first iteration, and to the corresponding iteration of output images from Model A.
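Expressed with the `degrade` sketch from Section 4.2, the triplets could be built as follows (the \(\alpha\) value is illustrative):

```python
# build triplets (e_i, d_i, x_i) for the double degradation experiment
d_i = degrade(x_i, alpha=0.5)   # single degradation
e_i = degrade(d_i, alpha=0.5)   # double degradation
# Model B trains on pairs (e_i, d_i); its prediction DC(d_i) is then
# compared against the original raw slice x_i
```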
### Ablations
In order to evaluate whether bias-free training [15] indeed leads to better results, we repeated model training with networks that are not bias-free, _i.e_., training all weights and biases.
### Contrast Quantification
#### Wavelet Contrast Index
With increasing contrast in an image, we expect background signal to be reduced and, consequently, the brightness of biological structures in the image (signal) to be increased. To quantify image contrast when no GT data is available, we used the Wavelet Contrast Index (WCI) [1]. This measure computes the log-ratio between coefficients obtained from a wavelet decomposition, following the equation
\[\text{WCI}(x)=\log(\frac{W_{95^{th}}(x)}{W_{50^{th}}(x)}), \tag{3}\]
where \(x\) is the input image for which we want to evaluate the contrast, \(W_{95^{th}}\) is the \(95^{th}\) percentile wavelet coefficient and \(W_{50^{th}}\) is the median wavelet coefficient.
Wavelet decomposition was performed with the PyWavelets [12] python package using a Haar wavelet as the reference function, and we used coefficients up to the fourth level of decomposition.
#### Percentile Contrast Index
We also used the Percentile Contrast Index (PCI) to quantify intensity differences between image background and image structures. The PCI is computed by
\[\text{PCI}(x)=\log(\frac{I_{95^{th}}(x)}{I_{50^{th}}(x)}), \tag{4}\]
where \(x\) is again the input image to evaluate, \(I_{95^{th}}\) is the \(95^{th}\) percentile intensity value of the image being analyzed, and \(I_{50^{th}}\) is the median intensity of \(x\). We use the median value, assuming that at least half the pixels of any given image are background pixels.
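A sketch of both indices; restricting the WCI to the absolute detail coefficients is our reading of [1]:

```python
import numpy as np
import pywt

def pci(img):
    """Percentile Contrast Index, Eq. (4)."""
    return np.log(np.percentile(img, 95) / np.percentile(img, 50))

def wci(img, wavelet="haar", level=4):
    """Wavelet Contrast Index, Eq. (3), over the detail coefficients
    of a Haar decomposition up to the fourth level."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    details = np.concatenate([np.abs(band).ravel()
                              for lvl in coeffs[1:] for band in lvl])
    return np.log(np.percentile(details, 95) / np.percentile(details, 50))
```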
### Downstream Segmentation after Contrast Enhancement
Enhancing contrast does not necessarily improve the downstream processability (interpretability) of a given dataset. While the contrast, as measured by WCI and/or PCI, might still improve, details relevant for biological interpretation of the data might already get lost. Therefore, the best amount of contrast enhancement depends on a given downstream analysis task. To this end, we introduced a downstream segmentation task and checked if a fixed segmentation pipeline improved with respect to existing ground truth labels. GT segmentation masks were generated from raw data using Labkit [2] (available as a Fiji [25] plugin).
For simplicity, we segmented contrast enhanced images \(y_{i}\) by thresholding, optimizing for the best threshold value, _i.e_. the one that maximizes the intersection-over-union (IoU) with respect to the previously generated GT. Our reasoning was that enhancing contrast would result in a better IoU after thresholding as long as relevant structures in the contrast enhanced images \(y_{i}\) were not lost. As soon as details were getting lost, the IoU dropped, allowing us to choose the most sensible iteration depth for DeepContrast (or the FCE-Net).
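A sketch of this threshold sweep (function and variable names are illustrative):

```python
import numpy as np

def best_threshold_iou(pred, gt_mask, n_steps=64):
    """Sweep thresholds over a contrast-enhanced prediction and return the
    threshold (and IoU) that best matches the GT segmentation mask."""
    best_iou, best_t = -1.0, None
    for t in np.linspace(pred.min(), pred.max(), n_steps):
        seg = pred > t
        union = np.logical_or(seg, gt_mask).sum()
        iou = np.logical_and(seg, gt_mask).sum() / union if union else 0.0
        if iou > best_iou:
            best_iou, best_t = iou, t
    return best_t, best_iou
```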
## 5 Results
Qualitative results presented in Figure 2 suggest that DeepContrast outperforms all baseline methods. DeepContrast removes or reduces image noise and enhances the intensity of visible image structures, seemingly without losing fine details (signal) from predicted images.
\begin{table}
\begin{tabular}{c|c|c|c}
\hline
 & \multicolumn{3}{c}{SSIM} \\
 & \(MB_{1\times}\) vs Raw & \(MB_{2\times}\) vs \(MA_{1\times}\) & \(MB_{3\times}\) vs \(MA_{2\times}\) \\
\hline
Very Deep & 0.51 \(\pm\) 0.07 & 0.70 \(\pm\) 0.08 & 0.80 \(\pm\) 0.07 \\
Deep & 0.52 \(\pm\) 0.07 & 0.72 \(\pm\) 0.09 & 0.82 \(\pm\) 0.07 \\
Intermediate & 0.56 \(\pm\) 0.06 & 0.77 \(\pm\) 0.06 & 0.84 \(\pm\) 0.03 \\
Shallow & 0.60 \(\pm\) 0.06 & 0.80 \(\pm\) 0.05 & 0.83 \(\pm\) 0.03 \\
Very Shallow & 0.59 \(\pm\) 0.08 & 0.79 \(\pm\) 0.06 & 0.83 \(\pm\) 0.02 \\
\hline
\end{tabular}
\end{table}
Table 1: Quantitative results of the double degradation experiment described in Section 4.5 and shown in Figure 6. We compare the outputs of three iterations of Model B (\(MB_{k\times}\)), which was trained on double-degraded and single-degraded inputs, to the closest matching images, _i.e_. raw data for direct predictions of Model B (\(MB_{1\times}\) vs Raw), direct predictions of Model A to two iterations of Model B (\(MB_{2\times}\) vs \(MA_{1\times}\)), and predictions of two iterations of Model A to three iterations of Model B (\(MB_{3\times}\) vs \(MA_{2\times}\)). See Section 3 for details.
Hence, DeepContrast is indeed increasing image contrast.
Both classical methods, _i.e_. CLAHE and deconvolution, displayed relatively poor results, especially deep inside the tissue. CLAHE amplified image noise at all imaging depths and mostly failed to highlight biological structures. Deconvolution, on the other hand, did reduce image noise, but failed to increase the intensities of foreground structures (most obvious deep inside the tissue).
The FCE-Net performed much better, leading to good results close to the surface. But the image contrast in FCE-Net predictions decayed with increasing depth (see insets in Figure 2). Qualitatively, the FCE-Net also seemed to produce less sharp cell borders (as seen in both deep and shallow image regions).
To validate these qualitative observations, we quantified contrast with the two measures WCI and PCI (see Section 4.7). As can be seen in Figure 4, DeepContrast achieved higher image contrast at all imaging depths and over all plotted iterative applications (\(1\times\) to \(3\times\)). One notable exception is the WCI values for \(3\times\) iterations at intermediate imaging depths. In these images, despite the FCE-Net showing higher WCI values, one can see more image structure being lost in FCE-Net predictions than in predictions obtained with DeepContrast (see Figure 5 for a qualitative and quantitative comparison).
As introduced above, contrast enhancement can be applied iteratively (Equation 2). Results of performing multiple rounds of enhancement are shown in Figure 3. Visually, the best results were obtained with three rounds of enhancement (\(3\times\)). While contrast readouts using WCI and PCI would still improve with additional iterations, image details would start disappearing (as can be seen in the \(6\times\) column and the line-plots in Figure 3).
Qualitative results of the Double Degradation Experiments introduced in Section 4.5 are shown in Figure 6. Predictions of Model B at iteration \(k\) should correspond, and indeed do correspond well, to predictions of Model A at iteration \(k-1\), since Model B is trained on image pairs that are degraded by one additional application of our forward model (d). We quantify this via the structural similarity index measure (SSIM) [27] in Table 1 and allow for a qualitative comparison between the corresponding columns in Figure 6.
### Contrast Enhancement vs. Segmentation
To better quantify the undesired effect of losing relevant details while simultaneously gaining additional contrast in the processed microscopy data, we introduced a simple threshold-based segmentation task (see Section 4.8). A qualitative as well as quantitative comparison is shown in Figure 5. IoU values initially increase with the number of iterations, but eventually drop when too many image structures are removed. The FCE-Net generally shows lower IoU values, suggesting that DeepContrast is not only leading to more contrasted images, but is at the same time maintaining more image details across iterations. In addition to the IoU quantification, we highlighted lost details on images with yellow arrow heads (see Figure 5), pointing at differences between iterative inferences.
### Ablation: Training including Bias
As introduced above, DeepContrast employs bias-free [15] network training. In Figure 7 we show representative predictions of Model A (\(3\times\)), as used in Figure 2, and compare them to predictions obtained with an equivalent model which was trained with bias (+bias). Yellow arrow heads in the figure point at locations where the bias free network does a better job retaining image details. Empirically, we did not spot any cases where the opposite is true, which gave us additional motivation to use bias-free networks in DeepContrast.
Figure 7: Qualitative results of networks trained without and with bias. Phalloidin stained images of liver tissue sections enhanced \(3\times\) with DeepContrast models trained with (+bias; right side) and without bias (-bias; left side). Rows show image planes at different depths relative to cover-glass. Network model trained with bias performs worse when applied OOD, removing structures seen in a model trained without bias (- bias), highlighted by yellow arrow-heads. _Scale bars_: 20 \(\mu m\) in full size images, 10 \(\mu m\) in insets.
## 6 Discussion and Conclusion
In this work we propose to use an image degradation function to approximate light scattering in deep tissue imaging and use it to generate synthetically degraded data to enable supervised network training. Our results show that the relatively simple degradation model we introduced is sufficient to increase image contrast in real microscopy data. Our method can be applied in an iterative manner to further increase image contrast and will retain detailed image structures for more iterations than the competitive baseline methods we compared against.
For the liver data at hand, we have found that the best number of iterations for contrast enhancement is three (\(3\times\)). This assessment is based on a combination of contrast enhancement and retention of fine image details in the contrast enhanced predictions. A more quantitative approach to the visual assessment was introduced by means of a downstream segmentation task, which has indeed confirmed our initial findings.
In general, the best trade-off between contrast enhancement and structural integrity of predictions depends on the nature of the downstream processing task to be conducted. Hence, an analysis similar to the one we performed for the segmentation task could be required to evaluate the best-performing setup.
Similarly, we found that bias-free networks seem to lead to better results when trained models are applied OOD.
While our approach is leading to excellent results and can easily be used by microscopists and life scientists to improve volumetric image data for quantitative downstream processing, additional research will be required to undo image degradations deep in imaged tissues in more fundamental ways.
## Acknowledgments
The authors thank Jose Valenzuela-Iturra for acquiring image data. We thank the LMF and SCF facilities at MPI-CBG for technical support. We thank Igor Zubarev for help and feedback with the presented experimental setups. We thank Ashesh, Anirban Ray, Sheida Rahnamai Kordasiabi, Igor Zubarev, Joran Deschamps and Damian Dalle Nogre for helpful discussion and feedback. This work was supported by the European Research Council ERC Advanced Rulliver Grant (no. 695646) to M. Zerial. Additionally, this work was supported by the European Commission through the Horizon Europe program AI4LIFE with grant agreement 101057970-AI4LIFE. Funding was also provided from the Max-Planck Society under project code M.IF.A.MOZG8106.
|
2303.15720
|
Multi-Behavior Recommendation with Cascading Graph Convolution Networks
|
Multi-behavior recommendation, which exploits auxiliary behaviors (e.g.,
click and cart) to help predict users' potential interactions on the target
behavior (e.g., buy), is regarded as an effective way to alleviate the data
sparsity or cold-start issues in recommendation. Multi-behaviors are often
taken in certain orders in real-world applications (e.g., click>cart>buy). In a
behavior chain, a latter behavior usually exhibits a stronger signal of user
preference than the former one does. Most existing multi-behavior models fail
to capture such dependencies in a behavior chain for embedding learning. In
this work, we propose a novel multi-behavior recommendation model with
cascading graph convolution networks (named MB-CGCN). In MB-CGCN, the
embeddings learned from one behavior are used as the input features for the
next behavior's embedding learning after a feature transformation operation. In
this way, our model explicitly utilizes the behavior dependencies in embedding
learning. Experiments on two benchmark datasets demonstrate the effectiveness
of our model on exploiting multi-behavior data. It outperforms the best
baseline by 33.7% and 35.9% on average over the two datasets in terms of
Recall@10 and NDCG@10, respectively.
|
Zhiyong Cheng, Sai Han, Fan Liu, Lei Zhu, Zan Gao, Yuxin Peng
|
2023-03-28T04:07:59Z
|
http://arxiv.org/abs/2303.15720v1
|
# Multi-Behavior Recommendation with Cascading Graph Convolution Networks
###### Abstract.
Multi-behavior recommendation, which exploits auxiliary behaviors (e.g., _click_ and _cart_) to help predict users' potential interactions on the target behavior (e.g., _buy_), is regarded as an effective way to alleviate the data sparsity or cold-start issues in recommendation. Multi-behaviors are often taken in certain orders in real-world applications (e.g., _click-cart-buy_). In a behavior chain, a latter behavior usually exhibits a stronger signal of user preference than the former one does. Most existing multi-behavior models fail to capture such dependencies in a behavior chain for embedding learning. In this work, we propose a novel multi-behavior recommendation model with cascading graph convolution networks (named MB-CGCN). In MB-CGCN, the embeddings learned from one behavior are used as the input features for the next behavior's embedding learning after a feature transformation operation. In this way, our model explicitly utilizes the behavior dependencies in embedding learning. Experiments on two benchmark datasets demonstrate the effectiveness of our model on exploiting multi-behavior data. It outperforms the best baseline by 33.7% and 35.9% on average over the two datasets in terms of Recall@10 and NDCG@10, respectively.
Collaborative filtering, GCN, multi-behavior recommendation

Proceedings of the ACM Web Conference 2023 (WWW '23), May 1–5, 2023, Austin, TX, USA
Deep learning techniques have also been applied to this task. For example, DIPN (DipN, 2017) uses a hierarchical attention network to capture the inter- and intra-behavior relations; MATN (Shi et al., 2018) and KHGT (Shi et al., 2019) apply the transformer network to model multi-behavior relations and learn user/item embeddings. The GCN-based models often construct a unified graph based on all types of interaction behaviors, and then perform GCN over the graph to learn the user and item embeddings with various strategies and techniques (Beng et al., 2017; Liu et al., 2017; Liu et al., 2018; Liu et al., 2019; Liu et al., 2019).
Despite the progress, those methods fail to exploit the behavior dependencies in a chain to directly facilitate the embedding learning in behaviors. In real-world applications, users often take behaviors in a certain order to discover more about an item to help make the final decision, such as _click→cart→purchase_1. Different behaviors reveal user preference to different extents or from different perspectives (Liu et al., 2019; Liu et al., 2019). In a behavior chain, a latter behavior exhibits a stronger signal of user preference on the item than the former one does (Liu et al., 2019). Therefore, the preference information learned from a previous behavior can be used to facilitate the embedding learning in the next one in the behavior chain. Some methods also consider the dependencies or relations among multi-behaviors (DipN, 2017; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019); however, they learn the dependencies for weighing the contributions of other behaviors to the target behaviors in embedding aggregation. In addition, most existing methods treat each type of auxiliary behavior in the same way and have not considered the behavior order in modeling. Only a few methods consider the effects of the behavior chain (Liu et al., 2019; Liu et al., 2019). NMTR (Liu et al., 2019) models the cascading effects of multi-behaviors by injecting the prediction scores of a previous behavior into the next behavior's score prediction. CRGCN (Liu et al., 2019) designs a cascading residual graph convolutional network to explicitly utilize the cascading behaviors in the embedding learning. Similar to many other multi-behavior recommendation models (Beng et al., 2017; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019), CRGCN also adopts multi-task learning in model training. Multi-task learning can acquire supervision signals from each type of behavior data for the embedding learning, but the learning process is not fully oriented to the target behavior prediction.
Footnote 1: In real scenarios, users may jump one or two behaviors directly to the final behavior.
In this paper, we propose a novel multi-behavior recommendation model with cascading graph convolution networks (named MB-CGCN). Specifically, our model consists of a sequence of GCN blocks, each of which corresponds to a behavior in the behavior chain. LightGCN (DipN, 2017) is adopted in the blocks, for its efficiency and efficacy, to learn user and item embeddings from each behavior. The embeddings of users and items learned from a behavior are used as the input features to the next behavior's embedding learning. Considering that directly using the output embeddings as the input embeddings of the next GCN may inject noise or misleading information to misguide the learning process, a feature transformation is designed to process the embeddings before the delivery. In this way, our model explicitly utilizes the behavior dependencies in a chain to directly facilitate the embedding learning in latter behaviors. Finally, the embeddings learned from different behaviors are aggregated for the final behavior prediction. Different from previous models, MB-CGCN does not adopt multi-task learning in optimization. It only uses the target behavior as supervision signals, making the embedding learning in all behaviors oriented to the ultimate goal. Extensive experiments have been conducted on two benchmark datasets to evaluate the effectiveness of MB-CGCN. With this simple structure, MB-CGCN gains a remarkable improvement over the state-of-the-art multi-behavior recommendation methods, achieving 33.7% and 35.9% relative gains on average over the two datasets in terms of Recall@10 and NDCG@10, respectively. Further empirical studies are also performed to examine the validity of different designs in our model and analyze the effects of different multi-behavior numbers and orders. The main contributions are summarized as follows:
* We highlight the importance of explicitly exploiting the behavior dependencies in a behavior chain in multi-behavior recommendation and propose to distill the behavioral features of a former behavior to facilitate the embedding learning in the latter one.
* We propose a novel multi-behavior recommendation model MB-CGCN with a simple structure. It mainly consists of a sequence of GCN blocks, in which the embeddings learned in a previous GCN are processed by a feature transformation operation and then used as the input to the next one.
* We comprehensively evaluate the effectiveness of MB-CGCN on two real-world datasets. Experiment results show that our model can significantly improve recommendation performance by a large margin. We release our code for reproducibility2. Footnote 2: [https://github.com/SS-00-SS/MIBCGCN](https://github.com/SS-00-SS/MIBCGCN)
## 2. Related Work
Multi-behavior recommendation refers to the exploitation of multiple behaviors in user-item interactions for recommendation (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019). Owing to its effectiveness in alleviating the data sparsity issue and enhancing recommendation performance, it has drawn an increasing attention in recent years (Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019; Liu et al., 2019).
The early multi-behavior recommendation methods were mainly developed based on traditional recommendation techniques. A direct approach is to extend the traditional matrix factorization (MF) technique, operating on a single matrix, to multiple matrices (Liu et al., 2019; Liu et al., 2019). For example, Ajit et al. (Liu et al., 2019) directly extended the matrix factorization model to factorize multiple matrices simultaneously with sharing item-side embeddings. This model was further extended to perform matrix factorization of multiple behaviors by sharing user or item embeddings (Liu et al., 2019; Liu et al., 2019). Another line of research treated the multiple behaviors as auxiliary behavioral data and designed new sampling strategies to enrich the training samples. Loni et al. (Loni et al., 2019) proposed to assign different preference levels to multiple behaviors and extended the standard BPR (Liu et al., 2019) with a new negative sampling strategy for negative item sampling from different behaviors. Ding et al. (Ding et al., 2019) further extended this idea by designing an improved negative sampler to better exploit the multiple behavioral data. Guo et al. (Guo et al., 2019) utilized the item-item similarity to generate samples from multiple auxiliary behaviors. Qiu et al. (Qiu et al., 2019) designed an adaptive sampler in BPR to balance the correlation among behaviors.
With the great success of deep learning techniques in recommendation, researchers have also attempted to develop deep neural network (DNN) or graph convolutional network (GCN) based multi-behavior recommendation models in recent years. In DNN models, a common approach is to first learn user and item embeddings from each behavior via the designed network, and then aggregate
the embeddings learned from different behaviors for the target behavior prediction (Wang et al., 2017; Wang et al., 2018; Wang et al., 2019). The differences lie in the designed networks and the attention mechanisms used. For example, DIPN (Wang et al., 2017) applies a hierarchical attention network to model the relationships between different behaviors in embedding learning aggregation. MATN (Wang et al., 2018) uses a transformer-based network to encode the multi-behavior relations and a memory attention network to generate the embeddings for each behavior, which are then aggregated for target behavior prediction. Besides the user-item interactions, KHGT (Wang et al., 2019) also considers the item-item relations and adopts a transformer network to learn user/item embeddings, which are aggregated with behavior-specific contributions to make the final prediction. Different from those methods that aggregate the embeddings from different behaviors for target behavior prediction, NMTR (Wang et al., 2019) adopts a multi-task learning framework to leverage all behaviors of users as prediction targets, and reuses the prediction score of the previous behavior in the score prediction of the next behavior.
For GCN models, a general paradigm is to construct a unified user-item graph based on all behaviors, and then perform GCN operations on the graph to learn user embeddings (Golovolovolov et al., 2013; Kipf and Welling, 2014; Kipf and Welling, 2015; Kipf and Welling, 2015). GHCF (Golovolovolov et al., 2013) learns the embeddings over the graph via GCN and then simply uses the aggregated representations to predict each behavior individually. MBGCN (Golovolovolov et al., 2013) learns the behavior contributions over the unified user-item graph and models behavior semantics over item-item co-interacted graphs. The final prediction is an aggregation of the predicted scores obtained by behavior contributions and behavior semantics. MGNN (Wang et al., 2019) leverages the multiplex (multi-layer) network to learn both shared user and item embeddings and unique embeddings of each behavior. MBGMN (Wang et al., 2019) uses a meta graph neural network to model the diverse multi-behavior patterns and capture the behavior heterogeneity and diversity in the unified graph. Meng et al. (Meng et al., 2019) considered the diversified interests of users behind multiple behaviors and proposed a knowledge-enhanced multi-interest learning method (CKML) to learn users' multiple interests by assigning each behavior among interests. HMGGR (Yang et al., 2019) conducts graph contrastive learning among the constructed hyper meta-graphs to adaptively learn the complex dependencies among different behaviors for embedding learning.
Besides chainRec (Wang et al., 2018) and NMTR (Wang et al., 2019), the previous methods have not considered the order information in the modeling. And none of the aforementioned models have explicitly exploited the dependency relations of multi-behaviors in the embedding learning. More recently, Yan et al. (Yan et al., 2019) proposed a CRGCN model to address this limitation with a cascading GCN structure trained by multi-task learning. However, due to the use of a residual design, it can only use a single-layer GCN for auxiliary behaviors. In contrast, our model uses a feature transformation to distill features from previous behaviors and can avoid the problem caused by residual connections. Besides, we do not use multi-task learning in optimization.
## 3. Proposed Model
### Problem Formulation
Traditional recommendation methods are generally designed towards a single type of user-item interaction (i.e., a target behavior) that is directly relevant to the platform's profit (e.g., _buy_). They often face serious data sparsity or cold-start issues in real-world applications, because this behavior brings a real financial cost to users. Before taking this behavior, users interact with items in other manners (e.g., _click_ and _cart_) to get more information to help them make decisions. These auxiliary behaviors also contain rich information about user preferences and can be used to alleviate the data sparsity and cold-start issues. In this work, our goal is to design a recommendation model for the target behavior by exploiting the auxiliary behaviors.
Let \(\mathcal{U}\) and \(\mathcal{I}\) be the user set and item set with \(M\) and \(N\) users and items, respectively. We use \(\{Y^{1},Y^{2},\cdots,Y^{B}\}\) to denote the multi-behavior interaction matrices sorted in a behavior order. \(B\) is the number of behavior types. \(Y^{b}\) is the interaction matrix of the \(b\)-th behavior and \(Y^{B}\) is the target behavior. All the interaction matrices are binary, which means that each entry in the matrices has a value 1 or 0, defined as:
\[y^{b}_{u,i}=\begin{cases}1,&\text{If $u$ has interacted with $i$ under behavior $b$;}\\ 0,&\text{otherwise.}\end{cases} \tag{1}\]
The task of multi-behavior recommendation is formulated as:
**Input**: The interaction data of \(B\) types of behaviors \(\{Y^{1},Y^{2},\cdots,Y^{B}\}\), for a user set \(\mathcal{U}\) and an item set \(\mathcal{I}\).
**Output**: A recommendation model to predict the probability that a user \(u\) will interact with an item \(i\) under the \(B\)-th behavior, i.e., target behavior.
### Model Description
#### 3.2.1. Overview
Before delving into the details of our model, we would like to give a first impression of the global view. Motivated by the observations that different behaviors exhibit user preferences from different perspectives (Meng et al., 2019) or to different extents (Wang et al., 2018; Yan et al., 2019), we would like to explicitly utilize the dependencies among behavior chains for embedding learning in different behaviors. Figure 1 shows the overall structure of our MB-CGCN model, which consists of three components: 1) **Embedding initialization**, which initializes user and item embeddings for the subsequent learning based on the behaviors in a defined order. 2) **Cascading GCN blocks**, in which LightGCN is adopted to learn the user and item embeddings for each behavior. More specifically, the embeddings learned from a previous behavior will be delivered to facilitate the next behavior's embedding learning via a feature transformation. 3) **Embedding aggregation**, which aggregates the embeddings learned from each behavior for the target behavior prediction.
#### 3.2.2. Embedding Initialization
Following existing GCN-based recommendation models (Golovolovolov et al., 2013; Kipf and Welling, 2015; Kipf and Welling, 2015; Kipf and Welling, 2015), we initialize the embedding vectors of a user \(u\in\mathcal{U}\) and an item \(i\in\mathcal{I}\) as \(\mathbf{e_{u}^{0}}\in\mathbb{R}^{d}\) and \(\mathbf{e_{i}^{0}}\in\mathbb{R}^{d}\), respectively. \(d\) denotes the embedding size. We use \(\mathbf{P}\in\mathbb{R}^{M\times d}\) and \(\mathbf{Q}\in\mathbb{R}^{N\times d}\) to respectively denote the embedding matrices for users and items. Each user or item is described by a unique ID, which is represented by a one-hot vector. Let \(\mathbf{ID}^{U}\) and \(\mathbf{ID}^{I}\) be the one-hot vector matrices for all users and items; the embeddings of a user \(u_{m}\) and item \(i_{n}\) are initialized as:
\[\mathbf{e}_{u_{m}}^{0}=\mathbf{P}\cdot\mathbf{ID}_{m}^{U},\quad\mathbf{e}_{i_{n}}^{0}=\mathbf{Q}\cdot\mathbf{ID}_{n}^{I}, \tag{2}\]
where \(\mathbf{ID}_{m}^{U}\) and \(\mathbf{ID}_{n}^{I}\) are one-hot vector for user \(u_{m}\) and item \(i_{n}\), respectively. Notice that the initialized embeddings are used as
the input features of users and items in the LightGCN of the first behavior, as shown in Figure 1.
#### 3.2.3. Cascading GCN blocks
The Cascading GCN blocks are the core component of our model. This component mainly consists of a LightGCN chain, in which each LightGCN (Chen et al., 2017) corresponds to one type of behavior and learns the user and item embeddings for this behavior. Along the chain, the embeddings learned from a previous LightGCN are used as the input features of users and items in the next LightGCN after a _feature transformation_ operation. The basic idea is to use the cascading LightGCNs to extract features from different behaviors while exploiting the dependencies in the behavior chain to help learn the latter behaviors' features. In the following, we briefly recap the core idea of LightGCN and introduce the feature transformation in our model.
**LightGCN Brief.** LightGCN is an effective and popular GCN-based model designed for single-behavior recommendation. It removes the transformation matrix and nonlinear activation from the vanilla GCN. This simplification has proven to be efficient and effective in recommendation. In this work, we adopt LightGCN as the backbone model to learn user and item embeddings from each behavior for its efficiency and efficacy. Note that other single-behavior GCN-based recommendation models can also be applied, such as UltraGCN (Hu et al., 2019) and SimGCL (Wang et al., 2019).
The core of GCN-based models is to recursively integrate embedding information from neighboring nodes and update the embeddings of ego nodes. Given the input embeddings of a user \(\mathbf{e}_{u}^{b}\) and an item \(\mathbf{e}_{i}^{b}\) for the \(b\)-th behavior, LightGCN leverages the user-item interaction graph to propagate embeddings as:
\[\mathbf{e}_{u}^{(b,l+1)}=\sum_{i\in\mathcal{N}_{u}}\frac{1}{\sqrt{| \mathcal{N}_{u}|}\sqrt{|\mathcal{N}_{i}|}}\mathbf{e}_{i}^{(b,l)}, \tag{3}\]
\[\mathbf{e}_{i}^{(b,l+1)}=\sum_{u\in\mathcal{N}_{i}}\frac{1}{\sqrt{| \mathcal{N}_{i}|}\sqrt{|\mathcal{N}_{u}|}}\mathbf{e}_{u}^{(b,l)}, \tag{4}\]
where \(\mathbf{e}_{u}^{(b,l)}\) and \(\mathbf{e}_{i}^{(b,l)}\) respectively denote the updated embeddings of user \(u\) and item \(i\) under behavior \(b\) after \(l\) layers' propagation. \(\mathcal{N}_{u}\) denotes the set of items that are interacted by user \(u\), and \(\mathcal{N}_{i}\) denotes the set of users that interact with item \(i\). After \(L\) layers' propagation, LightGCN obtains \(L+1\) embeddings to describe a user \(\{\mathbf{e}_{u}^{(b,0)},\mathbf{e}_{u}^{(b,1)},\cdots,\mathbf{e}_{u}^{(b,L)}\}\) and an item \(\{\mathbf{e}_{i}^{(b,0)},\mathbf{e}_{i}^{(b,1)},\cdots,\mathbf{e}_{i}^{(b,L)}\}\). To obtain the final user and item embeddings based on the \(b\)-th behavior, we simply aggregate these embeddings as follows:
\[\mathbf{e}_{u}^{(b)}=\sum_{l=0}^{L}\mathbf{e}_{u}^{(b,l)},\quad\mathbf{e}_{i}^{(b)}=\sum_{ l=0}^{L}\mathbf{e}_{i}^{(b,l)}. \tag{5}\]
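For illustration, a compact NumPy sketch of Eqs. (3)-(5) on a dense interaction matrix; dense algebra is used only for clarity, whereas real implementations operate on sparse adjacency matrices:

```python
import numpy as np

def lightgcn(Y, e_u, e_i, n_layers=3):
    """Parameter-free propagation of Eqs. (3)-(4) on the user-item
    interaction matrix Y (M x N), followed by the layer sum of Eq. (5).
    e_u: (M, d) user embeddings, e_i: (N, d) item embeddings."""
    d_u = np.maximum(Y.sum(axis=1, keepdims=True), 1.0)   # |N_u|
    d_i = np.maximum(Y.sum(axis=0, keepdims=True), 1.0)   # |N_i|
    A = Y / (np.sqrt(d_u) * np.sqrt(d_i))                 # normalized graph
    out_u, out_i = e_u.copy(), e_i.copy()                 # layer-0 terms
    for _ in range(n_layers):
        e_u, e_i = A @ e_i, A.T @ e_u                     # one propagation step
        out_u, out_i = out_u + e_u, out_i + e_i           # accumulate layers
    return out_u, out_i
```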
The embeddings learned from the \(b\)-th behavior are then fed into the LightGCN of the next behavior as input embeddings, after a _feature transformation_ operation.
**Feature Transformation.** All types of interaction behaviors reveal users' preferences to some extent. In a behavior chain, a latter behavior often exhibits a stronger or more accurate signal of user preference than a former one does (Kang et al., 2019). Therefore, the embeddings learned from a former behavior can be used as good initializations for the next behavior's embedding learning, which is the underlying intuition of our cascading GCN structure. However, the direct use of a former behavior's features as initialized embeddings can be regarded as a refinement of the embeddings by using latter behaviors, which may lose the diverse information conveyed by different behaviors. On the other hand, noisy information in a former behavior may seriously impair the learning process of latter behaviors. With this consideration, we introduce a feature transformation in MB-CGCN to process the learned embeddings before delivery. Let \(\mathbf{W}^{b}\) be the transformation matrix from the \(b\)-th behavior to the \((b+1)\)-th behavior; the transformation is performed as:
\[\mathbf{e}_{u}^{(b+1,0)}=\mathbf{W}_{u}^{b}\mathbf{e}_{u}^{(b)},\quad\mathbf{e}_{i}^{(b+1,0)}= \mathbf{W}_{i}^{b}\mathbf{e}_{i}^{(b)}, \tag{6}\]
where \(\mathbf{W}_{u}^{b}\) and \(\mathbf{W}_{i}^{b}\) respectively denote the transformation matrices for users and items. \(\mathbf{e}_{u}^{(b+1,0)}\) and \(\mathbf{e}_{i}^{(b+1,0)}\) denote the initial embeddings of user \(u\) and item \(i\) in the \((b+1)\)-th behavior. Despite its simplicity, the feature transformation can effectively distill useful features to facilitate the next behavior's embedding learning, as demonstrated in our experiments.
Figure 1. Overview of our MB-CGCN model.
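A possible sketch of the whole cascading chain with the transformation of Eq. (6), reusing `lightgcn_propagate` from the sketch above. In the model the matrices \(\mathbf{W}^{b}\) are trained end-to-end; they are randomly initialized here only to keep the illustration self-contained.

```python
import numpy as np

def cascade_behaviors(behavior_graphs, user_emb, item_emb, num_layers=3, seed=0):
    """Cascading GCN chain with feature transformation (Eq. 6), as a sketch.

    behavior_graphs: list of (n_users, n_items) matrices ordered along the
    behavior chain (e.g., view -> cart -> buy).
    Returns the per-behavior embeddings e_u^(b), e_i^(b) for all b.
    """
    rng = np.random.default_rng(seed)
    dim = user_emb.shape[1]
    u_all, i_all = [], []
    for b, R in enumerate(behavior_graphs):
        user_emb, item_emb = lightgcn_propagate(R, user_emb, item_emb, num_layers)
        u_all.append(user_emb)
        i_all.append(item_emb)
        if b < len(behavior_graphs) - 1:
            # Eq. (6): transform before delivery to the next behavior's LightGCN.
            W_u = rng.normal(scale=dim ** -0.5, size=(dim, dim))  # learnable in practice
            W_i = rng.normal(scale=dim ** -0.5, size=(dim, dim))
            user_emb, item_emb = user_emb @ W_u, item_emb @ W_i
    return u_all, i_all
```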
#### 3.2.4. Embedding Aggregation
To fully exploit the different behaviors, we aggregate the embeddings learned from all behaviors for prediction. The main focus of this work is to study the potential of exploiting the dependency structure of multi-behaviors in a certain order for recommendation. To keep the structure simple, we use a linear combination to aggregate the features learned from different behaviors:
\[\mathbf{e_{u}}=\sum_{b=1}^{B}\mathbf{e_{u}^{(b)}},\quad\mathbf{e_{i}}=\sum_{b=1}^{B}\mathbf{e_{i }^{(b)}}. \tag{7}\]
Finally, the model prediction is defined as an inner product of the user and item embeddings:
\[\hat{y}_{ui}=\mathbf{e_{u}^{T}}\mathbf{e_{i}}, \tag{8}\]
which is used as a ranking score for the target behavior recommendation.
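In code, Eqs. (7)-(8) reduce to two sums and one matrix product; continuing the hypothetical NumPy sketch:

```python
def predict_scores(u_all, i_all):
    """Eq. (7): sum the per-behavior embeddings; Eq. (8): score by inner product."""
    e_u = sum(u_all)    # (n_users, dim)
    e_i = sum(i_all)    # (n_items, dim)
    return e_u @ e_i.T  # y_hat[u, i], used as the ranking score
```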
### Model Training
Similar to other ranking-oriented recommendation models, the pairwise learning strategy is adopted for model optimization (Kipf and Welling, 2015; He et al., 2017; He et al., 2018). In implementation, we use the standard BPR loss (He et al., 2017), which assumes that an observed item should score higher than an unobserved one. The objective function is formulated as:
\[\mathcal{L}=\sum_{(u,i,j)\in O}-\ln\sigma(\hat{y}_{ui}-\hat{y}_{uj})+\lambda\left\|\Theta\right\|^{2}, \tag{9}\]
where \(O=\{(u,i,j)|(u,i)\in\mathcal{R}^{+},(u,j)\in\mathcal{R}^{-}\}\) is the set of positive and negative sample pairs, and \(\mathcal{R}^{+}\) (\(\mathcal{R}^{-}\)) denotes the interactions observed (unobserved) in the target behavior. \(\sigma(\cdot)\) denotes the sigmoid function. \(\Theta\) denotes all trainable parameters. \(L_{2}\) regularization is adopted to prevent over-fitting, and \(\lambda\) is a coefficient controlling its strength.
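A minimal sketch of Eq. (9), assuming one sampled negative item per observed pair; negative sampling and the optimizer loop are omitted.

```python
import numpy as np

def bpr_loss(scores, pos_pairs, neg_items, params=(), lam=1e-4):
    """BPR objective of Eq. (9) on a batch of (u, i, j) triples.

    scores: (n_users, n_items) prediction matrix y_hat.
    pos_pairs: int array of shape (batch, 2) with observed (u, i) pairs.
    neg_items: int array of shape (batch,) with sampled unobserved items j.
    """
    u, i = pos_pairs[:, 0], pos_pairs[:, 1]
    diff = scores[u, i] - scores[u, neg_items]
    loss = -np.log(1.0 / (1.0 + np.exp(-diff))).sum()  # -ln sigmoid(y_ui - y_uj)
    loss += lam * sum(float((p ** 2).sum()) for p in params)  # L2 on parameters
    return loss
```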
### Discussion
We notice that CRGCN (Zhou et al., 2017) also exploits a cascading GCN structure to exploit the behavior dependencies in the embedding learning of behaviors. In our view, MB-CGCN is fundamentally different from CRGCN in _how the embeddings are delivered from one behavior to the next_. CRGCN delicately designs a residual connection to preserve previous behavioral features as the initialized embeddings of the next behavior's network. In this way, it learns user and item embeddings by gradually refining them through all the behaviors in the chain. The embeddings learned from the last behavior are directly used for prediction. Consequently, the quality of the embeddings learned from earlier behaviors (e.g., _click_) exerts a great impact on the final performance. Because the earlier behaviors are not deterministic and often noisy, using higher-order propagation for embedding learning in such behaviors inevitably brings more noise into the embeddings. This also explains why CRGCN uses only one-layer propagation in auxiliary behaviors. In contrast, we adopt a feature transformation to distill useful information for the next behavior's embedding learning. This helps our model avoid the above problems, and thus it can enjoy the benefits brought by high-order propagation.
## 4. Experiment
### Experiment Settings
#### 4.1.1. Dataset
Two datasets are adopted for evaluation:
* **Beibei**: This dataset was collected from Beibei3, which is the largest e-commerce platform for baby products in China. It contains 21,716 users and 7,977 items with three types of user-item behaviors, including _view_, _adding-to-cart_ (or _cart_ for short), and _buy_. Notice that on the Beibei platform, users have to follow a strict order to make a purchase, which is _view-cart-buy_.
Footnote 3: [https://www.beibei.com/](https://www.beibei.com/)
* **Tmall**: This dataset was collected from Tmall4, one of the largest e-commerce platforms in China. It contains 15,449 users and 11,953 items. We use the same three types of behaviors in this dataset as in Beibei for the experiments.
Footnote 4: [https://www.tmall.com/](https://www.tmall.com/)
For both datasets, we followed the previous studies to remove the duplicates by keeping the earliest one (Kipf and Welling, 2015; He et al., 2017). The statistical information of the two datasets used in our experiments is summarized in Table 1.
#### 4.1.2. Evaluation Protocols
We adopt the widely used leave-one-out strategy for evaluation (Kipf and Welling, 2015; He et al., 2017; He et al., 2018). For each user, the last interacted item and all the items she has not interacted with comprise the test set; the second-to-last interacted item of each user is used to construct the validation set for hyper-parameter tuning; the remaining positive items are used in training. In the evaluation stage, all the items in the test set are ranked according to the scores predicted by the recommendation models. In our experiments, two standard metrics for top-n recommendation, _Recall@K_ and _NDCG@K_, are used to measure performance.
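With a single held-out item per user, Recall@K reduces to a hit indicator and NDCG@K to \(1/\log_{2}(\mathrm{rank}+2)\) for a hit at the given 0-based rank. A sketch of how both metrics can be computed (our own illustration, not the authors' evaluation code):

```python
import numpy as np

def recall_ndcg_at_k(scores, test_items, train_mask, k=20):
    """Leave-one-out Recall@K and NDCG@K with one held-out item per user.

    scores: (n_users, n_items) predictions; train_mask: True for seen items.
    test_items: test_items[u] is the single held-out item of user u.
    """
    scores = np.where(train_mask, -np.inf, scores)  # never rank training items
    topk = np.argsort(-scores, axis=1)[:, :k]
    hits = topk == test_items[:, None]              # (n_users, k) booleans
    hit_any = hits.any(axis=1)
    recall = hit_any.mean()
    ranks = hits.argmax(axis=1)                     # 0-based rank of the hit
    ndcg = np.where(hit_any, 1.0 / np.log2(ranks + 2), 0.0).mean()
    return recall, ndcg
```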
#### 4.1.3. Baselines
To demonstrate the effectiveness of our MB-CGCN, we compare it with several state-of-the-art methods, which can be classified into two categories: single-behavior models and multi-behavior models.
**Single-behavior Models:**
* **MF-BPR**(He et al., 2017): This method has shown competitive performance on the top-n recommendation task, and is commonly used as a baseline to evaluate the efficacy of new models. BPR is a widely used optimization strategy with the assumption that the positive items should score higher than negative ones.
* **NeuMF**(He et al., 2017): It is a representative neural CF model, which uses GMF and MLP simultaneously to capture the non-linear interactions between users and items.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline
Dataset & User\# & Item\# & Buy\# & Cart\# & View\# \\ \hline
Beibei & 21,716 & 7,997 & 304,576 & 642,622 & 2,412,586 \\
Tmall & 15,449 & 11,953 & 104,329 & 195,476 & 873,954 \\ \hline \hline
\end{tabular}
\end{table}
Table 1. Statistics of the datasets used in our experiments.
* **LightGCN(Liu et al., 2019)**: This model is a state-of-the-art GCN-based recommendation model, which exploits the high-order connectivities in the user-item bipartite graph for recommendation. In particular, it removes the feature transformation and non-linear activation components of the vanilla GCN to simplify the model structure, achieving a significant performance improvement.
**Multi-behavior Models:**
* **NMTR(Liu et al., 2019)**: This model develops a cascading neural network method to model the multi-behavior data for recommendation. It sequentially passes the interaction score of the current behavior to the next and uses multi-task learning for joint optimization.
* **RGCN(Liu et al., 2019)**: This model differentiates the relations between nodes via edge types in the graph and designs different propagation layers for different relations. This model can be adapted to multi-behavior recommendation.
* **GNMR(Liu et al., 2019)**: This GNN-based approach attempts to explore the dependencies among multi-behaviors via recursive embedding propagation on the unified multi-behavior interaction graph. It designs a relation aggregation network to model the interaction heterogeneity.
* **MBGCN(Liu et al., 2019)**: It is a state-of-the-art GCN-based multi-behavior recommendation model. This method considers the different contributions of multi-behaviors to the target behavior. It learns the behavior contributions by using GCN on the unified multi-behavior graph and exploits the item-item graph to capture the behavior semantics.
* **CRGCN(Liu et al., 2019)**: This is the most recently proposed model. It adopts a cascading GCN structure to model multi-behavior data. The behavioral features learned from one behavior are delivered to the next behavior with a residual design. This method also adopts multi-task learning in optimization.
#### 4.1.4. Hyper-parameter Settings
In implementation, we adopt the _Adam_ optimizer for optimization. The embedding size and the batch size of all adopted methods are set to 64 and 1024, respectively. The learning rate is tuned in the range of {1e-2, 1e-3, 1e-4}. In addition, we initialize the model parameters (behavior feature transformation matrices) with the Xavier initializer. The early stopping strategy is also adopted. The number of GCN layers for each behavior is tuned in {1, 2, 3, 4}. Unless otherwise specified, for the behavior chain _view-cart-buy_, we use {3, 4, 3} and {3, 4, 2} layers for the three behaviors on Beibei and Tmall, respectively. For the baselines, we mainly use their official open-source code and carefully tune the parameters to achieve their best performance for fair comparisons.
### Overall Performance
In this section, we report the performance comparison between our MB-CGCN and all the baselines. The results on the two datasets are shown in Table 2. The best results are highlighted in bold and the second-best results are underlined. From the results, we can see that the multi-behavior models achieve better performance than the single-behavior models, demonstrating the benefits of leveraging auxiliary behaviors (i.e., view and cart) for target behavior (i.e., buy) prediction. Our MB-CGCN model achieves the best performance, outperforming all baselines significantly in terms of both metrics over the two datasets. The improvement over the best baseline reaches 9.1% and 16.1% on Beibei, and 46.6% and 30.2% on Tmall, in terms of Recall@20 and NDCG@20, respectively, with gains of similar magnitude across the other cutoffs \(K=\{10,20,50\}\). It is a remarkable improvement in recommendation accuracy, especially considering that the best baseline CRGCN has already made a big improvement over the second-best baseline, which strongly demonstrates the effectiveness of our MB-CGCN model.
For the single-behavior models, NeuMF outperforms MF-BPR in most cases. MF-BPR uses the inner product as the interaction function, which is incapable of modeling the complex relations between users and items. In contrast, NeuMF adopts multiple layers of neural networks to model the non-linear interactions, yielding a better performance. LightGCN achieves consistently better performance than MF-BPR and NeuMF. This demonstrates the advantage of GCN models in exploiting high-order neighbors' information over the user-item bipartite graph to learn user and item embeddings for recommendation.
\begin{table}
\begin{tabular}{c|c|c c c|c c c c c c|c} \hline \hline
\multirow{2}{*}{Dataset} & \multirow{2}{*}{Metric} & \multicolumn{3}{c|}{**Single-behavior Methods**} & \multicolumn{6}{c|}{**Multi-behavior Methods**} & \multirow{2}{*}{Improv.} \\ \cline{3-11}
 & & **MF-BPR** & **NeuMF** & **LightGCN** & **RGCN** & **GNMR** & **NMTR** & **MBGCN** & **CRGCN** & **MB-CGCN** & \\ \hline
\multirow{6}{*}{Beibei} & Recall@10 & 0.0191 & 0.0232 & 0.0391 & 0.0363 & 0.0413 & 0.0429 & 0.0470 & 0.0459 & **0.0579** & 23.2\% \\
 & NDCG@10 & 0.0049 & 0.0135 & 0.0209 & 0.0188 & 0.0221 & 0.0198 & 0.0259 & 0.0324 & **0.0381** & 17.6\% \\
 & Recall@20 & 0.0531 & 0.0736 & 0.0717 & 0.0684 & 0.0729 & 0.0776 & 0.0792 & 0.0891 & **0.0972** & 9.1\% \\
 & NDCG@20 & 0.0239 & 0.0290 & 0.0270 & 0.0274 & 0.0279 & 0.0296 & 0.0330 & 0.0348 & **0.0404** & 16.1\% \\
 & Recall@50 & 0.1014 & 0.1402 & 0.1347 & 0.1309 & 0.1391 & 0.1453 & 0.1493 & 0.1694 & **0.1924** & 13.6\% \\
 & NDCG@50 & 0.0330 & 0.0405 & 0.0366 & 0.0371 & 0.0374 & 0.0399 & 0.0447 & 0.0487 & **0.0572** & 17.5\% \\ \hline
\multirow{6}{*}{Tmall} & Recall@10 & 0.0076 & 0.0236 & 0.0411 & 0.0215 & 0.0368 & 0.0282 & 0.0509 & 0.0855 & **0.1233** & 44.2\% \\
 & NDCG@10 & 0.0036 & 0.0128 & 0.0240 & 0.0104 & 0.0216 & 0.0137 & 0.0294 & 0.0439 & **0.0677** & 54.2\% \\
 & Recall@20 & 0.0244 & 0.0311 & 0.0546 & 0.0326 & 0.0608 & 0.0642 & 0.0691 & 0.1369 & **0.2007** & 46.6\% \\
 & NDCG@20 & 0.0155 & 0.0152 & 0.0266 & 0.0125 & 0.0263 & 0.0303 & 0.0350 & 0.0676 & **0.0880** & 30.2\% \\
 & Recall@50 & 0.0393 & 0.0494 & 0.0874 & 0.0411 & 0.0971 & 0.1034 & 0.1117 & 0.2325 & **0.3322** & 42.9\% \\
 & NDCG@50 & 0.0197 & 0.0193 & 0.0338 & 0.0160 & 0.0336 & 0.0383 & 0.0455 & 0.0866 & **0.1134** & 30.9\% \\ \hline \hline
\end{tabular}
\end{table}
Table 2. Overall performance comparison. Improv. denotes the relative improvement over the best baseline.
For the multi-behavior recommendation models, RGCN models different behaviors individually and then aggregates the embeddings learned from each behavior for prediction without distinguishing their contributions to the target behavior. It does not perform well among the multi-behavior models. Both GNMR and MBGCN differentiate the behavior contributions before fusion, and they achieve a better performance than RGCN. Compared to GNMR, MBGCN additionally exploits the item-item relations to capture the behavior semantics and gains a further improvement over GNMR. NMTR considers the cascading effect of multi-behaviors in the model structure. It attempts to inject this effect into the embedding learning process by passing the prediction scores of a previous behavior to the next one. It surpasses the GNMR model but does not perform as well as MBGCN. The better performance of MBGCN is attributed to its use of GCN and its additional consideration of item-item relations in the modeling. CRGCN moves a step further than NMTR by directly and explicitly taking the cascading effect of multi-behaviors into the embedding learning process. This is achieved by passing the embeddings learned from the previous behavior to the next one for further refinement. In this way, the embedding learning process of CRGCN is actually a refinement of the embeddings through the behavior chain. CRGCN outperforms all other baselines by a large margin, especially on the Tmall dataset.
MB-CGCN adopts a similar cascading GCN structure to CRGCN, and thus also enjoys the merits of explicitly exploiting the cascading effects in embedding learning. Instead of adopting the residual design of CRGCN to preserve behavior features for delivery, MB-CGCN adopts a feature transformation operation between two GCN blocks to distill effective features from a previous behavior for the next one. In addition, MB-CGCN does not use multi-task learning in optimization and only employs the signals of the target behavior to guide the learning process. The big performance improvement of MB-CGCN over CRGCN demonstrates the effectiveness of our design.
### Ablation Study
#### 4.3.1. Effect of feature transformation
In order to evaluate the validity of the feature transformation in our model, we conducted an ablation study comparing our model with and without the feature transformation module (denoted as **w. ft** and **w/o. ft** in Table 3). The default order of behaviors used in our experiments is _view-cart-buy_5, which is also the required order on Beibei. After removing the feature transformation from MB-CGCN, the embeddings learned from the first behavior (i.e., _view_) are directly used as the initialized embeddings in the next behavior (i.e., _cart_) for embedding learning, and likewise for the embedding learning of the last behavior (i.e., _buy_).
Footnote 5: There could be other orders, such as _view-buy_, _cart-buy_, and _cart-view-buy_, which are studied in Section 4.4.2.
Experimental results are shown in Table 3. With the feature transformation, MB-CGCN gains a relative improvement of 9.0% and 5.8% on Beibei, and 0.7% and 6.7% on Tmall, for Recall@20 and NDCG@20, respectively. A latter behavior in the behavior chain usually shows a stronger signal of user preferences on items. The results demonstrate the effectiveness of the feature transformation scheme in distilling useful information from an earlier behavior to help learn user and item embeddings in a latter behavior.
#### 4.3.2. Effects of feature aggregation
In MB-CGCN, we linearly aggregate the embeddings learned from all behaviors for users and items for the target behavior prediction. To evaluate the utility of this feature aggregation, we compare our model with two variants in experiments:
* **w/o. agg.**: This variant removes the feature aggregation module in MB-CGCN. It means the embeddings learned from the last GCN block are directly used for the target prediction.
* **w. concat.**: This variant replaces the aggregation with a concatenation operation. Specifically, the user and item embeddings learned from each behavior are concatenated together for the target prediction.
From the results shown in Table 4, it is clear that it is necessary to consider the embeddings learned from all behaviors. As discussed, with the feature transformation, some features learned from auxiliary behaviors will be filtered out when delivering the embeddings from the first behavior to the target behavior, and it also encourages the model to learn different features from each behavior. Therefore, to well exploit the multi-behaviors, it is important for our model to consider all the behavioral features. For simplicity, we only compare the linear combination (**w. agg.**) with the embedding concatenation (**w. concat.**) in experiments. Empirically, **w. agg.** performs much better than **w. concat.** More sophisticated fusion methods (e.g., an attention network) can also be applied, which will be explored in future studies.
### Impact of multi-behaviors
#### 4.4.1. Behavior number
To study the effects of auxiliary behaviors on the performance of our model, we perform experiments on the two datasets with one behavior (i.e., _buy_), two behaviors
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline
\multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**Beibei**} & \multicolumn{2}{c}{**Tmall**} \\ \cline{2-5}
 & **Recall** & **NDCG** & **Recall** & **NDCG** \\ \hline
**w/o. agg.** & 0.0556 & 0.0140 & 0.0698 & 0.0291 \\
**w. concat.** & 0.0758 & 0.0282 & 0.1648 & 0.0688 \\
**w. agg.** & **0.0972** & **0.0404** & **0.2007** & **0.0880** \\ \hline \hline
\end{tabular}
\end{table}
Table 4. Effects of feature aggregation in MB-CGCN. The reported performance is computed based on the top 20 results.
\begin{table}
\begin{tabular}{c|c c|c c} \hline \hline
\multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**Beibei**} & \multicolumn{2}{c}{**Tmall**} \\ \cline{2-5}
 & **Recall** & **NDCG** & **Recall** & **NDCG** \\ \hline
**w/o. ft** & 0.0892 & 0.0382 & 0.1994 & 0.0825 \\
**w. ft** & **0.0972** & **0.0404** & **0.2007** & **0.0880** \\ \hline \hline
\end{tabular}
\end{table}
Table 3. Effects of the feature transformation in MB-CGCN. The reported performance is computed based on the top 20 results. (**w/o. ft** and **w. ft** denote MB-CGCN without and with the feature transformation, respectively.)
(e.g., _cart-buy_), and three behaviors (e.g., _view-cart-buy_). Experimental results are reported in Table 5. Notice that MB-CGCN with one behavior reduces to LightGCN. Apparently, with more types of behaviors, our model performs better. Interestingly, with the use of the _cart_ data, the performance is improved by a large margin; with the further consideration of the _view_ data, the improvement becomes smaller. The reasons might be twofold. First, the _cart_ data have already enriched the user-item interactions to a large extent, so it becomes harder to improve the performance further. Besides, the _cart_ data can better uncover user preference than the _view_ data. Notice that a _view_ behavior does not necessarily mean that a user is interested in an item.
#### 4.4.2. Behavior order
We study the effects of the behavior order on our model with four behavior orders on the two datasets: O1(_view-buy_), O2(_cart-buy_), O3(_cart-view-buy_), and O4(_view-cart-buy_). The performance of the different behavior orders on the two datasets is shown in Figure 2 and Figure 3. Because O3 and O4 contain the same behaviors in different orders, we first compare their performance. The performance of O4 is better than that of O3 over both datasets, demonstrating the importance of modeling the multi-behaviors in a correct order, which should be consistent with the common behavior order in real-world applications. Though the _view_ interactions are much richer than the _cart_ interactions on both datasets, the performance of O2 is much better than that of O1. This is because the _cart_ behavior reveals more accurate information about user preferences than the _view_ information, as discussed above. It is surprising that our model with O3 performs worse than with O2 on Tmall. Recall that the embeddings learned from a previous behavior directly affect the embedding learning in the next behavior due to the cascading design of our model. A reasonable behavior order, in which a latter behavior reveals user preference more accurately than its previous behavior, lets the embeddings of the target behavior be gradually learned step by step through the behavior chain. With the behavior order of O3, the noisy information in the _view_ data negatively affects the embedding learning of the target behavior, whose features play an important role in the final prediction. This further demonstrates the importance of the behavior order in the cascading modeling of multi-behaviors.
### Effects of layer number
In this study, we tune the number of layers by setting it to be the same for all the behaviors.6 Table 6 reports the performance of our model with one, two, and three layers on both datasets. MB-CGCN performs better on both datasets as the number of layers increases. With more layers, LightGCN can exploit higher-order information in the user-item interaction graph for embedding learning, as demonstrated in previous work (Deng et al., 2018; Chen et al., 2018). The better embeddings learned from each behavior benefit our model in the final prediction.
Footnote 6: Notice that we also performed the study by separately tuning the number of layers for the LightGCN of each behavior, which is not reported here due to the space limitation. We deem that the reported experiments can already provide a global view of the effects of the layer number.
It is worth mentioning that CRGCN, which uses a similar structure to our model, cannot benefit from high-order GCN propagation on the auxiliary behaviors (Zhu et al., 2019). This is because CRGCN uses a residual design to preserve the behavior features in embedding delivery. However, there can be misleading interactions in auxiliary behaviors (such as _view_). For those behaviors, exploiting higher-order neighbors will bring more noise into the model and eventually hurt the performance. In our model, a feature transformation module is used to process the embeddings for delivery. This enables our model to enjoy the benefits of high-order GCN operations.
## 5. Conclusion
In this work, we present a novel multi-behavior recommendation model named MB-CGCN, which adopts cascading GCN blocks to explicitly leverage the multi-behaviors for embedding learning. In this model, the behavior features learned by LightGCN over a previous behavior are delivered to the next behavior in a chain after a feature transformation operation. The embeddings learned from all behaviors are aggregated for the final prediction. Experiments on
\begin{table}
\begin{tabular}{r|c c|c c} \hline \hline
\multirow{2}{*}{**Method**} & \multicolumn{2}{c|}{**Beibei**} & \multicolumn{2}{c}{**Tmall**} \\ \cline{2-5}
 & **Recall** & **NDCG** & **Recall** & **NDCG** \\ \hline
**buy** & 0.0717 & 0.0270 & 0.0546 & 0.0266 \\
**cart-buy** & 0.0930 & 0.0389 & 0.1956 & 0.0851 \\
**view-cart-buy** & **0.0972** & **0.0404** & **0.2007** & **0.0880** \\ \hline \hline
\end{tabular}
\end{table}
Table 5. Effects of behavior number in MB-CGCN. The reported performance is computed based on the top 20 results.
Figure 3. Effects of behavior order on Tmall.
Figure 2. Effects of behavior order on Beibei.
two real-world datasets show that MB-CGCN outperforms the state-of-the-art multi-behavior models with a substantial performance gain. Further ablation studies verify the effectiveness of the feature transformation and embedding aggregation in our model. We also study the impact of the number and order of behaviors on the final performance. For future work, we plan to conduct experiments on online systems with A/B testing to validate the utility of our model in real-world applications.
## Acknowledgments
This work was supported in part by the National Key R&D Program of China under Grant 2021YFF0901502; in part by the National Natural Science Foundation of China under Grants 61902223, 62272254, 62132001, and 61925201; in part by the Shandong Project towards the Integration of Education and Industry under Grants 2022PY009, and 2022PY1001.
|
2310.15618
|
Estimating the dimension of Thurston spine
|
For g at least 2, the Thurston spine Pg is the subspace of Teichmueller space
Tg , consisting of the marked surfaces for which the set of shortest curves,
the systoles, cuts the surface into polygons. Our main result is the existence
of an infinite set A of integers such that codim Pg is o(g/sqrt(log g)), when g
varies over A.
This proves the recent conjecture of M. Fortier Bourque.
|
Olivier Mathieu
|
2023-10-24T08:32:29Z
|
http://arxiv.org/abs/2310.15618v1
|
# Estimating the dimension of the Thurston spine
###### Abstract.
For \(g\geq 2\), the Thurston spine \(\mathcal{P}_{g}\) is the subspace of Teichmuller space \(\mathcal{T}_{g}\), consisting of the marked surfaces for which the set of shortest curves, the systoles, cuts the surface into polygons. Our main result is the existence of an infinite set \(A\) of integers \(g\geq 2\) such that
\[\operatorname{codim}\mathcal{P}_{g}\in o(g/\sqrt{\log g}),\]
when \(g\in A\) goes to \(\infty\). This proves the recent conjecture of M. Fortier Bourque.
###### Contents
* 1 Background and Definitions
* 2 Trigonometry in \(\mathbb{H}\)
* 3 The three-holed sphere \(\Sigma(k,\epsilon)\) and the pair of pants \(\Pi(k,\epsilon)\)
* 4 Trigonometry in the pair of pants \(\Pi(k,\epsilon)\)
* 5 Sanki's paths and curve duality
* 6 Local structure of \(\mathcal{P}_{g}\) along a Sanki's path
* 7 Examples
## Introduction
_0.1 General Introduction_
Let \(g\geq 2\). The Teichmuller space \(\mathcal{T}_{g}\) is the space of all marked closed hyperbolic surfaces of genus \(g\). (Precise definitions used in the introduction can be found in Section 1.) It is a smooth variety homeomorphic to \(\mathbb{R}^{6g-6}\), see e.g. [5, 6], on which the mapping class group \(\Gamma_{g}\) acts properly. By Harer's Theorem [7], \(\Gamma_{g}\) has virtual cohomological dimension \(4g-5\). This leads to the question, raised in [4]: can we find an equivariant deformation retraction of \(\mathcal{T}_{g}\) onto a subcomplex of dimension \(4g-5\), or equivalently, of codimension \(2g-1\)?
In a remarkable note [17], Thurston considered the subspace \(\mathcal{P}_{g}\subset\mathcal{T}_{g}\) consisting of marked surfaces for which the systoles _fill_ the surface, i.e. the systoles cut the surface into polygons. In _loc. cit._, he proved1 that \(\mathcal{P}_{g}\) is an equivariant deformation retract of \(\mathcal{T}_{g}\). Since then, \(\mathcal{P}_{g}\) has been called the Thurston spine. It follows from [17] that \(\dim\mathcal{P}_{g}\geq 4g-5\), or, equivalently, that \(\operatorname{codim}\mathcal{P}_{g}\leq 2g-1\).
Footnote 1: In [11], some doubts have been raised about Thurston’s proof. The results stated below were clearly motivated by his note [17], but their proofs are independent of _loc. cit._. So, we will not discuss here if the main result of [17] is proved or not.
It was shown in Theorem 44 of [16], and verified by a Sage computation in [8], that \(\dim\mathcal{P}_{2}=3\), which is the virtual cohomological dimension of \(\Gamma_{2}\). Therefore, one could have expected that \(\mathcal{P}_{g}\) has codimension \(2g-1\) for all \(g\). In that direction, P. Schmutz Schaller provided examples of surfaces of genus \(g\) which are cut by a minimal set of \(2g\) systoles. (We could expect that \(\mathcal{P}_{g}\) locally has codimension \(2g-1\), as will be explained in Subsection 0.2.) A year ago, the breakthrough paper [3] showed that \(\operatorname{codim}\mathcal{P}_{g}<2g-1\) for infinitely many \(g\). Nevertheless, I. Irmer proved that \(\mathcal{P}_{g}\) admits a \(\Gamma_{g}\)-equivariant deformation retraction onto a subcomplex of the minimal dimension \(4g-5\), see [9].
More precisely, M. Fortier Bourque proved in [3] that
\[\liminf_{g\to\infty}\operatorname{codim}\mathcal{P}_{g}/g\leq 1.\]
Moreover he had conjectured earlier [2] that
\[\liminf_{g\to\infty}\operatorname{codim}\!\mathcal{P}_{g}/g=0.\]
Our paper provides a proof of his conjecture with, in addition, an explicit bound.
**Theorem 1**.: _There exists an infinite set \(A\) of integers \(g\geq 2\) such that_
\[\operatorname{codim}\mathcal{P}_{g}<\tfrac{38}{\sqrt{\ln\ln g}}\;\tfrac{g}{ \sqrt{\ln g}},\]
_for any \(g\in A\)._
This leads to the concrete question: which is the smallest \(g\) for which \(\operatorname{codim}\mathcal{P}_{g}<2g-1\)? In the last section, we will see that, for \(g=17\), we have \(\operatorname{codim}\mathcal{P}_{17}<32\). However, we do not know if \(g=17\) is the smallest \(g\) answering the question.
_0.2 Organization of the paper and the main idea of the proof_
The starting point of the proof is based on the main result of [10], that we now recall.
A regular right-angled hexagon \(H\) of the Poincare half plane \(\mathbb{H}\) is called _decorated_ if it is oriented and its sides are cyclically indexed by \(\mathbb{Z}/6\mathbb{Z}\). Up to direct isometries, there are two such hexagons \(\mathcal{H}\) and \(\overline{\mathcal{H}}\), with opposite orientations.
A tesselation of a closed oriented hyperbolic surface \(S\) is called a _standard tesselation_ if each tile is isometric to \(\mathcal{H}\) or \(\overline{\mathcal{H}}\). Of course, it is presumed that the tiles are glued along edges with the same index, therefore a tile isometric to \(\mathcal{H}\) is surrounded by six tiles isometric to \(\overline{\mathcal{H}}\) and conversely. A vertex of a standard tesselation is an intersection point of two perpendicular geodesics. Therefore, the 1-skeleton of a standard tesselation consists of a finite family of closed geodesics, called the _curves_ of the tesselation.
For a hyperbolic surface \(S\), denote by \(\operatorname{Syst}(S)\) the set of systoles of \(S\).
**Theorem** (Theorem 25 of [10]).: _There exists an infinite set \(A\) of integers \(g\geq 2\), and, for any \(g\in A\) a closed oriented hyperbolic surface \(S_{g}\) of genus \(g\) endowed with a standard tesselation \(\tau_{g}\) such that_
1. _the systoles of_ \(S_{g}\) _are exactly the curves of_ \(\tau_{g}\)_, and_
2. _we have_ \[\operatorname{Card}\,\operatorname{Syst}(S_{g})\leq\frac{57}{\sqrt{\ln\ln\ln g }}\;\frac{g}{\sqrt{\ln g}}.\]
The _index_ of a curve of the tesselation \(\tau_{g}\) is the common index of its edges. It is clear that the subset \(\mathcal{C}\) of curves of index \(\neq 1\) or \(2\) fills the surface, and \(\operatorname{Card}\,\mathcal{C}\leq\operatorname{Card}\,\operatorname{Syst}(S_{g})\).
Let \(\operatorname{Sys}\left(\mathcal{C}\right)\) be the set of marked hyperbolic surfaces \(S\) of genus \(g\) such that \(\operatorname{Syst}(S)=\mathcal{C}\). For any curve, or free homotopy class, \(C\) of \(S_{g}\), let \(L(C)\) be its length, viewed as a function on the Teichmuller space \(\mathcal{T}_{g}\). Set \(\mathcal{C}=\{C_{1},C_{2}\dots\}\). The subspace \(\operatorname{Sys}\left(\mathcal{C}\right)\) is defined by the Card \(\mathcal{C}-1\) equations
\[L(C_{1})=L(C_{2})=\dots\]
together with some inequalities. Intuitively, our result should follow from the following two facts
1. In \(\mathcal{T}_{g}\), the point \(S_{g}\) is adjacent to \(\operatorname{Sys}\left(\mathcal{C}\right)\), and
2. for any point \(x\in\operatorname{Sys}\left(\mathcal{C}\right)\) close to \(S_{g}\), we have \(\operatorname{codim}_{x}\operatorname{Sys}\left(\mathcal{C}\right)=\operatorname{Card}\,\mathcal{C}-1\).
If we assume that the differentials \(\{\mathrm{d}L(C)\mid C\in\operatorname{Syst}(S_{g})\}\) are linearly independent at the point \(S_{g}\), the previous two facts would follow from the submersion theorem. However, an argument in Theorem 36 of [16] shows that these differentials are often linearly dependent at \(S_{g}\). For this reason, the cardinality of \(\mathcal{C}\) does not determine the local codimension of \(\operatorname{Sys}\left(\mathcal{C}\right)\).
Following an idea of Sanki [15], we can deform the angles of the tiles by alternately replacing the right angles with angles of value \(\epsilon\) and \(\pi-\epsilon\), for any \(\epsilon\in]0,\pi[\). In this way we obtain a path \(\sigma:]0,\pi[\to\mathcal{T}_{g}\) such that \(\sigma(\pi/2)\) is the hyperbolic surface \(S_{g}\). We will call it the _Sanki path_ of the tesselation \(\tau_{g}\). The main idea of the proof is the following.
**Theorem 2** (see Section 5).: _The set of differentials_
\[\{\mathrm{d}L(C)\ |C\in\operatorname{Syst}(S_{g})\}\]
_is linearly independent at \(\sigma(\epsilon)\), except for finitely many values of \(\epsilon\)._
It implies that the assertions (1) and (2) are correct for \(x=\sigma(\epsilon)\), where \(\epsilon\neq\pi/2\) is close enough to \(\pi/2\).
The proof of Theorem 2 is based on a duality, which is expressed in terms of the Poisson product associated with the Weil-Petersson symplectic structure on \(\mathcal{T}_{g}\). For any curve \(B\) of the tesselation, we define a dual function \(L(B^{*})\) which is a linear combination (with coefficients \(\pm 1/2\)) of lengths of three curves, which are the boundary components of a well-chosen pair of pants.
It results from an asymptotic analysis of Wolpert's formula [19] that
\[\lim_{\epsilon\to 0}\ \{L(B),L(A^{*})\}(\sigma(\epsilon))=\delta_{A,B} \tag{1}\]
for any two curves \(A\), \(B\) of the tesselation, where, as usual, \(\delta_{A,B}\) denotes the Kronecker symbol. Since \(\delta:=\det(\{L(B),L(A^{*})\})\) is an analytic function, it follows that \(\delta(\sigma(\epsilon))\) is nonzero for any \(\epsilon\neq\pi/2\) close enough to \(\pi/2\).
In fact the proof of equation (1) is based on elementary but lengthy trigonometric computations, carried out in Sections 2-4. To present the computations and the figures as simply as possible, we have restricted ourselves to hexagonal tesselations. However, similar results hold for tesselations by \(2p\)-gons for any \(p\geq 3\).
## 1. Background and Definitions
### Marking of surfaces and the Teichmuller space
Let \(g\geq 2\). By definition, the _Teichmuller_ space \(\mathcal{T}_{g}\) is the space of all marked oriented closed hyperbolic surfaces of genus \(g\). It means that \(\mathcal{T}_{g}\) parametrizes the set of those hyperbolic
surfaces, where the marking is a datum that distinguishes isometric surfaces corresponding to distinct parameters.
There are various equivalent definitions of the marking [5]. Here we will adopt the most convenient for our purpose. Let \(\Pi_{g}\) be the group given by the following presentation
\[\langle a_{1},b_{1},\dots,a_{g},b_{g}\mid(a_{1},b_{1})(a_{2},b_{2})\dots(a_{g},b _{g})=1\rangle.\]
Then a point \(x\) of the Teichmuller space is a loxodromic (i.e. faithful with discrete image) representation \(\rho_{x}:\Pi_{g}\to\mathrm{PSL}(2,\mathbb{R})\) modulo linear equivalence. At the point \(x\), the corresponding hyperbolic surface is \(S_{x}:=\mathbb{H}/\rho_{x}(\Pi_{g})\). With this definition, \(\mathcal{T}_{g}\) is a connected component of a real algebraic variety.
Formally, a _curve_\(c\) is a nontrivial conjugacy class of \(\Pi_{g}\). For a hyperbolic surface, any free homotopy class has a unique geodesic representative. Thus \(c\) defines a closed geodesic \(c_{x}\) of \(S_{x}\), for any \(x\in\mathcal{T}_{g}\). Here closed geodesics are nonoriented, so we will not distinguish the conjugacy classes of \(c\) and \(c^{-1}\). Concretely, the marking of the surface \(S_{x}\) means that each geodesic \(C\) of \(S_{x}\) is marked by a conjugacy class in \(\Pi_{g}\).
In this setting, the _mapping class group_\(\Gamma_{g}\) is the group of all outer automorphisms of \(\Pi_{g}\) which act trivially on \(H^{2}(\Pi_{g})\simeq\mathbb{Z}\). It acts on \(\mathcal{T}_{g}\) by changing the marking, or, more formally, by twisting the representation of \(\Pi_{g}\).
_1.2 Length of curves_
The length of an arc, or a closed geodesic, \(e\) will be denoted \(l(e)\). When there is no possibility of confusion, we will use the same letter for an arc \(e\) and its length. For example the expression \(\cosh\,H\) in the proof of Lemma 4 stands for \(\cosh\,l(H)\).
Let \(x\in\mathcal{T}_{g}\). Given a curve \(c\), set \(L(c)(x)=l(c_{x})\), where \(c_{x}\) is the geodesic representative of \(c\) at \(x\). The formula \(2\cosh(L(c)(x))=|\mathrm{Tr}(\rho_{x}(c))|\) shows that the function \(L(c):\mathcal{T}_{g}\to\mathbb{R}\) is analytic. Let \(C\) be a closed geodesic of \(S_{x}\). We set \(L(C)=L(c)\), where \(c\) is the curve marking \(C\).
_1.3 The Thurston's spine_\(\mathcal{P}_{g}\)
In riemannian geometry, a _systole_ is an essential closed geodesic of minimal length. In fact, for a hyperbolic surface, any closed geodesic is essential. Let \(\mathcal{P}_{g}\) be the set of all points \(x\in\mathcal{T}_{g}\) such that the set of systoles fills \(S_{x}\), i.e. it cuts \(S_{x}\) into polygons. The subspace \(\mathcal{P}_{g}\) is called the _Thurston spine_, see [17].
By definition, the Thurston spine \(\mathcal{P}_{g}\) is a semi-analytic subset [17], and therefore it admits a triangulation by [12]. In particular, the dimension \(\dim_{x}\,\mathcal{P}_{g}\) at any point \(x\in\mathcal{P}_{g}\) is well defined. Set
\[\dim\,\mathcal{P}_{g}=\operatorname{Max}_{x\in\mathcal{P}_{g}}\,\dim_{x}\,\mathcal{P}_{g}\quad\text{and}\quad\operatorname{codim}\mathcal{P}_{g}:=\dim\,\mathcal{T}_{g}-\dim\,\mathcal{P}_{g}.\]
_1.4 Orientation of the boundary components_
In what follows, all surfaces \(S\) are given with an orientation. When \(S\) has a boundary \(\partial S\), it is oriented by the rule that, while moving forward along \(\partial S\), the interior of \(S\) is on the right. With this convention, when a circle of the plane is viewed as the boundary of its interior, it is oriented in the clockwise direction.
_1.5 Angles_
Let \(C\), \(D\) be two distinct geodesic arcs of a surface and let \(P\) be an intersection point. The angle of \(C\) and \(D\) at \(P\), denoted \(\angle_{P}CD\), is measured anticlockwise from \(T_{P}C\) to \(T_{P}D\), where \(T_{P}C\) and \(T_{P}D\) are the tangent lines at \(P\) of \(C\) and \(D\). By definition \(\angle_{P}CD\) belongs to \(]0,\pi[\). When we permute \(C\) and \(D\), we have the formula
\[\angle_{P}DC=\pi-\angle_{P}CD.\]
In what follows, it will be convenient to set \(\overline{\alpha}=\pi-\alpha\) for any \(\alpha\in[0,\pi]\). Also it will be convenient to use the notation \(\angle DC\) when the point \(P\) is unambiguously defined.
The notion of _inner angles_ is different. Let \(S\) be a surface whose boundary \(\partial S\) is piecewise geodesic. Let \(c\) and \(d\) be two consecutive geodesic arcs of \(\partial S\) meeting at a point \(P\). The inner angle is a real number \(\alpha\in]0,2\pi[\). We have \(\alpha<\pi\) when \(S\) is locally convex around \(P\). In that case, the equality \(\angle\,cd=\alpha\) means that the arc \(c\) precedes \(d\) when going forward along \(\partial S\).
## 2. Trigonometry in \(\mathbb{H}\)
As stated in the Introduction, the analysis of Sanki's paths, defined in Section 5, is based on many trigonometric computations. This section involves trigonometric computations in the Poincare half-plane \(\mathbb{H}\). Subsequent computations in the pairs of pants \(\Pi(k,\epsilon)\) will be done in Section 4.
Let \(d_{\mathbb{H}}\) be the hyperbolic distance on \(\mathbb{H}\). By definition, a _line_ is a complete geodesic \(\Delta\) of \(\mathbb{H}\). For any \(P,Q\in\Delta\), the closed arc between \(P\) and \(Q\) is called a _segment_ and it will be denoted \(PQ\). When necessary, the segment \(PQ\) is oriented from \(P\) to \(Q\).
Given three points \(A,B\) and \(C\in\mathbb{H}\), we denote by \(ABC\) the triangle \(T\) whose sides are \(AB\), \(BC\) and \(CA\). By our convention, \(\partial T\) is oriented clockwise, but we do not require a specific orientation of the sides. Given four points \(A,B,C\) and \(D\in\mathbb{H}\), we define in the same way the quadrilateral \(ABCD\), not ruling out the possibility that one pair of opposite sides intersects.
For the whole section, we will fix an angle \(\epsilon\in]0,\pi[\). In the pictures, we will assume that \(\epsilon<\pi/2\).
### The \(\epsilon\)-pencil \(\mathcal{F}_{\epsilon}(\Delta)\) in \(\mathbb{H}\)
Let \(\Delta\subset\mathbb{H}\) be a line. For any \(P\in\Delta\), let \(F(P)\) be the line passing through \(P\) with \(\angle\Delta F(P)=\epsilon\), see Section 1.5 for our convention concerning angles. Since no hyperbolic triangle has two angles of values \(\epsilon\) and \(\overline{\epsilon}\) (its angle sum would be at least \(\pi\)), any two lines \(F(P)\) and \(F(P^{\prime})\) are parallel. The set \(\mathcal{F}_{\epsilon}(\Delta):=\{F(P)\mid P\in\Delta\}\) will be called the \(\epsilon\)_-pencil_ along the line \(\Delta\).
Let \(\Delta^{\prime}\) be a geodesic arc of \(\mathbb{H}\) with \(\Delta\cap\Delta^{\prime}=\emptyset\). When \(F(P)\) meets \(\Delta^{\prime}\), set \(\Omega(P)=F(P)\cap\Delta^{\prime}\) and
\[\omega(P):=\angle\Delta^{\prime}F(P).\]
The line \(\Delta\) is oriented by the convention that, while going forward, the arc \(\Delta^{\prime}\) is on the right. Therefore the notion of an increasing function \(f:\Delta\to\mathbb{R}\) is well-defined.
In contrast with the Euclidean geometry, the angle \(\omega(P)\) is not constant. On the contrary, it can vary from \(0\) to \(\pi\), as it will now be shown.
**Lemma 3**.: _We have:_
1. _The set_ \(I:=\{P\in\Delta\mid F(P)\cap\Delta^{\prime}\neq\emptyset\}\) _is an interval of_ \(\Delta\)_. Moreover, the map_ \(\omega:I\to]0,\pi[\) _is increasing._
2. _Furthermore, if_ \(\Delta^{\prime}\) _is a line of_ \(\mathbb{H}\) _and_ \(d_{\mathbb{H}}(\Delta^{\prime},\Delta)\neq 0\)_, then_ \(\omega\) _is bijective._
_Proof of claim (1)._ Let PP' be a positively oriented arc of \(I\). Consider the quadrilateral \(Q:=PP^{\prime}\Omega(P^{\prime})\Omega(P)\). With our conventions, its inner angles are \(\epsilon,\overline{\epsilon},\overline{\omega(P^{\prime})}\) and \(\omega(P)\). Since the area of \(Q\) is
\[2\pi-[\epsilon+\overline{\epsilon}+\overline{\omega(P^{\prime})}+\omega(P)]= \omega(P^{\prime})-\omega(P),\]
the function \(\omega\) is increasing.
Next let \(P^{\prime\prime}\) be in the interior of the segment \(PP^{\prime}\). Since the line \(F(P^{\prime\prime})\) enters \(Q\) at \(P^{\prime\prime}\), it must leave \(Q\) at another point. Since \(F(P^{\prime\prime})\) is parallel to \(F(P)\) and \(F(P^{\prime})\), the exit point lies in the segment \(\Omega(P)\Omega(P^{\prime})\), hence \(P^{\prime\prime}\) belongs to \(I\). Since \(I\) contains a segment whenever it contains its two extremal points, \(I\) is an interval.
_Proof of Claim (2)._ We can assume that \(\Delta=\mathbb{R}i\). By the assumption that \(d_{\mathbb{H}}(\Delta,\Delta^{\prime})>0\), the lines \(\Delta\) and \(\Delta^{\prime}\) do not intersect and their endpoints in \(\partial\mathbb{H}\) are distinct. Therefore the endpoints \(a\), \(b\) of \(\Delta^{\prime}\) in \(\partial\mathbb{H}\) are real numbers with same signs. Without loss of generality, we can also assume that \(0<a<b\), as shown in Figure 1.
There is a line \(F^{+}\) (resp. \(F^{-}\)) in \(\mathcal{F}_{\epsilon}(\Delta)\) whose endpoint in \(\mathbb{R}_{>0}\) is \(b\) (resp. \(a\)). Set
\(P^{\pm}=\Delta\cap F^{\pm}\).
Let \(\mathbf{B}\) be the open band delimited by \(F^{+}\) and \(F^{-}\). When \(P\) belongs to the interior of \(P^{-}P^{+}\), the line \(F(P)\) lies in the interior of the band \(\mathbf{B}\) and \(F(P)\) meets \(\Delta^{\prime}\). It is clear from the definition of \(F^{\pm}\) that
Figure 1. The band \(\mathbf{B}\)
\[\lim_{P\to P^{-}}\,\omega(P)=0\quad\text{and}\quad\lim_{P\to P^{+}}\,\omega(P)=\pi.\]
Hence by Assertion (1), \(\omega\) is bijective.
### The \(\epsilon\)-edge
Let \(\epsilon\in]0,\pi[\) and let \(\Delta\), \(\Delta^{\prime}\) be two lines with \(d_{\mathbb{H}}(\Delta^{\prime},\Delta)>0\). Let \(H\) be the common perpendicular arc to \(\Delta\) and \(\Delta^{\prime}\) and let \(S\in\Delta\) and \(S^{\prime}\in\Delta^{\prime}\) be its endpoints.
By Lemma 3 there is a unique \(P\in\Delta\) such that \(\omega(P)=\epsilon\). The edge \(e=P\Omega(P)\) will be called the \(\epsilon\)_-edge_ of \(\Delta\) and \(\Delta^{\prime}\). For \(\epsilon=\pi/2\), the \(\epsilon\)-edge is the perpendicular arc \(H\).
**Lemma 4**.: _Set \(L=d_{\mathbb{H}}(P,P^{\prime})\) where \(P^{\prime}=\Omega(P)\). If \(\epsilon\neq\pi/2\), then_
1. \(H\) _and_ \(e\) _intersect at their midpoints._
2. \(d_{\mathbb{H}}(P,S)=d_{\mathbb{H}}(P^{\prime},S^{\prime})<L/2\)_._
_Moreover the segment \(SP\) is positively oriented whenever \(\epsilon<\pi/2\)._
Proof.: We have \(\angle\Delta e=\epsilon\) and \(\angle e\Delta^{\prime}=\overline{\epsilon}\). The sum of the four angles of the quadrilateral \(Q:=SP\Omega(P)S^{\prime}\) is \(2\pi\). It follows that a pair of opposite edges must intersect, so \(e\) meets \(H\) at some point \(M\). The two triangles \(SPM\) and \(M\Omega(P)S^{\prime}\) have the same three angles, and are therefore isometric. In particular \(d_{\mathbb{H}}(P,S)=d_{\mathbb{H}}(P^{\prime},S^{\prime})\).
It follows that \(e\) and the arc \(H=SS^{\prime}\) intersect at their midpoint \(M\); thus we have \(d_{\mathbb{H}}(P,M)=L/2\). Since \(PM\) is the hypotenuse of the right-angled triangle \(PSM\), we have \(d_{\mathbb{H}}(P,S)<L/2\). The second claim follows.
### Hexagon trigonometry and edge colouring
**Lemma 5**.: _Up to isometry, there exists a unique hyperbolic hexagon \(H(\epsilon)\) whose sides all have the same length \(L=L(\epsilon)\) and whose inner angles are alternately \(\epsilon\) and \(\overline{\epsilon}\)._
_Moreover we have_
\[\cosh L=1+\frac{1}{\sin\epsilon}.\]
Proof of the existence of \(H(\epsilon)\) and of the formula for \(\cosh L\).
Let \(T\) be an oriented triangle whose angles are \(\epsilon/2\), \(\overline{\epsilon}/2\) and \(\pi/3\) and let \(X\) be the vertex at the \(\pi/3\)-angle. Let \(\overline{T}\) be the triangle isometric to \(T\) with opposite orientation. Let \(H(\epsilon)\) be the hexagon obtained by alternately gluing three copies of \(T\) and three copies of \(\overline{T}\) around \(X\). This hexagon satisfies the required conditions, so the existence is proved.
By the law of cosines for the triangle \(T\), we have
\[\cosh L=\frac{\cos\epsilon/2\cos\overline{\epsilon}/2+\cos\pi/3}{\sin\epsilon/2 \sin\overline{\epsilon}/2}\]
\[=1+\frac{1}{2\sin\epsilon/2\cos\epsilon/2}\]
\[=1+\frac{1}{\sin\epsilon}.\]
Proof of the uniqueness.: Let \(H\) be a hexagon whose sides all have the same length l, and whose angles are alternately \(\epsilon\) and \(\overline{\epsilon}\). Let \((A_{i})_{i\in\mathbb{Z}/6\mathbb{Z}}\) be the six vertices, arranged in a cyclic order. Moreover, we will assume that the angles at \(A_{2}\), \(A_{4}\) and \(A_{6}\) are \(\overline{\epsilon}\).
The triangles \(A_{1}A_{2}A_{3}\), \(A_{3}A_{4}A_{5}\) and \(A_{5}A_{6}A_{1}\) have the same angles at the point \(A_{2}\), \(A_{4}\) and \(A_{6}\), and the two sides originating from these points have the same length \(l\). Hence they are isometric. It follows that the triangle \(A_{1}A_{3}A_{5}\) is equilateral.
Let \(X\) be the center of \(A_{1}A_{3}A_{5}\), and let \(T^{\prime}=A_{1}XA_{2}\) and \(\overline{T}^{\prime}=A_{2}A_{3}X\). Since \(T^{\prime}\) and \(\overline{T}^{\prime}\) have the same side lengths they are isometric, with opposite orientations. Hence \(H\) is obtained by alternately gluing three copies of \(T^{\prime}\) and three copies of \(\overline{T}^{\prime}\) around \(X\). It follows that the angles of \(T^{\prime}\) are \(\epsilon/2\), \(\overline{\epsilon}/2\) and \(\pi/3\). Therefore \(T^{\prime}\) is isometric to the triangle \(T\) of the existence proof. Hence \(H\) is isometric to \(H(\epsilon)\), proving uniqueness.
We can alternately assign the colours blue and red to the sides of \(H(\epsilon)\), as follows: if \(S\), \(S^{\prime}\) are consecutive sides with \(\angle SS^{\prime}=\epsilon\), then \(S\) is red and \(S^{\prime}\) is blue, or, quickly speaking, \(\angle\)red blue \(=\epsilon\). For \(\epsilon\neq\pi/2\) this colouring is unique.
### The Saccheri quadrilateral in \(H(\pi/2)\)
In the hexagon \(H(\pi/2)\), let \(S_{1},S_{2}\) and \(S_{3}\) be three consecutive sides and let \(D\) be the arc joining the vertex of \(S_{1}\smallsetminus S_{2}\) and the vertex of \(S_{3}\smallsetminus S_{2}\). Thus \(S_{1}\), \(S_{2}\), \(S_{3}\) and \(D\) are the four sides of a Saccheri quadrilateral. Set \(L^{\prime}=l(D)\).
**Lemma 6**.: _We have_
\[L^{\prime}<2L(\pi/2).\]
Proof.: Set \(L=L(\pi/2)\). The perpendicular line at the middle of \(S_{2}\) cuts the Saccheri quadrilateral into two isometric Lambert quadrilaterals. It follows that
\[\sinh L^{\prime}/2=\cosh L\,\sinh L/2,\]
or equivalently \(\sinh^{2}L^{\prime}/2=4\sinh^{2}L/2\) since \(\cosh L=2\), which implies that \(\cosh L^{\prime}=5\). On the other hand, \(\cosh 2L=2\cosh^{2}L-1=7\). It follows that \(L^{\prime}<2L\).
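As a quick numeric sanity check of this computation (our own verification, not part of the paper), one can evaluate both sides in floating point:

```python
import math

# Sanity check of Lemma 6: in H(pi/2) we have cosh L = 2, and the summit D of
# the Saccheri quadrilateral satisfies sinh(L'/2) = cosh(L) * sinh(L/2).
L = math.acosh(2.0)
L_prime = 2.0 * math.asinh(math.cosh(L) * math.sinh(L / 2.0))
print(math.cosh(L_prime))      # 5.0, as in the proof
print(math.cosh(2.0 * L))      # 7.0 = 2*cosh(L)**2 - 1
print(L_prime < 2.0 * L)       # True, i.e. L' < 2L
```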
## 3. The three-holed sphere \(\Sigma(k,\epsilon)\) and the pair of pants \(\Pi(k,\epsilon)\)
From now on, let \(k\geq 3\) be an integer and let \(\epsilon\in]0,\pi[\). We will often think of \(\epsilon\) as being an acute angle, as in the figures of this section.
In this section, we will define a pair of pants \(\Pi(k,\epsilon)\) endowed with a certain tesselation. We will first consider a three-holed sphere \(\Sigma(k,\epsilon)\) which has two geodesic boundary components \(C\) and \(C^{\prime}\) and one piecewise geodesic boundary component \(\mathcal{D}\). Since the inner angles of \(\mathcal{D}\) are \(<\pi\), it is freely homotopic to a unique geodesic \(D\). Then \(\Pi(k,\epsilon)\) is the pair of pants lying in \(\Sigma(k,\epsilon)\) whose boundary components are \(C\), \(C^{\prime}\) and \(D\).
### The tesselated three-holed sphere \(\Sigma(k,\epsilon)\)
Let us start with the planar graph \(\Gamma\) represented in Figure 2.
It consists of two cycles \(C\) and \(C^{\prime}\) of length \(k\), which are connected by an edge \(e\) with endpoints \(P\in C\) and \(P^{\prime}\in C^{\prime}\). Starting from \(P\) in an anticlockwise direction, the other points of \(C\) are denoted by \(P_{1},\ldots,P_{k-1}\). For each \(1\leq i\leq k-1\) there is an additional edge \(e_{i}\), pointing outwards from \(C\). One endpoint of \(e_{i}\) is \(P_{i}\) and the other endpoint has valency one. The vertices \(P^{\prime}_{1},\ldots,P^{\prime}_{k-1}\) of \(C^{\prime}\) and the edges \(e^{\prime}_{i}\) are defined similarly.
The edges of the cycles \(C\) and \(C^{\prime}\) are coloured in red, the other edges are coloured in blue. Now we will attach \(2k-2\) hexagons \(H(\epsilon)\) along the edges of \(\Gamma\) to obtain a hyperbolic surface \(\Sigma(k,\epsilon)\). It will be convenient to define a metric on \(\Gamma\) by requiring that each edge, red or blue, has length \(L=\operatorname{arcosh}(1+1/\sin\epsilon)\).
First we attach two copies of \(H(\epsilon)\) along five consecutive edges of \(\Gamma\). The first copy, denoted \(H_{1}(\epsilon)\), is glued along the edges \(e_{k-1}\), \(P_{k-1}P\), \(e\), \(P^{\prime}P^{\prime}_{1}\) and \(e^{\prime}_{1}\). The second copy, denoted \(H_{2}(\epsilon)\), is glued along the edges \(e^{\prime}_{k-1}\), \(P^{\prime}_{k-1}P^{\prime}\), \(e\), \(PP_{1}\) and \(e_{1}\), see Figure 3.
Next we attach the remaining \(2k-4\) hexagons along three edges. Indeed, for each integer \(i\) with \(1\leq i\leq k-2\), we glue one copy of \(H(\epsilon)\) along the edges \(e_{i}\), \(P_{i}P_{i+1}\) and \(e_{i+1}\) and,
Figure 2. The graph \(\Gamma\)
symmetrically, we glue another copy along \(e^{\prime}_{i}\), \(P^{\prime}_{i}P^{\prime}_{i+1}\) and \(e^{\prime}_{i+1}\), see Figure 4.
It is tacitly assumed that all gluings respect the metric and the colours of edges.
This defines a hyperbolic surface \(\Sigma(k,\epsilon)\) which is homeomorphic to the 3-holed sphere \(S_{0,3}\). Two boundary components \(C\) and \(C^{\prime}\) of \(\Sigma(k,\epsilon)\) are geodesics. The third component, call it \(\mathcal{D}\), is piecewise geodesic.
**Lemma 7**.: _The boundary component \(\mathcal{D}\) is freely homotopic to a unique geodesic \(D\). Moreover_
_(1) \(D\) lies in the interior of \(\Sigma(k,\epsilon)\),_
Figure 4. Gluing the remaining hexagons \(H(\epsilon)\) to \(\Gamma\)
Figure 3. Gluing the first two copies \(H_{1}(\epsilon)\) and \(H_{2}(\epsilon)\) to \(\Gamma\)
_(2) \(D\) meets each arc \(e_{i}\) and \(e^{\prime}_{i}\) exactly once, and_
_(3) \(D\) does not intersect \(e\)._
Proof.: The curve \(\mathcal{D}\) is piecewise geodesic. It is alternately composed of \(2(k-2)\) blue arcs and \(2(k-2)\) red arcs. Since \(k\geq 3\), \(\mathcal{D}\) is not a geodesic. The inner angles of \(\mathcal{D}\) are each less than \(\pi\), therefore \(\mathcal{D}\) is freely homotopic to a unique geodesic \(D\) lying in the interior of \(\Sigma(k,\epsilon)\).
Let \(d\) be a blue edge. There is a 1-parameter family of curves \((\mathcal{D}_{t})_{t\in[0,1]}\), realizing a homotopy from \(\mathcal{D}=\mathcal{D}_{0}\) to \(D=\mathcal{D}_{1}\), such that the number of bigons formed by \(d\) and \(\mathcal{D}_{t}\) does not increase. Since there are no bigons formed by \(d\) and \(\mathcal{D}_{0}\), the geometric intersection number \(i(d,\mathcal{D}_{t})\) is constant, which proves the last two claims.
### The pair of pants \(\Pi(k,\epsilon)\) and its central octogon \(\mathbf{Q}(\epsilon)\)
The geodesic \(D\) decomposes \(\Sigma(k,\epsilon)\) into two pieces. The component \(\Pi(k,\epsilon)\subset\Sigma(k,\epsilon)\) with geodesic curves \(C\), \(C^{\prime}\) and \(D\) is a pair of pants.
The blue arcs decompose \(\Pi(k,\epsilon)\) into two hexagons adjacent to the central edge \(e\) and \(2k-4\) quadrilaterals. Let \(\mathbf{H}_{1}(\epsilon):=H_{1}(\epsilon)\cap\Pi(k,\epsilon)\) and \(\mathbf{H}_{2}(\epsilon):=H_{2}(\epsilon)\cap\Pi(k,\epsilon)\) be the two hexagons of the decomposition. Their union \(\mathbf{Q}(\epsilon):=\mathbf{H}_{1}(\epsilon)\cup\mathbf{H}_{2}(\epsilon)\) is a convex octogon.
Let \(H\) be the unique perpendicular arc joining \(C\) and \(C^{\prime}\) and let \(S\in C\) and \(S^{\prime}\in C^{\prime}\) be its endpoints. For \(\epsilon=\pi/2\), we have \(H=e\). Otherwise \(H\) and \(e\) meet as shown in the next lemma.
**Lemma 8**.: _The arc \(H\) lies in the octogon \(\mathbf{Q}(\epsilon)\). When \(\epsilon\neq\pi/2\), \(H\) and \(e\) intersect at their midpoint._
_Moreover we have_
_(1) \(d(P,S)<L/2\), and_
_(2) for \(\epsilon<\pi/2\), the point \(P\) belongs to \(\mathbf{H}(\epsilon)\)._
Proof.: The inner angles of \(\mathbf{Q}(\epsilon)\) are less than \(\pi\), hence there is an isometric embedding
\[\pi:\mathbf{Q}(\epsilon)\rightarrow\mathbb{H}.\]
Let \(\Delta\) and \(\Delta^{\prime}\) be the lines of \(\mathbb{H}\) containing, respectively, the arcs \(\pi\big{(}C\cap\mathbf{Q}(\epsilon)\big{)}\) and \(\pi\big{(}C^{\prime}\cap\mathbf{Q}(\epsilon)\big{)}\). Let \(\overline{H}\) be the common perpendicular arc to \(\Delta\) and \(\Delta^{\prime}\) and let \(\overline{S}:=\overline{H}\cap\Delta\) and \(\overline{S}^{\prime}:=\overline{H}\cap\Delta^{\prime}\) be its feet. By Lemma 4, we have
\[d_{\mathbb{H}}(\pi(P),\overline{S})<L/2.\]
Since \(\pi\big{(}C\cap\mathbf{Q}(\epsilon)\big{)}\) is the geodesic arc of \(\Delta\), centered at \(\pi(P)\), of length \(2L\), it follows that \(\overline{S}\) lies in \(\pi\big{(}C\cap\mathbf{Q}(\epsilon)\big{)}\). Similarly \(\overline{S}^{\prime}\) lies in \(\pi\big{(}C^{\prime}\cap\mathbf{Q}(\epsilon)\big{)}\). By convexity, the arc \(\overline{H}\) lies in \(\pi\big{(}\mathbf{Q}(\epsilon)\big{)}\).
Hence \(H=\pi^{-1}(\overline{H})\) lies in \(\mathbf{Q}(\epsilon)\). The other claims follow from Lemma 4 and the fact that \(\pi\) is an isometry.
## 4. Trigonometry in the pair of pants \(\Pi(k,\epsilon)\)
Let \(k\geq 3\) be an integer and let \(\epsilon\in]0,\pi[\).
The pair of pants \(\Pi(k,\epsilon)\) from the previous section is endowed with an \(\epsilon\)-edge \(e\) joining \(C\) and \(C^{\prime}\), whose endpoints are \(P\in C\) and \(P^{\prime}\in C^{\prime}\). Recall that the length of \(e\) is \(L=\operatorname{arcosh}(1+1/\sin\,\epsilon)\). Let \(H\) (resp. \(h\), resp. \(h^{\prime}\)) be the unique common perpendicular arc to \(C\) and \(C^{\prime}\) (resp. to \(C\) and \(D\), resp. to \(C^{\prime}\) and \(D\)). Cutting \(\Pi(k,\epsilon)\) along \(H\cup h\cup h^{\prime}\) provides the usual decomposition of the pair of pants into two right-angled hexagons.
### Formula for \(\cosh H\)
As usual, we will use the same letter for an arc and for its length. As a matter of notation, let \(S\in C\) and \(S^{\prime}\in C^{\prime}\) be the endpoints of \(H\).
**Lemma 9**.: _We have_
\[\cosh H=1+\sin\epsilon.\]
Proof.: When \(\epsilon=\pi/2\), we have \(e=H\) and \(\cosh H=\cosh L=2\). Therefore we can assume that \(\epsilon\neq\pi/2\).
By Lemma 8, \(e\) and \(H\) belong to the octogon \(\mathbf{Q}(\epsilon)\) and intersect at their midpoint \(M\). It follows that \(SM=H/2\) and \(PM=L/2\). By the sine law applied to the triangle \(PSM\), we have
\[\sinh H/2=\sin\epsilon\sinh L/2.\]
From the identities \(2\sinh^{2}H/2=\cosh H-1\) and \(2\sinh^{2}L/2=\cosh L-1\), it follows that
\[\cosh H=1+\sin^{2}\epsilon\,(\cosh L-1).\]
By Lemma 5, we have \(\cosh L-1=1/\sin\epsilon\), from which it follows that \(\cosh H=1+\sin\epsilon\).
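As a quick numerical sanity check (ours, not part of the original argument), the identity of Lemma 9 can be recovered from the two relations used in the proof, namely \(\cosh L=1+1/\sin\epsilon\) and \(\sinh H/2=\sin\epsilon\,\sinh L/2\):

```python
import numpy as np

# Sanity check of Lemma 9: from cosh L = 1 + 1/sin(eps) and
# sinh(H/2) = sin(eps) sinh(L/2), we should recover cosh H = 1 + sin(eps).
for eps in np.linspace(0.01, np.pi - 0.01, 7):
    L = np.arccosh(1 + 1 / np.sin(eps))
    H = 2 * np.arcsinh(np.sin(eps) * np.sinh(L / 2))
    assert np.isclose(np.cosh(H), 1 + np.sin(eps))
```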
### Conventions concerning the asymptotics of angle functions
In what follows, we will consider analytic functions \(f:]0,\pi[\rightarrow\mathbb{R}\). In order to study their asymptotic growth near \(0\), we will use the following simplified notations. For any pair of functions \(A,\,B:]0,\pi[\rightarrow\mathbb{R}\), the expression
\[A\sim B\]
means that
\[\lim_{\epsilon\to 0}\frac{A}{B}=1.\]
Moreover the expression
\[A\sim*B\]
means that \(A\sim aB\) for some positive real number \(a\). Similarly, the expression \(A\ll B\) means that
\[\lim_{\epsilon\to 0}\frac{A}{B}=0.\]
### Length estimates
**Lemma 10**.: _We have_
\[\begin{array}{c}\cosh L\sim*\epsilon^{-1}\text{,}\\ H\sim*\epsilon^{1/2}\text{,}\\ d(S,P)=L/2+o(1)\text{.}\end{array}\]
_Moreover, the lengths \(h\) and \(h^{\prime}\) are equal and we have_
\[h\sim*\epsilon^{\frac{k-1}{2}}\text{.}\]
Proof.: Since by definition \(\cosh L=1+1/\sin\epsilon\), we have \(\cosh L\sim*\epsilon^{-1}\). By Lemma 9, we have \(H\sim*\epsilon^{1/2}\). By Lemma 8, \(H\) and \(e\) intersect at their midpoint \(M\). Since
\[|d(P,S)-d(P,M)|\leq d(S,M)=H\]
we also have \(|d(P,S)-L/2|\in o(1)\), which proves the third claim.
The perpendicular arcs \(H\), \(h\) and \(h^{\prime}\) decompose the pair of pants \(\Pi(k,\epsilon)\) into two isometric right-angled hexagons (see [5], Proposition 3.1.5); let \(\mathbf{A}\) be one of them.
Set \(d=l(D)/2\). In an anticlockwise direction, the hexagon \(\mathbf{A}\) has sides \(h^{\prime},kL/2,H,kL/2,h\) and \(d\). By the law of sines
\[\frac{\sinh kL/2}{\sinh h^{\prime}}=\frac{\sinh kL/2}{\sinh h}\]
and therefore \(h=h^{\prime}\).
In order to prove the final statement, we first estimate \(\cosh d\). By the law of cosines, we have
\[\cosh H=\frac{\cosh^{2}kL/2+\cosh d}{\sinh^{2}kL/2}=1+\frac{1+\cosh d}{\sinh^{2}kL/2}\text{.}\]
It follows that
\[1+\cosh d=\sinh^{2}kL/2(\cosh H-1)\text{,}\]
and \(\cosh d\sim*e^{kL}H^{2}\sim*\epsilon^{1-k}\). By the law of sines, we have
\[\sinh h=\frac{\sinh H\sinh kL/2}{\sinh d}\]
and therefore \(h\sim\sinh h\sim*\epsilon^{(k-1)/2}\).
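The asymptotics above lend themselves to a quick numerical check. The sketch below (ours, purely illustrative, here for \(k=3\)) evaluates the exact relations of the proof; each printed ratio should stabilize at a nonzero constant as \(\epsilon\to 0\).

```python
import numpy as np

# Lemma 10 for k = 3: eps*cosh(L), H/eps^(1/2) and h/eps^((k-1)/2)
# should all approach nonzero constants as eps -> 0.
k = 3
for eps in [1e-2, 1e-3, 1e-4, 1e-5]:
    L = np.arccosh(1 + 1 / np.sin(eps))
    H = np.arccosh(1 + np.sin(eps))
    # 1 + cosh d = sinh^2(kL/2) (cosh H - 1), with cosh H - 1 = sin(eps)
    d = np.arccosh(np.sinh(k * L / 2) ** 2 * np.sin(eps) - 1)
    h = np.arcsinh(np.sinh(H) * np.sinh(k * L / 2) / np.sinh(d))
    print(eps * np.cosh(L), H / eps ** 0.5, h / eps ** ((k - 1) / 2))
```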
### The angles \(\omega_{i}\) and \(\omega_{i}^{\prime}\)
By Lemma 7, the geodesic \(D\) meets the arcs \(e_{i}\) and \(e_{i}^{\prime}\) exactly once each, so we can define
\[\omega_{i}:=\angle De_{i}\text{ and }\omega_{i}^{\prime}:=\angle De_{i}^{\prime}.\]
**Lemma 11**.: _We have_
1. \(\omega_{i}=\omega_{i}^{\prime}\)_, for any_ \(1\leq i\leq k-1\)_._
2. \(\omega_{1}<\omega_{2}<\cdots<\omega_{k-1}\)_._
Proof.: By definition of \(\Sigma(k,\epsilon)\) and \(\Pi(k,\epsilon)\), there is an isometric rotation of angle \(\pi\) around the midpoint \(M\) of the arc \(PP^{\prime}\). It follows that \(\omega_{i}=\omega_{i}^{\prime}\) for any \(i\).
The pair of arcs \(e_{1}\) and \(e_{k-1}\) decompose \(\Pi(k,\epsilon)\) into two connected components. Let \(\mathbf{Q}_{0}\) be the contractible component, which is a quadrilateral whose sides are \(e_{1}\), \(e_{k-1}\), an arc of \(C\) and an arc of \(D\). It is larger than the quadrilateral \(\mathbf{Q}\) of Figure 5, because the top edge is \(e_{1}\) instead of \(h\).
Let \(\pi:\mathbf{Q}_{0}\to\mathbb{H}\) be an isometric embedding. Set \(\Delta^{\prime}=\pi(D\cap\mathbf{Q}_{0})\) and let \(\Delta\) be the line of \(\mathbb{H}\) containing the arc \(\pi(C\cap\mathbf{Q}_{0})\). The arcs \(\pi(e_{1}),\ldots,\pi(e_{k-1})\) belong to the \(\epsilon\)-pencil \(\mathcal{F}_{\epsilon}(\Delta)\) of the line \(\Delta\). The second claim therefore follows from Lemma 3 (1).
### Angle estimates
**Lemma 12**.: _For any \(i=1,2,\ldots,k-1\), we have_
\[\lim_{\epsilon\to 0}\omega_{i}=\lim_{\epsilon\to 0}\omega_{i}^{\prime}=0.\]
Proof.: By Lemma 11, it is enough to show that \(\lim_{\epsilon\to 0}\omega_{k-1}=0\).
The pair of arcs \(h\) and \(e_{k-1}\) cut \(\Pi(k,\epsilon)\) into two connected components, where \(\mathbf{Q}\) is the contractible component, as shown in Figure 5. Then \(\mathbf{Q}\) is a convex quadrilateral whose vertices are the endpoints of \(h\), namely \(N:=h\cap C\) and \(\Omega:=h\cap D\) and the endpoints of \(e_{k-1}\), namely \(\Omega_{k-1}:=e_{k-1}\cap D\) and \(P_{k-1}\).
Let \(v\) be the diagonal of \(\mathbf{Q}\) joining \(P_{k-1}\) and \(\Omega\). Set
\(\epsilon^{-}=\angle Cv\), \(\epsilon^{+}=\angle ve_{k-1}\), \(\gamma^{-}=\angle hv\) and \(\gamma^{+}=\angle vD\). By definition we have
\(\epsilon^{-}+\epsilon^{+}=\epsilon\), and \(\gamma^{-}+\gamma^{+}=\pi/2\).
Figure 5. This figure illustrates the notation from the proofs of Lemmas 11 and 12.
First, we look at the trigonometry of the triangle \(P_{k-1}N\Omega\). Set \(a=d(N,P_{k-1})\). We have \(a=(k/2-1)L+d(S,P)\). By Lemma 10, we have \(d(S,P)=L/2+o(1)\), therefore \(a=uL+o(1)\), where \(u=(k-1)/2\). It follows that
\[\cosh a\sim*\epsilon^{-u}.\]
Since \(\cosh v=\cosh a\cosh h\), it follows from Lemma 10 that
\[\cosh v\sim\cosh a\sim*\epsilon^{-u}.\]
By the sine law, we have \(\sin\epsilon^{-}=\sinh h/\sinh v\). It follows from Lemma 10 and the previous estimate that \(\epsilon^{-}\sim*\epsilon^{2u}\). Since \(k\geq 3\), we have \(\epsilon^{-}\ll\epsilon\) and therefore
\[\epsilon^{+}\sim\epsilon.\]
By combining the cosine and sine laws, we have \(\cos\gamma^{-}=\sin\epsilon^{-}\cosh a=\sinh h\cosh a/\sinh v\), and therefore
\[\cos\gamma^{-}\sim*\epsilon^{u}.\]
Next, we will look at the trigonometry of the triangle \(P_{k-1}\Omega\Omega_{k-1}\). Since \(\sin\gamma^{+}=\cos\gamma^{-}\), we have
\[\gamma^{+}\sim\sin\gamma^{+}\sim*\epsilon^{u}.\]
Using the cosine law, we have \(\cos\overline{\omega}_{k-1}=\sin\epsilon^{+}\sin\gamma^{+}\cosh v-\cos\epsilon^{+}\cos\gamma^{+}\), where \(\overline{\omega}_{k-1}=\pi-\omega_{k-1}\) is the inner angle of the triangle at \(\Omega_{k-1}\). Adding one on each side and using that \(\cos\overline{\omega}_{k-1}=-\cos\omega_{k-1}\), we obtain
\[1-\cos\omega_{k-1}=\sin\epsilon^{+}\sin\gamma^{+}\cosh v+(1-\cos\epsilon^{+} \cos\gamma^{+}).\]
We will now estimate the right term of the previous identity. Using a Taylor expansion, it is clear that
\[1-\cos\epsilon^{+}\cos\gamma^{+}\in O(\epsilon^{2})+O(\epsilon^{2u})=O( \epsilon^{2}).\]
On the other hand, we have
\[\sin\epsilon^{+}\sin\gamma^{+}\cosh v\sim*\epsilon.\]
Hence we have \(1-\cos\omega_{k-1}\sim*\epsilon\), i.e.
\[\omega_{k-1}\sim*\epsilon^{1/2},\]
and therefore we have proved that \(\lim_{\epsilon\to 0}\omega_{k-1}=0\).
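The chain of estimates in this proof can also be traced numerically. The following sketch (ours, for \(k=3\), so \(u=1\)) computes \(\omega_{k-1}\) from the exact trigonometric relations used above; the ratio \(\omega_{k-1}/\epsilon^{1/2}\) should stabilize at a nonzero constant, while \(\epsilon^{-}/\epsilon^{2u}\) stays bounded.

```python
import numpy as np

# Following the proof of Lemma 12 for k = 3 (u = (k-1)/2 = 1).
k, u = 3, 1.0
for eps in [1e-2, 1e-4, 1e-6]:
    L = np.arccosh(1 + 1 / np.sin(eps))
    H = np.arccosh(1 + np.sin(eps))
    d = np.arccosh(np.sinh(k * L / 2) ** 2 * np.sin(eps) - 1)
    h = np.arcsinh(np.sinh(H) * np.sinh(k * L / 2) / np.sinh(d))
    SP = np.arccosh(np.cosh(L / 2) / np.cosh(H / 2))  # d(S,P): right angle at S
    a = (k / 2 - 1) * L + SP                          # a = d(N, P_{k-1})
    v = np.arccosh(np.cosh(a) * np.cosh(h))           # cosh v = cosh a cosh h
    eps_minus = np.arcsin(np.sinh(h) / np.sinh(v))    # sine law
    gamma_plus = np.arcsin(np.sinh(h) * np.cosh(a) / np.sinh(v))  # sin = cos(gamma-)
    eps_plus = eps - eps_minus
    cos_bar = (np.sin(eps_plus) * np.sin(gamma_plus) * np.cosh(v)
               - np.cos(eps_plus) * np.cos(gamma_plus))
    omega = np.arccos(-cos_bar)                       # omega_bar = pi - omega
    print(eps, eps_minus / eps ** (2 * u), omega / eps ** 0.5)
```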
## 5. Sanki's paths and curve duality
For the whole section, we assume we are given a closed oriented topological surface \(\mathcal{S}\) of genus \(g\), endowed with an isomorphism \(\rho:\Pi_{g}\to\pi_{1}(\mathcal{S})\), defined modulo inner conjugations. It will be called the _marking_ of the topological surface \(\mathcal{S}\).
We will consider a set \(\operatorname{Tess}(\mathcal{S})\) of hexagonal tesselations of \(\mathcal{S}\), which are defined by a set \(\mathcal{C}_{red}\) of red curves and a set \(\mathcal{C}_{blue}\) of blue curves. Following an idea of [15, 2], we define, for each \(\tau\in\operatorname{Tess}(\mathcal{S})\), a path, called a Sanki path, \(\sigma:]0,\pi[\to\mathcal{T}_{g}\). Intuitively, Sanki paths are infinitesimal analogs of Penner's construction [13] of pseudo-Anosov homeomorphisms.
When \(\tau\) satisfies some additional properties, we define, for each blue curve \(B\) a dual object \(B^{*}\), which is a linear combination of three curves with coefficients \(\pm 1/2\). Of course, \(B^{*}\) is not a multicurve, but its length function \(L(B^{*})\) is well defined. The first result of the paper is Theorem 15, showing a kind of duality between \(B\) and \(B^{*}\). It is expressed in terms of the Poisson bracket \(\{L(A),L(B^{*})\}\) relative to the Weil-Petersson symplectic form [19].
### The set of hexagonal tesselations \(\operatorname{Tess}(\mathcal{S})\)
Let \(\mathcal{H}\) be an oriented topological hexagon whose six sides are alternately coloured in red and blue. Strictly speaking, \(\mathcal{H}\) is a closed disc whose boundary is divided into six components, but the terminology hexagon is more suggestive.
Let \(\operatorname{Tess}(\mathcal{S})\) be the set of all tesselations \(\tau\) of \(\mathcal{S}\) satisfying the following two axioms:
(AX1) The tiles are homeomorphic to \(\mathcal{H}\) and they are glued pairwise along edges of the same colour.
(AX2) Each vertex of the tesselation has valence four.
The last axiom implies that each vertex is the endpoint of four edges, which are alternately red and blue. The graph consisting of red edges is a disjoint union of cycles. Those cycles are called the _red curves of the tesselation_ and the set of red curves is denoted \(\mathcal{C}_{red}\). Similarly, we define the _blue curves of the tesselation_ and the set \(\mathcal{C}_{blue}\) of blue curves. The set \(\operatorname{Curv}(\tau):=\mathcal{C}_{red}\cup\mathcal{C}_{blue}\) is called the set of _curves of the tesselation_.
### Sanki paths
We will now define the Sanki path of a tesselation \(\tau\in\operatorname{Tess}(\mathcal{S})\). Let \(\epsilon\in]0,\pi[\). Define a metric on the \(1\)-skeleton \(\tau_{1}\) of \(\tau\) by requiring that all edges have length \(L\). Recall that \(L=\operatorname{arcosh}(1+1/\sin\epsilon)\) is the side length of the hexagon \(H(\epsilon)\) defined in Subsection 2.3.
For each closed face \(f\) of the tesselation, let \(\phi_{f}:H(\epsilon)\to f\) be a homeomorphism whose restriction to the boundary, \(\partial H(\epsilon)\to\partial f\), preserves the metric and the colour of the edges.
In this way, each tile of the tesselation of \(\mathcal{S}\) is endowed with a hyperbolic structure. Along each edge of \(\tau_{1}\), two geodesic arcs have been glued isometrically. Around each vertex of \(\tau_{1}\), the four angles are alternately \(\epsilon\) and \(\overline{\epsilon}\), hence their sum is \(2\pi\). By Theorem 1.3.5 of [5], there is a hyperbolic metric on \(\mathcal{S}\) extending the metric of the tiles. Together with the marking \(\rho\) of \(\mathcal{S}\), we obtain a well defined marked hyperbolic surface \(S_{\tau}(\epsilon)\).
The idea of deforming right-angled regular polygons into polygons with angles alternately \(\epsilon\) and \(\pi-\epsilon\) first appeared in [15] and it was used in [2]. Therefore the corresponding path \(\sigma_{\tau}:]0,\pi[\to\mathcal{T}_{g}\), \(\epsilon\mapsto S_{\tau}(\epsilon)\) will be called the _Sanki path_ of the tesselation \(\tau\). Since the function \(\cosh L=1+1/\sin\epsilon\) is analytic, the path \(\sigma_{\tau}\) is analytic.
It should be noted that, around each vertex, the colours (blue or red) and the angles (\(\epsilon\) or \(\overline{\epsilon}\)) of the edges alternate. Therefore the blue curves and the red curves are geodesics with respect to the hyperbolic metric on \(S_{\tau}(\epsilon)\).
### \(k\)-regular tesselations
For a closed oriented surface \(\mathcal{S}\), \((AX1)\) and \((AX2)\) form the minimal set of axioms required to define the Sanki path. We will now define more axioms. The axiom (AX3) will ensure that the curves have the same length, while the axioms (AX4) and (AX5) are connected with the duality construction.
Let \(k\geq 2\) be an integer and let \(\mathcal{S}\) be a closed surface. A tesselation \(\tau\in\operatorname{Tess}(\mathcal{S})\) is called a \(k\)_-regular tesselation_ iff it satisfies the following axiom
(AX3) Each curve of \(\tau\), blue or red, consists of exactly \(k\) edges.
Denote by \(\operatorname{Tess}(\mathcal{S},k)\) the set of all \(k\)-regular tesselations. For any \(k\)-regular tesselation \(\tau\), we will consider two additional axioms. The first axiom is
(AX4) A blue edge and a red curve meet at most once.
Assume now that \(\tau\in\operatorname{Tess}(\mathcal{S},k)\) satisfies (AX4). Let \(R\) be a red curve, let \(b,b^{\prime}\) be two blue edges adjacent to \(R\) and let \(N\) be a small regular neighborhood of \(R\). Since \(\mathcal{S}\) is oriented, \(N\smallsetminus R\) consists of two open annuli, \(N^{\pm}\). By axiom (AX4), \(b\) has only one endpoint in \(R\), therefore \(b\) intersects either \(N^{+}\) or \(N^{-}\). Similarly, \(b^{\prime}\) intersects either \(N^{+}\) or \(N^{-}\). We say that \(b,b^{\prime}\) are _adjacent on the same side_ of \(R\) if they both intersect \(N^{+}\) or if they both intersect \(N^{-}\). Our last axiom is
(AX5) Two distinct blue edges are adjacent on the same side of at most one red curve.
Denote by \(\operatorname{Tess}_{45}(\mathcal{S},k)\) the set of \(k\)-regular tesselations satisfying the axioms (AX4) and (AX5).
### The isometric embedding \(\pi_{b}:\Pi(k,\epsilon)\to S_{\tau}(\epsilon)\)
From now on, assume that \(k\geq 3\). Let \(\tau\in\operatorname{Tess}_{45}(\mathcal{S},k)\). In order to define the duality, we first associate to each blue edge a pair of pants \(\Pi(k,\epsilon)\subset S_{\tau}(\epsilon)\).
Let \(b\) be a blue edge with endpoints \(Q\) and \(Q^{\prime}\), and let \(R\) and \(R^{\prime}\) be the red curves passing through \(Q\) and \(Q^{\prime}\). By axiom (AX4), the two curves \(R\) and \(R^{\prime}\) are distinct, so the graph \(\Gamma_{0}:=R\cup R^{\prime}\cup b\) is a union of two circles connected by an edge. Since \(\mathcal{S}\) is oriented, a small normal open neighbourhood \(N\) of \(\Gamma_{0}\) is a thickened figure eight. Then \(R\cup R^{\prime}\) cuts \(N\) into three components, two of them homeomorphic to annuli and the third one, call it \(\Omega\), containing \(b\).
In a planar representation of \(\Gamma_{0}\), \(\Omega\) is the exterior of \(R\cup R^{\prime}\). Since \(R\) and \(R^{\prime}\) are boundary components of \(\Omega\), these curves inherit an orientation. By axiom (AX3), \(R\) contains \(k\) vertices of the tesselation. Starting from \(Q\) in the anticlockwise direction, the other \(k-1\) points of
\(R\) are denoted by \(Q_{1},\ldots,Q_{k-1}\). For each \(1\leq i\leq k-1\) let \(b_{i}\) be the blue edge starting at \(Q_{i}\) on the same side as \(b\). The points \(Q^{\prime}_{1},\ldots,Q^{\prime}_{k-1}\) of \(R^{\prime}\) and the edges \(b^{\prime}_{i}\) are defined similarly. Adding the edges \(b_{i},b^{\prime}_{i}\) to the graph \(\Gamma_{0}\), we obtain a graph \(\overline{\Gamma}\).
Let \(\epsilon\in]0,\pi[\) and recall that \(S_{\tau}(\epsilon)\) is the tesselated surface \(S_{g}\) representing a point in \(\mathcal{T}_{g}\).
Let \(\Gamma\) be the graph defined in Section 4.1. There is a local isometry \(\pi:\Gamma\to\overline{\Gamma}\) such that \(\pi(P)=Q\), \(\pi(P_{i})=Q_{i}\), \(\pi(e_{i})=b_{i}\), \(\pi(C)=R\), \(\pi(P^{\prime})=Q^{\prime}\), \(\pi(P^{\prime}_{i})=Q^{\prime}_{i}\), \(\pi(e^{\prime}_{i})=b^{\prime}_{i}\) and \(\pi(C^{\prime})=R^{\prime}\). Clearly, it can be extended uniquely to a local isometry \(\pi:\Sigma(k,\epsilon)\to S_{\tau}(\epsilon)\). Let \(\pi_{b}\) be its restriction to \(\Pi(k,\epsilon)\).
**Lemma 13**.: _The map \(\pi_{b}:\Pi(k,\epsilon)\to S_{\tau}(\epsilon)\) is an isometric embedding._
Proof.: Since \(b\) joins \(R\) and \(R^{\prime}\), it follows from Axiom (AX4) that \(R\) and \(R^{\prime}\) are distinct. By Axioms (AX4) and (AX5), the point \(Q_{i}\), respectively \(Q^{\prime}_{j}\), is the unique endpoint of \(b_{i}\cap\Omega\), respectively of \(b^{\prime}_{j}\cap\Omega\). Hence the blue edges \(b\), \(b_{i}\) and \(b^{\prime}_{j}\) are all distinct. For any two edges \(e\neq e^{\prime}\) of \(\Gamma\cap\Pi(k,\epsilon)\), we therefore have \(\pi(e)\neq\pi(e^{\prime})\).
Let \(F,F^{\prime}\) be two faces of \(\Pi(k,\epsilon)\) such that \(\pi(F)=\pi(F^{\prime})\). Since each face \(F\) or \(F^{\prime}\) has at least two blue edges in \(\Gamma\), it follows that \(F\) and \(F^{\prime}\) contain a common blue edge \(e\subset\Gamma\). It follows easily that \(F=F^{\prime}\).
Consequently, the restriction of \(\pi\) to \(\Pi(k,\epsilon)\smallsetminus\mathcal{D}\) is injective. By Lemma 7, \(\Pi(k,\epsilon)\) lies in \(\Sigma(k,\epsilon)\smallsetminus\mathcal{D}\). Therefore \(\pi\) induces an isometric embedding \(\pi_{b}:\Pi(k,\epsilon)\to S_{\tau}(\epsilon)\).
### The dual functions \(L(B^{*})\)
Let \(\tau\in\mathrm{Tess}_{45}(\mathcal{S},k)\) for some integer \(k\geq 3\).
We are now going to define the dual function \(L(B^{*})\), for any blue curve \(B\) of the tesselation. Choose and fix one edge \(b\) of \(B\). Let \(R\), \(R^{\prime}\) be the two red curves containing the endpoints of \(b\) and set \(D_{b}=\pi_{b}(D)\). Set
\[L(B^{*})=\frac{1}{2}(L(R)+L(R^{\prime})-L(D_{b})).\]
Informally speaking, \(L(B^{*})\) is the length function associated with the "dual curve" \(B^{*}=1/2(R+R^{\prime}-D_{b})\). Strictly speaking, the function \(L(B^{*})\) depends on the choice of the edge \(b\).
For any \(F,G\in C^{\infty}(\mathcal{T}_{g})\), let \(\{F,G\}\) be their Poisson bracket induced by the Weil-Petersson symplectic form on \(\mathcal{T}_{g}\), see e.g. [19]. The duality between \(B\) and \(B^{*}\) is demonstrated in the next lemma.
**Lemma 14**.: _Let \(\tau\in\mathrm{Tess}_{45}(\mathcal{S},k)\) and let \(\sigma_{\tau}:]0,\pi[\to\mathcal{T}_{g}\) be the associated Sanki path._
_For any \(A,\,B\in\mathcal{C}_{blue}\), we have_
\[\lim_{\epsilon\to 0}\{L(A),L(B^{*})\}(\sigma_{\tau}(\epsilon))=\delta_{A,B},\]
_where \(\delta_{A,B}\) is the Kronecker delta._
Proof.: Let \(B\in\mathcal{C}_{blue}\). By definition there is an edge \(b\) of \(B\) such that \(2L(B^{*})=L(R)+L(R^{\prime})-L(D_{b})\) where \(R\) and \(R^{\prime}\) are the two red curves containing the endpoints of \(b\).
Set \(\overline{\Pi}=\pi_{b}(\Pi(k,\epsilon))\) and \(\overline{D}=\pi_{b}(D)=D_{b}\). For each \(i\in\{1,2,\ldots,k-1\}\), set \(\beta_{i}=c_{i}\cap\overline{\Pi}\) and \(\beta^{\prime}_{i}=c^{\prime}_{i}\cap\overline{\Pi}\), where \(c_{i}\) and \(c^{\prime}_{i}\) denote the blue curves containing the edges \(b_{i}\) and \(b^{\prime}_{i}\).
By Lemma 13, \(\beta_{i}\) is an arc, with one endpoint \(Q_{i}\) in \(R\) and the other endpoint \(\Omega_{i}\) on \(\overline{D}\). Similarly, \(\beta^{\prime}_{i}\) is an arc, with one endpoint \(Q^{\prime}_{i}\) in \(R^{\prime}\) and the other endpoint, say \(\Omega^{\prime}_{i}\), on \(\overline{D}\). We have
1. \(\beta_{i}\) does not intersect \(R^{\prime}\),
2. \(\beta_{i}\cap R=Q_{i}\) and \(\angle R\beta_{i}=\epsilon\),
3. \(\beta_{i}\cap\overline{D}=\Omega_{i}\) and \(\angle\overline{D}\beta_{i}=\omega_{i}\),
where the angles \(\omega_{i}\) are defined in Section 4.4. Similarly, we have
1. \(\beta^{\prime}_{i}\) does not intersect \(R\),
2. \(\beta^{\prime}_{i}\cap R^{\prime}=Q^{\prime}_{i}\) and \(\angle R^{\prime}\beta^{\prime}_{i}=\epsilon\),
3. \(\beta^{\prime}_{i}\cap\overline{D}=\Omega^{\prime}_{i}\) and \(\angle\overline{D}\beta^{\prime}_{i}=\omega^{\prime}_{i}\).
Let \(A\in\mathcal{C}_{blue}\) be another blue curve. Set
\(I=\{i\mid 1\leq i\leq k-1\ \text{and}\ \beta_{i}\subset A\}\), and \(I^{\prime}=\{i\mid 1\leq i\leq k-1\ \text{and}\ \beta^{\prime}_{i}\subset A\}\).
When \(A\neq B\), the curve \(A\) meets \(R\cup R^{\prime}\cup D_{b}\) exactly at the points \(Q_{i},\Omega_{i}\) for \(i\in I\) and \(Q^{\prime}_{i},\Omega^{\prime}_{i}\) for \(i\in I^{\prime}\). Therefore by Wolpert's formula [18], we have
\[\{L(A),L(R)+L(R^{\prime})-L(D_{b})\}(\sigma_{\tau}(\epsilon))=\sum_{i\in I}(\cos\epsilon-\cos\omega_{i})+\sum_{i\in I^{\prime}}(\cos\epsilon-\cos\omega^{\prime}_{i}).\]
By Lemma 12, we have \(\lim_{\epsilon\to 0}\omega_{i}=\lim_{\epsilon\to 0}\omega^{\prime}_{i}=0\), and therefore
\[\lim_{\epsilon\to 0}\{L(A),L(B^{*})\}(\sigma_{\tau}(\epsilon))=0.\]
When \(A=B\), the computation is similar except that, in addition to the arcs \(\beta_{i}\) for \(i\in I\) and \(\beta^{\prime}_{i}\) for \(i\in I^{\prime}\), the geodesic \(A\) contains \(b\). Therefore, one obtains
\[\{L(A),L(R)+L(R^{\prime})-L(D_{b})\}(\sigma_{\tau}(\epsilon))=2\cos\epsilon+\sum_{i\in I}(\cos\epsilon-\cos\omega_{i})+\sum_{i\in I^{\prime}}(\cos\epsilon-\cos\omega^{\prime}_{i}),\]
and therefore
\(\lim_{\epsilon\to 0}\{L(A),L(A^{*})\}(\sigma_{\tau}(\epsilon))=1\).
### The duality theorem
Suppose \(k\geq 3\) and choose \(\tau\in\text{Tess}(\mathcal{S},k)\). Recall that \(\sigma_{\tau}:]0,\pi[\rightarrow\mathcal{T}_{g}\) is the Sanki path.
**Theorem 15**.: _Assume that \(\tau\) satisfies the axioms (AX4) and (AX5). Then for any \(\epsilon\in]0,\pi[\) outside some finite set \(F\), the set_
\[\{\mathrm{d}L(C)\mid C\in\mathrm{Curv}(\tau)\}\]
_is linearly independent at the point \(\sigma_{\tau}(\epsilon)\)._
Proof.: For \(\epsilon\in]0,\pi[\), let \(\delta(\epsilon)\) be the determinant of the square matrix
\[(\{L(A),L(B^{*})\}(\sigma_{\tau}(\epsilon)))_{A,B\in\mathcal{C}_{blue}},\]
and set \(F=\{\epsilon\in]0,\pi[\mid\delta(\epsilon)=0\}\).
By Lemma 14, we have \(\lim_{\epsilon\to 0}\,\delta(\epsilon)=1\). Moreover, changing the orientation of \(\mathcal{S}\) amounts to replacing \(\epsilon\) by \(\overline{\epsilon}\), so we also have \(\lim_{\epsilon\to\pi}\,\delta(\epsilon)=\pm 1\). Since \(\delta\) is an analytic function on \(]0,\pi[\), it follows that \(F\) is finite.
It remains to show that, for \(\epsilon\notin F\), the differentials at \(\sigma_{\tau}(\epsilon)\) of the set of length functions \(\{L(C)\mid C\in\mathrm{Curv}(\tau)\}\) are linearly independent.
Let \(\epsilon\notin F\). Let \((a_{A})_{A\in\mathrm{Curv}(\tau)}\) be an element of \(\mathbb{R}^{|\mathrm{Curv}(\tau)|}\) such that
\[\sum_{A\in\mathrm{Curv}(\tau)}\,a_{A}dL(A)|_{\sigma_{\tau}(\epsilon)}=0.\]
Let \(B\in\mathcal{C}_{blue}\). Recall that \(B^{*}\) is a linear combination of two red curves \(R,R^{\prime}\) and a certain geodesic \(D_{b}\). Neither the geodesics \(D_{b}\) nor the red curves meet any red curve transversally. Hence we have \(\{L(C),L(B^{*})\}=0\) for any red curve \(C\). It follows that
\[\sum_{A\in\mathcal{C}_{blue}}a_{A}\{L(A),L(B^{*})\}\]
is zero at \(\sigma_{\tau}(\epsilon)\). Since \(\delta(\epsilon)\neq 0\), we have \(a_{A}=0\) for any \(A\in\mathcal{C}_{blue}\).
Therefore, it follows that
\[\sum_{A\in\mathcal{C}_{red}}\,a_{A}dL(A)|_{\sigma_{\tau}(\epsilon)}=0.\]
Since it is a subset of some Fenchel-Nielsen coordinates, the set of differentials \(\{\mathrm{d}L(A)\mid A\in\mathcal{C}_{red}\}\) is linearly independent. Therefore we also have \(a_{A}=0\) for any \(A\in\mathcal{C}_{red}\).
Hence the differentials \((\mathrm{d}L(A))_{A\in\mathrm{Curv}(\tau)}\) are linearly independent at \(\sigma_{\tau}(\epsilon)\).
## 6. Local structure of \(\mathcal{P}_{g}\) along a Sanki's path
Our previous work [10] provides a systematic construction of \(k\)-regular tesselations \(\tau\). To apply Theorem 15, we first show that the axioms \((AX4)\) and \((AX5)\) are automatically satisfied when the curves of \(S_{\tau}(\pi/2)\) are the systoles.
Then we deduce the local structure of the Thurston's spine along a Sanki's path.
To prove the main result, we will apply Theorem 15 to the tesselations \(\tau_{g}\) of the surface \(S_{g}\) obtained in [10]. Obviously they satisfy the axioms \((AX1)\)-\((AX3)\) and it remains to prove that the tesselations \(\tau_{g}\) also satisfy \((AX4)\) and \((AX5)\).
### Verification of the axioms \((AX4)\) and \((AX5)\)
As usual, suppose \(\mathcal{S}\) is a surface of genus \(g\geq 2\), \(k\geq 3\) and \(\tau\in\mathrm{Tess}(\mathcal{S},k)\).
**Lemma 16**.: _Assume that the set of systoles of \(S_{\tau}(\pi/2)\) is the set of curves of \(\tau\). Then the tesselation \(\tau\) satisfies the axioms (AX4) and (AX5)._
Proof.: By a well-known lemma of Riemannian geometry, two distinct systoles intersect in at most one point, therefore (AX4) is satisfied.
The proof of Axiom (AX5) is more delicate. Let \(C,C^{\prime}\in\mathcal{C}\) and let \(c,d\) be two distinct blue edges connecting \(C\) and \(C^{\prime}\) on the same side. Let \((P,P^{\prime})\in C\times C^{\prime}\) be the endpoints of \(c\), and let \((Q,Q^{\prime})\in C\times C^{\prime}\) be the endpoints of \(d\), as shown in Figure 6.
Since \(c\) and \(d\) are adjacent to \(C\) on the same side, there is a planar representation of \(C\) and \(C^{\prime}\) where \(c\) and \(d\) are on the exterior of \(C\). This planar representation provides an orientation of \(C\) and \(C^{\prime}\), called the _direct_ orientation.
Let \(F_{1}\) and \(F_{2}\) be the two hexagons containing \(d\). Set \(f_{1}=F_{1}\cap C\), \(f_{2}=F_{2}\cap C\), \(f_{1}^{\prime}=F_{1}\cap C^{\prime}\), \(f_{2}^{\prime}=F_{2}\cap C^{\prime}\). By definition, \(f_{1}^{\prime},d\) and \(f_{1}\) are consecutive edges of \(F_{1}\). We can assume \(f_{1}^{\prime},d\) and \(f_{1}\) are ordered relative to the direct orientation. Consequently \(f_{2}\) follows \(f_{1}\) relative to the orientation of \(C\). Since \(\mathcal{S}\) is oriented, \(f_{1}^{\prime}\) follows \(f_{2}^{\prime}\) in the direct orientation of \(C^{\prime}\). Therefore the relative position of \(F_{1}\) and \(F_{2}\) along \(C\) and \(C^{\prime}\) is as in Figure 6.
For \(i=1\) or \(2\), let \(\gamma_{i}\) be the arc of \(C\) from \(Q\) to \(P\) containing \(f_{i}\). Similarly let \(\gamma_{i}^{\prime}\) be the arc of \(C^{\prime}\) from \(P^{\prime}\) to \(Q^{\prime}\) containing \(f_{i}^{\prime}\). Let us orient \(c\) from \(P\) to \(P^{\prime}\) and \(d\) from \(Q^{\prime}\) to \(Q\).
The arcs \(\gamma_{1}\), \(\gamma_{2}\), \(\gamma_{1}^{\prime}\) and \(\gamma_{2}^{\prime}\) cover \(C\cup C^{\prime}\), therefore we have
\[L(\gamma_{1})+L(\gamma_{1}^{\prime})+L(\gamma_{2})+L(\gamma_{2}^{\prime})=2kL.\]
It is possible to assume without loss of generality that
\[L(\gamma_{1})+L(\gamma_{1}^{\prime})\leq kL.\]
The path \(\gamma_{1}\) consists of \(L(\gamma_{1})/L\) edges. Let \(g_{1}\) be the last edge of \(\gamma_{1}\). Similarly, let \(g_{1}^{\prime}\) be the first edge of \(\gamma_{1}^{\prime}\). By definition, \(g_{1}\) contains \(P\) and \(g_{1}^{\prime}\) contains \(P^{\prime}\). Let \(G_{1}\) be the hexagon with the three consecutive edges \(g_{1}\), \(c\) and \(g_{1}^{\prime}\).
Now \(f_{1}\neq g_{1}\) and \(f_{1}^{\prime}\neq g_{1}^{\prime}\), otherwise the edge \(d\) would join \(g_{1}\) and \(g_{1}^{\prime}\). Hence we have \(F_{1}\neq G_{1}\). Therefore there is a factorization \(\gamma_{1}=f_{1}*\delta_{1}*g_{1}\), where \(\delta_{1}\) is the geodesic arc between \(f_{1}\) and \(g_{1}\) and where the notation \(*\) stands for the concatenation of paths. Similarly, there is a factorization \(\gamma_{1}^{\prime}=g_{1}^{\prime}*\delta_{1}^{\prime}*f_{1}^{\prime}\).
Figure 6. Respective positions of \(F_{1}\) and \(F_{2}\). They appear on the left side and the right side of the figure: it should be understood that they lie on a cylinder.
We now show that the loop
\[\gamma:=\gamma_{1}*c*\gamma_{1}^{\prime}*d,\]
is not null-homotopic. Assume otherwise. Set \(S_{g}=S_{\tau}(\pi/2)\), let \(\pi:\mathbb{H}\to S_{g}\) be the universal cover of \(S_{g}\) and let \(\tilde{\gamma}\) be a lift of \(\gamma\) in \(\mathbb{H}\). Since \(\gamma\) is composed of four geodesic arcs and the angles between them are \(\pi/2\), the lift \(\tilde{\gamma}\) would bound a quadrilateral whose inner angles are all \(\pi/2\) or \(3\pi/2\), which is impossible. It follows that \(\gamma\) is not null-homotopic.
The hexagon \(F_{1}\) contains a Saccheri quadrilateral whose base is \(d\) and whose feet are \(f_{1}\) and \(f_{1}^{\prime}\). Let \(d^{\prime}\) be the fourth side, oriented from \(f_{1}^{\prime}\) to \(f_{1}\). As a path, \(d^{\prime}\) is homotopic to \(f_{1}^{\prime}*d*f_{1}\).
Similarly, let \(c^{\prime}\) be the fourth side, oriented from \(g_{1}\) to \(g_{1}^{\prime}\), of the Saccheri quadrilateral in \(G_{1}\) whose base is \(c\) and whose feet are \(g_{1}\) and \(g_{1}^{\prime}\). Then \(c^{\prime}\) is homotopic to \(g_{1}*c*g_{1}^{\prime}\).
Up to a reparametrization, we have
\[\gamma=\delta_{1}*g_{1}*c*g_{1}^{\prime}*\delta_{1}^{\prime}*f_{1}^{\prime}*d*f _{1}.\]
Hence \(\gamma\) is homotopic to \(\overline{\gamma}=\delta_{1}*c^{\prime}*\delta_{1}^{\prime}*d^{\prime}\). Since \(L(c^{\prime})=L(d^{\prime})=L^{\prime}\), we have
\[L(\overline{\gamma})=L(\gamma_{1})+L(\gamma_{1}^{\prime})+2L^{\prime}-4L<kL\]
by Lemma 6, which contradicts that \(\mathcal{C}\) is the set of systoles.
### Two corollaries
We will now derive two corollaries concerning the structure of \(\mathcal{P}_{g}\) at the neighborhood of a Sanki's path.
Given a finite set of curves \(\mathcal{C}=\{C_{1},C_{2},\ldots,C_{n}\}\), let \(E(\mathcal{C})\) be the set of \(x\in\mathcal{T}_{g}\) such that
\[L(C_{1})(x)=L(C_{2})(x)=\cdots=L(C_{n})(x).\]
Also let \(\operatorname{Sys}\left(\mathcal{C}\right)\) be the set of points \(x\in\mathcal{P}_{g}\) such that \(\mathcal{C}\) is the set of systoles at \(x\).
Let \(X\subset\mathcal{P}_{g}\) be a locally closed subset, and let \(x\in\overline{X}\). We say that \(X\) is _locally a smooth manifold at \(x\)_ if \(U\cap X\) is a smooth manifold for some open neighborhood \(U\) of \(x\). When it is the case, the local codimension \(\operatorname{codim}_{x}X\) is well defined. This definition does not require that \(x\) belongs to \(X\).
**Corollary 17**.: _Let \(\tau\in\operatorname{Tess}(\mathcal{S},k)\) for some \(k\geq 3\) such that \(\operatorname{Curv}(\tau)\) is the set of systoles of \(S_{\tau}(\pi/2)\)._
_Let \(\mathcal{C}\subset\operatorname{Curv}(\tau)\) be any filling subset. Then for any \(\epsilon\neq\pi/2\) close to \(\pi/2\), we have_
_(i) \(\sigma_{\tau}(\epsilon)\) belongs to \(\overline{\operatorname{Sys}\left(\mathcal{C}\right)}\),_
_(ii) \(\operatorname{Sys}\left(\mathcal{C}\right)\) is a smooth manifold in the neighborhood of \(\sigma_{\tau}(\epsilon)\), and_
_(iii) \(\operatorname{codim}_{\sigma_{\tau}(\epsilon)}\operatorname{Sys}\left( \mathcal{C}\right)=\operatorname{Card}\mathcal{C}-1\)._
Proof.: By Lemma 16, the tesselation \(\tau\) belongs to \(\operatorname{Tess}_{45}(\mathcal{S},k)\). Hence by Theorem 15, the map
\[L(\mathcal{C}):\mathcal{T}(\mathcal{S})\to\mathbb{R}^{\mathcal{C}},\ x\mapsto(L(C)(x))_{C\in\mathcal{C}}\]
is a submersion at the point \(\sigma_{\tau}(\epsilon)\) for all \(\epsilon\neq\pi/2\) close to \(\pi/2\). By the submersion theorem, \(E(\mathcal{C})\) is smooth of codimension \(\operatorname{Card}\mathcal{C}-1\) around the point \(\sigma_{\tau}(\epsilon)\), and \(\sigma_{\tau}(\epsilon)\) is adherent to the set \(E^{+}(\mathcal{C})\) of all \(x\in E(\mathcal{C})\) defined by the inequalities
\[L(C)(x)<L(C^{\prime})(x)\quad\text{for all }C\in\mathcal{C}\text{ and }C^{\prime}\in\operatorname{Curv}(\tau)\smallsetminus\mathcal{C}.\]
Thus Assertions (ii) and (iii) follow from the fact that \(\operatorname{Sys}\left(\mathcal{C}\right)\) is an open subset of \(E(\mathcal{C})\), see [16, 17].
**Corollary 18**.: _Under the hypothesis of Corollary 17, the point \(\sigma_{\tau}(\pi/2)\) is adherent to \(\operatorname{Sys}\left(\mathcal{C}\right)\) and we have_
\[\operatorname{codim}\operatorname{Sys}\left(\mathcal{C}\right)<\operatorname{ Card}\left(\mathcal{C}\right)\text{.}\]
### The main result from [10]
A _decoration_ of the hexagon \(H(\pi/2)\) is a cyclic indexing of its six sides by \(\mathbb{Z}/6\mathbb{Z}\). Up to direct isometries, there are exactly two decorated hexagons, say \(\mathcal{H}\) and \(\overline{\mathcal{H}}\).
Let \(S\) be a closed hyperbolic surface. A _standard_ hexagonal tesselation \(\tau\) of \(S\) is a tesselation of \(S\), where each tile is isomorphic to \(\mathcal{H}\) or \(\overline{\mathcal{H}}\). Of course, it is assumed that tiles are glued along edges of the same index.
**Theorem** (Theorem 25 of [10]).: _There exists an infinite set \(A\) of integers \(g\geq 2\), and, for any \(g\in A\), a closed oriented hyperbolic surface \(S_{g}\) of genus \(g\) endowed with a standard tesselation \(\tau_{g}\) satisfying the following assertions_
1. _the systoles of_ \(S_{g}\) _are the curves of_ \(\tau_{g}\)_, and_
2. _we have_ \[\operatorname{Card}\,\operatorname{Syst}(S_{g})\leq\frac{57}{\sqrt{\ln\ln\ln g }}\ \frac{g}{\sqrt{\ln g}}\text{.}\]
### Proof of Theorem 1
**Theorem 1**.: _There exists an infinite set \(A\) of integers \(g\geq 2\) such that_
\[\operatorname{codim}\mathcal{P}_{g}<\frac{38}{\sqrt{\ln\ln\ln g}}\ \frac{g}{\sqrt{\ln g}}\text{,}\]
_for any \(g\in A\)._
Proof.: Let \(A\) be the set of integers given by the theorem of Subsection 6.3. Let \(g\in A\) and let \(\tau_{g}\) be the corresponding tesselation.
By hypothesis, any curve \(C\) of \(\tau_{g}\) consists of edges of the same index; this common index will be called the index of the curve. Let \(\mathcal{C}\) be the set of all curves of index \(3,4,5\) or \(6\). We claim that \(\mathcal{C}\) fills the surface.
Let \(P\) be a vertex at the intersection of two curves of index \(1\) and \(2\). Let \(\mathbf{Q}\) be the union of the four hexagons surrounding \(P\). It turns out that \(\mathbf{Q}\) is a \(12\)-gon whose edges have indices distinct from \(1\) and \(2\). It follows that \(\mathcal{C}\) cuts the surface into these \(12\)-gons.
It is clear that \(\operatorname{Card}\,\mathcal{C}=2/3\operatorname{Card}\,\operatorname{Syst}(S _{g})\). To finish the proof, it is enough to show that \(\tau_{g}\) satisfies the hypothesis of Corollary 18.
We can assign the red colour to the edges of \(\tau_{g}\) of index \(1,2\) or \(3\), and the blue colour to the other edges. Moreover, since all curves have the same length, the tesselation is \(k\)-regular for some \(k\). The case \(k=2\) was excluded from consideration in [10], so we have \(k\geq 3\). In fact, the decoration implies that \(k\) is even [10], so we have \(k\geq 4\). It follows that \(\tau_{g}\) belongs to \(\operatorname{Tess}(\mathcal{S},k)\) for some \(k\geq 4\).
It follows from Corollary 18 that
\[\operatorname{codim}\mathcal{P}_{g}<\tfrac{38}{\sqrt{\ln\ln\ln g}}\ \tfrac{g}{\sqrt{\ln g}}.\]
## 7. Examples
Before [3], it was a challenging question whether \(\operatorname{codim}\mathcal{P}_{g}\) is less than \(2g-1\). Since the bounds in [3] are not explicit, it is still interesting to know the smallest \(g\) for which \(\operatorname{codim}\mathcal{P}_{g}<2g-1\). We will describe our construction for \(g=17\) and show that \(\operatorname{codim}\mathcal{P}_{17}<33\).
We will first briefly explain the case \(2k=2\), which was excluded from consideration in order to avoid some of its specific features.
### Standard \(2k\)-regular tesselations
We will briefly explain the construction of all standard \(2k\)-regular tesselations, following [10]. Let \(\mathcal{H}\) be a decorated right-angled regular hexagon of the Poincaré half-plane \(\mathbb{H}\). For each \(i\in\mathbb{Z}/6\mathbb{Z}\), let \(s_{i}\) be the reflection in the line \(\Delta_{i}\) containing the side of index \(i\) of \(\mathcal{H}\). The group \(W\) generated by these reflections is a Coxeter group with presentation
\[\langle s_{i}\mid(s_{i}s_{i+1})^{2}=1,\forall i\in\mathbb{Z}/6\mathbb{Z}\rangle.\]
By a theorem of Poincaré, the collection of hexagons \(\{w.\mathcal{H}\mid w\in W\}\) is the set of tiles of a tesselation \(\tau\) of \(\mathbb{H}\).
Let \(W^{+}\) be the subgroup of index two consisting of products of an even number of generators. Let \(k\geq 1\). Let \(H\) be a subgroup of \(W\) satisfying
1. \(H\) is a finite index subgroup of \(W^{+}\),
2. \(H^{w}\cap\langle s_{i},s_{i+1}\rangle=\{1\}\), and
3. \(H^{w}\cap\langle s_{i}s_{i+2}\rangle=\langle(s_{i}s_{i+2})^{k}\rangle\),
for any \(i\in\mathbb{Z}/6\mathbb{Z}\) and \(w\in W\), where \(H^{w}\) stands for \(wHw^{-1}\).
Then \(\mathbb{H}/H\) is a closed oriented hyperbolic surface endowed with a \(2k\)-regular standard tesselation. Conversely, any such tesselated surface is isometric to \(\mathbb{H}/H\), where \(H\) satisfies the previous conditions, see [10], Theorem 12. This leads to the question, only partially answered by Criterion 18 of [10]: when are the curves of the tesselation the systoles of the surface?
### Schmutz's genus two surface
The case \(2k=2\) is simple, but it has been excluded because of its peculiarities. In fact, the three-holed sphere \(\Sigma(2,\epsilon)\) is equal to \(\Pi(2,\epsilon)\).
There is only one subgroup \(H\) of \(W\) satisfying the previous three conditions, and we have \(W/H\simeq(\mathbb{Z}/2\mathbb{Z})^{2}\). The corresponding surface \(S_{2}:=\mathbb{H}/H\) is the genus \(2\) surface tesselated by \(4\) hexagons, see Figure 7. It has been proved in [16] that the set \(\mathcal{C}\) of curves of the tesselation is the set of systoles.
The surface \(S_{2}\) has six points \(P_{i}\), for \(i\in\mathbb{Z}/6\mathbb{Z}\), which are fixed by the hyperelliptic involution. Denote by \(C_{i,i+1}\) the curve of \(\mathcal{C}\) containing \(P_{i}\) and \(P_{i+1}\).
### An example of genus \(17\)
When \(2k=4\), the analysis is more complicated. We will describe a surface of genus \(17\) endowed with a \(4\)-regular tesselation.
Let \(H\subset W\) be the normal subgroup generated by the elements \((s_{i}s_{i+3})^{2}\) and set \(S_{17}=\mathbb{H}/H\). The quotient \(\Gamma:=W/H\) is isomorphic to \((\mathbb{Z}/2\mathbb{Z})^{6}\), hence \(S_{17}\) is tesselated by \(64\) hexagons. The tesselation then has \(96\) vertices, \(192\) edges and \(64\) faces, so \(\chi(S_{17})=96-192+64=-32\) and \(S_{17}\) has genus \(17\).
**Lemma 19**.: _The systoles of \(S_{17}\) are exactly the curves of the given tesselation._
Proof.: This specific example does not fully satisfy the hypotheses of Criterion 18 of [10], so we will briefly explain the proof.
The group \(\Gamma\) is given with \(6\) generators, and its Cayley graph \(\operatorname{Cay}\Gamma\) is the one-skeleton of a \(6\)-dimensional cube. There is an embedding of \(\operatorname{Cay}\Gamma\) in \(S_{17}\). The vertices are the centers of the hexagons and the edges are the geodesic arcs connecting two vertices belonging to two adjacent faces and crossing their common edge.
Figure 7. Up to repetition, there are only six vertices on the left side of this figure, which are the points \(P_{i}\) indexed by \(1,2,\ldots,6\). They are located on the \(x\)-axis of the figure on the right. The hyperelliptic involution is a \(180\)-degree rotation around this axis. Three systoles are located on the vertical plane and the other three are on the horizontal plane.
A loop in \(\operatorname{Cay}\Gamma\) is a word \(w\) on the letters \((s_{i})_{i\in\mathbb{Z}/6\mathbb{Z}}\) representing \(1\) in \(\Gamma\). The letters \(s_{1},s_{3}\) and \(s_{5}\) are called the red letters, and the other three are called the blue letters. For any word \(w\), let \(l_{R}(w)\), resp. \(l_{B}(w)\), be the number of occurrences of red letters, resp. of blue letters. Also set \(l(w)=l_{R}(w)+l_{B}(w)\).
As in Lemma 14 of [10], any closed geodesic \(\gamma\) is freely homotopic to a loop \(\omega(\gamma)\) in \(\operatorname{Cay}\Gamma\). Indeed if \(\gamma\) crosses successively some edges of index \(i_{1}\), \(i_{2}\), \(\dots i_{n}\) then \(\omega(\gamma)\) is the word \(s_{i_{1}}s_{i_{2}}\dots s_{i_{n}}\). If at some point \(\gamma\) crosses a vertex at the intersection of two edges of indices \(i\) and \(i+1\), the previous definition is ambiguous. By convention, we will consider that \(\gamma\) crosses first an edge of index \(i\) and then an edge of index \(i+1\).
We claim that the systoles of \(S_{17}\) are the curves of the tesselation, which have length \(4L\), where \(L=\operatorname{arcosh}2\). Let \(\gamma\) be a closed geodesic. Note that \(l_{R}(\omega(\gamma))\) and \(l_{B}(\omega(\gamma))\) are even.
First assume that \(l(\omega(\gamma))>4\). We have \(l_{R}(\omega(\gamma))\geq 4\) or \(l_{B}(\omega(\gamma))\geq 4\), and \(\gamma\) is not a curve of the tesselation. Therefore \(\gamma\) has length bigger than \(4L\) by Lemma 17 of [10].
Next assume that \(l(\omega(\gamma))\leq 4\). It is obvious that \(l(\omega(\gamma))\) is bigger than \(2\), so we have \(l(\omega(\gamma))=4\).
Note that \(\omega(\gamma)\) cannot contain two identical consecutive letters, so \(\omega(\gamma)=s_{i}s_{j}s_{i}s_{j}\) for some \(i\neq j\). Note also that the words \(s_{i}s_{i+1}s_{i}s_{i+1}\) are null-homotopic in \(S_{17}\). If \(\omega(\gamma)=s_{i}s_{i+2}s_{i}s_{i+2}\), then \(\gamma\) is a curve of the tesselation. If \(\omega(\gamma)=s_{i}s_{i+3}s_{i}s_{i+3}\), then \(\gamma\) is a concatenation of four arcs which connect the midpoint of a side of index \(i\) to the midpoint of a side of index \(i+3\). If \(c\) is one of these arcs, it cuts a hexagon into two right-angled pentagons. By the formula of Theorem 3.5.10 of [14], we have \(l(c)=\operatorname{arcosh}3\), therefore \(l(\gamma)=4\operatorname{arcosh}3\) is bigger than \(4L\). Since \(\gamma\) is defined up to an orientation, we have treated all cases where \(l(\omega(\gamma))=4\).
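The combinatorial part of this argument is easy to verify by machine. In the sketch below (ours), \(\Gamma\) is identified with \((\mathbb{Z}/2\mathbb{Z})^{6}\), each \(s_{i}\) mapping to a basis vector, so that a word represents \(1\) in \(\Gamma\) exactly when every letter occurs an even number of times; the cyclically reduced words of length \(4\) representing \(1\) are then precisely the \(s_{i}s_{j}s_{i}s_{j}\) with \(i\neq j\), falling into the three cases \(j-i\equiv\pm 1,\pm 2,\pm 3\pmod 6\) treated above.

```python
from itertools import product

# Gamma = W/H is identified with (Z/2Z)^6, each s_i mapping to a basis
# vector; a word represents 1 iff every letter occurs an even number of times.
def represents_identity(word):
    return all(word.count(i) % 2 == 0 for i in range(6))

loops = [w for w in product(range(6), repeat=4)
         if all(w[t] != w[(t + 1) % 4] for t in range(4))  # cyclically reduced
         and represents_identity(w)]

# Every such loop has the form (i, j, i, j) with i != j; classify by j - i mod 6.
assert all(w[0] == w[2] and w[1] == w[3] for w in loops)
print(len(loops), sorted({(w[1] - w[0]) % 6 for w in loops}))  # 30 loops, gaps 1..5
```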
The next lemma shows \(\operatorname{codim}\mathcal{P}_{g}<2g-1\) for \(g=17\).
**Lemma 20**.: _We have \(\operatorname{codim}\mathcal{P}_{17}<33\)._
Proof.: The surface \(S_{17}\) has \(48\) curves. Let \(\mathcal{C}\) be the set of all curves of index \(3\),\(4\),\(5\), or \(6\). As in the proof of Theorem 1, the set \(\mathcal{C}\) fills the surface. Since \(\operatorname{Card}\mathcal{C}=32\), we have \(\operatorname{codim}\mathcal{P}_{17}<32\) by Corollary 18.
_Remark_. The set \(\mathcal{C}\) of the proof is not a minimal filling subset. Informal computations suggest that the minimal filling subsets have cardinality \(25\), and that \(\operatorname{codim}\mathcal{P}_{17}=24\).
|
2306.10086
|
Covariant Quantization of the Partially Massless Graviton Field in de
Sitter Spacetime
|
We present a covariant quantization scheme for the so-called "partially
massless" graviton field in de Sitter spacetime. Our approach is founded on the
principles of the de Sitter group representation theory (in the sense given by
Wigner), the Wightman-G\"{a}rding axioms for gauge-invariant fields
(Gupta-Bleuler scheme), and the essential analyticity prerequisites in the
complexified pseudo-Riemannian manifold. To implement the quantization process
effectively, we adopt coordinate-independent (global) de Sitter plane waves.
These plane waves, defined in the appropriate tube domains of complex de Sitter
spacetime, serve as the de Sitter counterparts to the standard Minkowskian
plane waves. By employing these analytical plane waves, we enable a spectral
analysis of the corresponding two-point function that closely resembles the
Fourier analysis typically employed in the flat Minkowskian case. Within this
framework, we present the Wightman two-point function for the partially
massless graviton field, which satisfies the essential criteria of locality,
covariance, and normal analyticity. Furthermore, we provide insights into the
underlying Hilbert space structure and the corresponding unsmeared field
operator. A direct consequence of this quantization construction confirms the
widely accepted notion of light-cone propagation for the de Sitter partially
massless graviton field.
|
Jean-Pierre Gazeau, Hamed Pejhan
|
2023-06-16T12:56:23Z
|
http://arxiv.org/abs/2306.10086v3
|
# Covariant Quantization of the Partially Massless Graviton Field in de Sitter Spacetime
###### Abstract
We present a covariant quantization scheme for the so-called "partially massless" graviton field in de Sitter spacetime. Our approach is founded on the principles of the de Sitter group representation theory (in the sense given by Wigner), the Wightman-Garding axioms for gauge-invariant fields (Gupta-Bleuler scheme), and the essential analyticity prerequisites in the complexified pseudo-Riemannian manifold. To implement the quantization process effectively, we adopt coordinate-independent (global) de Sitter plane waves. These plane waves, defined in the appropriate tube domains of complex de Sitter spacetime, serve as the de Sitter counterparts to the standard Minkowskian plane waves. By employing these analytical plane waves, we enable a spectral analysis of the corresponding two-point function that closely resembles the Fourier analysis typically employed in the flat Minkowskian case. Within this framework, we present the Wightman two-point function for the partially massless graviton field, which satisfies the essential criteria of locality, covariance, and normal analyticity. Furthermore, we provide insights into the underlying Hilbert space structure and the corresponding unsmeared field operator. A direct consequence of this quantization construction confirms the widely accepted notion of light-cone propagation for the de Sitter partially massless graviton field.
## I Introduction
### Motivation
This paper is part of a series of papers/books (see Refs. [1; 2] and references therein) that attempts to develop a consistent formulation of elementary systems in the global structure of de Sitter (dS) and anti-dS (AdS) spacetimes, in the Wigner sense [3; 4], as associated with unitary irreducible representations (UIRs) of the dS and AdS relativity groups, respectively;1 (A)dS relativity versus Einstein-Poincare relativity. The motivation for this attempt, if we restrict our attention merely to the dS case which is of interest in the present study, is rooted in part in the key role that is played by the dS geometry in the _inflationary cosmological scenarii_2, in part in the desire to establish possible mechanisms for _late-time cosmology_3, and in part in the need for a dS analogue of the so-called AdS/CFT correspondence (the dS/CFT correspondence). Yet, the underlying motivation behind this attempt stems from a more fundamental consideration/concern that we now elaborate on.
Footnote 1: To get the gist, let \(P\) denote the physical systems whose global and local symmetries of their classical phase spaces are respectively determined by a Lie group \(G\) and its Lie algebra \(\mathfrak{g}\). [This is the case, for instance, for “free” elementary systems living in dS and AdS spacetimes]. Then, the following statements hold:
\(\bullet\) The phase-space reading of \(P\)s can be realized by the orbits under the co-adjoint action of \(G\) in the dual linear space to \(\mathfrak{g}\) (traditionally, symbolized by \(\mathfrak{g}^{*}\) in the literature). Such orbits, known as co-adjoint orbits, are symplectic manifolds. Moreover, each co-adjoint orbit carries a natural \(G\)-invariant (Liouville) measure and also, as a homogeneous space, is homeomorphic to an even-dimensional group coset \(G/G_{s}\), where \(G_{s}\), being a (closed) subgroup of \(G\), stabilizes some orbit point. For more details, see Refs. [5; 6].
\(\bullet\) Co-adjoint orbits, possessing very rich analytic structures, also underlie (projective) Hilbert spaces carrying UIRs of the respective symmetry group \(G\). [Here, a comprehensive program of quantization of functions (or distributions) by considering all references of covariant integral quantization (see, for instance, Refs. [7; 8; 9; 10; 11]) can be taken into account.] In the sense that was initially put forward by Wigner [3; 4] in the context of Einstein-Poincare relativity and then extended by others [12; 13; 14; 15; 16; 17] to Galilean, dS, and AdS systems, the (projective) Hilbert spaces identify (in some restricted sense) quantum ("one-particle") state spaces in the respective quantum-mechanical reading of \(P\)s. The invariant parameters characterizing the (projective) UIRs then identify the basic quantum numbers characterizing the respective quantum states of \(P\)s. Remarkably, by construction, this quantization scheme guarantees a “smooth” transition from the classical to the quantum reading of the physical systems \(P\). For a more detailed discussion, focusing on the dS and AdS cases, readers are referred to Refs. [1; 2] and references therein.
Footnote 2: According to the inflationary cosmological scenarii, our Universe experienced a dS phase in the very early epochs of its life [18].
Footnote 3: Recent astrophysical data coming from type Ia supernovae [19] show that the expansion of our Universe is accelerating and point towards a small but nonvanishing positive cosmological constant. In other words, our Universe might presently be in a dS phase, which tends towards a pure dS spacetime.
\({}^{1}\)_Universite Paris Cite, CNRS, Astroparticule et Cosmologie, F-75013 Paris, France_
\({}^{2}\)_Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. G. Bonchev Str. Bl. 8, 1113, Sofia, Bulgaria_
|
2310.10990
|
A second-order exponential integration constraint energy minimizing
generalized multiscale method for parabolic problems
|
This paper investigates an efficient exponential integrator generalized
multiscale finite element method for solving a class of time-evolving partial
differential equations in bounded domains. The proposed method first performs
the spatial discretization of the model problem using constraint energy
minimizing generalized multiscale finite element method (CEM-GMsFEM). This
approach consists of two stages. First, the auxiliary space is constructed by
solving local spectral problems, where the basis functions corresponding to
small eigenvalues are captured. The multiscale basis functions are obtained in
the second stage using the auxiliary space by solving local energy minimization
problems over the oversampling domains. The basis functions have exponential
decay outside the corresponding local oversampling regions. We shall consider
the first and second-order explicit exponential Runge-Kutta approach for
temporal discretization and to build a fully discrete numerical solution. The
exponential integration strategy for the time variable allows us to take full
advantage of the CEM-GMsFEM as it enables larger time steps due to its
stability properties. We derive the error estimates in the energy norm under
the regularity assumption. Finally, we will provide some numerical experiments
to sustain the efficiency of the proposed method.
|
Leonardo A. Poveda, Juan Galvis, Eric Chung
|
2023-10-17T04:28:22Z
|
http://arxiv.org/abs/2310.10990v2
|
A second-order exponential integration constraint energy minimizing generalized multiscale method for parabolic problems
###### Abstract
This paper investigates an efficient exponential integrator generalized multiscale finite element method for solving a class of time-evolving partial differential equations in bounded domains. The proposed method first performs the spatial discretization of the model problem using constraint energy minimizing generalized multiscale finite element method (CEM-GMsFEM). This approach consists of two stages. First, the auxiliary space is constructed by solving local spectral problems, where the basis functions corresponding to small eigenvalues are captured. The multiscale basis functions are obtained in the second stage using the auxiliary space by solving local energy minimization problems over the oversampling domains. The basis functions have exponential decay outside the corresponding local oversampling regions. We shall consider the first and second-order explicit exponential Runge-Kutta approach for temporal discretization and to build a fully discrete numerical solution. The exponential integration strategy for the time variable allows us to take full advantage of the CEM-GMsFEM as it enables larger time steps due to its stability properties. We derive the error estimates in the energy norm under the regularity assumption. Finally, we will provide some numerical experiments to sustain the efficiency of the proposed method.
## 1 Introduction
In recent decades, the scientific community has devoted considerable effort to developing efficient numerical methods to approximate nonlinear and semilinear parabolic partial differential equations in high-contrast media, which arise from various practical problems. In general, it is hard to find the exact solutions of this kind of model; numerical approaches are currently the essential tools to approximate these solutions. As is well known, these approximations involve two stages. First, high-contrast ratios require fine-scale meshes
in the spatial discretization, drastically increasing the number of degrees of freedom and leading to prohibitive computational costs. On the other hand, high-contrast ratios frequently deteriorate the convergence of the fine-scale approximation, which becomes a common challenge. Many approaches exist to handle high-contrast problems, usually referred to as numerical upscaling methods, multiscale finite element methods, variational multiscale methods, heterogeneous multiscale methods, mortar multiscale methods, localized orthogonal decomposition methods (LOD), generalized multiscale finite element methods (GMsFEM), and so on in the literature Hou and Wu (1997); Hughes et al. (1998); Efendiev et al. (2013); Arbogast and Xiao (2013); Chung et al. (2023). The proposed method in this work belongs to the class of generalized multiscale finite element methods, where the objective is to seek multiscale basis functions that represent the local heterogeneities by solving local constraint-minimizing problems. This approach was initially proposed by Chung et al. (2018) and has been widely applied, for example in Li et al. (2019); Fu and Chung (2020); Chung and Pun (2020); Wang et al. (2021); Poveda et al. (2023). In particular, CEM-GMsFEM can be divided into two steps. First, this method constructs auxiliary basis functions via local spectral problems. Then, constraint energy minimization problems are solved to obtain the required multiscale basis functions. These basis functions are shown to decay exponentially away from the target coarse block and can be computed locally.
In the second stage, explicit, semi-implicit, and fully-implicit schemes are frequently used for temporal discretization. The latter are unconditionally stable, in contrast with explicit methods, which are cheaper per step but require constraints on the time step size. Nevertheless, in implicit schemes, we need to solve non-linear equations at each time step using some iterative method, which can be a bottleneck in computations. In recent years, alternative techniques have emerged to handle the corresponding nonlinearity. Among the most outstanding, the so-called Exponential Integrators are a good candidate for this purpose. These schemes are robust time-stepping methods that do not need the solution of large linear systems Hochbruck et al. (1998); Cox and Matthews (2002); Hochbruck and Ostermann (2010). Instead, they advance the solution explicitly using matrix exponential computations at each time step. In the literature, there exist several types of methods such as exponential Runge-Kutta methods (Hochbruck and Ostermann, 2005, 2005, 2008), exponential Rosenbrock methods (Hochbruck et al., 2008, 2010), exponential multistep methods (Hochbruck and Ostermann, 2011), exponential splitting methods (Hansen and Ostermann, 2009) and Lawson methods (Lawson, 1967).
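For orientation, here is a minimal sketch (ours, with toy data and hypothetical helper names, not code from the paper) of the first-order exponential Euler step \(u_{n+1}=e^{-\tau A}u_{n}+\tau\varphi_{1}(-\tau A)f(u_{n})\), the simplest member of the exponential Runge-Kutta family mentioned above:

```python
import numpy as np
from scipy.linalg import expm

def phi1(M):
    # phi_1(M) = M^{-1}(e^M - I), computed densely; fine for small examples.
    return np.linalg.solve(M, expm(M) - np.eye(M.shape[0]))

def exponential_euler_step(A, f, u, tau):
    # One step for u'(t) = -A u(t) + f(u(t)).
    return expm(-tau * A) @ u + tau * phi1(-tau * A) @ f(u)

# Toy data: 1D Laplacian on n interior nodes with a cubic reaction term.
n = 50
A = (n + 1) ** 2 * (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
f = lambda u: u - u ** 3
u = np.sin(np.pi * np.linspace(0, 1, n + 2)[1:-1])
for _ in range(10):
    u = exponential_euler_step(A, f, u, tau=1e-2)
```

The stiff linear part is integrated exactly, which is what allows the larger time steps alluded to in the abstract.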
This paper is mainly motivated by Contreras et al. (2023); Huang et al. (2023). We design and analyze an exponential integration CEM-GMsFEM for semilinear parabolic problems in high-contrast multiscale media. We present a rigorous convergence analysis, which has been lacking so far. The proof is provided under suitable assumptions on the nonlinear reaction term and the exact solution.
The remainder of this paper is as follows. Section 2 describes the problem and its spatial discretization. The construction of CEM-GMsFEM basis functions using constraint
energy minimization is given in Section 3. The multiscale basis functions are constructed by solving a class of local spectral problems and constrained minimization problems. In Section 4, we present the explicit exponential Runge-Kutta method. Under appropriate assumptions of the exact solution and nonlinear reaction term, we show the fully discrete error analysis in Appendix A. Numerical experiments are presented in Section 5. Finally, conclusions and final remarks are drawn in Section 6.
## 2 Problem setup
In this section, we study the numerical solution of the semilinear parabolic problem of the form:
\[\left\{\begin{aligned} \partial_{t}u-\operatorname{div}( \kappa(\mathrm{x})\nabla u)&=f(u),\quad\text{in }\Omega\times[0,T],\\ u(\mathrm{x},t)&=0,\quad\text{on }\partial\Omega \times[0,T],\\ u(\mathrm{x},t=0)&=\hat{u},\quad\text{on }\Omega,\end{aligned}\right. \tag{2.1}\]
where \(\Omega\) is an open domain in \(\mathbb{R}^{d}\left(d=2,3\right)\) with boundary \(\partial\Omega\), \(u(\mathrm{x},t)\) is the unknown function, \(\kappa\) denotes the high-contrast multiscale field satisfying \(\kappa_{0}\leq\kappa\leq\kappa_{1}\) with \(0<\kappa_{0}<\kappa_{1}<\infty\), and \(f(u)\) is the nonlinear reaction term of the underlying system, explicitly independent of time. For simplicity of presentation, we consider parabolic problems subject to homogeneous Dirichlet boundary conditions; we refer to Ye and Chung (2023) for a detailed analysis of CEM-GMsFEM for high-contrast elliptic problems with inhomogeneous boundary conditions.
We briefly present the notation for the function spaces and norms used throughout this paper. We denote by \(\mathrm{H}^{m}(D)\), with \(m\geq 0\), the Sobolev spaces on a subdomain \(D\subset\Omega\), equipped with the norm \(\|\cdot\|_{m,D}\); if \(D=\Omega\), we omit the index \(D\). We denote by \(\|\cdot\|_{0,D}\) the norm associated with the inner product \((\cdot,\cdot)\) in the space \(\mathrm{L}^{2}(D)\). We further let \(\mathrm{H}^{1}_{0}(D)\) be the subspace of \(\mathrm{H}^{1}(D)\) with zero trace on \(\partial D\). We write \(x\preceq y\) to indicate that there exists a positive constant \(C\), independent of the grid size, such that \(x\leq Cy\).
### A semi-discretization by finite element grid approximation
In this subsection, we introduce the fine and coarse grids used to discretize problem (2.1). Let \(\mathcal{T}^{H}\) be a conforming partition of the computational domain \(\Omega\) into coarse blocks \(K\in\mathcal{T}^{H}\) of diameter \(H\). We refer to this partition as the coarse grid and assume that each coarse element is partitioned into a connected union of fine-grid blocks. The fine-grid partition is denoted by \(\mathcal{T}^{h}\) and is, by definition, a refinement of the coarse grid \(\mathcal{T}^{H}\), with \(h\ll H\). We denote by \(\{\mathrm{x}_{i}\}_{i=1}^{N_{\mathrm{c}}}\) the vertices of the coarse grid \(\mathcal{T}^{H}\), where \(N_{\mathrm{c}}\) is the number of coarse nodes. We define the neighborhood of the node \(\mathrm{x}_{i}\) by
\[\omega_{i}=\bigcup\{K_{j}\in\mathcal{T}^{H}:\mathrm{x}_{i}\in\overline{K}_{j }\}.\]
In addition, for the CEM-GMsFEM considered in this paper, given a coarse block \(K_{i}\), we denote by \(K_{i,m}\subset\Omega\) the oversampling region obtained by enlarging \(K_{i}\) with \(m\geq 1\) coarse grid layers, see Fig. 1.
We consider the linear finite element space \(\mathrm{V}_{h}\) associated with the grid \(\mathcal{T}^{h}\), where the basis functions in this space are the standard Lagrange basis functions defined as \(\{\eta^{i}\}_{i=1}^{N_{f}}\), where \(N_{f}\) denotes the number of interior nodes of \(\mathcal{T}^{h}\). Then, the semi-discrete finite element approximation to (2.1) on the fine grid is to find \(u_{h}\in\mathrm{V}_{h}\) such that
\[\begin{cases}(\partial_{t}u_{h},v_{h})+a(u_{h},v_{h})=(f(u_{h}),v_{h}),\quad \text{for each $v_{h}\in\mathrm{V}_{h}$,for all $t\in[0,T]$,}\\ (u_{h}(0),v_{h})=(\hat{u},v_{h}),\quad\text{for each $v_{h}\in\mathrm{V}_{h}$.} \end{cases} \tag{2.2}\]
Here we use the bilinear forms
\[(u,v) =\int_{\Omega}u(\mathrm{x})v(\mathrm{x})d\mathrm{x},\quad\text{ for each $u,v\in\mathrm{L}^{2}(\Omega)$,}\] \[a(u,v) =\int_{\Omega}\kappa(\mathrm{x})\nabla u(\mathrm{x})\cdot\nabla v (\mathrm{x})d\mathrm{x},\quad\text{for each $u,v\in\mathrm{H}_{0}^{1}(\Omega)$.}\]
We consider the following representation for the solution of (2.2):
\[u_{h}(\mathrm{x},t)=\sum_{i=1}^{N_{f}}u_{i}(t)\eta_{i}(\mathrm{x}),\quad\text{ for all $t\in[0,T]$.} \tag{2.3}\]
Figure 1: Illustration of the 2D multiscale grid with a typical coarse element \(K_{i}\) and oversampling domain \(K_{i,2}\), the fine grid element and neighborhood \(\omega_{i}\) of the node \(\mathrm{x}_{i}\).
Substituting the representation above into (2.2) and taking \(v_h=\eta_{j}\) for \(j=1,\ldots,N_{f}\), we obtain
\[\begin{cases}\sum_{i=1}^{N_{f}}\frac{d}{dt}u_{i}(t)(\eta_{i},\eta_{ j})+\sum_{i=1}^{N_{f}}u_{i}(t)a(\eta_{i},\eta_{j})=\left(f\left(\sum_{i=1}^{N_{f}}u_ {i}(t)\eta_{i}\right),\eta_{j}\right),\quad j=1,\ldots,N_{f},\\ \sum_{i=1}^{N_{f}}u_{i}(0)(\eta_{i},\eta_{j})=(\hat{u},\eta_{j}), \quad j=1,\ldots,N_{f}.\end{cases} \tag{2.4}\]
We recast (2.4) in a continuous-time matrix formulation
\[\begin{cases}M\frac{d}{dt}u_{h}(t)+Au_{h}(t)=F(u_{h}(t)),\\ Mu_{h}(0)=\hat{u},\end{cases} \tag{2.5}\]
where matrices \(M,A\in\mathbb{R}^{N_{f}\times N_{f}}\), and term \(F(u_{h})\in\mathbb{R}^{N_{f}}\), with
\[\begin{split}[M]_{ij}&=\int_{\Omega}\eta_{i}( \mathrm{x})\eta_{j}(\mathrm{x})d\mathrm{x},\quad[A]_{ij}=\int_{\Omega}\kappa( \mathrm{x})\nabla\eta_{i}(\mathrm{x})\cdot\nabla\eta_{j}(\mathrm{x})d\mathrm{ x},\\ F_{i}(u_{h})&=\int_{\Omega}f(u_{h})\eta_{i}(\mathrm{x})d \mathrm{x},\quad u_{h}=[u_{1}(t),\ldots,u_{N_{f}}(t)].\end{split} \tag{2.6}\]
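For illustration, the following is a minimal Python sketch of the assembly in (2.6) in a simplified one-dimensional analogue of the problem; the function name `assemble_1d`, the midpoint evaluation of \(\kappa\), and the 1D setting are our own assumptions for exposition, not the actual 2D fine-grid implementation used later.

```python
import numpy as np

def assemble_1d(kappa, n):
    """P1 mass/stiffness assembly of (2.6) on a uniform 1D grid with n
    interior nodes and homogeneous Dirichlet conditions -- a simplified
    1D analogue of the 2D fine-grid assembly, for illustration only.
    kappa is a callable evaluated at element midpoints (piecewise-
    constant approximation of the coefficient)."""
    h = 1.0 / (n + 1)
    k = kappa((np.arange(n + 1) + 0.5) * h)   # one value per element
    M = np.zeros((n, n))
    A = np.zeros((n, n))
    for e in range(n + 1):                    # element (x_e, x_{e+1})
        for a in (0, 1):
            i = e - 1 + a                     # interior index of local node a
            if not 0 <= i < n:
                continue
            for b in (0, 1):
                j = e - 1 + b
                if not 0 <= j < n:
                    continue
                A[i, j] += k[e] * (1.0 if a == b else -1.0) / h
                M[i, j] += h * (2.0 if a == b else 1.0) / 6.0
    return M, A
```

The 2D case follows the same element-by-element pattern, with the local \(2\times 2\) matrices replaced by their triangular or quadrilateral counterparts.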
## 3 Construction of CEM-GMsFEM basis functions
In this section, we describe the construction of the CEM-GMsFEM basis functions, following the framework of Chung et al. (2018). The procedure is divided into two stages. The first stage constructs the auxiliary spaces by solving a local spectral problem on each coarse element \(K\). The second stage provides the multiscale basis functions by solving local constraint energy minimization problems on oversampling regions.
### Auxiliary basis function
We present the construction of the auxiliary multiscale basis functions by solving a local eigenvalue problem on each coarse element \(K_{i}\). We consider \(\mathrm{H}^{1}(K_{i}):=\mathrm{H}^{1}(\Omega)\big{|}_{K_{i}}\), the restriction of the space \(\mathrm{H}^{1}(\Omega)\) to the coarse element \(K_{i}\), and solve the following local eigenvalue problem: find \(\{\lambda_{j}^{(i)},\varphi_{j}^{(i)}\}\) such that
\[a_{i}(\varphi_{j}^{(i)},w)=\lambda_{j}^{(i)}s_{i}(\varphi_{j}^{(i)},w),\quad \text{for each }w\in\mathrm{H}^{1}(K_{i}), \tag{3.1}\]
where
\[a_{i}(v,w):=\int_{K_{i}}\kappa\nabla v(\mathrm{x})\cdot\nabla w(\mathrm{x})d \mathrm{x},\quad s_{i}(v,w):=\int_{K_{i}}\widetilde{\kappa}v(\mathrm{x})w( \mathrm{x})d\mathrm{x}.\]
Here, \(\widetilde{\kappa}=\kappa\sum_{i=1}^{N_{\rm c}}|\nabla\chi_{i}|^{2}\), where \(N_{\rm c}\) is the total number of neighborhoods and \(\{\chi_{i}\}\) is a set of partition of unity functions for the coarse grid \(\mathcal{T}^{H}\). In actual computations, the problem above is solved on the fine grid. We assume that the eigenfunctions satisfy the normalization \(s_{i}(\varphi_{j}^{(i)},\varphi_{j}^{(i)})=1\). We use the eigenfunctions corresponding to the first \(L_{i}\) eigenvalues \(\lambda_{j}^{(i)}\), arranged in increasing order, to construct the local auxiliary multiscale space \(\mathrm{V}_{\rm aux}^{(i)}:=\operatorname{span}\{\varphi_{j}^{(i)}:1\leq j\leq L_{i}\}\). We then define the global auxiliary multiscale space as \(\mathrm{V}_{\rm aux}:=\bigoplus_{i=1}^{N_{\rm c}}\mathrm{V}_{\rm aux}^{(i)}\).
For the local auxiliary space \(\mathrm{V}_{\rm aux}^{(i)}\), the bilinear form \(s_{i}\) given above defines an inner product with norm \(\|v\|_{s(K_{i})}=s_{i}(v,v)^{1/2}\). Then, we can define the inner product and norm for the global auxiliary multiscale space \(\mathrm{V}_{\rm aux}\), which are defined by
\[s(v,w)=\sum_{i=1}^{N_{\rm c}}s_{i}(v,w),\quad\|v\|_{s}:=s(v,v)^{1/2},\quad \text{for each $v,w\in\mathrm{V}_{\rm aux}$}.\]
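As a concrete illustration of the local spectral problem (3.1), a dense Python sketch of the computation on a single coarse block is given below; the names `A_K`, `S_K`, and `auxiliary_basis` are hypothetical, and in practice both local matrices are assembled on the fine grid restricted to \(K_{i}\).

```python
import numpy as np
from scipy.linalg import eigh

def auxiliary_basis(A_K, S_K, L_i):
    """Solve the local generalized eigenvalue problem (3.1) on one
    coarse block: A_K phi = lambda S_K phi, where A_K is the
    kappa-weighted local stiffness matrix and S_K the kappa_tilde-
    weighted local mass matrix. Keeps the eigenpairs of the L_i
    smallest eigenvalues."""
    lam, phi = eigh(A_K, S_K)        # ascending eigenvalues,
    return lam[:L_i], phi[:, :L_i]   # eigenvectors S_K-orthonormal
```

Since `scipy.linalg.eigh` returns eigenvalues in increasing order and eigenvectors that are \(S_K\)-orthonormal, the normalization \(s_{i}(\varphi_{j}^{(i)},\varphi_{j}^{(i)})=1\) is obtained automatically.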
To construct the CEM-GMsFEM basis functions, we use the following definition.
**Definition 3.1** (Chung et al., 2018).: Given a function \(\varphi_{j}^{(i)}\in\mathrm{V}_{\rm aux}\), if a function \(\psi\in\mathrm{V}\) satisfies
\[s(\psi,\varphi_{j}^{(i)}):=1,\quad s(\psi,\varphi_{j^{\prime}}^{(i^{\prime})}) =0,\quad\text{if $j^{\prime}\neq j$ or $i^{\prime}\neq i$},\]
then we say that \(\psi\) is \(\varphi_{j}^{(i)}\)-orthogonal, where \(s(v,w)=\sum_{i=1}^{N_{\rm c}}s_{i}(v,w)\).
Now, we define \(\pi:\mathrm{V}\to\mathrm{V}_{\rm aux}\) as the projection with respect to the inner product \(s(\cdot,\cdot)\). More precisely, \(\pi\) is defined by
\[\pi(v):=\sum_{i=1}^{N_{\rm c}}\pi_{i}(v)=\sum_{i=1}^{N_{\rm c}}\sum_{j=1}^{L_{ i}}s_{i}(v,\varphi_{j}^{(i)})\varphi_{j}^{(i)},\quad\text{for each $v\in\mathrm{V}$},\]
where \(\pi_{i}:L^{2}(K_{i})\to\mathrm{V}_{\rm aux}^{(i)}\) denotes the projection with respect to the inner product \(s_{i}(\cdot,\cdot)\). The null space of the operator \(\pi\) is \(\widetilde{\mathrm{V}}=\{v\in\mathrm{V}:\pi(v)=0\}\). We now construct the multiscale basis functions. Given a coarse block \(K_{i}\), we denote by \(K_{i,m}\subset\Omega\) the oversampling region obtained by enlarging \(K_{i}\) with an arbitrary number \(m\geq 1\) of coarse grid layers, see Figure 1. Then, we define the multiscale basis functions by
\[\psi_{j,{\rm ms}}^{(i)}=\operatorname{argmin}\{a(\psi,\psi):\psi\in\mathrm{H}_ {0}^{1}(K_{i,m}),\,\psi\text{ is $\varphi_{j}^{(i)}$-orthogonal}\}, \tag{3.2}\]
where \(\mathrm{H}^{1}(K_{i,m})\) is the restriction of \(\mathrm{H}^{1}(\Omega)\) in \(K_{i,m}\) and \(\mathrm{H}_{0}^{1}(K_{i,m})\) is the subspace of \(\mathrm{H}^{1}(K_{i,m})\) with zero trace on \(\partial K_{i,m}\). The multiscale finite element space \(\mathrm{V}_{\rm ms}\) is defined by
\[\mathrm{V}_{\rm ms}=\operatorname{span}\{\psi_{j,{\rm ms}}^{(i)}:1\leq j\leq L _{i},1\leq i\leq N_{\rm c}\}.\]
By introducing a Lagrange multiplier, problem (3.2) is equivalent to the following explicit form: find \(\psi^{(i)}_{j,\mathrm{ms}}\in\mathrm{H}^{1}_{0}(K_{i,m})\) and \(\upsilon\in\mathrm{V}^{(i)}_{\mathrm{aux}}(K_{i,m})\) such that
\[\begin{cases}a(\psi^{(i)}_{j,\mathrm{ms}},\mu)+s(\mu,\upsilon)&=0,\quad\text{ for all }\mu\in\mathrm{H}^{1}_{0}(K_{i,m}),\\ s(\psi^{(i)}_{j,\mathrm{ms}}-\varphi^{(i)}_{j},\nu)&=0,\quad\text{for all }\nu\in \mathrm{V}^{(i)}_{\mathrm{aux}}(K_{i,m}),\end{cases}\]
where \(\mathrm{V}^{(i)}_{\mathrm{aux}}(K_{i,m})\) is the union of all local auxiliary spaces for \(K_{i}\subset K_{i,m}\). Thus, the semi-discrete multiscale approximation reads as follows: find \(u_{\mathrm{ms}}\in\mathrm{V}_{\mathrm{ms}}\) such that
\[(\partial_{t}u_{\mathrm{ms}},v)+a(u_{\mathrm{ms}},v)=(f(u_{\mathrm{ms}}),v), \quad\text{for each }v\in\mathrm{V}_{\mathrm{ms}}. \tag{3.3}\]
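Returning to the saddle-point form above: on each oversampling region it is a small symmetric indefinite linear system, which can be solved directly. The following dense Python sketch assumes that the local stiffness matrix `A_os` (with homogeneous Dirichlet values on \(\partial K_{i,m}\)) and the constraint matrix `B_os` (whose rows are the \(s\)-products with the auxiliary eigenfunctions on \(K_{i,m}\)) have already been assembled; all names are illustrative.

```python
import numpy as np

def ms_basis(A_os, B_os, j):
    """Solve the saddle-point system equivalent to (3.2) on one
    oversampling region K_{i,m}:
        A_os psi + B_os^T nu = 0,   B_os psi = e_j,
    where A_os is the local stiffness matrix with zero values on the
    boundary of K_{i,m}, and row k of B_os holds the s-products with
    the k-th auxiliary eigenfunction. The right-hand side e_j encodes
    the phi_j-orthogonality conditions of Definition 3.1."""
    n, q = A_os.shape[0], B_os.shape[0]
    K = np.block([[A_os, B_os.T],
                  [B_os, np.zeros((q, q))]])
    rhs = np.zeros(n + q)
    rhs[n + j] = 1.0                 # s(psi, phi_j) = 1, zero otherwise
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                   # discard the Lagrange multiplier
```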
Each multiscale basis function \(\psi^{(i)}_{j,\mathrm{ms}}\) is ultimately represented on the fine grid, and can therefore be identified with a vector of fine-grid coefficients. Using (3.2), we construct the matrix whose columns are the multiscale basis functions:
\[R_{0}=\left[\psi^{(1)}_{1,\mathrm{ms}},\ldots,\psi^{(1)}_{L_{1},\mathrm{ms}},\ldots,\psi^{(N_{\mathrm{c}})}_{1,\mathrm{ms}},\ldots,\psi^{(N_{\mathrm{c}})}_{L_{N_{\mathrm{c}}},\mathrm{ms}}\right],\]
which maps quantities from the multiscale space \(\mathrm{V}_{\mathrm{ms}}\) to the fine-scale space \(\mathrm{V}_{h}\). Similarly to (2.5), we can define the matrix coarse-scale nonlinear system as
\[M_{0}\frac{d}{dt}u^{H}_{\mathrm{ms}}+A_{0}u^{H}_{\mathrm{ms}}=F_{0}(u^{H}_{ \mathrm{ms}}), \tag{3.4}\]
where \(u^{H}_{\mathrm{ms}}\) is the coarse-scale approximation, and the coarse-scale stiffness and mass matrices are given by
\[A_{0}=R^{T}_{0}AR_{0},\quad M_{0}=R^{T}_{0}MR_{0}, \tag{3.5}\]
respectively, and \(T\) represents the transpose operator. We also have the coarse-scale vector as \(F_{0}=R^{T}_{0}F\).
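Once \(R_{0}\) has been assembled column-by-column from the fine-grid representations of the multiscale basis functions, the quantities in (3.5) are plain triple products. A minimal numpy sketch, with all names assumed:

```python
import numpy as np

def coarse_operators(R0, M, A, F):
    """Galerkin coarse-scale quantities of (3.5): A0 = R0^T A R0,
    M0 = R0^T M R0, and the coarse nonlinear term F0(u) = R0^T F(R0 u).
    R0 holds the fine-grid multiscale basis vectors as columns."""
    A0 = R0.T @ A @ R0
    M0 = R0.T @ M @ R0
    F0 = lambda u_ms: R0.T @ F(R0 @ u_ms)
    return M0, A0, F0
```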
## 4 Temporal discretization
This section considers the fully-discrete scheme for the discrete formulation (2.2). Let \(0=t_{0}<t_{1}<\cdots<t_{N_{t}-1}<t_{N_{t}}=T\) be a partition of the interval \([0,T]\), with time-step size given by \(\delta^{n}=t_{n}-t_{n-1}>0\), for \(n=1,\ldots,N_{t}\), where \(N_{t}\) is an integer.
### Classical implicit time integration schemes
To solve the ODE matrix systems (2.5) and (3.4) on the interval \([0,T]\), and to provide a baseline for comparison with the proposed method, we introduce an implicit time integration scheme, which leads to finding \(u^{n}_{h}\in\mathrm{V}_{h}\) such that
\[(u^{n}_{h}-u^{n-1}_{h},v)+\delta^{n}a(u^{n}_{h},v)=\delta^{n}(f(u^{n}_{h}),v), \quad\text{for each }v\in\mathrm{V}_{h}, \tag{4.1}\]
with a sufficiently small time step size \(\delta^{n}\). Substituting (2.3) into (4.1) (and, more generally, into its \(\theta\)-weighted variant) and taking \(v=\eta_{j}\) for \(j=1,\ldots,N_{f}\), we obtain the matrix problem
\[\begin{cases}M(u_{h}^{n}-u_{h}^{n-1})=\delta^{n}\theta\left(F(u_{h}^{n})-Au_{h}^ {n}\right)+\delta^{n}(1-\theta)\left(F(u_{h}^{n-1})-Au_{h}^{n-1}\right),\\ Mu_{h}^{0}=\hat{u}_{h},\end{cases} \tag{4.2}\]
where \([\hat{u}_{h}]_{j}=\int_{\Omega}\hat{u}\eta_{j}d\mathrm{x}\), and the matrices \(M,A\) and the vector \(F(u_{h}^{n})\) are defined as in (2.6). Taking \(\theta=1\) yields the backward Euler scheme, while taking \(\theta=\frac{1}{2}\) yields the Crank-Nicolson scheme.
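For reference, one step of the scheme (4.2) can be sketched as follows, with the implicit nonlinearity handled by a simple fixed-point (Picard) iteration; the choice of nonlinear solver is an illustrative assumption on our part, since (4.2) only specifies the time discretization.

```python
import numpy as np

def theta_step(u_prev, dt, M, A, F, theta=1.0, tol=1e-10, maxit=50):
    """One step of the theta-scheme (4.2): theta=1 gives backward
    Euler, theta=0.5 gives Crank-Nicolson. The nonlinearity is
    resolved by a plain fixed-point iteration on F(u^n)."""
    lhs = M + dt * theta * A
    rhs0 = M @ u_prev + dt * (1.0 - theta) * (F(u_prev) - A @ u_prev)
    u = u_prev.copy()
    for _ in range(maxit):
        u_new = np.linalg.solve(lhs, rhs0 + dt * theta * F(u))
        if np.linalg.norm(u_new - u) <= tol * (1.0 + np.linalg.norm(u_new)):
            return u_new
        u = u_new
    return u
```

Note that the left-hand side matrix \(M+\delta^{n}\theta A\) is fixed during the iteration, so it can be factorized once per time step.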
Analogously, we can define a full-discrete multiscale formulation: find \(u_{\text{ms}}^{n}\in\mathrm{V}_{\text{ms}}\) such that
\[(u_{\text{ms}}^{n}-u_{\text{ms}}^{n-1},v)+\delta^{n}a(u_{\text{ms}}^{n},v)= \delta^{n}(f(u_{\text{ms}}^{n}),v),\quad\text{for each $v\in\mathrm{V}_{\text{ms}}$}. \tag{4.3}\]
Therefore, applying the Galerkin projection with \(R_{0}\) to (4.2), we obtain
\[u_{\text{ms}}^{n}=R_{0}(M_{0}+\delta^{n}\theta A_{0})^{-1}R_{0}^{T}\left((M-\delta^{n}(1-\theta)A)u_{\text{ms}}^{n-1}+\delta^{n}((1-\theta)F(u_{\text{ms}}^{n-1})+\theta F(u_{\text{ms}}^{n}))\right).\]
### Explicit exponential time integration
In this subsection, we recall exponential integration techniques relevant to the proposed constraint energy minimizing generalized multiscale finite element approach. First, we define the discrete solution \(u_{h}(t)\) evolving in time. Since \(a(\cdot,\cdot)\) is a bounded bilinear form on \(\mathrm{V}_{h}\times\mathrm{V}_{h}\), the Riesz representation theorem guarantees that there exists a bounded linear operator \(L_{h}:\mathrm{V}_{h}\to\mathrm{V}_{h}\) such that
\[a(u_{h},v_{h})=\langle L_{h}u_{h},v_{h}\rangle,\quad\text{for each $v_{h}\in \mathrm{V}_{h}$}.\]
Then, (2.2) can be rewritten as
\[\begin{cases}\partial_{t}u_{h}+L_{h}u_{h}&=P_{h}f(u_{h}),\quad\text{for each $\mathrm{x}\in\Omega$},\quad 0\leq t\leq T,\\ u_{h}(0)&=P_{h}\hat{u},\quad\mathrm{x}\in\Omega,\end{cases} \tag{4.4}\]
where \(P_{h}:\mathrm{L}^{2}\to\mathrm{V}_{h}\) is the \(\mathrm{L}^{2}\)-orthogonal projection operator. Analogous to the matrix problem (2.5), we assume that \(u_{h}(t_{n-1})\) is given at the current time \(t_{n-1}\), and we aim to compute \(u_{h}(t_{n})\), with \(t_{n}=t_{n-1}+\delta^{n}\). Thus, we introduce the integrating-factor problem given by
\[\frac{d}{dt}B(t)=B(t)L_{h},\quad B(t_{n-1})=I,\quad n=1,\ldots,N_{t}, \tag{4.5}\]
where \(I\) is the identity operator. Problem (4.5) has the unique solution \(B(t)=e^{(t-t_{n-1})L_{h}}\). Then, using the operator \(L_{h}\) and the integrating factor above, one can rewrite problem (4.4) as
\[\frac{d}{dt}\left(B(t)u_{h}(t)\right)=B(t)P_{h}f(u_{h}(t)).\]
This problem has an exact solution implicitly represented as
\[u_{h}(t_{n})=e^{-\delta^{n}L_{h}}u_{h}(t_{n-1})+\int_{0}^{\delta^{n}}e^{(s-\delta^ {n})L_{h}}P_{h}f(u_{h}(s+t_{n-1}))ds. \tag{4.6}\]
Notice that \(B^{-1}(t)=e^{-(t-t_{n-1})L_{h}}\) denotes the inverse of \(B(t)\). This last equation is well known as the variation-of-constants formula (Hochbruck and Ostermann, 2010; Hochbruck et al., 1998).
#### 4.2.1 Numerical time integration
We now approximate the nonlinear term within the integral of (4.6) by an interpolating polynomial at certain quadrature nodes. We denote by \(u_{h}^{n}\approx u_{h}(t_{n})\) the fully discrete numerical solution at the time step \(t_{n}\). Then, we apply the classical explicit exponential Runge-Kutta schemes (Hochbruck and Ostermann, 2010) and obtain a fully discrete numerical method for solving problem (2.1) as follows: for \(n=1,\ldots,N_{t}\),
\[\left\{\begin{aligned} U^{n,i}&=e^{-c_{i} \delta^{n}L_{h}}u_{h}^{n-1}+\delta^{n}\sum_{j=1}^{i-1}\alpha_{ij}(-\delta^{n}L _{h})P_{h}f(U^{n,j}),\quad i=1,\ldots,m,\\ u_{h}^{n}&=e^{-\delta^{n}L_{h}}u_{h}^{n-1}+\delta^{n }\sum_{i=1}^{m}\beta_{i}(-\delta^{n}L_{h})P_{h}f(U^{n,i}),\end{aligned}\right. \tag{4.7}\]
where \(m\) denotes the number of stages for the exponential Runge-Kutta method and \(U^{n,i}\approx u_{h}(t_{n-1}+c_{i}\delta^{n})\). Here the interpolation nodes \(c_{1},\ldots,c_{m}\) are \(m\) distinct nodes selected in \([0,1]\). The coefficients \(\alpha_{ij}(-\delta^{n}L_{h})\) and \(\beta_{j}(-\delta^{n}L_{h})\) are chosen as linear combinations of the \(\phi\)-functions \(\phi_{k}(-c_{i}\delta^{n}L_{h})\) and \(\phi_{k}(-\delta^{n}L_{h})\), respectively. These functions are given by
\[\phi_{0}(N)=e^{N},\quad\phi_{k}(N)=\int_{0}^{1}e^{(1-\theta)N}\frac{\theta^{k -1}}{(k-1)!}d\theta,\quad k\geq 1. \tag{4.8}\]
These \(\phi\)-functions satisfy the following recurrence relation (see for instance Cox and Matthews, 2002),
\[\phi_{k+1}(N)=N^{-1}\left(\phi_{k}(N)-\frac{1}{k!}I\right),\quad\text{for $k\geq 0$},\]
and we are assuming \(N^{-1}\) exists. Thus,
\[\phi_{k}(-\delta^{n}L_{h})=\frac{1}{(\delta^{n})^{k}}\int_{0}^{\delta^{n}}e^{ -(\delta^{n}-\tau)L_{h}}\frac{\tau^{k-1}}{(k-1)!}d\tau,\quad\text{for each $k\geq 1$}.\]
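Numerically, the recurrence above gives a simple way to evaluate the \(\phi\)-functions for a dense matrix, sketched below; for stiff or ill-conditioned operators, Padé-based routines such as those in the EXPINT package used in Section 5 are preferable, so the direct solve here is purely illustrative.

```python
import numpy as np
from scipy.linalg import expm

def phi_functions(N, kmax):
    """Return [phi_0(N), ..., phi_kmax(N)] for a dense matrix N via the
    recurrence phi_{k+1}(N) = N^{-1}(phi_k(N) - I/k!), assuming N is
    invertible (e.g. N = -dt*L with L symmetric positive definite)."""
    n = N.shape[0]
    I = np.eye(n)
    phis = [expm(N)]                  # phi_0(N) = e^N
    fact = 1.0                        # running value of k!
    for k in range(kmax):
        phis.append(np.linalg.solve(N, phis[-1] - I / fact))
        fact *= (k + 1)
    return phis
```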
For reasons of consistency, we will assume throughout the paper that the following assumptions hold (Hochbruck and Ostermann, 2010),
\[\sum_{j=1}^{m}\beta_{j}(-\delta^{n}L_{h})=\phi_{1}(-\delta^{n}L_{h}),\quad \sum_{j=1}^{i-1}\alpha_{ij}(-\delta^{n}L_{h})=c_{i}\phi_{1}(-\delta^{n}L_{h}), \quad 1\leq i\leq m. \tag{4.9}\]
From the latter (taking \(i=1\), for which the sum is empty), we infer that \(c_{1}=0\). Taking \(m=1\) in (4.7), we obtain the first-order scheme
\[u^{n}_{\rm ERK1}=u^{n-1}_{h}+\delta^{n}\phi_{1}(-\delta^{n}L_{h})\left[P_{h}f(u^{n -1}_{h})-L_{h}u^{n-1}_{h}\right], \tag{4.10}\]
which is well known as the first-order exponential Euler method. Analogously, for \(m=2\), we have two interpolation nodes \(c_{1}=0\) and \(c_{2}\in(0,1]\); in particular, we use \(c_{2}=1\). The corresponding two-stage second-order exponential Runge-Kutta scheme for (4.7) reads
\[u^{n}_{\rm ERK22}=u^{n}_{\rm ERK1}+\delta^{n}\phi_{2}(-\delta^{n}L_{h})\left[P_ {h}f(u^{n}_{\rm ERK1})-P_{h}f(u^{n-1}_{h})\right]. \tag{4.11}\]
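A compact Python sketch of one step of (4.10) and (4.11) is given below, where \(L\) plays the role of \(L_{h}\) (for instance \(M^{-1}A\) after inverting the mass matrix) and the \(\phi\)-functions are evaluated directly from their defining relations; the function name `erk_step` and the dense linear algebra are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import expm

def erk_step(u, dt, L, f, order=2):
    """One explicit exponential Runge-Kutta step for u' + L u = f(u):
    the exponential Euler step (4.10) for order=1 and the two-stage
    scheme (4.11) for order=2, in dense form."""
    I = np.eye(L.shape[0])
    N = -dt * L
    phi1 = np.linalg.solve(N, expm(N) - I)     # phi_1(-dt L)
    u1 = u + dt * phi1 @ (f(u) - L @ u)        # ERK1, eq. (4.10)
    if order == 1:
        return u1
    phi2 = np.linalg.solve(N, phi1 - I)        # phi_2(-dt L)
    return u1 + dt * phi2 @ (f(u1) - f(u))     # ERK22, eq. (4.11)
```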
More general (higher-order) exponential time stepping schemes need more complicated order conditions and may be derived using higher-order \(\phi\)-functions defined above (see Hochbruck and Ostermann (2005a,b) for more details). Analogously, we propose the approximation
\[\phi_{k}(-\delta^{n}L_{h})\approx R_{0}\phi_{k}(-\delta^{n}L_{\rm ms})R_{0}^{T },\quad k\geq 1,\]
where \(L_{\rm ms}\) is the linear operator associated with the bilinear functional \(a(\cdot,\cdot)\) in (3.3). Then, we can define a first-order, fully discrete multiscale exponential integrator formulation
\[u^{n}_{\rm ms,ERK1}=u^{n-1}_{\rm ms}+\delta^{n}R_{0}\phi_{1}(-\delta^{n}L_{ \rm ms})R_{0}^{T}\left(P_{h}f(u^{n-1}_{\rm ms})-L_{h}u^{n-1}_{\rm ms}\right), \tag{4.12}\]
and a second-order formulation
\[u^{n}_{\rm ms,ERK22}=u^{n}_{\rm ms,ERK1}+\delta^{n}R_{0}\phi_{2}(-\delta^{n}L _{\rm ms})R_{0}^{T}\left(P_{h}f(u^{n}_{\rm ms,ERK1})-P_{h}f(u^{n-1}_{\rm ms}) \right), \tag{4.13}\]
for \(n=1,\ldots,N_{t}\).
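Putting the pieces together, a sketch of the multiscale time loop for (4.12)-(4.13) reads as follows; the key point is that the \(\phi\)-functions are evaluated only for the small coarse operator \(L_{\rm ms}\) and then lifted with \(R_{0}\). All names and the dense format are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def ms_exponential_solve(u0, T, Nt, L_fine, L_ms, R0, f, order=1):
    """Dense sketch of the multiscale stepping (4.12)-(4.13):
    phi_k(-dt L_h) is approximated by R0 phi_k(-dt L_ms) R0^T, so the
    matrix functions are computed only for the coarse operator L_ms
    and lifted to the fine grid with R0. f acts on fine-grid vectors;
    u0 is the fine-grid initial datum."""
    dt = T / Nt
    I = np.eye(L_ms.shape[0])
    N = -dt * L_ms
    phi1 = np.linalg.solve(N, expm(N) - I)
    phi2 = np.linalg.solve(N, phi1 - I)
    u = u0.copy()
    for _ in range(Nt):
        u1 = u + dt * R0 @ (phi1 @ (R0.T @ (f(u) - L_fine @ u)))  # (4.12)
        if order == 1:
            u = u1
        else:
            u = u1 + dt * R0 @ (phi2 @ (R0.T @ (f(u1) - f(u))))   # (4.13)
    return u
```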
## 5 Numerical experiments
In this section, we present numerical results obtained with the exponential integration CEM-GMsFEM for the parabolic equation with multiscale permeability field \(\kappa(\mathrm{x})\), integrated up to \(t=T\) with various time-step sizes \(\delta^{n}\) and a fixed number of fine-grid nodes. In our numerical experiments, the \(\phi\)-functions are evaluated using the Padé approximation implemented in the EXPINT package of Berland et al. (2007). We consider the spatial variable \(\mathrm{x}=(x_{1},x_{2})\) in \(\Omega=[0,1]\times[0,1]\) and a \(128\times 128\) fine grid to compute a reference solution. To assess spatial accuracy, we choose different coarse-grid sizes and compute the relative error between the fine-scale solution and the first-order CEM-GMsFEM-EIRK1 (Exponential Integrator Runge-Kutta) and CEM-GMsFEM-FDBE (Finite Difference Backward Euler) solutions. For temporal accuracy, we consider different time steps in all experiments. We also consider the different high-contrast permeability fields \(\kappa_{i}(\mathrm{x})\), \(i\in\{1,2,3,4\}\), shown in Figure 2.
**Example 5.1**.: In this experiment, we consider the problem:
\[\left\{\begin{aligned} \partial_{t}u-\operatorname{div}(\kappa_{1}( \mathrm{x})\nabla u)&=0,\quad\text{in }\Omega\times[0,T],\\ u(x_{1},x_{2},t)&=0,\quad\text{on }\partial\Omega \times[0,T],\\ u(x_{1},x_{2},t=0)&=x_{1}(1-x_{1})x_{2}(1-x_{2}), \quad\text{on }\Omega.\end{aligned}\right. \tag{5.1}\]
We test with a vanishing reaction term \(f:=0\) and use the high-contrast medium \(\kappa_{1}\) with the value \(10^{2}\) in the high-contrast channels (see Figure 2(a)). The final simulation time is \(T=0.2\), and we use 200 time steps for both CEM-GMsFEM-EIRK1 and CEM-GMsFEM-FDBE. The reference solution is computed on the fine-scale grid using the backward Euler scheme with 1000 time steps. In Figure 3, we present the relative errors between the reference solution and the CEM-GMsFEM-EIRK1 and CEM-GMsFEM-FDBE solutions for problem (5.1) with a coarse-grid size of \(H=\frac{1}{8}\). From Figure 3 (left), we observe that the errors decrease as the number of local basis functions increases for CEM-GMsFEM-EIRK1, in contrast to the CEM-GMsFEM-FDBE scheme. Hence, we obtain good accuracy using only a few local basis functions on each coarse block. From Figure 3 (right), we observe that the accuracy improves as the number of oversampling layers increases; moreover, once enough oversampling layers are used, the error levels off.
Table 1 shows the convergence behavior with respect to the coarse mesh size in the H\({}^{1}\), L\({}^{2}\) and max-norms for problem (5.1) at the final time. We use only 4 basis functions on each coarse block with varying coarse grid sizes \(H=\frac{1}{2},\frac{1}{4},\frac{1}{8}\), and \(\frac{1}{16}\), associated with an appropriate number of oversampling layers \(m\). Observe that for CEM-GMsFEM-EIRK1 with fixed \(H=\frac{1}{8}\), the relative errors are only about \(6.05\%,2.97\%\) and \(0.0001\%\) in the H\({}^{1}\), L\({}^{2}\) and max-norms, respectively. Furthermore, for the CEM-GMsFEM-EIRK1 scheme, the errors decay with the coarse mesh size. On the other hand, the relative errors for the CEM-GMsFEM-FDBE scheme are about \(22.89\%,22.17\%\), and \(0.0006\%\) in the respective norms.
Figure 2: Permeability fields.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Scheme & \(H\) & \(m\) & H\({}^{1}\) & L\({}^{2}\) & max \\ \hline \hline \multirow{4}{*}{EIRK1} & \(\frac{1}{2}\) & 1 & 1.3299E-01 & 8.6967E-02 & 3.8439E-07 \\ \cline{2-6} & \(\frac{1}{4}\) & 2 & 1.1094E-01 & 6.9012E-02 & 3.2874E-07 \\ \cline{2-6} & \(\frac{1}{8}\) & 2 & 6.0491E-02 & 2.9707E-02 & 1.3739E-07 \\ \cline{2-6} & \(\frac{1}{16}\) & 3 & 2.5312E-02 & 1.0422E-02 & 5.2108E-08 \\ \hline \multirow{4}{*}{FDBE} & \(\frac{1}{2}\) & 1 & 1.8257E-01 & 1.5295E-01 & 4.7968E-07 \\ \cline{2-6} & \(\frac{1}{4}\) & 2 & 1.9529E-01 & 1.7438E-01 & 5.5931E-07 \\ \cline{1-1} \cline{2-6} & \(\frac{1}{8}\) & 2 & 2.2888E-01 & 2.2171E-01 & 6.5542E-07 \\ \cline{1-1} \cline{2-6} & \(\frac{1}{16}\) & 3 & 2.5040E-01 & 2.4381E-01 & 8.1906E-07 \\ \hline \end{tabular}
\end{table}
Table 1: Spatial convergence rate for problem (5.1) at the final time \(T=0.2\) with varying coarse grid size \(H\) and oversampling coarse layers \(m\) using a contrast of \(10^{2}\).
Figure 3: Relative error for the CEM-GMsFEM-EIRK1 and CEM-GMsFEM-FDBE solution with increasing the number of local multiscale basis functions (left) and the number of oversampling layers (right) for problem (5.1) at final time \(T=0.2\).
We also observe that the errors increase with the coarse mesh size even when the number of time steps is increased. Table 2 shows the order of temporal accuracy of the CEM-GMsFEM-EIRK1 scheme in the L\({}^{2}\) and H\({}^{1}\)-norms. We notice that the order of temporal accuracy in the H\({}^{1}\)-norm is about 1, which coincides with Theorem A.10. In addition, the order of temporal accuracy in the L\({}^{2}\)-norm is better than expected from the theoretical results given in Theorem A.13. Therefore, the CEM-GMsFEM-EIRK1 scheme performs well. Figure 4 depicts the solution profiles at the final time \(T=0.2\) for the two schemes.
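The CR (convergence rate) columns reported in the tables are obtained from consecutive errors under halving of \(H\) or of the time step; a minimal sketch, using the EIRK1 L\({}^{2}\) errors of Table 1 as sample input:

```python
import numpy as np

def observed_rates(errors):
    """Observed convergence rates CR = log2(e_k / e_{k+1}) for errors
    obtained under successive halving of H (or of the time step)."""
    e = np.asarray(errors, dtype=float)
    return np.log2(e[:-1] / e[1:])

# Sample input: the EIRK1 L2 errors of Table 1 (H = 1/2, ..., 1/16).
print(observed_rates([8.6967e-02, 6.9012e-02, 2.9707e-02, 1.0422e-02]))
```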
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Scheme & \(N_{t}\) & H\({}^{1}\) & CR & L\({}^{2}\) & CR \\ \hline \hline \multirow{5}{*}{EIRK1} & 8 & 2.1449E+00 & – & 2.1405E+00 & – \\ \cline{2-6} & 16 & 7.1026E-01 & 1.5945E+00 & 7.0602E-01 & 1.6001E+00 \\ \cline{2-6} & 32 & 2.6403E-01 & 1.4276E+00 & 2.5743E-01 & 1.4555E+00 \\ \cline{2-6} & 64 & 9.4676E-02 & 1.4796E+00 & 7.9631E-02 & 1.6928E+00 \\ \cline{2-6} & 128 & 4.8138E-02 & 9.7581E-01 & 4.9546E-03 & 4.0064E+00 \\ \hline \end{tabular}
\end{table}
Table 2: Temporal convergence rate for problem (5.1) with contrast of \(10^{2}\) and final time of \(T=0.2\) with varying time steps \(N_{t}\) and fixed coarse size of \(H=1/8\) and 4 basis functions.
Figure 4: (a) CEM-GMsFEM-EIRK1, (b) CEM-GMsFEM-FDBE, and (c) The reference solution of problem (5.1) at final time \(T=0.2\), using coarse grid size \(H=1/8\), 4 local multiscale basis functions and \(m=4\) oversampling layers.
**Example 5.2**.: In this experiment, we consider the problem:
\[\left\{\begin{aligned} \partial_{t}u-\operatorname{div}(\kappa_{2}( \operatorname{x})\nabla u)&=u-u^{3},\quad\text{in }\Omega\times[0,T],\\ u(x_{1},x_{2},t)&=0,\quad\text{on }\partial\Omega \times[0,T],\\ u(x_{1},x_{2},t=0)&=x_{1}(1-x_{1})x_{2}(1-x_{2}), \quad\text{on }\Omega.\end{aligned}\right. \tag{5.2}\]
We use the high-contrast medium \(\kappa_{2}\) with the value \(10^{3}\) in the high-contrast channels (see Figure 2(b)). The final time is \(T=0.2\), and we use 200 time steps for CEM-GMsFEM-EIRK1 and 300 time steps for CEM-GMsFEM-FDBE. The reference solution is again computed on the fine-scale grid using the backward Euler scheme with 1000 time steps. As in Example 5.1, Figure 5 (left) shows the relative errors for different numbers of local basis functions. Observe that the errors decrease as the number of local basis functions increases for the CEM-GMsFEM-EIRK1 scheme, in contrast to the CEM-GMsFEM-FDBE scheme. Hence, we obtain good accuracy using only a few local basis functions on each coarse block. Furthermore, Figure 5 (right) shows the error as the number of oversampling layers increases, with a fixed coarse-grid size \(H=\frac{1}{8}\) and 4 basis functions on each coarse block. We observe that as the number of oversampling layers increases, the approximation becomes more accurate, and the error decreases very slowly once the number of oversampling layers reaches a certain value.
Table 3 shows the \(\text{H}^{1}\), \(\text{L}^{2}\) and max-errors obtained at the final simulation time for problem (5.2). We use 4 basis functions on each coarse block with increasing coarse grid sizes \(H=\frac{1}{2},\frac{1}{4},\frac{1}{8}\), and \(\frac{1}{16}\), associated with appropriate numbers of oversampling layers \(m\). We obtain a good approximation for all coarse-scale solutions with large time steps using CEM-
Figure 5: Relative error between the CEM-GMsFEM-EIRK1 solution and the reference solution with increasing the number of local basis functions (left) and the number of oversampling layers (right) for problem (5.2) at the final time \(T=0.2\).
GMsFEM-EIRK1, in contrast with CEM-GMsFEM-FDBE. For the latter, even when using a sufficiently small time step, the scheme does not converge, and the effect is more drastic than in Example 5.1. In Table 4, we show the temporal convergence order in both the \(\mathrm{L}^{2}\) and \(\mathrm{H}^{1}\)-norms. Similarly to Example 5.1, the order in the \(\mathrm{H}^{1}\)-norm is about \(1\), and the order in the \(\mathrm{L}^{2}\)-norm is better than the theoretical result. Figure 6 depicts the solution profiles at the final time \(T=0.2\) for the two schemes together with the reference solution of problem (5.2).
In the examples below, we use the second-order explicit exponential Runge-Kutta scheme and the Crank-Nicolson scheme, denoted by CEM-GMsFEM-EIRK22 and CEM-GMsFEM-FDCN, respectively.
**Example 5.3**.: We consider the problem:
\[\left\{\begin{aligned} \partial_{t}u-\mathrm{div}(\kappa_{3}( \mathrm{x})\nabla u)&=\frac{1}{\epsilon^{2}}(u-u^{3}),\quad \text{in }\Omega\times[0,T],\\ u(x_{1},x_{2},t)&=0,\quad\text{on }\partial\Omega \times[0,T],\\ u(x_{1},x_{2},t=0)&=\epsilon x_{1}(1-x_{1})x_{2}(1-x _{2}),\quad\text{on }\Omega.\end{aligned}\right. \tag{5.3}\]
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Scheme & \(N_{t}\) & \(\mathrm{H}^{1}\) & CR & \(\mathrm{L}^{2}\) & CR \\ \hline \hline \multirow{4}{*}{EIRK1} & 8 & 2.7427E+00 & – & 2.7372E+00 & – \\ \cline{2-6} & 16 & 8.3854E-01 & 1.7096E+00 & 8.3422E-01 & 1.7142E+00 \\ \cline{2-6} & 32 & 3.0339E-01 & 1.4667E+00 & 2.9734E-01 & 1.4882E+00 \\ \cline{2-6} & 64 & 1.0627E-01 & 1.5134E+00 & 9.3312E-02 & 1.6720E+00 \\ \cline{2-6} & 128 & 4.7160E-02 & 1.1720E+00 & 6.3001E-03 & 3.8886E+00 \\ \hline \end{tabular}
\end{table}
Table 4: Temporal accuracy for problem (5.2) with contrast of \(10^{3}\) and final time of \(T=0.2\) with varying time steps \(N_{t}\) and fixed coarse size of \(H=\frac{1}{8}\) and \(4\) basis functions.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Scheme & \(H\) & \(m\) & \(\mathrm{H}^{1}\) & \(\mathrm{L}^{2}\) & max \\ \hline \hline \multirow{4}{*}{EIRK1} & \(\frac{1}{2}\) & 1 & 1.3711E-01 & 9.1747E-02 & 3.5696E-07 \\ \cline{2-6} & \(\frac{1}{4}\) & 2 & 1.1267E-01 & 7.1108E-02 & 3.0173E-07 \\ \cline{2-6} & \(\frac{1}{8}\) & 3 & 5.2644E-02 & 2.4200E-02 & 1.1581E-07 \\ \cline{2-6} & \(\frac{1}{16}\) & 4 & 1.9081E-02 & 8.1719E-03 & 3.5274E-08 \\ \hline \multirow{4}{*}{FDBE} & \(\frac{1}{2}\) & 1 & 1.4664E-01 & 7.8413E-02 & 2.4848E-07 \\ \cline{2-6} & \(\frac{1}{4}\) & 2 & 1.3394E-01 & 8.3843E-02 & 2.8238E-07 \\ \cline{1-1} \cline{2-6} & \(\frac{1}{8}\) & 3 & 1.4239E-01 & 1.2660E-01 & 4.6369E-07 \\ \cline{1-1} \cline{2-6} & \(\frac{1}{16}\) & 4 & 3.8840E-01 & 1.7666E-01 & 1.0345E-06 \\ \hline \end{tabular}
\end{table}
Table 3: Errors between the coarse-scale and the reference solution at the final time \(T=0.2\) with different coarse grid sizes and different numbers of associated oversampling layers \(m\) for problem (5.2) using a contrast of \(10^{3}\).
Here, \(\epsilon=0.1\) measures the interface thickness, and we use the high-contrast medium \(\kappa_{3}\) with the value \(10^{4}\) in the high-contrast channels (see Figure 2(c)). The final time of this simulation is \(T=0.016\). For the spatial accuracy study, we use \(100\) and \(500\) time steps for CEM-GMsFEM-EIRK22 and CEM-GMsFEM-FDCN, respectively. We also consider uniformly refined coarse grids with \(H=\frac{1}{2},\frac{1}{4},\frac{1}{8}\) and \(\frac{1}{16}\). The reference solution is computed on the fine-scale grid using the backward Euler scheme with \(1000\) time steps. The spatial accuracy errors in the \(\mathrm{L}^{2}\) and \(\mathrm{H}^{1}\)-norms are reported in Table 5. We notice that the spatial convergence rates for CEM-GMsFEM-EIRK22 are higher than one in both norms, as expected, in contrast with CEM-GMsFEM-FDCN, which fails to maintain spatial accuracy in the \(\mathrm{H}^{1}\)-norm.
In Table 6, we show that the temporal accuracy is about \(1\); this is probably due to the influence of the spatial error, and it agrees well with the estimates given in Theorems A.12 and A.13. Figure 7 depicts the solution profiles at the final time \(T=0.016\) for the two schemes together with the reference solution of problem (5.3).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Scheme & \(N_{t}\) & H\({}^{1}\) & CR & L\({}^{2}\) & CR \\ \hline \hline \multirow{5}{*}{EIRK22} & 8 & 1.4146E-01 & – & 1.3598E-01 & – \\ \cline{2-6} & 16 & 8.0467E-02 & 8.1401E-01 & 6.8636E-02 & 9.8635E-01 \\ \cline{2-6} & 32 & 5.5277E-02 & 5.4172E-01 & 3.4029E-02 & 1.0122E+00 \\ \cline{2-6} & 64 & 2.7406E-02 & 1.0122E+00 & 1.6740E-02 & 1.0234E+00 \\ \cline{2-6} & 128 & 1.3411E-02 & 1.0310E+00 & 8.4554E-03 & 9.8542E-01 \\ \hline \multirow{5}{*}{FDCN} & 8 & 6.8599E+00 & – & 2.0627E-01 & – \\ \cline{2-6} & 16 & 6.6305E+00 & 4.9063E-02 & 1.1583E-01 & 8.3254E-01 \\ \cline{2-6} & 32 & 5.9835E+00 & 1.4813E-01 & 6.2416E-02 & 8.9204E-01 \\ \cline{2-6} & 64 & 4.6921E+00 & 3.5076E-01 & 3.2232E-02 & 9.5340E-01 \\ \cline{2-6} & 128 & 2.6097E+00 & 8.4632E-01 & 1.5789E-02 & 1.0296E+00 \\ \hline \end{tabular}
\end{table}
Table 6: Temporal accuracy for problem (5.3) with contrast of \(10^{4}\) and final time of \(T=0.016\) with varying time steps \(N_{t}\) and fixed coarse size of \(H=\frac{1}{8}\) and 4 basis functions.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Scheme & \(H\) & \(m\) & H\({}^{1}\) & CR & L\({}^{2}\) & CR \\ \hline \hline \multirow{5}{*}{EIRK22} & \(\frac{1}{2}\) & 2 & 1.8813E-01 & – & 4.4512E-02 & – \\ \cline{2-6} & \(\frac{1}{4}\) & 3 & 1.3651E-01 & 4.6267E-01 & 3.0293E-02 & 5.5520E-01 \\ \cline{2-6} & \(\frac{1}{8}\) & 4 & 4.5901E-02 & 1.5724E+00 & 1.0694E-02 & 1.5020E+00 \\ \cline{2-6} & \(\frac{1}{16}\) & 5 & 1.7469E-02 & 1.3937E+00 & 4.1275E-03 & 1.3735E+00 \\ \hline \multirow{5}{*}{FDCN} & \(\frac{1}{2}\) & 2 & 1.8865E-01 & – & 4.1144E-02 & – \\ \cline{2-6} & \(\frac{1}{4}\) & 3 & 1.9529E-01 & 4.6429E-01 & 2.6865E-02 & 6.1495E-01 \\ \cline{2-6} & \(\frac{1}{8}\) & 4 & 4.6756E-02 & 1.5481E+00 & 4.8988E-03 & 2.4552E+00 \\ \cline{2-6} & \(\frac{1}{16}\) & 5 & 8.7217E-01 & -4.2213E+00 & 2.0979E-03 & 1.2234E+00 \\ \hline \end{tabular}
\end{table}
Table 5: Spatial convergence rate for problem (5.3) at the final time \(T=0.016\) with varying coarse grid size \(H\) and oversampling coarse layers \(m\) using a contrast of \(10^{4}\).
**Example 5.4**.: In this experiment, we consider the problem of Example 5.3 with the permeability field \(\kappa_{4}\) in place of \(\kappa_{3}\), which we refer to as problem (5.4). Here, \(\epsilon=0.05\) measures the interface thickness, and \(\kappa_{4}\) takes the value \(10^{4}\) in the high-contrast channels (see Figure 2(d)). Similarly to Example 5.3, we set the final time \(T=0.016\) and use \(100\) and \(500\) time steps for CEM-GMsFEM-EIRK22 and CEM-GMsFEM-FDCN, respectively. The reference solution is computed on the fine-scale grid using the backward Euler scheme with \(1000\) time steps. The spatial accuracy errors in the L\({}^{2}\) and H\({}^{1}\)-norms are reported in Table 7. We notice that the spatial convergence rates for CEM-GMsFEM-EIRK22 and CEM-GMsFEM-FDCN are both higher than one, with the former slightly higher than the latter. Table 8 reports the temporal convergence, which is about \(1\) in both norms, with CEM-GMsFEM-EIRK22 again slightly higher.
**Example 5.5**.: Finally, we consider the reaction-diffusion system:
\[\left\{\begin{aligned} \partial_{t}u-\mathrm{div}(\kappa_{3}( \mathrm{x})\nabla u)&=u-u^{3}-v,\quad\text{in }\Omega_{1}\times[0,T],\\ \partial_{t}v-\mathrm{div}(\kappa_{1}(\mathrm{x})\nabla v)& =u-v,\quad\text{in }\Omega_{2}\times[0,T],\\ u(x_{1},x_{2},t)=v(x_{1},x_{2},t)&=0,\quad\text{on } \partial\Omega_{i}\times[0,T],\quad\text{for }i=1,2,\\ u(x_{1},x_{2},t=0)&=0.05\sin(x_{1})\sin(x_{2}), \quad\text{on }\Omega_{1},\\ v(x_{1},x_{2},t=0)&=\sin(\pi(x_{1}-0.25))\cos(2\pi( x_{2}-0.125)),\quad\text{on }\Omega_{2}.\end{aligned}\right. \tag{5.5}\]
Here, we use the high-contrast media \(\kappa_{3}\) in \(\Omega_{1}\) and \(\kappa_{4}\) in \(\Omega_{2}\), both with the value \(10^{4}\). The final time of this simulation is \(T=0.016\), and we consider \(100\) time steps for CEM-GMsFEM-EIRK22.
Figure 7: (a) CEM-GMsFEM-EIRK22 (b) CEM-GMsFEM-FDCN, with \(100\) and \(500\) time-steps, respectively, and (c) the reference solution of problem (5.3) at final time \(T=0.016\), using coarse grid size \(H=\frac{1}{8}\), and \(4\) local multiscale basis functions, with \(m=4\) oversampling layers.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Scheme & \(H\) & \(m\) & H\({}^{1}\) & CR & L\({}^{2}\) & CR \\ \hline \hline \multirow{4}{*}{EIRK22} & \(\frac{1}{2}\) & 2 & 1.8826E-01 & – & 3.9401E-02 & – \\ \cline{2-6} & \(\frac{1}{4}\) & 3 & 1.6799E-01 & 1.6432E-01 & 3.0689E-02 & 3.6048E-01 \\ \cline{2-6} & \(\frac{1}{8}\) & 4 & 8.0971E-02 & 1.0529E+00 & 1.0951E-02 & 1.4866E+00 \\ \cline{2-6} & \(\frac{1}{16}\) & 5 & 2.3621E-02 & 1.7773E+00 & 4.3121E-03 & 1.3446E+00 \\ \hline \multirow{4}{*}{FDCN} & \(\frac{1}{2}\) & 2 & 1.8760E-01 & – & 4.3753E-02 & – \\ \cline{2-6} & \(\frac{1}{4}\) & 3 & 1.6741E-01 & 1.6425E-01 & 3.5695E-02 & 2.9364E-01 \\ \cline{2-6} & \(\frac{1}{8}\) & 4 & 8.1543E-02 & 1.0378E+00 & 1.8700E-02 & 9.3262E-01 \\ \cline{2-6} & \(\frac{1}{16}\) & 5 & 2.8306E-02 & 1.5264E+00 & 1.4437E-02 & 3.7328E-01 \\ \hline \end{tabular}
\end{table}
Table 7: Spatial convergence rate for problem (5.4) at the final time \(T=0.016\) with varying coarse grid size \(H\) and oversampling coarse layers \(m\) using a contrast of \(10^{4}\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Scheme & \(N_{t}\) & H\({}^{1}\) & CR & L\({}^{2}\) & CR \\ \hline \hline \multirow{4}{*}{EIRK22} & 8 & 5.8435E-01 & – & 5.7640E-01 & – \\ \cline{2-6} & 16 & 3.0472E-01 & 9.3934E-01 & 2.9074E-01 & 9.8732E-01 \\ \cline{2-6} & 32 & 1.5125E-01 & 1.0105E+00 & 1.2894E-01 & 1.1730E+00 \\ \cline{2-6} & 64 & 9.4282E-02 & 6.8196E-01 & 5.3729E-02 & 1.2630E+00 \\ \cline{2-6} & 128 & 5.1733E-02 & 8.6590E-01 & 2.0252E-02 & 1.4076E+00 \\ \hline \multirow{4}{*}{FDCN} & 8 & 8.5341E-01 & – & 8.4866E-01 & – \\ \cline{2-6} & 16 & 6.5339E-01 & 3.8530E-01 & 6.4422E-01 & 3.9763E-01 \\ \cline{2-6} & 32 & 4.1568E-01 & 6.5245E-01 & 4.0038E-01 & 6.8616E-01 \\ \cline{2-6} & 64 & 2.3385E-01 & 8.2991E-01 & 2.1352E-01 & 9.0701E-01 \\ \cline{2-6} & 128 & 1.3157E-01 & 8.2966E-01 & 1.0372E-01 & 1.0416E+00 \\ \hline \end{tabular}
\end{table}
Table 8: Temporal convergence rate for problem (5.4) with contrast of \(10^{4}\) and final time \(T=0.016\) with varying time steps \(N_{t}\) and fixed coarse size of \(H=\frac{1}{8}\) and 4 basis functions.
The reference solution is computed on the fine-scale grid using the backward Euler scheme with 1000 time steps. Table 9 shows the convergence behavior with respect to the coarse mesh size in the H\({}^{1}\) and L\({}^{2}\)-norms for problem (5.5) at the final time \(T\). We use only 4 basis functions on each coarse block with varying coarse grid sizes \(H=\frac{1}{2},\frac{1}{4},\frac{1}{8}\), and \(\frac{1}{16}\), associated with appropriate oversampling layers \(m\). We observe that the spatial accuracy is higher than one for both solutions \(u\) and \(v\), which matches our theoretical results. Table 10 shows the temporal convergence in both the L\({}^{2}\) and H\({}^{1}\)-norms. Therefore, the CEM-GMsFEM-EIRK22 scheme performs well. Figure 9 depicts the solution profiles at the final time \(T=0.016\).
## 6 Conclusion
We have presented an explicit exponential integration CEM-GMsFEM for solving semilinear parabolic problems in high-contrast media. As noted by Contreras et al. (2023), the disparity of scales in the heterogeneous coefficients can affect the stability of standard implicit schemes, and in this work we presented an alternative technique to handle this kind of scenario. We used CEM-GMsFEM for the spatial discretization. The first step constructs the auxiliary space by solving local spectral problems; next, a constraint energy minimization problem is solved to construct the multiscale basis functions on the oversampling regions. For the temporal discretization, we introduced first- and second-order explicit exponential Runge-Kutta integrators. A rigorous convergence analysis of the proposed
Figure 8: (a) CEM-GMsFEM-EIRK22 (b) CEM-GMsFEM-FDCN, with 100 and 500 time-steps, respectively, and (c) the reference solution of problem (5.4) at final time \(T=0.016\), using coarse grid size \(H=\frac{1}{8}\), and 4 local multiscale basis functions, with \(m=4\) oversampling layers.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Solution & \(N_{t}\) & H\({}^{1}\) & CR & L\({}^{2}\) & CR \\ \hline \hline & 8 & 1.3509E-01 & – & 6.7057E-02 & – \\ \cline{2-5} & 16 & 1.1382E-01 & 1.1869E+00 & 3.4229E-02 & 1.9590E+00 \\ \cline{2-5} \(u\) & 32 & 1.0823E-01 & 1.0517E+00 & 1.8507E-02 & 1.8496E+00 \\ \cline{2-5} & 64 & 1.0682E-01 & 1.0132E+00 & 1.1017E-02 & 1.6799E+00 \\ \cline{2-5} & 128 & 1.0647E-01 & 1.0033E+00 & 7.5750E-03 & 1.4544E+00 \\ \hline & 8 & 2.0157E-01 & – & 1.4416E-01 & – \\ \cline{2-5} & 16 & 1.1423E-01 & 1.7646E+00 & 6.4472E-02 & 2.2360E+00 \\ \cline{2-5} \(v\) & 32 & 8.5618E-02 & 1.3342E+00 & 2.8341E-02 & 2.2749E+00 \\ \cline{2-5} & 64 & 7.8317E-02 & 1.0932E+00 & 1.3250E-02 & 2.1390E+00 \\ \cline{2-5} & 128 & 7.6890E-02 & 1.0186E+00 & 1.0147E-02 & 1.3058E+00 \\ \hline \end{tabular}
\end{table}
Table 10: Temporal convergence rate for problem (5.5) with contrast of \(10^{4}\) and final time \(T=0.016\) with varying time steps \(N_{t}\) and fixed coarse size of \(H=\frac{1}{8}\) and 4 basis functions.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Solution & \(H\) & \(m\) & H\({}^{1}\) & CR & L\({}^{2}\) & CR \\ \hline \hline & \(\frac{1}{2}\) & 2 & 2.0263E-01 & – & 3.4693E-02 & – \\ \cline{2-5} \(u\) & \(\frac{1}{4}\) & 3 & 1.7844E-01 & 1.8337E-01 & 1.8615E-02 & 8.9816E-01 \\ \cline{2-5} & \(\frac{1}{8}\) & 4 & 1.0640E-01 & 7.4599E-01 & 6.4657E-03 & 1.5256E+00 \\ \cline{2-5} & \(\frac{1}{16}\) & 5 & 2.9204E-02 & 1.8652E+00 & 1.8762E-03 & 1.7850E+00 \\ \hline & \(\frac{1}{2}\) & 2 & 3.3754E-01 & – & 1.3785E-01 & – \\ \cline{2-5} & \(\frac{1}{4}\) & 3 & 2.7226E-01 & 3.1005E-01 & 9.7684E-02 & 4.9688E-01 \\ \cline{2-5} & \(\frac{1}{8}\) & 4 & 7.6753E-02 & 1.8267E+00 & 1.1111E-02 & 3.1362E+00 \\ \cline{2-5} & \(\frac{1}{16}\) & 5 & 2.0903E-02 & 1.8765E+00 & 3.3035E-03 & 1.7499E+00 \\ \hline \end{tabular}
\end{table}
Table 9: Spatial convergence rate for problem (5.5) at the final time \(T=0.016\) with varying coarse grid size \(H\) and oversampling coarse layers \(m\) using a contrast of \(10^{4}\).
method shows optimal error estimates in the H\({}^{1}\)- and L\({}^{2}\)-norms with one and two Runge-Kutta stages, respectively. Extensive numerical examples verify the spatial and temporal accuracy of the proposed scheme and confirm the theoretical results.
## Acknowledgement
The research of EC is partially supported by the Hong Kong RGC General Research Fund (Projects: 14305222 and 14304021).
## Appendix A Convergence analysis
This section focuses on error estimates for the fully discrete solutions produced by the proposed exponential integrator multiscale finite element method for solving the semilinear parabolic problem with homogeneous Dirichlet boundary conditions. We start with some notation and basic approximation results for the multiscale finite element approximations, which we use to estimate the error bounds. We define the following norms for our analysis:
\[\|u\|_{a}^{2}:=\int_{\Omega}\kappa|\nabla u|^{2}d\mathrm{x},\quad\|u\|_{s}^{2}: =\int_{\Omega}\widetilde{\kappa}|u|^{2}d\mathrm{x}.\]
In our estimates, we assume that the oversampling size is \(m=\mathcal{O}(\log(\kappa_{\mathrm{max}}/H))\); see Chung et al. (2018). Following Thomee (2006) and Huang et al. (2023), we introduce some regularity and growth conditions on the functions \(f\) and \(u\) in order to carry out the error analysis of the proposed method.
Figure 9: (a) Second-order CEM-GMsFEM-EIRK22 with 100 time-steps and (b) the reference solution of problem (5.5) at final time \(T=0.016\), using \(H=1/8\), 4 local multiscale basis functions, and \(m=4\) oversampling layers.
**Assumption A.1**.: _The function \(f(v)\) grows mildly with respect to \(v\), i.e., there exists a number \(p>0\) for \(d=1,2\) or \(p\in(0,2]\) for \(d=3\) such that_
\[\left|\frac{\partial f}{\partial v}(v)\right|\preceq 1+|v|^{p},\quad\text{ for each }v\in\mathbb{R}.\] (A.1)
**Assumption A.2**.: _The function \(f(v(t))\) is smooth enough with respect to \(t\), i.e., for any given constant \(C>0\), it holds_
\[\sum_{|\alpha|\leq 2}|D^{\alpha}f(v(t))|\preceq 1,\quad\text{for each }t\in[0,T], \quad v\in[-C,C].\] (A.2)
**Assumption A.3**.: _The exact solution \(u(t)\) satisfies some of the following regularity conditions:_
\[\sup_{0\leq t\leq T}\|u(t)\|_{2} \preceq 1,\] (A.3a) \[\sup_{0\leq t\leq T}\|\partial_{t}u(t)\|_{\infty} \preceq 1,\] (A.3b) \[\sup_{0\leq t\leq T}\|\partial_{tt}u(t)\|_{\infty} \preceq 1,\] (A.3c)
_where the hidden constants may depend on \(T\)._
We shall need the following result on the local Lipschitz continuity of the function \(f\); see Thomee (2006).
**Lemma A.4**.: _Suppose that the function \(f\) satisfies Assumption A.1, and the exact solution \(u(t)\) satisfies (A.3a) in Assumption A.3. Then, \(f\) is locally-Lipschitz continuous in a strip along the exact solution \(u(t)\), i.e., for any given positive constant \(C\),_
\[\|f(v)-f(w)\|_{0}\preceq\|v-w\|_{a},\] (A.4)
_for any \(t\in[0,T]\) and \(v,w\in\mathrm{V}_{h}\) satisfying \(\max\{\|v-u(t)\|_{a},\|w-u(t)\|_{a}\}\leq C\), where the hidden constant in (A.4) may depend on \(C\)._
### Fully-discrete error estimates
This section presents the error between the exact solution \(u(t_{n})\) and the fully discrete multiscale solution \(u_{\mathrm{ms}}^{n}\). For simplicity of presentation, we shall assume that the partition is uniform in \([0,T]\) with time step \(\delta^{n}\). Let \(u_{\mathrm{ms}}(t)\) be the multiscale solution of the semi-discrete problem (3.3), and \(u_{\mathrm{ms}}^{n}\) the multiscale fully-discrete solution produced by the exponential integrator multiscale finite element method (4.12) or (4.13).
Let \(\hat{u}\in\mathrm{V}_{\mathrm{ms}}\) be the elliptic projection of the solution \(u\), which satisfies
\[a((u-\hat{u})(t),v)=0,\quad\text{for each }v\in\mathrm{V}_{\mathrm{ms}},\text{ and }t>0.\] (A.5)
The following lemma gives the error estimate of \(\hat{u}(t)\) for the semi-parabolic problem.
**Lemma A.5**.: _Let \(u\) be the solution of (2.1). For each \(t\in[0,T]\), let \(\hat{u}\in\mathrm{V}_{\mathrm{ms}}\) be the elliptic projection satisfying (A.5). Then,_
\[\|(u-\hat{u})(t)\|_{a} \preceq H\Lambda^{-\frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}},\] (A.6a) \[\|(u-\hat{u})(t)\|_{0} \preceq H^{2}\Lambda^{-1}\kappa_{\min}^{-1},\] (A.6b)
_where the hidden constants are independent of the mesh size \(H\) and \(\Lambda=\min_{1\leq i\leq N}\lambda_{L_{i}+1}^{(i)}\)._
Proof.: Note that the solution \(u\in\mathrm{V}_{0}\) of (2.1) satisfies
\[a(u,v)=(f(u),v)-(\partial_{t}u,v)=(f(u)-\partial_{t}u,v),\quad\text{for each }v\in \mathrm{V}_{0},\quad\text{for all }t\in[0,T].\]
Thus, let \(\hat{u}(t)\) be the elliptic projection of \(u\) in \(\mathrm{V}_{\mathrm{ms}}\), that satisfies
\[a(\hat{u},v)=a(u,v)=(f(u)-\partial_{t}u,v),\quad\text{for each }v\in\mathrm{V}_{ \mathrm{ms}},\quad\text{for all }t>0.\]
For \(v=u-\hat{u}\) and by using Lemma 1 from Chung et al. (2018) we have,
\[\|u-\hat{u}\|_{a}^{2}=|a(u-\hat{u},u-\hat{u})|\leq|(f(u)-\partial_{t}u,u-\hat{ u})|\leq\|\widetilde{\kappa}^{-1/2}(f(u)-\partial_{t}u)\|_{0}\|u-\hat{u}\|_{s}.\]
Observe that, by using the orthogonality of the eigenfunctions \(\varphi_{j}^{(i)}\) of (3.1), we arrive at
\[\|u-\hat{u}\|_{s}^{2}=\sum_{i=1}^{N_{c}}\|(I-\pi_{i})(u-\hat{u})\|_{s_{i}}^{2 }\leq\Lambda^{-1}\sum_{i=1}^{N_{c}}\|u-\hat{u}\|_{a_{i}}^{2}=\Lambda^{-1}\|u- \hat{u}\|_{a}^{2},\]
where \(\Lambda=\min_{1\leq i\leq N}\lambda_{L_{i}+1}^{(i)}\). Thus, by gathering the two expressions above, we have that
\[\|u-\hat{u}\|_{a}\leq\Lambda^{-\frac{1}{2}}\|\widetilde{\kappa}^{-\frac{1}{2} }(f(u)-\partial_{t}u)\|_{0}.\]
Using \(f(0)=0\), Assumption A.1, and the fact that \(u\) satisfies Assumption A.3, we obtain \(\|f(u)\|_{0}=\|f(u)-f(0)\|_{0}\preceq 1\). Now, using (A.3b) and \(|\nabla\chi_{i}|=\mathcal{O}(H^{-1})\), we arrive at
\[\|u-\hat{u}\|_{a}\preceq H\Lambda^{-\frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}},\]
which is (A.6a). We shall use the duality argument for (A.6b). For \(t\in[0,T]\), we define \(w\in\mathrm{V}_{0}\) such that
\[a(w,v)=(u-\hat{u},v),\quad\text{for each }v\in\mathrm{V}_{0},\]
and define \(\hat{w}\) as the elliptic projection of \(w\) in the space \(\mathrm{V}_{\mathrm{ms}}\), that is,
\[a(\hat{w},v)=(u-\hat{u},v),\quad\text{for each }v\in\mathrm{V}_{\mathrm{ms}}.\]
By using (A.6a), we obtain
\[\|u-\hat{u}\|_{0}^{2}=a(w,u-\hat{u})=(w-\hat{w},u-\hat{u})\leq\|w-\hat{w}\|_ {a}\|u-\hat{u}\|_{a}\preceq H^{2}\Lambda^{-1}\kappa_{\min}^{-1}\|u-\hat{u}\|_ {0}.\]
Hence, dividing by \(\|u-\hat{u}\|_{0}\), we obtain (A.6b). This completes the proof.
The following theorems give the H\({}^{1}\) and L\({}^{2}\) error estimates.
**Theorem A.6**.: _Let \(u\) be the solution of (2.2) satisfying (A.3a) and (A.3b) in Assumption A.3, and let \(\hat{u}\in\mathrm{V}_{\mathrm{ms}}\) be the elliptic projection satisfying (A.5). There exists a constant \(H_{0}>0\) such that if the coarse grid size \(H\leq H_{0}\), then_
\[\|(u-u_{\mathrm{ms}})(\cdot,t)\|_{a}\preceq H\Lambda^{-\frac{1}{2}}\kappa_{ \min}^{-\frac{1}{2}}+H^{2}\Lambda^{-1}\kappa_{\min}^{-1},\] (A.7)
_where the hidden constant is independent of the coarse grid size \(H\)._
Proof.: We denote that
\[u-u_{\mathrm{ms}}=(u-\hat{u})+(\hat{u}-u_{\mathrm{ms}}):=\rho+\theta,\quad \text{for all }t\in[0,T],\]
where \(\hat{u}\) is the elliptic projection in the space \(\mathrm{V}_{\mathrm{ms}}\) of the exact solution \(u\). About \(\theta\), by (3.3), we obtain
\[(\partial_{t}\theta,v)+a(\theta,v) =(\partial_{t}\hat{u},v)-(f(u_{\mathrm{ms}}),v)+a(\hat{u},v)\] \[=(\partial_{t}(\hat{u}-u),v)+(\partial_{t}u,v)+a(u,v)-(f(u_{ \mathrm{ms}}),v)\] \[=(f(u)-f(u_{\mathrm{ms}}),v)-(\partial_{t}\rho,v),\]
for all \(v\in\mathrm{V}_{\mathrm{ms}}\). Taking \(v=2\partial_{t}\theta=2\partial_{t}(\hat{u}-u_{\mathrm{ms}})\in\mathrm{V}_{ \mathrm{ms}}\), we find that
\[2(\partial_{t}\theta,\partial_{t}\theta)+2a(\theta,\partial_{t}\theta)=2(f(u) -f(u_{\mathrm{ms}}),\partial_{t}\theta)-2(\partial_{t}\rho,\partial_{t}\theta).\]
By using the Cauchy-Schwarz and Young's inequality, we get that
\[2\|\partial_{t}\theta\|_{0}^{2}+\frac{d}{dt}\|\theta\|_{a}^{2} \leq 2(\|f(u)-f(u_{\mathrm{ms}})\|_{0}+\|\partial_{t}\rho\|_{0})\| \partial_{t}\theta\|_{0}\] \[\leq\|f(u)-f(u_{\mathrm{ms}})\|_{0}^{2}+\|\partial_{t}\rho\|_{0}^ {2}+2\|\partial_{t}\theta\|_{0}^{2}.\]
Then,
\[\frac{d}{dt}\|\theta\|_{a}^{2} \leq\|f(u)-f(u_{\mathrm{ms}})\|_{0}^{2}+\|\partial_{t}\rho\|_{0}^ {2}.\]
Following Thomee (2006, Theorem 14.2, pp. 246), there exists \(H_{0}>0\) such that, for a coarse grid size \(H\leq H_{0}\) and all \(t\in[0,T]\), we find that
\[\frac{d}{dt}\|\theta\|_{a}^{2} \leq\|f(u)-f(u_{\mathrm{ms}})\|_{0}^{2}+\|\partial_{t}\rho\|_{0}^ {2}\] \[\leq\|f(u)-f(\hat{u})\|_{0}^{2}+\|f(\hat{u})-f(u_{\mathrm{ms}})\|_ {0}^{2}+\|\partial_{t}\rho\|_{0}^{2}\] \[\preceq\|\rho\|_{a}^{2}+\|\theta\|_{a}^{2}+\|\partial_{t}\rho\|_ {0}^{2},\]
where in the last inequality we used Assumption A.1 and the fact that \(\|\hat{u}\|_{a}\leq C\|u\|_{a}\preceq 1\). Integrating with respect to time, we obtain
\[\|\theta(\cdot,t)\|_{a}^{2}\preceq\int_{0}^{t}(\|\rho\|_{a}^{2}+\|\theta\|_{a }^{2})dt+\int_{0}^{t}\|\partial_{t}\rho\|_{0}^{2}ds+\|\theta(\cdot,0)\|_{a}^{ 2}.\]
Note that the initial condition implies \(\theta(0):=(\hat{u}-u_{\rm ms})(0)=0\). Then, Gronwall's inequality yields
\[\|\theta(\cdot,t)\|_{a}^{2}\preceq\int_{0}^{t}(\|\rho\|_{a}^{2}+\|\partial_{t} \rho\|_{0}^{2})ds.\]
By Lemma A.5, we obtain that
\[\|(\hat{u}-u_{\rm ms})(\cdot,t)\|_{a}^{2}\preceq\int_{0}^{t}\left(\|u-\hat{u} \|_{0}^{2}+\|\partial_{t}(u-\hat{u})\|_{0}^{2}\right)ds\preceq H^{2}\Lambda^{- 1}\kappa_{\rm min}^{-1}+H^{4}\Lambda^{-2}\kappa_{\rm min}^{-2}.\] (A.8)
Here, the hidden constant depends on \(T\). Finally, using the triangle inequality and gathering (A.6a) and (A.8), we then have
\[\|(u-u_{\rm ms})(\cdot,t)\|_{a}^{2}\leq\|(u-\hat{u})(\cdot,t)\|_{a}^{2}+\|( \hat{u}-u_{\rm ms})(\cdot,t)\|_{a}^{2}\preceq H^{2}\Lambda^{-1}\kappa_{\rm min }^{-1}+H^{4}\Lambda^{-2}\kappa_{\rm min}^{-2}.\]
This finishes the proof.
**Theorem A.7**.: _Under the assumptions of Theorem A.6, we have_
\[\|(u-u_{\rm ms})(\cdot,t)\|_{0}\preceq H^{2}\Lambda^{-1}\kappa_{\rm min}^{-1},\]
_where the hidden constant is independent of the mesh size \(H\) and \(\Lambda=\min_{1\leq i\leq N}\lambda_{L_{i}+1}^{(i)}\)._
Proof.: Similar to Theorem A.6, we shall denote
\[u-u_{\rm ms}=(u-\hat{u})+(\hat{u}-u_{\rm ms}):=\rho+\theta,\quad\text{for all }t>0,\]
where \(\hat{u}\) is the elliptic projection in the space \(\rm V_{ms}\) of the exact solution \(u\). For \(\theta\), taking \(v=\theta=\hat{u}-u_{\rm ms}\in\rm V_{ms}\), we find that
\[(\partial_{t}\theta,\theta)+a(\theta,\theta)=(f(u)-f(u_{\rm ms}),\theta)-( \partial_{t}\rho,\theta).\]
By using \(a(\theta,\theta)\geq 0\) and the Cauchy-Schwarz inequality, it follows that
\[\frac{1}{2}\frac{d}{dt}\|\theta\|_{0}^{2}=\|\theta\|_{0}\frac{d}{dt}\|\theta \|_{0}\leq(\|f(u)-f(u_{\rm ms})\|_{0}+\|\partial_{t}\rho\|_{0})\|\theta\|_{0}.\]
By Lemma A.4, we obtain
\[\frac{d}{dt}\|\theta\|_{0}\leq C\|u-u_{\rm ms}\|_{0}+\|\partial_{t}\rho\|_{0} \preceq(\|\rho\|_{0}+\|\theta\|_{0})+\|\partial_{t}\rho\|_{0}.\]
Integrating in time and invoking Gronwall's inequality, we get
\[\|\theta(\cdot,t)\|_{0}\preceq\int_{0}^{t}\|\rho\|_{0}ds+\int_{0}^{t}\|\partial_{t}\rho\|_{0}ds.\]
By Assumption A.1, equations (A.3b),(A.3c) and (A.6b), we obtain that
\[\|(\hat{u}-u_{\rm ms})(\cdot,t)\|_{0}\preceq\int_{0}^{t}\left(\|u-\hat{u}\|_{0}+ \|\partial_{t}(u-\hat{u})\|_{0}\right)ds\preceq H^{2}\Lambda^{-1}\kappa_{\rm min }^{-1}.\] (A.9)
Finally, by using the triangle inequality and gathering (A.6b) and (A.9), we obtain the bound
\[\|(u-u_{\rm ms})(\cdot,t)\|_{0}\leq\|(u-\hat{u})(\cdot,t)\|_{0}+\|(\hat{u}-u_{ \rm ms})(\cdot,t)\|_{0}\preceq H^{2}\Lambda^{-1}\kappa_{\rm min}^{-1},\]
which finishes the proof.
We shall recall some definitions and results related to the semigroup \(\{e^{-\delta^{n}L_{h}}\}_{\delta^{n}\geq 0}\) and some terms used in the exponential Runge-Kutta schemes, which are important to the forthcoming analysis of the proposed multiscale finite element method. The following stability bounds for the semigroup \(\{e^{-\delta^{n}L_{h}}\}_{\delta^{n}\geq 0}\) are crucial in our analysis.
**Lemma A.8** (Hochbruck and Ostermann, 2005a).:
1. _For any given parameter_ \(\gamma\geq 0\)_, it holds_ \[\|e^{-\delta^{n}L_{h}}\|_{0}+\|(\delta^{n})^{\gamma}L_{h}^{\gamma}e^{-\delta^ {n}L_{h}}\|_{0}\preceq 1,\quad\text{for each $\delta^{n}>0$},\,\text{for each $h>0$}.\] (A.10)
2. _For any given parameter_ \(0\leq\gamma\leq 1\)_, it holds_ \[\left\|(\delta^{n})^{\gamma}L_{h}^{\gamma}\sum_{j=1}^{n-1}e^{-j\delta^{n}L_{ h}}\right\|_{0}\preceq 1,\quad\text{for each $\delta^{n}>0$},\,\text{for each $h>0$}.\] (A.11)
3. _For any given parameter_ \(0\leq\gamma\leq 1\)_, it holds_ \[\|\phi(-\delta^{n}L_{h})\|_{0}+\|(\delta^{n})^{\gamma}L_{h}^{\gamma}\phi(- \delta^{n}L_{h})\|_{0}\preceq 1,\quad\text{for each $\delta^{n}>0$},\,\text{for each $h>0$},\] (A.12) _where_ \(\phi(-\delta^{n}L_{h})=\beta_{i}(-\delta^{n}L_{h})\) _or_ \(\phi(-\delta^{n}L_{h})=\alpha_{ij}(-\delta^{n}L_{h})\)_,_ \(i,j=1,\ldots,m\)_._
Observe that from coercivity of the bilinear form \(a(\cdot,\cdot)\), we have that there exists a positive constant \(C\) such that
\[\frac{1}{C}\|L_{h}^{\frac{1}{2}}v\|_{0}\leq\|v\|_{a}\leq C\|L_{h}^{\frac{1}{2 }}v\|_{0}.\]
Similarly to (4.4), the multiscale solution \(u_{\rm ms}\) is defined as the solution of the following problem
\[\begin{cases}\partial_{t}u_{\rm ms}(t)+L_{\rm ms}u_{\rm ms}(t)&=P_{\rm ms}f(u_ {\rm ms}),\quad\text{for each $\mathrm{x}\in\Omega$},\quad 0\leq t\leq T,\\ u_{\rm ms}(0)&=P_{\rm ms}\hat{u},\quad\mathrm{x}\in\Omega,\end{cases}\] (A.13)
where \(P_{\rm ms}\) is the L\({}^{2}\)-orthogonal projection operator in \({\rm V}_{\rm ms}\). We rewrite the semi-discrete solution \(u_{\rm ms}(t_{n})\) (\(n=1,\ldots,N_{t}\)) into the sum of following expressions:
\[\begin{split} u_{\rm ms}(t_{n})&=e^{-\delta^{n}L_{ \rm ms}}u_{\rm ms}(t_{n-1})+\int_{0}^{\delta^{n}}e^{(s-\delta^{n})L_{\rm ms}}P_{ \rm ms}f(u_{\rm ms}(s+t_{n-1}))ds\\ &=e^{-\delta^{n}L_{\rm ms}}u_{\rm ms}(t_{n-1})+\int_{0}^{\delta^{n }}e^{(s-\delta^{n})L_{\rm ms}}P_{\rm ms}f(u(s+t_{n-1}))ds\\ &\quad+\int_{0}^{\delta^{n}}e^{(s-\delta^{n})L_{\rm ms}}\{P_{\rm ms }f(u_{\rm ms}(s+t_{n-1}))-P_{\rm ms}f(u(s+t_{n-1}))\}ds,\end{split}\] (A.14)
and define the functions
\[\left\{\begin{split}\xi_{i}(-\delta^{n}L_{\rm ms})& =\phi_{i}(-\delta^{n}L_{\rm ms})-\sum_{k=1}^{m}\beta_{k}(-\delta^ {n}L_{\rm ms})\frac{c_{k}^{i-1}}{(i-1)!},\quad i=1,\ldots,m,\\ \xi_{j,i}(-\delta^{n}L_{\rm ms})&=\phi_{j}(-c_{i} \delta^{n}L_{\rm ms})c_{i}^{j}-\sum_{k=1}^{i-1}\alpha_{ik}(-\delta^{n}L_{\rm ms })\frac{c_{k}^{j-1}}{(j-1)!},\quad i,j=1,\ldots,m.\end{split}\right.\] (A.15)
We also denote by \(f^{(k)}(u(t))=\frac{d^{k}}{dt^{k}}f(u(t))\) the \(k\)-th full derivative of \(f\) with respect to \(t\). By comparing (A.14) with the fully-discrete scheme (4.7), we then obtain
\[\begin{split} u_{\rm ms}(t_{n-1}+c_{i}\delta^{n})&=e^{-c_{i}\delta^{n}L_{\rm ms}}u_{\rm ms}(t_{n-1})+\delta^{n}\sum_{j=1}^{i-1}\alpha_{ij}(-\delta^{n}L_{\rm ms})P_{\rm ms}f(u(t_{n-1}+c_{j}\delta^{n}))+\epsilon^{ni},\\ u_{\rm ms}(t_{n})&=e^{-\delta^{n}L_{\rm ms}}u_{\rm ms}(t_{n-1})+\delta^{n}\sum_{i=1}^{m}\beta_{i}(-\delta^{n}L_{\rm ms})P_{\rm ms}f(u(t_{n-1}+c_{i}\delta^{n}))+\epsilon^{n},\end{split}\]
where the defect terms \(\{\epsilon^{ni}\}_{i=1}^{m}\) and \(\epsilon^{n}\) are respectively given by
\[\begin{split}\epsilon^{ni}&=\sum_{j=1}^{q}(\delta^ {n})^{j}\xi_{j,i}(-\delta^{n}L_{\rm ms})P_{\rm ms}f^{(j-1)}(u(t_{n-1}))+ \epsilon^{ni,q},\\ \epsilon^{n}&=\sum_{i=1}^{q}(\delta^{n})^{i}\xi_{i}(- \delta^{n}L_{\rm ms})P_{\rm ms}f^{(i-1)}(u(t_{n-1}))+\epsilon^{n,q},\end{split}\]
where the remainders \(\epsilon^{ni,q}\) and \(\epsilon^{n,q}\) are defined as
\[\begin{split}\epsilon^{ni,q}&=\int_{0}^{c_{i} \delta^{n}}e^{-(c_{i}\delta^{n}-s)L_{\rm ms}}\int_{0}^{s}\frac{(s-\tau)^{q-1}}{ (q-1)!}P_{\rm ms}f^{(q)}(u(t_{n-1}+\tau))d\tau ds\\ &\quad-\delta^{n}\sum_{k=1}^{i-1}\alpha_{ik}(-\delta^{n}L_{\rm ms })\int_{0}^{c_{k}\delta^{n}}\frac{(c_{k}\delta^{n}-\tau)^{q-1}}{(q-1)!}P_{\rm ms }f^{(q)}(u(t_{n-1}+\tau))d\tau\\ &\quad+\int_{0}^{c_{i}\delta^{n}}e^{-(c_{i}\delta^{n}-\tau)L_{\rm ms }}\left\{P_{\rm ms}f(u_{\rm ms}(t_{n-1}+\tau))-P_{\rm ms}f(u(t_{n-1}+\tau)) \right\}d\tau,\end{split}\] (A.16)
and
\[\epsilon^{n,q} =\int_{0}^{\delta^{n}}e^{-(\delta^{n}-s)L_{\text{ms}}}\int_{0}^{s} \frac{(s-\tau)^{q-1}}{(q-1)!}P_{\text{ms}}f^{(q)}(u(t_{n-1}+\tau))d\tau ds\] \[\quad-\delta^{n}\sum_{i=1}^{m}\beta_{i}(-\delta^{n}L_{\text{ms}}) \int_{0}^{c_{i}\delta^{n}}\frac{(c_{i}\delta^{n}-\tau)^{q-1}}{(q-1)!}P_{\text{ ms}}f^{(q)}(u(t_{n-1}+\tau))d\tau\] \[\quad+\int_{0}^{\delta^{n}}e^{-(\delta^{n}-\tau)L_{\text{ms}}}\left\{ P_{\text{ms}}f(u_{\text{ms}}(t_{n-1}+\tau))-P_{\text{ms}}f(u(t_{n-1}+\tau)) \right\}d\tau.\]
In the expressions above, \(q\) denotes any non-negative integer such that \(P_{\text{ms}}f^{(q)}(u(t))\) exists and is continuous. We have the next result following the framework given by Hochbruck and Ostermann (2005a); Huang et al. (2023).
**Lemma A.9**.: _Let \(q=1\) or \(2\). Suppose that the function \(f\) satisfies Assumptions A.1 and A.2, and the exact solution \(u(t)\) fulfills (A.3a) and (A.3b) in Assumption A.3. In addition, if \(q=2\), \(u(t)\) satisfies (A.3c). Then, for \(n=1,\ldots,N_{t}\), \(i=1,\ldots,m\), it holds that_
\[\|\epsilon^{ni,q}\|_{a} \preceq(\delta^{n})^{q+1}\sup_{0\leq\eta\leq 1}\|P_{\text{ms}}f^{(q )}(u(t_{n-1}+\eta\delta^{n}))\|_{a}+H\Lambda^{-\frac{1}{2}}\kappa_{\min}^{- \frac{1}{2}},\] (A.17a) \[\left\|\sum_{j=0}^{n-1}e^{-j\delta^{n}L_{\text{ms}}}\epsilon^{n- j,q}\right\|_{a} \preceq(\delta^{n})^{q}\sup_{0\leq t\leq T}\|P_{\text{ms}}f^{(q)}(u (t))\|_{a}+H\Lambda^{-\frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}},\] (A.17b)
_where the hidden constants are independent of the coarse grid size \(H\) and the time-step size \(\delta^{n}\)._
Proof.: We bound \(\|\epsilon^{ni,q}\|_{a}\leq I_{1}+I_{2}+I_{3}\), where \(I_{k}\), \(k\in\{1,2,3\}\), denotes the energy norm of the corresponding term on the right-hand side of (A.16). Regarding (A.17a), by using condition (A.10) in Lemma A.8, we obtain
\[I_{1} =\left\|\int_{0}^{c_{i}\delta^{n}}e^{-(c_{i}\delta^{n}-s)L_{\text {ms}}}\int_{0}^{s}\frac{(s-\tau)^{q-1}}{(q-1)!}P_{\text{ms}}f^{(q)}(u_{\text{ ms}}(t_{n-1}+\tau))d\tau ds\right\|_{a}\] \[\preceq\left\|\int_{0}^{c_{i}\delta^{n}}e^{-(c_{i}\delta^{n}-s)L_{ \text{ms}}}\int_{0}^{s}\frac{(s-\tau)^{q-1}}{(q-1)!}L_{\text{ms}}^{\frac{1}{2} }P_{\text{ms}}f^{(q)}(u_{\text{ms}}(t_{n-1}+\tau))d\tau ds\right\|_{0}\] \[\preceq(\delta^{n})^{q+1}\sup_{0\leq s\leq c_{i}\delta^{n}}\|e^{- (c_{i}\delta^{n}-s)L_{\text{ms}}}\|_{0}\sup_{0\leq\eta\leq 1}\|L_{\text{ms}}^{ \frac{1}{2}}P_{\text{ms}}f^{(q)}(u_{\text{ms}}(t_{n-1}+\eta\delta^{n}))\|_{0}\] \[\preceq(\delta^{n})^{q+1}\sup_{0\leq\eta\leq 1}\|P_{\text{ms}}f^{(q)} (u_{\text{ms}}(t_{n-1}+\eta\delta^{n}))\|_{0}.\]
Analogously, we have for \(I_{2}\) that
\[I_{2} =\left\|\delta^{n}\sum_{k=1}^{i-1}\alpha_{ik}(-\delta^{n}L_{\rm ms}) \int_{0}^{c_{k}\delta^{n}}\frac{(c_{k}\delta^{n}-\tau)^{q-1}}{(q-1)!}P_{\rm ms}f ^{(q)}(u(t_{n-1}+\tau))d\tau\right\|_{a}\] \[\preceq\left\|\delta^{n}\sum_{k=1}^{i-1}\alpha_{ik}(-\delta^{n}L_ {\rm ms})\int_{0}^{c_{k}\delta^{n}}\frac{(c_{k}\delta^{n}-\tau)^{q-1}}{(q-1)!}L _{\rm ms}^{\frac{1}{2}}P_{\rm ms}f^{(q)}(u(t_{n-1}+\tau))d\tau\right\|_{0}\] \[\preceq(\delta^{n})^{q+1}\sum_{k=1}^{i-1}\|\alpha_{ik}(-\delta^{n} L_{\rm ms})\|_{0}\sup_{0\leq\eta\leq 1}\|P_{\rm ms}f^{(q)}(u(t_{n-1}+\eta\delta^{n})) \|_{a}\] \[\preceq(\delta^{n})^{q+1}\sup_{0\leq\eta\leq 1}\|P_{\rm ms}f^{(q)} (u(t_{n-1}+\eta\delta^{n}))\|_{a}.\]
For \(I_{3}\), we get that
\[\|I_{3}\|_{a} =\left\|\int_{0}^{c_{i}\delta^{n}}e^{-(c_{i}\delta^{n}-s)L_{\rm ms}}\left\{P_{\rm ms}f(u_{\rm ms}(t_{n-1}+\tau))-P_{\rm ms}f(u_{h}(t_{n-1}+\tau))\right\}d\tau\right\|_{a}\] \[\preceq\left\|\int_{0}^{c_{i}\delta^{n}}L_{\rm ms}^{\frac{1}{2}}e^{-(c_{i}\delta^{n}-s)L_{\rm ms}}ds\right\|_{0}\sup_{0\leq\eta\leq 1}\|P_{\rm ms}f(u_{\rm ms}(t_{n-1}+\eta\delta^{n}))\] \[\quad-P_{\rm ms}f(u_{h}(t_{n-1}+\eta\delta^{n}))\|_{0}.\]
For the first term on the right-hand side of the inequality above, we have
\[\left\|\int_{0}^{c_{i}\delta^{n}}L_{\rm ms}^{\frac{1}{2}}e^{-(c_{i}\delta^{n} -s)L_{\rm ms}}ds\right\|_{0}=\max_{\lambda}\left|\int_{0}^{c_{i}\delta^{n}} \lambda^{\frac{1}{2}}e^{-(c_{i}\delta^{n}-s)\lambda}ds\right|\leq\max_{ \lambda}|\lambda^{-\frac{1}{2}}|\preceq 1,\]
where \(\lambda\) is the eigenvalue of \(a(\phi_{i,{\rm ms}},v)=\lambda(\phi_{i,{\rm ms}},v)\), for all \(v\in{\rm V}_{\rm ms}\). Since \(P_{\rm ms}f\) satisfies Lemma A.4, and invoking Theorem A.6, we find that
\[\|I_{3}\|_{a} =\left\|\int_{0}^{c_{i}\delta^{n}}e^{-(c_{i}\delta^{n}-s)L_{\rm ms }}\left\{P_{\rm ms}f(u_{\rm ms}(t_{n-1}+\tau))-P_{\rm ms}f(u_{h}(t_{n-1}+\tau) )\right\}d\tau\right\|_{a}\] (A.18) \[\preceq\sup_{0\leq\eta\leq 1}\|P_{\rm ms}f(u_{\rm ms}(t_{n-1}+ \eta\delta^{n}))-P_{\rm ms}f(u_{h}(t_{n-1}+\eta\delta^{n}))\|_{0}\] \[\preceq\sup_{0\leq\eta\leq 1}\|u_{\rm ms}(t_{n-1}+\eta\delta^{n}) -u_{h}(t_{n-1}+\eta\delta^{n})\|_{a}\] \[\preceq H\Lambda^{-\frac{1}{2}}\kappa_{\rm min}^{-\frac{1}{2}}.\]
Therefore, using the triangle inequality, we obtain the desired result. Following a similar argument, we obtain the bound (A.17b).
For the sake of simplicity, we finally define \(\varepsilon^{n}=u_{\rm ms}^{n}-u_{\rm ms}(t_{n})\) and \(E^{n-1,i}=U_{\rm ms}^{n-1,i}-u_{\rm ms}(t_{n-1}+c_{i}\delta^{n})\) for \(i=1,\ldots,m\). Then, we find the recurrence relations
\[\begin{split} E^{n-1,i}&=e^{-c_{i}\delta^{n}L_{\rm ms }}\varepsilon^{n-1}+\delta^{n}\sum_{j=1}^{i-1}\alpha_{ij}(-\delta^{n}L_{\rm ms })\big{[}P_{\rm ms}f(U_{\rm ms}^{n-1,j})\\ &\quad-P_{\rm ms}f(u_{\rm ms}(t_{n-1}+c_{j}\delta^{n}))\big{]}- \epsilon^{ni},\end{split}\] (A.19)
\[\varepsilon^{n}=e^{-\delta^{n}L_{\rm ms}}\varepsilon^{n-1}+\delta^{n}\sum_{i=1}^{m}\beta_{i}(-\delta^{n}L_{\rm ms})\left[P_{\rm ms}f(U_{\rm ms}^{n-1,i})-P_{\rm ms}f(u_{\rm ms}(t_{n-1}+c_{i}\delta^{n}))\right]-\epsilon^{n}.\] (A.20)
Now, we shall show the error estimates for the first-order Exponential Euler scheme.
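For concreteness, a minimal numerical sketch of this first-order step, \(u_{\rm ms}^{n}=e^{-\delta^{n}L_{\rm ms}}u_{\rm ms}^{n-1}+\delta^{n}\phi_{1}(-\delta^{n}L_{\rm ms})P_{\rm ms}f(u_{\rm ms}^{n-1})\), follows (ours, not part of the paper; a small dense symmetric matrix stands in for \(L_{\rm ms}\), and the test problem is illustrative):

```python
# Sketch (ours): one reading of the first-order exponential Euler scheme
# u^n = exp(-dt*L) u^{n-1} + dt * phi1(-dt*L) f(u^{n-1}),  phi1(z) = (e^z-1)/z,
# for a small symmetric positive definite L (a stand-in for L_ms).
import numpy as np

def phi1(A):
    """phi1(A) = A^{-1}(exp(A) - I) via eigendecomposition (A symmetric)."""
    w, V = np.linalg.eigh(A)
    g = np.where(np.abs(w) > 1e-12,
                 np.expm1(w) / np.where(w == 0.0, 1.0, w), 1.0)
    return (V * g) @ V.T

def exp_euler(L, f, u0, dt, nsteps):
    w, V = np.linalg.eigh(L)
    E = (V * np.exp(-dt * w)) @ V.T      # exp(-dt*L)
    P = phi1(-dt * L)
    u = u0.copy()
    for _ in range(nsteps):
        u = E @ u + dt * (P @ f(u))
    return u

# Test problem: 1D Dirichlet Laplacian with a cubic (Allen-Cahn-like) source.
n = 50
h = 1.0 / (n + 1)
L = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
x = np.linspace(h, 1 - h, n)
u = exp_euler(L, lambda v: v - v**3, np.sin(np.pi * x), dt=1e-2, nsteps=50)
```

The eigendecomposition is used only for transparency; in practice the matrix functions would be applied through Krylov or rational approximations.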
**Theorem A.10**.: _Suppose that \(f\) satisfies Assumptions A.1 and A.2, and the exact solution \(u(t)\) satisfies (A.3a) and (A.3b). Then, there exists a constant \(H_{0}>0\) such that, if the spatial coarse grid size satisfies \(H\leq H_{0}\), the multiscale solution \(u_{\rm ms}^{n}\) given by the Exponential Euler scheme (4.12) satisfies_
\[\|u(t_{n})-u_{\rm ms}^{n}\|_{a}\preceq\delta^{n}+H\Lambda^{-\frac{1}{2}} \kappa_{\rm min}^{-\frac{1}{2}},\] (A.21)
_for \(n=1,\ldots,N_{t}\). The hidden constants are independent of the coarse grid size \(H\) and time step size \(\delta^{n}\)._
Proof.: By the triangle inequality, we have
\[\|u(t_{n})-u_{\rm ms}^{n}\|_{a}\leq\|(u-u_{\rm ms})(t_{n})\|_{a}+\|u_{\rm ms}(t _{n})-u_{\rm ms}^{n}\|_{a}.\]
Then, the first term on the right-hand side is bounded by Theorem A.6. We shall concentrate on obtaining a bound for the second one. Setting \(m=1\) in the recurrence relation (A.20), we get
\[u_{\rm ms}(t_{n})-u_{\rm ms}^{n}=:\varepsilon^{n}=e^{-\delta^{n}L_{\rm ms}} \varepsilon^{n-1}+\delta^{n}\phi_{1}(-\delta^{n}L_{\rm ms})\left[P_{\rm ms}f( u_{\rm ms}^{n-1})-P_{\rm ms}f(u(t_{n-1}))\right]-\epsilon^{n},\]
where
\[\epsilon^{n}=\delta^{n}\xi_{1}(-\delta^{n}L_{\rm ms})P_{\rm ms}f(u(t_{n-1}))+\epsilon^{n,1},\]
and
\[\begin{split}\epsilon^{n,1}&=\int_{0}^{\delta^{n}}e ^{-(\delta^{n}-s)L_{\rm ms}}\int_{0}^{s}P_{\rm ms}f^{\prime}(u(t_{n-1}+\tau))d \tau ds\\ &\quad+\int_{0}^{\delta^{n}}e^{-(\delta^{n}-\tau)L_{\rm ms}}\left\{ P_{\rm ms}f(u_{\rm ms}(t_{n-1}+\tau))-P_{\rm ms}f(u(t_{n-1}+\tau))\right\}d\tau. \end{split}\]
Then, by recursive substitution, we find that
\[\begin{split}\|\varepsilon^{n}\|_{a}&\leq\left\|\delta^{n}\sum_{j=1}^{n-1}e^{-(n-j-1)\delta^{n}L_{\text{ms}}}\phi_{1}(-\delta^{n}L_{\text{ms}})\left[P_{\text{ms}}f(u_{\text{ms}}^{j})-P_{\text{ms}}f(u(t_{j}))\right]\right\|_{a}\\ &\quad+\left\|\sum_{j=0}^{n-1}e^{-j\delta^{n}L_{\text{ms}}}\epsilon^{n-j}\right\|_{a}\\ &=I_{1}+I_{2}.\end{split}\] (A.22)
For \(I_{1}\) in (A.22), we use the triangle inequality to find that
\[\begin{split} I_{1}&=\left\|\delta^{n}\sum_{j=1}^{n- 1}e^{-(n-j-1)\delta^{n}L_{\text{ms}}}\phi_{1}(-\delta^{n}L_{\text{ms}})\left[P _{\text{ms}}f(u_{\text{ms}}^{j})-P_{\text{ms}}f(u(t_{j}))\right]\right\|_{a}\\ &\leq\left\|\delta^{n}\phi_{1}(-\delta^{n}L_{\text{ms}})\left[P_ {\text{ms}}f(u_{\text{ms}}^{n-1})-P_{\text{ms}}f(u(t_{n-1}))\right]\right\|_{a }\\ &\quad+\left\|\delta^{n}\sum_{j=0}^{n-2}e^{-(n-j-1)\delta^{n}L_{ \text{ms}}}\phi_{1}(-\delta^{n}L_{\text{ms}})\left[P_{\text{ms}}f(u_{\text{ms} }^{j})-P_{\text{ms}}f(u(t_{j}))\right]\right\|_{a}\\ &=I_{3}+I_{4}.\end{split}\]
For \(I_{3}\), we have
\[\begin{split} I_{3}&\leq\left\|L_{\text{ms}}^{\frac {1}{2}}\int_{0}^{\delta^{n}}e^{-(\delta^{n}-s)L_{\text{ms}}}ds\right\|_{0}\|P_{ \text{ms}}f(u_{\text{ms}}^{n-1})-P_{\text{ms}}f(u(t_{n-1}))\|_{0}\\ &\preceq\delta^{n}\sup_{0\leq s\leq\delta^{n}}\|L_{\text{ms}}^{ \frac{1}{2}}e^{-(\delta^{n}-s)L_{\text{ms}}}\|_{0}\|P_{\text{ms}}f(u_{\text{ms }}^{n-1})-P_{\text{ms}}f(u(t_{n-1}))\|_{0}\\ &\preceq(\delta^{n})^{\frac{1}{2}}\|u_{\text{ms}}^{n-1}-u(t_{n-1}) \|_{a}=(\delta^{n})^{\frac{1}{2}}\|\varepsilon^{n-1}\|_{a}.\end{split}\]
On the other hand, by using (A.12) in Lemma A.8, we obtain
\[\|\phi_{1}(-\delta^{n}L_{\text{ms}})\|_{0}=\left\|\frac{1}{\delta^{n}}\int_{0}^{\delta^{n}}e^{-(\delta^{n}-s)L_{\text{ms}}}ds\right\|_{0}\leq\sup_{0\leq s\leq\delta^{n}}\|e^{-(\delta^{n}-s)L_{\text{ms}}}\|_{0}\preceq 1.\]
Then, for \(I_{4}\), we find that
\[I_{4}\preceq \left\|\delta^{n}L_{\text{ms}}^{\frac{1}{2}}\sum_{j=0}^{n-2}e^{-(n-1 -j)\delta^{n}L_{\text{ms}}}\right\|_{0}\sup_{0\leq t\leq T}\|P_{\text{ms}}f(u_{ \text{ms}}(t))-P_{\text{ms}}f(u(t))\|_{0}\] \[+\delta^{n}\sum_{j=0}^{n-2}\|L_{\text{ms}}^{\frac{1}{2}}e^{-(n-1- j)\delta^{n}L_{\text{ms}}}\|_{0}\|P_{\text{ms}}f(u_{\text{ms}}^{j})-P_{\text{ms}}f(u_{ \text{ms}}(t_{j}))\|_{0}\] \[\preceq \sup_{0\leq t\leq T}\|P_{\text{ms}}f(u_{\text{ms}}(t))-P_{\text{ ms}}f(u(t))\|_{0}+\delta^{n}\sum_{j=0}^{n-2}t_{n-j-1}^{-\frac{1}{2}}\|P_{\text{ ms}}f(u_{\text{ms}}^{j})-P_{\text{ms}}f(u_{\text{ms}}(t_{j}))\|_{0}\] \[\preceq H\Lambda^{-\frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}}+ \delta^{n}\sum_{j=0}^{n-2}t_{n-j-1}^{-\frac{1}{2}}\|\varepsilon^{j}\|_{0}.\]
Therefore, we obtain
\[I_{1}\preceq(\delta^{n})^{\frac{1}{2}}\|\varepsilon^{n-1}\|_{a}+H\Lambda^{- \frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}}+\delta^{n}\sum_{j=0}^{n-2}t_{n-j-1}^{ -\frac{1}{2}}\|\varepsilon^{j}\|_{a}.\] (A.23)
Finally, for \(I_{2}\) in (A.22), since \(m=1\), using the definition (A.15) we can infer \(\xi_{1}(-\delta^{n}L_{\text{ms}})=0\), along with \(\epsilon^{j}=\epsilon^{j,1}\) for \(j=1,\ldots,n\). Then, by invoking (A.17b) in Lemma A.9, we have
\[I_{2}\preceq\delta^{n}\sup_{0\leq t\leq T}\|P_{\text{ms}}f(u(t))\|_{a}+H \Lambda^{-\frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}}.\] (A.24)
Gathering (A.23) and (A.24), we have
\[\|\varepsilon^{n}\|_{a} \preceq(\delta^{n})^{\frac{1}{2}}\|\varepsilon^{n-1}\|_{a}+ \delta^{n}\sum_{j=0}^{n-2}t_{n-j-1}^{-\frac{1}{2}}\|\varepsilon^{j}\|_{a}+ \delta^{n}\sup_{0\leq t\leq T}\|P_{\text{ms}}f(u(t))\|_{a}+H\Lambda^{-\frac{1 }{2}}\kappa_{\min}^{-\frac{1}{2}}\] \[\preceq\delta^{n}\sum_{j=1}^{n-1}t_{n-j}^{-\frac{1}{2}}\|\varepsilon ^{j}\|_{a}+\delta^{n}\sup_{0\leq t\leq T}\|P_{\text{ms}}f(u(t))\|_{a}+H\Lambda ^{-\frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}}.\]
By using a discrete version of Gronwall's inequality, we get
\[\|\varepsilon^{n}\|_{a}\preceq\delta^{n}+H\Lambda^{-\frac{1}{2}}\kappa_{\min} ^{-\frac{1}{2}}.\]
Therefore, gathering the inequality above with (A.7), we arrive at (A.21), which finishes the proof.
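For reference, the variant of the discrete Gronwall inequality with a weakly singular kernel that suffices here reads: if a nonnegative sequence \(\{a_{n}\}\) satisfies
\[a_{n}\leq b+C\delta^{n}\sum_{j=1}^{n-1}t_{n-j}^{-\frac{1}{2}}a_{j},\qquad n=1,\ldots,N_{t},\]
with \(b\geq 0\) and \(C>0\) independent of \(n\), then \(a_{n}\preceq b\) uniformly on \(0\leq t_{n}\leq T\), with a hidden constant depending on \(T\); it is applied above with \(a_{n}=\|\varepsilon^{n}\|_{a}\) and \(b=\delta^{n}\sup_{0\leq t\leq T}\|P_{\rm ms}f(u(t))\|_{a}+H\Lambda^{-\frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}}\).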
**Lemma A.11**.: _Let \(f\) be a function that satisfies Assumptions A.1 and A.2, and consider that the exact solution \(u(t)\) of problem (2.1) satisfies the conditions (A.3a) and (A.3b) in Assumption A.3. If \(m\geq 2\), then it holds for any \(0\leq n\leq N_{t}\),_
\[\|E^{ni}\|_{a}\preceq\|\varepsilon^{n-1}\|_{a}+(\delta^{n})^{2}\sup_{0\leq\eta \leq 1}\|P_{\rm ms}f^{\prime}(u(t_{n-1}+\eta\delta^{n}))\|_{a}+H\Lambda^{-\frac{1 }{2}}\kappa_{\min}^{-\frac{1}{2}}+\sup_{0\leq t\leq T}\|(\hat{u}-u_{\rm ms})( \cdot,t)\|_{a},\] (A.25)
_where the hidden constant is independent of coarse grid size \(H\) and time step size \(\delta^{n}\)._
Proof.: Following the definition of \(E^{ni}\) in (A.19), we have
\[\|E^{ni}\|_{a}\leq\left\|e^{-c_{i}\delta^{n}L_{\rm ms}}\varepsilon^{n-1}\right\|_{a}+\left\|\delta^{n}\sum_{j=1}^{i-1}\alpha_{ij}(-\delta^{n}L_{\rm ms})\left[P_{\rm ms}f(U_{\rm ms}^{nj})-P_{\rm ms}f(u_{\rm ms}(t_{n-1}+c_{j}\delta^{n}))\right]\right\|_{a}\] \[\quad+\|\epsilon^{ni}\|_{a}\] \[=I_{1}+I_{2}+I_{3}.\]
By using Lemma A.8, we obtain for \(I_{1}\),
\[I_{1}\preceq\|\varepsilon^{n-1}\|_{a}.\] (A.26)
Using arguments similar to those used to obtain (A.18) in Lemma A.9, we get for \(I_{2}\)
\[I_{2} =\left\|\delta^{n}\sum_{j=1}^{i-1}\alpha_{ij}(-\delta^{n}L_{\rm ms })\left[P_{\rm ms}f(U^{nj})-P_{\rm ms}f(u(t_{n-1}+c_{j}\delta^{n}))\right] \right\|_{a}\] \[\preceq\sum_{j=1}^{i-1}(\delta^{n})^{\frac{1}{2}}\|(\delta^{n})^{ \frac{1}{2}}L_{\rm ms}^{\frac{1}{2}}\alpha_{ij}(-\delta^{n}L_{\rm ms})\|_{0} \max_{2\leq j\leq i-1}\|P_{\rm ms}f(U^{nj})-P_{\rm ms}f(u(t_{n-1}+c_{j}\delta ^{n}))\|_{0}\] \[\preceq(\delta^{n})^{\frac{1}{2}}\max_{2\leq j\leq i-1}\|P_{\rm ms }f(U^{nj})-P_{\rm ms}f(u(t_{n-1}+c_{j}\delta^{n}))\|_{0}\] \[\preceq(\delta^{n})^{\frac{1}{2}}\max_{2\leq j\leq i-1}\|E^{nj}\| _{a}+H\Lambda^{-\frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}}.\]
Finally, for \(I_{3}\), using the expression (A.15) and the consistency conditions (4.9), we can infer that \(\xi_{1,j}=0\) for \(j=1,\ldots,m\), and so the estimate of \(\|\epsilon^{ni}\|_{a}\) can be obtained via \(\|\epsilon^{ni,1}\|_{a}\). We then get that
\[\|E^{ni}\|_{a}\preceq\|\varepsilon^{n-1}\|_{a}+(\delta^{n})^{2}\sup_{0\leq\eta\leq 1}\|P_{\rm ms}f^{\prime}(u(t_{n-1}+\eta\delta^{n}))\|_{a}+(\delta^{n})^{\frac{1}{2}}\max_{2\leq j\leq i-1}\|E^{nj}\|_{a}+H\Lambda^{-\frac{1}{2}}\kappa_{\min}^{-\frac{1}{2}}.\]
This completes the proof.
**Theorem A.12**.: _Suppose that \(f\) satisfies Assumptions A.1 and A.2, and the exact solution \(u(t)\) satisfies (A.3a)-(A.3c). Then, there exists a constant \(H_{0}>0\) such that, if the spatial coarse grid size satisfies \(H\leq H_{0}\), the multiscale solution \(u_{\rm ms}^{n}\) given by the second-order exponential Runge-Kutta scheme (4.13) satisfies_
\[\|u(t_{n})-u_{\rm ms}^{n}\|_{a}\preceq(\delta^{n})^{2}+H\Lambda^{-\frac{1}{2}} \kappa_{\rm min}^{-\frac{1}{2}},\] (A.27)
_for \(n=1,\ldots,N_{t}\). The hidden constants are independent of the coarse grid size \(H\) and time step size \(\delta^{n}\)._
Proof.: Similar to Theorem A.10, we have
\[\|u(t_{n})-u_{\rm ms}^{n}\|_{a}\leq\|(u-u_{\rm ms})(t_{n})\|_{a}+\|u_{\rm ms}(t _{n})-u_{\rm ms}^{n}\|_{a}.\] (A.28)
For the second term on the right-hand side, by definition (A.15) we obtain that \(\xi_{1}(-\delta^{n}L_{\rm ms})=\xi_{2}(-\delta^{n}L_{\rm ms})=0\), and hence \(\epsilon^{n}=\epsilon^{n,2}\), by using the Lagrange interpolation for \(m=2\). By (A.12) in Lemma A.8, we have
\[\left\|\delta^{n}\sum_{j=0}^{n-1}e^{-(n-1-j)\delta^{n}L_{\rm ms}} \sum_{i=1}^{2}\beta_{i}(-\delta^{n}L_{\rm ms})\left[P_{\rm ms}f(U^{ji})-P_{ \rm ms}f(u(t_{j}+c_{i}\delta^{n}))\right]\right\|_{a}\] (A.29) \[\preceq\left\|\delta^{n}\sum_{i=1}^{2}\beta_{i}(-\delta^{n}L_{\rm ms })\left[P_{\rm ms}f(U^{n-1,i})-P_{\rm ms}f(u(t_{n-1}+c_{i}\delta^{n}))\right] \right\|_{a}\] \[\quad+\left\|\delta^{n}\sum_{j=0}^{n-2}e^{-(n-1-j)\delta^{n}L_{ \rm ms}}\sum_{i=1}^{2}\beta_{i}(-\delta^{n}L_{\rm ms})\left[P_{\rm ms}f(U^{ji} )-P_{\rm ms}f(u(t_{j}+c_{i}\delta^{n}))\right]\right\|_{a}\] \[=:I_{1}+I_{2}.\]
For \(I_{1}\), we get that
\[I_{1} \preceq\sum_{i=1}^{2}(\delta^{n})^{\frac{1}{2}}\left\|(\delta^{n })^{\frac{1}{2}}L_{\rm ms}^{\frac{1}{2}}\beta_{i}(-\delta^{n}L_{\rm ms}) \right\|_{0}\max_{1\leq i\leq 2}\left\|P_{\rm ms}f(U^{n-1,i})-P_{\rm ms}f(u(t_{n-1 }+c_{i}\delta^{n}))\right\|_{0}\] (A.30) \[\preceq(\delta^{n})^{\frac{1}{2}}\max_{1\leq i\leq 2}\left\|P_{ \rm ms}f(U^{n-1,i})-P_{\rm ms}f(u(t_{n-1}+c_{i}\delta^{n}))\right\|_{0}\] \[\preceq(\delta^{n})^{\frac{1}{2}}\max_{1\leq i\leq 2}\|E^{n-1,i}\|_ {a}+H\Lambda^{-\frac{1}{2}}\kappa_{\rm min}^{-\frac{1}{2}}+\sup_{0\leq t\leq T }\|(\hat{u}-u_{\rm ms})(\cdot,t)\|_{a},\]
and
\[\begin{split} I_{2}\preceq&\left\|\delta^{n}L_{\text{ms}}^{\frac{1}{2}}\sum_{j=0}^{n-2}e^{-(n-1-j)\delta^{n}L_{\text{ms}}}\right\|_{0}\sup_{0\leq t\leq T}\left\|P_{\text{ms}}f(u(t))-P_{\text{ms}}f(u_{\text{ms}}(t))\right\|_{0}\\ &+\sum_{j=0}^{n-2}\delta^{n}\left\|L_{\text{ms}}^{\frac{1}{2}}e^{-(n-j-1)\delta^{n}L_{\text{ms}}}\right\|_{0}\max_{1\leq i\leq 2}\left\|P_{\text{ms}}f(U^{ji})-P_{\text{ms}}f(u_{\text{ms}}(t_{j}+c_{i}\delta^{n}))\right\|_{0}\\ \preceq&\delta^{n}\sum_{j=0}^{n-2}t_{n-j-1}^{-\frac{1}{2}}\max_{1\leq i\leq 2}\|P_{\text{ms}}f(U^{ji})-P_{\text{ms}}f(u_{\text{ms}}(t_{j}+c_{i}\delta^{n}))\|_{0}+H\Lambda^{-\frac{1}{2}}\kappa_{\text{min}}^{-\frac{1}{2}}\\ &+\sup_{0\leq t\leq T}\|(\hat{u}-u_{\text{ms}})(\cdot,t)\|_{a}\\ \preceq&\delta^{n}\sum_{j=0}^{n-2}t_{n-j-1}^{-\frac{1}{2}}\max_{1\leq i\leq 2}\|E^{ji}\|_{a}+H\Lambda^{-\frac{1}{2}}\kappa_{\text{min}}^{-\frac{1}{2}}+\sup_{0\leq t\leq T}\|(\hat{u}-u_{\text{ms}})(\cdot,t)\|_{a}.\end{split}\] (A.31)
By using (A.29)-(A.31), and Lemma A.9, we arrive at
\[\begin{split}\|\varepsilon^{n}\|_{a}\leq&\left\|\delta^{n}\sum_{j=0}^{n-1}e^{-(n-1-j)\delta^{n}L_{\text{ms}}}\sum_{i=1}^{2}\beta_{i}(-\delta^{n}L_{\text{ms}})\left[P_{\text{ms}}f(U^{ji})-P_{\text{ms}}f(u(t_{j}+c_{i}\delta^{n}))\right]\right\|_{a}\\ &+\left\|\sum_{j=0}^{n-1}e^{-j\delta^{n}L_{\text{ms}}}\epsilon^{n-j,2}\right\|_{a}\\ \preceq&(\delta^{n})^{\frac{1}{2}}\max_{1\leq i\leq 2}\|E^{n-1,i}\|_{a}+\delta^{n}\sum_{j=0}^{n-2}t_{n-j-1}^{-\frac{1}{2}}\max_{1\leq i\leq 2}\|E^{ji}\|_{a}\\ &+(\delta^{n})^{2}\sup_{0\leq t\leq T}\|P_{\text{ms}}f^{(2)}(u(t))\|_{a}+H\Lambda^{-\frac{1}{2}}\kappa_{\text{min}}^{-\frac{1}{2}}.\end{split}\] (A.32)
Using Lemma A.11, and a discrete version of Gronwall's inequality, we find that
\[\|u_{\text{ms}}(t_{n})-u_{\text{ms}}^{n}\|_{a}\preceq(\delta^{n})^{2}+H\Lambda^{-\frac{1}{2}}\kappa_{\text{min}}^{-\frac{1}{2}}.\]
We obtain the desired result by gathering the expression above with (A.28).
A similar computation yields the following error estimates in the \(\mathrm{L}^{2}\)-norm.
**Theorem A.13** (Error estimate in \(\mathrm{L}^{2}\)-norm).: _Suppose that \(f\) satisfies Assumptions A.1 and A.2, and the exact solution \(u(t)\) satisfies (A.3a)-(A.3c). Then, there exists a constant \(H_{0}>0\) such that, if the spatial coarse grid size satisfies \(H\leq H_{0}\), the multiscale solution \(u^{n}_{\rm ms}\) given by the Exponential Euler scheme (4.12) satisfies_
\[\|u(t_{n})-u^{n}_{\rm ms}\|_{0}\preceq\delta^{n}+H^{2}\Lambda^{-1}\kappa^{-1}_{\rm min},\]
_and for (4.13),_
\[\|u(t_{n})-u^{n}_{\rm ms}\|_{0}\preceq(\delta^{n})^{2}+H^{2}\Lambda^{-1}\kappa^{ -1}_{\rm min},\]
_for \(n=1,\ldots,N_{t}\). The hidden constants are independent of the coarse grid size \(H\) and time step size \(\delta^{n}\)._
|
2305.05695
|
Dynamical He Flashes in Double White Dwarf Binaries
|
The detonation of an overlying helium layer on a
$0.8-1.1\,\mathrm{M}_{\odot}$ carbon-oxygen (CO) white dwarf (WD) can detonate
the CO WD and create a thermonuclear supernova (SN). Many authors have recently
shown that when the mass of the He layer is low ($\lesssim
0.03\,\mathrm{M}_{\odot}$), the ashes from its detonation minimally impact the
spectra and light-curve from the CO detonation, allowing the explosion to
appear remarkably similar to Type Ia SNe. These new insights motivate our
investigation of dynamical He shell burning, and our search for a binary
scenario that stably accumulates thermally unstable He shells in the
$0.01-0.08\,\mathrm{M}_{\odot}$ range, thick enough to detonate, but also often
thin enough for minimal impact on the observables. We first show that our
improved non-adiabatic evolution of convective He shell burning in this shell
mass range leads to conditions ripe for a He detonation. We also find that a
stable mass-transfer scenario with a high entropy He WD donor of mass
$0.15-0.25\,\mathrm{M}_\odot$ yields the He shell masses needed to achieve the
double detonations. This scenario also predicts that the surviving He donor
leaves with a space velocity consistent with the unusual runaway object, D6-2.
We find that hot He WD donors originate in common envelope events when a
$1.3-2.0\,\mathrm{M}_\odot$ star fills its Roche lobe at the base of the red
giant branch at orbital periods of $1-10$ days with the CO WD.
|
Tin Long Sunny Wong, Lars Bildsten
|
2023-05-09T18:00:42Z
|
http://arxiv.org/abs/2305.05695v1
|
# Dynamical He Flashes in Double White Dwarf Binaries
###### Abstract
The detonation of an overlying helium layer on a \(0.8-1.1\,\mathrm{M}_{\odot}\) carbon-oxygen (CO) white dwarf (WD) can detonate the CO WD and create a thermonuclear supernova (SN). Many authors have recently shown that when the mass of the He layer is low (\(\lesssim 0.03\,\mathrm{M}_{\odot}\)), the ashes from its detonation minimally impact the spectra and light-curve from the CO detonation, allowing the explosion to appear remarkably similar to Type Ia SNe. These new insights motivate our investigation of dynamical He shell burning, and our search for a binary scenario that stably accumulates thermally unstable He shells in the \(0.01-0.08\,\mathrm{M}_{\odot}\) range, thick enough to detonate, but also often thin enough for minimal impact on the observables. We first show that our improved non-adiabatic evolution of convective He shell burning in this shell mass range leads to conditions ripe for a He detonation. We also find that a stable mass-transfer scenario with a high entropy He WD donor of mass \(0.15-0.25\,\mathrm{M}_{\odot}\) yields the He shell masses needed to achieve the double detonations. This scenario also predicts that the surviving He donor leaves with a space velocity consistent with the unusual runaway object, D6-2. We find that hot He WD donors originate in common envelope events when a \(1.3-2.0\,\mathrm{M}_{\odot}\) star fills its Roche lobe at the base of the red giant branch at orbital periods of \(1-10\) days with the CO WD.
Tin Long Sunny Wong (ORCID: 0000-0002-8820-7880)
Lars Bildsten (ORCID: 0000-0002-2883-0888)
## 1 Introduction
For decades astrophysicists have tried to answer the question: where do type Ia supernovae (SNe Ia) come from? The broadly accepted answer is that they come from the detonation of a carbon-oxygen white dwarf (CO WD) (Hoyle and Fowler, 1960). However, it is unclear whether SNe Ia predominantly come from explosions occurring near the Chandrasekhar mass (\(M_{\mathrm{Ch}}\)), or below.
One proposed sub-\(M_{\mathrm{Ch}}\) explosion mechanism is the double detonation scenario, where the detonation of a He shell triggers the detonation of the underlying CO core (e.g., Livne, 1990; Livne and Glasner, 1991; Woosley and Weaver, 1994; Garcia-Senz et al., 1999; Fink et al., 2007, 2010; Kromer et al., 2010; Woosley and Kasen, 2011; Sim et al., 2012; Pakmor et al., 2012; Moll and Woosley, 2013; Shen and Bildsten, 2014; Townsley et al., 2019; Polin et al., 2019; Gronow et al., 2020; Leung and Nomoto, 2020; Boos et al., 2021; Gronow et al., 2021). A challenge to this scenario is that the burning products of the He shell detonation lead to disagreements in spectra and light curve with observations of normal SNe Ia (e.g., Hoeflich and Khokhlov, 1996; Nugent et al., 1997); the production of Ti, Cr and Fe group elements in the He detonation leads to line blanketing and the resulting colors are redder than observed SNe Ia (e.g., Kromer et al., 2010; Woosley and Kasen, 2011; Polin et al., 2019; Collins et al., 2022). This can be alleviated in part by reducing the He shell mass, and, with bare sub-\(M_{\mathrm{Ch}}\) CO WDs, good agreement with observations is found (e.g., Sim et al., 2010; Blondin et al., 2017; Shen et al., 2018). With improved nucleosynthesis through the inclusion of a large nuclear network and CNO material in the He shell (Shen and Moore, 2014), Townsley et al. 2019, Boos et al. 2021 and Shen et al. 2021 find that a thin He shell double detonation (\(\lesssim 0.03\,\mathrm{M}_{\odot}\)) can lead to good agreement with observations of spectroscopically normal SNe Ia (though their thin-shell results are at variance with Gronow et al., 2021; Collins et al., 2022).
The He detonation can be triggered by accretion stream instabilities during the dynamical phase of a double WD merger, with total accumulated He shell mass at ignition as low as \(\approx 0.01\,\mathrm{M}_{\odot}\) (e.g., Guillochon et al., 2010; Pakmor et al., 2012). This dynamically driven double-degenerate double-detonation (D6) scenario is strongly supported by the discovery of three hypervelocity WDs with velocities \(\gtrsim 1000\,\mathrm{km~{}s^{-1}}\) (Shen et al., 2018).
Alternatively, the He detonation can arise during a He shell flash where the He shell accumulates through stable mass transfer. The donor can be a nondegenerate He star, with mass transfer rates \(\approx 10^{-8}\,\mathrm{M_{\odot}\,yr^{-1}}\) leading to a thick He shell \(\approx 0.1-0.2\,\mathrm{M_{\odot}}\)(e.g., Iben and Tutukov, 1991; Brooks et al., 2015; Bauer et al., 2017), though the resulting transient better resembles peculiar SNe Ia (e.g., Woosley and Kasen, 2011; Polin et al., 2019; De et al., 2019).
In the AM CVn last flash scenario, the donor is a cold He WD (Bildsten et al., 2007; Piersanti et al., 2015, 2019). The mass transfer rate begins high (\(\dot{M}\gtrsim 10^{-6}\,\mathrm{M_{\odot}\,yr^{-1}}\)), leading to weak He flashes, and decreases with time, leading to He flashes that increase in strength. The last He flash to occur is the strongest, and can potentially lead to a He detonation that results in a ".Ia" supernova (Shen et al., 2010) or double-detonation SN Ia. However, Piersanti et al. (2015, 2019) suggest that the AM CVn last flash is not strong enough to become dynamical.
We revisit He WD donors as potential progenitors of double-detonations and hypervelocity WDs like D6-2. Motivated by the suggestion that AM CVn binaries can be born with a wide range of entropies (Deloye et al., 2007; Wong and Bildsten, 2021; van Roestel et al., 2022; Burdge et al., 2023), we explore a binary scenario of stable mass transfer from a high-entropy (hot) He WD onto a massive CO WD that leads to a strong, first He flash potentially developing into a detonation of the He and possibly the CO.
We discuss the binary evolution models in Section 2. We show that high-entropy He WDs have lower peak \(\dot{M}\) than cold He WDs, leading to accretor He shell masses at ignition comparable to, or even greater than, in the AM CVn last flash scenario (Bildsten et al., 2007; Piersanti et al., 2015, 2019). In Section 3, we demonstrate that these He flashes can become dynamical and develop into a detonation, and discuss the minimum He shell mass required for such outcome. We show in Section 4 that high-entropy He WDs originate in binary scenarios from unstable mass transfer with an evolved \(M=1.3-2.0\,\mathrm{M_{\odot}}\) donor near the base of the red giant branch (RGB), and a short post common envelope (CE) orbital period. We conclude in Section 5 by discussing the open questions that remain.
## 2 Binary evolution up to ignition of the He shell
### Setup
We model the mass transfer from a high-entropy He WD (donor) onto a CO WD (accretor) and the ensuing He flash on the accretor using Modules for Experiments in Stellar Astrophysics (MESA version 21.12.1; Paxton et al., 2011, 2013, 2015, 2018, 2019; Jermyn et al., 2023). The initial WD models are constructed in version 15140. Additional He flash models in Section 3.4 are run in version 22.11.1, chosen to test the new time dependent convection capability. Our MESA input and output files are available at Zenodo ([https://doi.org/10.5281/zenodo.7815303](https://doi.org/10.5281/zenodo.7815303)).
We consider He WD donors with initial central entropies, \(s^{i}_{\mathrm{c,He}}/(N_{\mathrm{A}}k_{\mathrm{B}})\), where \(N_{\mathrm{A}}\) is Avogadro's constant and \(k_{\mathrm{B}}\) is the Boltzmann constant, from 2.4 to 4.0 in increments of 0.1, and initial masses, \(M^{i}_{\mathrm{He}}\), from 0.15 to 0.25\(\,\mathrm{M_{\odot}}\) in increments of 0.01. We first evolve a 2.0 (for \(M^{i}_{\mathrm{He}}\leqslant 0.20\,\mathrm{M_{\odot}}\)) or 2.5\(\,\mathrm{M_{\odot}}\) star from pre-main sequence to the formation of a He core of the desired mass (core boundary defined by hydrogen mass fraction \(X=10^{-5}\)), with solar metallicity (\(Z=0.0142\); Asplund et al., 2009) and MESA's mesa_49.net network, which includes neutrons, \({}^{1-2}\)H, \({}^{3-4}\)He, \({}^{7}\)Li, \({}^{7,9-10}\)Be, \({}^{8}\)B, \({}^{12-13}\)C, \({}^{13-15}\)N, \({}^{14-18}\)O, \({}^{17-19}\)F, \({}^{18-22}\)Ne, \({}^{21-24}\)Na, \({}^{23-26}\)Mg, \({}^{25-27}\)Al, \({}^{27-30}\)Si, \({}^{30-31}\)P, \({}^{31-34}\)S and interlinking reactions. Then we strip the envelope off using a fast wind with \(\dot{M}\) between \(10^{-8}\) and \(10^{-6}\,\mathrm{M_{\odot}\,yr^{-1}}\), and cool the He core to the desired central entropy.
We similarly construct CO WDs with initial masses, \(M^{i}_{\mathrm{WD}}\), of 0.9, 1.0 and 1.1\(\,\mathrm{M_{\odot}}\). The 0.9 and 1.0\(\,\mathrm{M_{\odot}}\) models start with zero-age main sequence (ZAMS) masses of 5.5 and 6.3\(\,\mathrm{M_{\odot}}\), and the 1.1\(\,\mathrm{M_{\odot}}\) model is scaled from the 1.0\(\,\mathrm{M_{\odot}}\) model after envelope stripping. The CO cores are cooled to a central temperature of \(T_{\mathrm{c}}=2\times 10^{7}\,\mathrm{K}\). This range of CO masses is motivated by studies (e.g., Sim et al., 2010; Polin et al., 2019; Boos et al., 2021; Shen et al., 2021) that predict double detonations or bare CO detonations with CO cores in this mass range yield spectra and light curve evolution similar to subluminous, normal, and overluminous type Ia supernovae.
We initiate the WDs in a binary and evolve both components and orbital parameters. The initial orbital period is chosen such that the He WD comes into contact within \(10^{5}\) yr and its entropy then is the same as the initial value (see Section 4). We assume fully conservative mass transfer, modeled following Kolb and Ritter (1990), with orbital angular momentum loss driven solely by gravitational waves. For convergence, we set eps_mdot_factor\(=0\) for the donor. This neglects the redistribution of energy due to mass loss as laid out in Paxton et al. (2019), but has no effect on \(\dot{M}\) since it depends on the donor's mass-radius relation (see Section 2.2).
We model both components as nonrotating. This could impact the stability of mass transfer, the mass transfer rate \(\dot{M}\), and dissipation in the He shell of the accretor, all of which can influence the He shell thickness and thus strength of the He flash. However, as we explain here, we find any effects of rotation to be minimal. First, because of the larger radii of our high-entropy donors and the small mass ratio between the WDs, we find that disk accretion occurs for all runs in this study, using equation (6) of Nelemans et al. (2001), and so the mass transfer is stable (Marsh et al., 2004). Second, while a fully synchronized donor will be spun up to \(\approx 30\%\) of critical rotation (e.g., Bauer and Kupfer, 2021), we found that the resulting inflated radius is similar to that of a nonrotating model with slightly higher entropy (\(\Delta s^{i}_{\rm c,He}\approx 0.1\)). The mass transfer rate is slightly lower, but we expect only a small shift in the parameter space for a strong He flash, by \(\Delta s^{i}_{\rm c,He}\approx 0.1\). Third, Neunteufel et al. (2017) found that while angular momentum transport by the Tayler-Spruit dynamo (Spruit, 2002) can lead to near solid-body rotation in the accretor during accretion-induced spin-up, the resulting viscous dissipation can significantly reduce the required He shell mass for ignition. However, Piro (2008) found that viscous heating is unimportant compared to heating due to accretion for \(\dot{M}\approx 10^{-7}\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) that is relevant for our work. Furthermore, we find that the enhanced Tayler-Spruit dynamo proposed by Fuller et al. (2019) reduces viscous dissipation even more considerably. Fourth, rotationally induced mixing between the CO core and He shell may impact ignition conditions. Studies considering hydrodynamic processes find considerable mixing. Yoon et al. (2004) find that mixing tends to stabilize He burning, while Piro (2015) finds that mixing occurs earlier but at a larger depth. In contrast, Neunteufel et al. (2017), who consider the Tayler-Spruit dynamo, find little mixing at the core-shell interface. Finally, tidal dissipation may also reduce the required He shell mass for ignition (see Fuller and Lai, 2012, for the hydrogen, non-accreting counterpart). We defer to future studies for investigating this possibility.
During the binary run, we adopt a nuclear network that includes \({}^{14}\)C, since it participates in the \({}^{14}\)N(\(e^{-},\,\nu)^{14}\)C(\(\alpha,\,\gamma)^{18}\)O (NCO) reaction chain (Hashimoto et al., 1986) at densities above \(1.25\times 10^{6}\,{\rm g}\,{\rm cm}^{-3}\), and may trigger an earlier ignition for thick He shells (Bauer et al., 2017). We calculate the weak reaction rates linking \({}^{14}\)N and \({}^{14}\)C following Schwab et al. (2017), which agree within 50% for \(5.5\leqslant\log_{10}\left(Y_{e}\rho/{\rm g}\,{\rm cm}^{-3}\right)\leqslant 7.0\) where \(Y_{e}=0.5\), and \(7.0\leqslant\log_{10}\left(T/{\rm K}\right)\leqslant 8.3\) with the rates provided by G. Martinez-Pinedo in the MESA custom_rates test suite, but are more finely spaced in \(\log_{10}\rho\) space and so avoids interpolation issues when the rates change by orders of magnitude. The \({}^{14}\)C(\(\alpha,\,\gamma)^{18}\)O rates are from Bauer et al. (2017).
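As a rough plausibility check on this critical density (our estimate, using standard degenerate-electron relations rather than anything from the references): electron captures on \({}^{14}\)N become energetically allowed once the electron Fermi energy exceeds the \(\approx 0.16\) MeV threshold set by the \({}^{14}\)N-\({}^{14}\)C mass difference. Writing \(x=p_{\rm F}/m_{e}c\approx[\rho/(\mu_{e}\,9.7\times 10^{5}\,{\rm g\,cm^{-3}})]^{1/3}\) with \(\mu_{e}=2\),
\[\rho=1.25\times 10^{6}\,{\rm g\,cm^{-3}}\;\Rightarrow\;x\approx 0.86,\qquad E_{\rm F}=m_{e}c^{2}\left(\sqrt{1+x^{2}}-1\right)\approx 0.16\,{\rm MeV},\]
consistent with the quoted threshold density.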
We assume that a phase of H-rich mass transfer has already occurred prior to the He-rich phase modeled in this work (e.g., extremely low-mass WDs are expected to have \(\approx 10^{-3}\,{\rm M}_{\odot}\) of H on their surface; Istrate et al., 2016). The H-rich mass transfer causes H flashes on the accretor, and given the short binary separation, the accretor may expand and fill its own Roche-lobe. We assume that the binary survives the H-novae by ejecting the H shells, as is found likely for a \(0.2+1.0\,{\rm M}_{\odot}\) binary in Shen (2015).
As He-rich mass transfer begins, the temperature in the accretor envelope rises due to compression, forming a temperature inversion, and eventually reaches ignition. We terminate the binary run once a convection zone is formed, and continue evolving the accretor through the He flash until the convection zone reaches the surface. This will be described in more detail in Section 3.
### Mass transfer history
High-entropy He WD donors yield a lower \(\dot{M}\), and, when thermally unstable, a thicker He shell at ignition than a cold He WD donor (Deloye et al., 2007). By high-entropy, we mean the He WD is hotter and less dense, and has a lower degree of degeneracy. For this study, we consider He WDs with central entropy \(s_{\rm c,He}/(N_{\rm A}k_{\rm B})\gtrsim 3.0\) high-entropy, corresponding to a cooling time \(\lesssim 10^{8}\,{\rm yr}\) after their formation (see Section 4). With their larger radii, high-entropy He WDs reach period minimum and peak \(\dot{M}\) at longer \(P_{\rm orb}\), where they have longer gravitational wave timescales, \(\tau_{\rm gr}\equiv J_{\rm orb}/\dot{J}_{\rm gr}\) where \(J_{\rm orb}\) is the orbital angular momentum and \(\dot{J}_{\rm gr}\) is the rate of angular momentum loss due to gravitational wave radiation (Landau and Lifshitz, 1971). The timescale for orbital evolution is given by \(\tau_{\rm gr}\), and \(\dot{M}\approx M_{\rm He}/\tau_{\rm gr}\)(e.g., Bauer and Kupfer, 2021). Therefore, high-entropy He WDs have lower peak \(\dot{M}\), as a consequence of their larger radii. There is a donor entropy so large that the low \(\dot{M}\) leads to accumulation of a He layer that never undergoes a flash.
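As a rough consistency check on these scalings (a sketch of ours, not the paper's calculation; the masses, period, and helper name are illustrative), the point-mass quadrupole formula gives \(\tau_{\rm gr}\) and the implied \(\dot{M}\approx M_{\rm He}/\tau_{\rm gr}\):

```python
# Sketch (ours): gravitational-wave timescale tau_gr = J_orb/Jdot_gr for a
# circular double WD binary, and the implied mass-transfer rate
# Mdot ~ M_He/tau_gr near period minimum (point-mass quadrupole formula,
# Landau & Lifshitz 1971). Illustrative numbers, not the paper's models.
import numpy as np

G, c, Msun, yr = 6.674e-8, 2.998e10, 1.989e33, 3.156e7   # cgs units

def tau_gr(m1, m2, porb):
    """J/|Jdot| = (5/32) c^5 a^4 / [G^3 m1 m2 (m1 + m2)], in seconds."""
    mt = m1 + m2
    a = (G * mt * porb**2 / (4.0 * np.pi**2))**(1.0 / 3.0)  # Kepler's law
    return (5.0 / 32.0) * c**5 * a**4 / (G**3 * m1 * m2 * mt)

m_he, m_co, porb = 0.20 * Msun, 1.00 * Msun, 600.0  # 0.2 + 1.0 Msun, 10 min
t = tau_gr(m_he, m_co, porb)
print(f"tau_gr ~ {t / yr:.1e} yr; Mdot ~ {m_he / t * yr / Msun:.1e} Msun/yr")
# -> tau_gr ~ 3.5e6 yr and Mdot ~ 6e-8 Msun/yr for these inputs,
#    comparable to the peak rates in Figure 1
```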
Figure 1 compares the mass transfer histories for He WD donors with \(M^{i}_{\rm He}=0.15,0.20\,{\rm M}_{\odot}\), \(M^{i}_{\rm WD}=1.0\,{\rm M}_{\odot}\), and a wide range of \(s^{i}_{\rm c,He}\). It illustrates that a higher-entropy He WD has a lower peak \(\dot{M}\) and reaches period minimum at longer \(P_{\rm orb}\). Higher-entropy He WDs also start with a thicker nondegenerate layer on the surface, and so come into contact at longer \(P_{\rm orb}\). Removal of the nondegenerate layer leads to contraction of the radius (Deloye et al., 2007; Kaplan et al., 2012), so \(P_{\rm orb}\) decreases and \(\dot{M}\) gradually increases from \(\lesssim 10^{-8}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\) to \(\approx 10^{-7}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\). This is seen in the high-entropy models in the "turn-on" phase of mass transfer. Moreover, comparison between the top and bottom panels shows that for higher \(M_{\mathrm{He}}^{i}\), the period minimum occurs at shorter \(P_{\mathrm{orb}}\), and peak \(\dot{M}\) is higher, at fixed \(s_{\mathrm{c,He}}^{i}\). This is a consequence of their smaller radii. However, the \(\dot{M}\) evolution is eventually the same regardless of \(M_{\mathrm{He}}^{i}\), for the models with the same \(s_{\mathrm{c,He}}^{i}\) (Deloye et al., 2007; Wong and Bildsten, 2021).
We do not consider initial entropies higher than \(s_{\mathrm{c,He}}^{i}/(N_{\mathrm{A}}k_{\mathrm{B}})=4.0\). He WDs with initial entropies below roughly this value have high enough \(\dot{M}\) that the mass transfer timescale, \(\tau_{\dot{M}}\equiv M_{\mathrm{He}}/\dot{M}\), is shorter than the thermal timescale, \(\tau_{\mathrm{th}}\equiv\int_{0}^{M_{\mathrm{He}}}\left(c_{\mathrm{p}}T \mathrm{d}m\right)/L_{\mathrm{He}}\), leading to adiabatic evolution (Deloye et al., 2007; Wong and Bildsten, 2021). With \(s_{\mathrm{c,He}}^{i}/(N_{\mathrm{A}}k_{\mathrm{B}})\gtrsim 4.0\), \(\dot{M}\) during the "turn-on" phase is so low that \(\tau_{\dot{M}}>\tau_{\mathrm{th}}\). The donor therefore loses entropy until \(s_{\mathrm{c,He}}/(N_{\mathrm{A}}k_{\mathrm{B}})\) decreases down to \(\approx 4.0\), where \(\tau_{\dot{M}}\lesssim\tau_{\mathrm{th}}\) and adiabatic evolution begins. As a result, all He WDs with \(s_{\mathrm{c,He}}^{i}/(N_{\mathrm{A}}k_{\mathrm{B}})\gtrsim 4.0\) eventually resemble a \(s_{\mathrm{c,He}}^{i}/(N_{\mathrm{A}}k_{\mathrm{B}})\approx 4.0\) one in \(\dot{M}\) evolution.
### Properties at Onset of He Flash
Panels (a) and (e) of Figure 2 show properties of the accretor and donor that are determined at He shell ignition, for our fiducial grid of \(M_{\mathrm{WD}}^{i}=1.0\,\mathrm{M}_{\odot}\) models. Similar results are shown in Figure 3 for \(M_{\mathrm{WD}}^{i}=0.9,1.1\,\mathrm{M}_{\odot}\) models.
The total accumulated He shell masses at ignition (panel a) of our models span the range of \(0.01-0.08\,\mathrm{M}_{\odot}\). These cover the range of He shell masses predicted by Bildsten et al. (2007) and Piersanti et al. (2015) for the AM CVn last flash scenario. Three trends can be observed. First, with a higher \(s_{\mathrm{c,He}}^{i}\) at fixed \(M_{\mathrm{He}}^{i}\), a thicker He shell is required for ignition. This is the result of less efficient "compressional heating" due to the lower \(\dot{M}\). The thickest He shells here are ignited with a boost from the NCO reaction chain occurring near the base of the accreted layer (Hashimoto et al., 1986; Bauer et al., 2017). Second, no He flash occurs above a certain \(s_{\mathrm{c,He}}^{i}\) for each \(M_{\mathrm{He}}^{i}\), due to the very low \(\dot{M}\) for these models. These systems will continue to evolve as an AM CVn binary to long orbital periods (e.g., Ramsay et al., 2018; Wong and Bildsten, 2021), possibly explaining why some AM CVn donors have high-entropy (van Roestel et al., 2022). Third, the parameter space for the same range of \(\Delta M\) shifts to higher \(s_{\mathrm{c,He}}^{i}\) as \(M_{\mathrm{He}}^{i}\) increases. This is because \(\Delta M\) largely depends on peak \(\dot{M}\), which is higher as \(M_{\mathrm{He}}^{i}\) increases, and lower as \(s_{\mathrm{c,He}}^{i}\) increases.
While it is unclear whether all these models can develop a He detonation, they do support a steady transverse detonation wave, especially given the inclusion of a large nuclear network and modest enrichment of CNO material (Shen and Moore, 2014). If double-detonation is successful, the ones with \(\Delta M\lesssim 0.03\,\mathrm{M}_{\odot}\) are of interest for spectroscopically normal type Ia supernovae, while models with thicker He shells may resemble abnormal thermonuclear supernovae (Polin et al., 2019; Boos et al., 2021).
Following (if possible) the double-detonation of the accretor and the unbinding of the binary, the donor departs at its pre-explosion orbital velocity, which is shown in panel (e) of Figure 2. The radius of the donor, and hence \(P_{\mathrm{orb}}\) at period minimum, increase with \(s_{\mathrm{c,He}}^{i}\), leading to a decrease of \(v_{\mathrm{orb,He}}^{\mathrm{ign}}\) with \(s_{\mathrm{c,He}}^{i}\). Typical low-entropy (\(s_{\mathrm{c,He}}^{i}/(N_{\mathrm{A}}k_{\mathrm{B}})\lesssim 3.0\)) donors have \(v_{\mathrm{orb,He}}^{\mathrm{ign}}\gtrsim 1100\,\mathrm{km}\;\mathrm{s}^{-1}\), but high-entropy donors may reach \(v_{\mathrm{orb,He}}^{\mathrm{ign}}\approx 1000\,\mathrm{km}\;\mathrm{s}^{-1}\), becoming comparable to the heliocentric velocity of the hypervelocity WD D6-2 (\(1010^{+60}_{-50}\,\mathrm{km}\;\mathrm{s}^{-1}\); Bauer et al., 2021). A lower \(M_{\mathrm{WD}}^{i}\) gives a lower \(v_{\mathrm{orb,He}}^{\mathrm{ign}}\) at fixed \(s_{\mathrm{c,He}}^{i}\) (though the boundary for ignition may change slightly), by \(\approx 40-50\,\mathrm{km}\;\mathrm{s}^{-1}\) for \(M_{\mathrm{WD}}^{i}=0.9\,\mathrm{M}_{\odot}\) (see Figure 3), allowing for a larger parameter space for matching D6-2. Regardless of \(M_{\mathrm{WD}}^{i}\), our models confirm the analysis by Bauer et al. (2021) that D6-2 could be a former He WD donor\({}^{1}\) where a double-detonation may have happened, and our results furthermore suggest that D6-2 must have been high-entropy.
Figure 1.— Mass transfer rate, \(\dot{M}\), as a function of orbital period, \(P_{\mathrm{orb}}\), for He WDs with \(M_{\mathrm{He}}^{i}=0.15\) (top) or \(0.20\,\mathrm{M}_{\odot}\) (bottom) of various \(s_{\mathrm{c,He}}^{i}\) transferring mass onto a \(M_{\mathrm{WD}}^{i}=1.0\,\mathrm{M}_{\odot}\) accretor. Inset indicates the flow of time.
Footnote 1: Alternatively, D6-2 could be a former He star donor (Neunteufel et al., 2021), or former CO WD accretor with a hybrid He/CO donor (Pakmor et al., 2021).
## 3 The He shell flash
Important parameters adopted during the He flash are as follows. First, the accretor's nuclear network is expanded to include neutrons, \({}^{1}\)H, \({}^{4}\)He, \({}^{11}\)B, \({}^{12-14}\)C, \({}^{13-15}\)N, \({}^{14-18}\)O, \({}^{17-19}\)F, \({}^{18-22}\)Ne, \({}^{21-23}\)Na, \({}^{22-26}\)Mg, \({}^{25-27}\)Al, \({}^{27-30}\)Si, \({}^{29-31}\)P, \({}^{31-34}\)S, \({}^{33-35}\)Cl, \({}^{36-39}\)Ar, \({}^{39}\)K, \({}^{40}\)Ca, \({}^{43}\)Sc, \({}^{44}\)Ti, \({}^{47}\)V, \({}^{48}\)Cr, \({}^{51}\)Mn, \({}^{52,56}\)Fe, \({}^{55}\)Co, and \({}^{55,56,58-59}\)Ni. This gives a network that encompasses the 55-isotope network adopted by Townsley et al. (2019) for accurate energy release. In particular, the reaction \({}^{12}\)C(\(p,\gamma\))\({}^{13}\)N(\(\alpha,p\))\({}^{16}\)O yields significant energy release at temperatures above \(10^{9}\) K (Shen & Bildsten, 2009; Shen & Moore, 2014), and plays an important role in some of our models. Second, we adopt the Cox formulation of the mixing length theory (MLT; Cox & Giuli, 1968), and a mixing length parameter \(\alpha_{\rm MLT}=2\). We adopt the Ledoux criterion, and include semiconvective mixing (with an efficiency \(\alpha_{\rm semi}=1\); Langer et al., 1985) and thermohaline mixing (with an efficiency of 1; Brown et al., 2013). Third, we relax the solver tolerances to gold2_tol_residual_norm3 = 1d-6 and gold2_tol_max_residual3 = 1d-3, and upon two consecutive retries, we temporarily set use_dPrad_dm_form_of_T_gradient_eqn = .true.\({}^{2}\), which often aids solver convergence, particularly at the top boundary of the convection zone.
Footnote 2: [https://docs.mesastar.org/en/release-r22.05.1/reference/controls.html#use-dprad-dm-form-of-t-gradient-eqn](https://docs.mesastar.org/en/release-r22.05.1/reference/controls.html#use-dprad-dm-form-of-t-gradient-eqn)
### Fiducial grid
Due to heat transport away from the temperature peak during the He shell's accumulation, ignition occurs above the base of the accreted layer by \(\approx 0.004-0.025\) M\({}_{\odot}\). This is illustrated by panel (b) of Figure 2, which shows the mass exterior to the base of the convection zone (BCZ) at \(\log_{10}\left(T_{\rm bcz}/{\rm K}\right)\gtrsim 8.3\). This is slightly larger than when the convection zone first appears, because initially the inner convective boundary moves inwards until \(\log_{10}\left(T_{\rm bcz}/{\rm K}\right)\gtrsim 8.3\). We do not include mixing beyond the convective boundary via overshooting, nor the convective premixing or predictive mixing schemes for determining the convective boundary (e.g., Paxton et al., 2018, 2019).
Figure 2: Total accumulated He shell mass at ignition, mass exterior to the base of the convection zone (BCZ), minimum \(\tau_{\rm heat}/\tau_{\rm accel}\) where convective velocity is greatest, \(\tau_{\rm heat}/\tau_{\rm dyn}\) at the BCZ, and the orbital velocity of the donor at the start of the He flash, from top to bottom. All runs start with \(M_{\rm WD}=1.0\) M\({}_{\odot}\), and color-coding indicates \(M_{\rm He}^{i}\), from \(0.15\) M\({}_{\odot}\) for the darkest color, to \(0.25\) M\({}_{\odot}\) for the lightest color. For each \(M_{\rm He}^{i}\) set, we indicate the minimum \(s_{\rm c,He}^{i}\) above which no He flash occurs, by the lines at the top of the first panel. In the bottom panel, we show the heliocentric velocity of the hypervelocity WD D6-2 (dash-blue line) with its \(1\sigma\) uncertainty (blue region). Its posterior probability distribution (Bauer et al., 2021) is shown on the side.
Figure 3: Same as Figure 2, but with \(M_{\rm WD}^{i}=0.9,1.1\,{\rm M_{\odot}}\) for the left and right panels respectively.
However, any mixing beyond the convective boundary would move the BCZ further inwards, creating a more explosive outcome. In Section 3.4, we artificially induce ignition at the base of the accreted layer. Furthermore, in Section 3.3 we show the effects of adopting a different accretor \(T_{\rm c}\) and conductive opacity, both of which influence conditions at the BCZ.
Three timescales affect the outcome of the He flash. The first is the dynamical timescale,
\[\tau_{\rm dyn}=\frac{H}{c_{\rm s}}, \tag{1}\]
where \(H\) is the pressure scale height and \(c_{\rm s}\) is the sound speed. The second is the local heating timescale, at which temperature increases due to burning,
\[\tau_{\rm heat}=\frac{c_{\rm p}T}{\epsilon_{\rm nuc}}, \tag{2}\]
where \(c_{\rm p}\) is the specific heat capacity and \(\epsilon_{\rm nuc}\) is the nuclear energy generation rate. This is smaller than the global heating timescale over the convective envelope, \(\tau_{\rm heat,global}=\int(c_{\rm p}T\,{\rm d}m)/\int(\epsilon_{\rm nuc}\,{ \rm d}m)\), which is the time to heat the entire convection zone (Shen & Bildsten, 2009). The third is the convective acceleration timescale, at which convective velocity, \(v_{\rm c}\), varies (Jermyn et al., 2023) and which is given by, in MESA default parameters,
\[\tau_{\rm accel}=\frac{3H}{\sqrt{2c_{\rm p}T\nabla_{\rm a}\left(\nabla-\nabla_ {\rm L}\right)}}, \tag{3}\]
where \(\nabla\) is the temperature gradient, \(\nabla_{\rm a}\) is the adiabatic gradient, and \(\nabla_{\rm L}\) is the Ledoux gradient. In steady state, \(\tau_{\rm accel}\) is \(3/(2\alpha_{\rm MLT})\) times the eddy turnover timescale,
\[\tau_{\rm edd}=\frac{H}{v_{\rm c}}. \tag{4}\]
When \(\tau_{\rm heat}\lesssim\tau_{\rm accel}\), the standard assumption that convection is in steady state becomes dubious, since the temperature rises faster than convection can respond. Instead, convection is expected to freeze out (e.g., Woosley & Kasen, 2011; Jermyn et al., 2023). When time-dependent convection (TDC) is applied, heat is more strongly trapped at the BCZ, leading to strong superadiabaticity and a higher peak \(T_{\rm bcz}\) (see Jermyn et al., 2023). Woosley & Weaver (1994) also argue that convection breaks down when \(\tau_{\rm heat}\lesssim\tau_{\rm edd}\). A subsonic, turbulence-dominated deflagration results and, although not well-studied for He, may transition into a detonation (e.g., Shen et al., 2010).
When \(\tau_{\rm heat}\lesssim\tau_{\rm dyn}\) locally, an overpressure develops over the scale height and a detonation is very likely. Moreover, Shen & Moore (2014) show that, if a large nuclear net and CNO isotopes are included, the detonation may well initiate in a hotspot that is small (required to be at least \(\approx 3\times 10^{6}\) cm for an isobaric hotspot with central temperature \(10^{9}\) K and central density \(10^{5}\) g cm\({}^{-3}\)) compared to the scale height of the convection zone (\(\approx\) few \(10^{7}-10^{8}\) cm). In this case, the local heating timescale should be compared to the sound-crossing time over the dimension of the hotspot, which makes a detonation even more likely (Shen & Moore, 2014). In addition, many He flashes realized in this work are ignited above the base of the accreted layer and mixing beyond the convective boundary can induce a stronger He flash. For these two reasons, we consider \(\tau_{\rm heat}\lesssim 100\,\tau_{\rm dyn}\) of interest, as long as the envelope can sustain a steady transverse detonation wave. We note that a hydrodynamical approach is more appropriate in the limit \(\tau_{\rm heat}\lesssim\tau_{\rm dyn}\), as is done by Woosley & Kasen (2011) in 1D, but we continue to adopt a hydrostatic approach for numerical convenience and to approximate \(\tau_{\rm heat}/\tau_{\rm dyn}\) of our models. Furthermore, when \(\tau_{\rm heat}\approx\tau_{\rm dyn}\), \(\tau_{\rm heat}\) already approaches \(\approx 0.1\) times \(\pi r_{\rm bcz}/c_{\rm s}\), i.e., the time for sound waves to communicate over a shell of radius \(r_{\rm bcz}\). In other words, multiple points at the BCZ may initiate an ignition (e.g., Woosley & Kasen, 2011). Future three-dimensional simulations similar to Zingale et al. (2013), Jacobs et al. (2016), and Glasner et al. (2018) may further inform the exact conditions of the initiation of a detonation.
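To make the competition among these timescales concrete, the following order-of-magnitude sketch (ours; the classic unscreened triple-alpha fitting formula and an ideal-gas \(c_{\rm p}\) stand in for the detailed MESA microphysics, and all inputs are assumptions) evaluates \(\tau_{\rm heat}/\tau_{\rm dyn}\) for a toy He layer:

```python
# Order-of-magnitude sketch (ours, not the paper's models): compare the
# local heating time tau_heat = c_p*T/eps_nuc with tau_dyn = H/c_s for a
# hot He layer. The unscreened triple-alpha fitting formula (Kippenhahn &
# Weigert) and an ideal-gas c_p are rough placeholders for MESA physics.
import numpy as np

rho, Y = 1e5, 1.0           # density [g cm^-3], He mass fraction (assumed)
H, cs = 1e7, 1e8            # scale height [cm], sound speed [cm/s] (assumed)
kB, mu, m_u = 1.381e-16, 4.0 / 3.0, 1.661e-24

def eps_3alpha(T):
    """Triple-alpha energy generation rate [erg g^-1 s^-1] (classic fit)."""
    T8 = T / 1e8
    return 5.09e11 * rho**2 * Y**3 * T8**-3 * np.exp(-44.027 / T8)

tau_dyn = H / cs
cp = 2.5 * kB / (mu * m_u)  # ideal-gas placeholder for fully ionized He
for T in (3e8, 5e8, 7e8, 1e9):
    tau_heat = cp * T / eps_3alpha(T)
    print(f"T = {T:.0e} K: tau_heat/tau_dyn ~ {tau_heat / tau_dyn:.1e}")
```

The steep temperature sensitivity is the point: as \(T_{\rm bcz}\) climbs toward \(10^{9}\) K at these densities, \(\tau_{\rm heat}\) plunges by orders of magnitude toward \(\tau_{\rm dyn}\).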
The minimum values of the ratios \(\tau_{\rm heat}/\tau_{\rm accel}\) and \(\tau_{\rm heat}/\tau_{\rm dyn}\) are compared in panels (c) and (d) of Figure 2. A thicker He shell leads to a higher \(P_{\rm bcz}\) for hydrostatic balance, and a higher peak \(T_{\rm bcz}\). These both lead to stronger nuclear burning, and hence lower \(\tau_{\rm heat}\). Therefore, \(\tau_{\rm heat}/\tau_{\rm accel}\) and \(\tau_{\rm heat}/\tau_{\rm dyn}\) both decrease with \(\Delta M\), and hence with \(s^{i}_{\rm c,He}\). Some of our models, with total accumulated He masses of \(\gtrsim 0.03\) M\({}_{\odot}\), can reach \(\tau_{\rm heat}/\tau_{\rm dyn}\lesssim 10\). This suggests that high-entropy He WDs are a viable channel for He detonations and related transients.
### Different accretor mass
In Figure 3, we show the results for an initially \(M^{i}_{\rm WD}=0.9,1.1\) M\({}_{\odot}\) accretor. Both show a similar range of total accumulated He shell mass at ignition as the fiducial \(1.0\) M\({}_{\odot}\) grid. As \(M^{i}_{\rm WD}\) increases, so do the surface gravity and the density at the base of the He shell. As a result, for the same \(M^{i}_{\rm He}\) and \(s^{i}_{\rm c,He}\), the total accumulated He shell mass at ignition and the minimum ratios between \(\tau_{\rm heat}\) and \(\tau_{\rm dyn}\), and between \(\tau_{\rm heat}\) and \(\tau_{\rm accel}\), decrease
with \(M_{\rm WD}^{i}\). In other words, for a given \(M_{\rm He}^{i}\), in order to achieve a dynamical He flash, a higher \(M_{\rm WD}^{i}\) requires a slightly lower \(s_{\rm c,He}^{i}\). Also due to the increasing density at the He shell base with \(M_{\rm WD}^{i}\), none of the \(0.9\,{\rm M}_{\odot}\) models show ignition boosted by the NCO reaction chain, but the highest-entropy models with \(M_{\rm WD}^{i}=1.0,1.1\,{\rm M}_{\odot}\) do, as the density at the He base reaches the critical density \(1.25\times 10^{6}\,{\rm g\,cm^{-3}}\) (Bauer et al., 2017).
### Other variables
In this work we include the correction by Blouin et al. (2020) to the electron conductive opacities, which results in a lower opacity in the He envelope of the accretor. Due to the faster transport of heat away from the envelope, a higher He shell mass is required for ignition. However, the Blouin et al. (2020) correction, while accurate at moderate degeneracy, may not be appropriate at strong degeneracy (Cassisi et al., 2021). We re-ran our \(M_{\rm He}^{i}=0.15\,{\rm M}_{\odot}\), \(s_{\rm c,He}^{i}/(N_{\rm A}k_{\rm B})=3.1\) binary applying the damping factors proposed by Cassisi et al. (2021) to the accretor. With a 'weak (strong) damping', the total accumulated He mass at ignition is reduced from \(0.034\,{\rm M}_{\odot}\) to \(0.032(0.027)\,{\rm M}_{\odot}\), while the mass enclosed by the BCZ is increased from \(1.008\,{\rm M}_{\odot}\) to \(1.012(1.012)\,{\rm M}_{\odot}\). Both result in a slightly weaker He flash, with the minimum \(\tau_{\rm heat}/\tau_{\rm dyn}\) increased from \(2.2\) to \(6.5(23.8)\). Nevertheless, the uncertainty in the electron conductive opacities plays a minor role in our simulations, compared to, e.g., the mass transfer history.
While we fix the initial central temperature of the accretor to \(T_{\rm c}=2\times 10^{7}\,{\rm K}\), we test the effects of different choices of \(T_{\rm c}\) by re-running the \(M_{\rm He}^{i}=0.15\,{\rm M}_{\odot}\), \(s_{\rm c,He}^{i}/(N_{\rm A}k_{\rm B})=3.0\) binary. The fiducial run has a total accumulated He mass of \(0.024\,{\rm M}_{\odot}\) and a mass enclosed by the BCZ of \(1.008\,{\rm M}_{\odot}\), which are changed to \(0.027,0.022,0.019\,{\rm M}_{\odot}\) and \(1.010,1.005,1.002\,{\rm M}_{\odot}\) for \(\log_{10}(T_{\rm c}/{\rm K})=7.0,7.5,7.7\) respectively. In other words, a hotter initial accretor results in a lower accumulated He mass at ignition and ignition closer to the center (e.g., Woosley and Kasen, 2011). These opposing effects largely cancel and result in a similar minimum \(\tau_{\rm heat}/\tau_{\rm dyn}\), with \(17.9\) for the fiducial and \(15.5\), \(19.8\), and \(22.3\) for \(\log_{10}(T_{\rm c}/{\rm K})=7.0,7.5,7.7\) respectively.
### Envelope mass condition for dynamical flash
Our models differ from those in Shen and Bildsten (2009) because ours allow superadiabaticity in the convective zone and use a large nuclear net. We now explore the impacts of these two choices on the envelope mass required for a dynamical flash.
We construct He flash models with different combinations of core mass \(M_{\rm core}\) and envelope mass \(M_{\rm env}\) as follows. We first scale the \(1.0\,{\rm M}_{\odot}\) CO WD (\(X(^{16}{\rm O})\approx 0.61,X(^{12}{\rm C})\approx 0.37,X(^{22}{\rm Ne}) \approx 0.02\)) to the desired \(M_{\rm core}\). Then we accrete material similar in composition to the \(0.15\,{\rm M}_{\odot}\) He WD (\(X(^{4}{\rm He})\approx 0.986\), \(X(^{14}{\rm N})\approx 0.0088\), where the progenitor star has \(Z=0.0142\)), onto the CO WD until \(M_{\rm env}\) is reached, at \(\dot{M}=10^{-8}\,{\rm M}_{\odot}\,{\rm yr}^{-1}\) so that the He does not ignite. The CO WD is allowed to cool until \(T_{\rm c}=2\times 10^{7}\,{\rm K}\). Finally, the envelope is artificially heated at its base until a convection zone appears. In other words, unlike in the binary evolution, the BCZ is located at the base of the accreted material.
We run grids of models with 3 different treatments of convection: (1) with Cox MLT allowing superadiabaticity, which is the same as in Section 3.1; (2) forcing an adiabatic profile in the convection zone; and (3) forcing an adiabatic profile, accounting for only the triple-alpha reaction and not allowing compositional changes so as to simulate a nearly pure He envelope, as is assumed in Shen and Bildsten (2009). Adiabatic convection is enforced by the MLT++ capability (Paxton et al., 2013), via the MESA controls:
okay_to_reduce_gradT_excess = .true.
gradT_excess_lambda1 = -1
gradT_excess_max_logT = 12,
which, together, ensure that superadiabaticity in the convection zone is fully reduced.
Figure 4: He envelope mass required such that \(\tau_{\rm heat}/\tau_{\rm dyn}\) at the BCZ reaches 1 (solid lines), 10 (dashed lines), and 100 (dot-dashed; only for Cox MLT) for a given mass enclosed by the BCZ. Dark blue, orange and light blue lines correspond to Cox MLT, the adiabatic profile, and the adiabatic profile with only triple-alpha burning.

From the grids of models, we interpolate in \(M_{\rm core}\) and \(M_{\rm env}\) to find where \(\tau_{\rm heat}=\tau_{\rm dyn}\), \(\tau_{\rm heat}=10\,\tau_{\rm dyn}\), and \(\tau_{\rm heat}=100\,\tau_{\rm dyn}\) (for Cox MLT only). These are shown in Figure 4. For a given \(M_{\rm core}\), the adiabatic profile with only triple-alpha burning requires the thickest \(M_{\rm env}\), followed in order by the adiabatic profile and Cox MLT. The difference between the first two reflects the importance of including a large nuclear net, in particular the reaction \({}^{12}\)C(\(p,\,\gamma\))\({}^{13}\)N(\(\alpha,p\))\({}^{16}\)O (Shen and Bildsten, 2009; Shen and Moore, 2014), though this is slightly metallicity-dependent since the protons are produced from reactions like \({}^{14}\)N(\(\alpha,\gamma\))\({}^{18}\)F(\(\alpha,p\))\({}^{21}\)Ne (Shen and Bildsten, 2009). However, we varied the metallicity of the \(M_{\rm core}=1.0\) M\({}_{\odot}\), \(\log_{10}(M_{\rm env}/\)M\({}_{\odot})=-1.5\) model, and found that the minimum \(\tau_{\rm heat}/\tau_{\rm dyn}\) changes from 1.45 at \(0.1\,Z_{\odot}\), to 0.86 at \(Z_{\odot}\) (our fiducial), and to 0.77 at \(2\,Z_{\odot}\), so the uncertainty resulting from varying metallicity plays a small role. The difference between the adiabatic profile and Cox MLT arises because, for a given \(M_{\rm core}\) and \(M_{\rm env}\), superadiabaticity as allowed by Cox MLT results in a higher peak \(T_{\rm bcz}\), which in turn reduces \(\tau_{\rm heat}\). Finally, we further run a grid of models with TDC (again allowing superadiabaticity), which agrees well with Cox MLT for less dynamical flashes. With more dynamical flashes, because the inequality \(\tau_{\rm heat}\lesssim\tau_{\rm accel}\) strengthens, TDC exhibits stronger heat-trapping at the BCZ (see Section 3.6 of Jermyn et al., 2023, for more details). This results in a larger superadiabaticity and a larger peak \(T_{\rm bcz}\). However, the reduction in \(M_{\rm env}\) required for \(\tau_{\rm heat}=\tau_{\rm dyn}\) is \(\lesssim 5\%\).
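The interpolation step can be sketched in a few lines; the grid values below are a synthetic, monotone placeholder standing in for the minima of \(\tau_{\rm heat}/\tau_{\rm dyn}\) extracted from the model grids, so only the procedure, not the numbers, reflects our models.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator
from scipy.optimize import brentq

m_core = np.linspace(0.8, 1.1, 4)        # M_sun
log_m_env = np.linspace(-2.0, -1.0, 5)   # log10(M_env / M_sun)
# Placeholder trend: thicker envelopes and heavier cores -> smaller tau_heat/tau_dyn.
MC, LME = np.meshgrid(m_core, log_m_env, indexing="ij")
log_ratio = -4.0 * (LME + 1.5) - 3.0 * (MC - 0.95)

interp = RegularGridInterpolator((m_core, log_m_env), log_ratio)

def critical_env_mass(mc, target_ratio=1.0):
    """Envelope mass (M_sun) where min(tau_heat/tau_dyn) crosses target_ratio."""
    f = lambda lme: interp([[mc, lme]])[0] - np.log10(target_ratio)
    return 10.0 ** brentq(f, log_m_env[0], log_m_env[-1])

print(critical_env_mass(1.0))  # ~0.03 with this placeholder trend
```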
In agreement with Woosley and Kasen (2011), we find that the \({}^{12}\)C(\(p,\,\gamma\))\({}^{13}\)N(\(\alpha,p\))\({}^{16}\)O reaction reduces the minimum \(M_{\rm env}\) required for a dynamical He flash. Our Cox MLT, \(\tau_{\rm heat}=\tau_{\rm dyn}\) line agrees well with their "hot" line (see their Figure 19) at \(M_{\rm core}\approx 0.9\) M\({}_{\odot}\), but is slightly lower by \(\approx 30\%\) at \(\approx 1.1\) M\({}_{\odot}\). The reason could be that they define \(\tau_{\rm heat}\) as the time to run away to \(1.2\times 10^{9}\) K and they have a sparser grid of models.
## 4 How to Make High-Entropy He WDs?
Now the question is, how does one obtain He WDs that are high-entropy at contact? We address this first assuming that the He WD is formed through a CE event.
First, the He WD has to be high-entropy at formation. Figure 5 shows the evolution on the HR diagram of stars of \(M=1.0-2.5\) M\({}_{\odot}\) (without mass loss), from the start of core H burning through the RGB. Contours label \(s_{c}\) and \(M_{\rm He}\) (where the He core boundary is defined by H mass fraction \(X=0.1\) in this section) in the left and right panels respectively. As the He core mass grows while the star crosses the Hertzsprung gap (HG) and ascends the RGB, \(s_{c}\) drops. With higher MS progenitor mass, a given \(M_{\rm He}\) is formed earlier with higher \(s_{c}\). As the CE event happens on short timescales, \(s_{c}\) of the post-CE He WD is the same as that of the pre-CE He core. Thus, if a CE event occurs early in the post-MS evolution (near the base of the RGB)3, corresponding to an orbital period of \(\approx 1-8\) d for a \(1.0\) M\({}_{\odot}\) companion, then a high-entropy He WD can be obtained at formation.
Footnote 3: We assume a CE event will happen since the RGB star has a deep convective envelope (\(q_{\rm conv}\gtrsim 0.5\)). For more realistic conditions, see, for example, Temmink et al. (2023).
Second, this high-entropy He WD should not cool before coming into contact. Figure 6 shows the cooling evolution of He WDs of various masses, in central temperature and density space. Comparison between the lines of constant \(s_{c}\) and those of constant cooling age shows that the high entropies required by our scenario, \(s_{c}/(N_{\rm A}k_{\rm B})=3.0-4.0\), imply that the He WDs can only cool for \(\lesssim 10^{8}\) yr between their formation and the onset of He mass transfer.
Third, in order to have cooled for little time before contact, the binary has to be formed at short orbital periods. For example, the gravitational-wave-induced merger timescale, which can be taken as the time before the binary comes into contact again, is \(\approx 10^{8}\) yr for a \(0.2\) M\({}_{\odot}+1.0\) M\({}_{\odot}\) binary with an orbital period of \(\approx 1\) hr.
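This estimate can be reproduced with the standard circular-orbit merger time of Peters (1964) combined with Kepler's third law; a minimal sketch in SI units (constants rounded), which gives \(\approx 5\times 10^{7}\) yr for the quoted system, of the same order as the \(\approx 10^{8}\) yr above:

```python
import numpy as np

G, c = 6.674e-11, 2.998e8        # SI
M_sun, yr = 1.989e30, 3.156e7

def t_merge_gw(m1_msun, m2_msun, P_orb_s):
    """Peters (1964) merger time for a circular binary, in years."""
    m1, m2 = m1_msun * M_sun, m2_msun * M_sun
    a = (G * (m1 + m2) * P_orb_s**2 / (4 * np.pi**2)) ** (1.0 / 3.0)  # Kepler III
    return 5 * c**5 * a**4 / (256 * G**3 * m1 * m2 * (m1 + m2)) / yr

print(f"{t_merge_gw(0.2, 1.0, 3600):.2e} yr")   # ~5e7 yr
```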
Such short post-CE periods are favored by recent findings that the CE efficiency is low (e.g., Zorotovic et al., 2010; Scherbak and Fuller, 2023), as well as suggestions that ELM WDs are formed at short orbital periods (Brown et al., 2016). Along each evolutionary track in Figure 5, we assume that the He WD progenitor (star 1) fills its Roche lobe and undergoes a CE event, during which the companion (star 2) remains at the same mass. Assuming that the change in orbital energy during the CE event is used to eject the CE with an efficiency of \(\alpha\approx 1/3\)(e.g., Scherbak and Fuller, 2023), we solve the CE energy equation for the post-CE binary separation, \(a_{f}\),
\[E_{\rm bind}=\alpha\left(\frac{GM_{1}M_{2}}{2a_{i}}-\frac{GM_{\rm He}M_{2}}{2a_ {f}}\right), \tag{5}\]
where \(E_{\rm bind}\) is the binding energy of the envelope obtained from MESA accounting for recombination energy, and the pre-CE binary separation, \(a_{i}\), is given by the RLOF condition for star 1 (Eggleton, 1983)
\[a_{i}=R_{1}\frac{0.6q_{1}^{2/3}+\ln\left(1+q_{1}^{1/3}\right)}{0.49q_{1}^{2/3}}, \tag{6}\]
where \(q_{1}=M_{1}/M_{2}\).
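The post-CE separation then follows from a one-line rearrangement of Eq. (5); a sketch in SI units, where a negative \(E_{\rm bind}\) (as returned by MESA for a bound envelope) is our assumed sign convention:

```python
import numpy as np

G = 6.674e-11  # SI

def pre_ce_separation(R1, q1):
    """Pre-CE separation a_i from the Eggleton (1983) RLOF condition, Eq. (6)."""
    return R1 * (0.6 * q1**(2.0/3.0) + np.log(1.0 + q1**(1.0/3.0))) / (0.49 * q1**(2.0/3.0))

def post_ce_separation(E_bind, M1, M2, M_He, a_i, alpha=1.0/3.0):
    """Post-CE separation a_f obtained by solving Eq. (5) for a_f."""
    return G * M_He * M2 / 2.0 / (G * M1 * M2 / (2.0 * a_i) - E_bind / alpha)
```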
A low CE efficiency, e.g., \(\alpha=1/3\), favors the formation of He WD binaries at ultrashort periods \(P_{\rm orb}\ll 1\) hr, which have very short gravitational wave merger timescales.
For systems with \(M_{\rm He}\gtrsim 0.25\,\rm M_{\odot}\) and \(M\lesssim 1.5\,\rm M_{\odot}\), there is sufficient time for the newly formed He WD to cool, but they do not lead to dynamical flashes. In contrast, systems that do lead to dynamical flashes, marked as the region of interest in Figure 5, will remain at the same entropy. However, the post-CE binary may be so compact that the newly-formed He WD immediately fills its Roche lobe (e.g., Deloye et al., 2007). We remain agnostic as to whether this leads to a merger or whether a transition to stable mass transfer is possible. If the latter does occur, our work suggests that a CE event occurring at the base of the RGB favors the formation of a high-entropy He WD donor amenable to a dynamical He flash from later mass transfer.
In the stable mass transfer channel for ELM WD formation, a pre-ELM WD with mass \(\gtrsim 0.15\,\rm M_{\odot}\) becomes detached at \(P_{\rm orb}\gtrsim 5\,\rm hr\) (e.g., Sun & Arras, 2018). Although these models have thick H shells and undergo stable H burning which maintains a warm He core (Kaplan et al., 2012), the He WD will still have cooled for too long and become low-entropy before coming into contact again. This is shown by the grey dots in Figure 6, where the models are made following Sun & Arras (2018). Given this, we suggest that a high-entropy He WD has to descend from the CE channel with a short post-CE orbital period \(P_{\rm orb}\lesssim 1\,\rm hr\). However, we note that an appreciable number of observed ELM WD binaries with \(M_{\rm He}\lesssim 0.2\,\rm M_{\odot}\) have shorter \(P_{\rm orb}\) than predicted by the stable mass transfer channel (Li et al., 2019), so it is possible that the stable mass transfer channel still produces high-entropy He WDs of interest to this work.
## 5 Conclusion
We have shown that mass transfer from a high-entropy He WD onto a massive CO WD can lead to a strong, first He flash. With higher donor entropy, the peak \(\dot{M}\) decreases, leading to a larger total accumulated He shell mass at ignition (see Section 2). For an initially \(1.0\,\rm M_{\odot}\) accretor, the explored range of total accumulated He shell mass at ignition spans from 0.01 to \(0.08\,\rm M_{\odot}\). By including CNO isotopes and a large nuclear network accounting for the reaction chain \({}^{12}\)C\((p,\,\gamma)^{13}\)N\((\alpha,p)^{16}\)O (Shen & Moore, 2014), and by allowing superadiabaticity in the convection zone, we show that the resulting He flash can become dynamical (see Section 3).
Pending a successful double detonation, this scenario can explain some SNe Ia. For some thin-shell (\(\lesssim 0.03\,\rm M_{\odot}\)) models in our simulations, the resulting transient may be a spectroscopically normal SN Ia, while thick-shell models may produce peculiar SNe Ia (e.g., Polin et al., 2019; Townsley et al., 2019; Boos et al., 2021; Shen et al., 2021). Furthermore, our results provide a good match to the velocity of the hypervelocity WD D6-2 (Shen et al., 2018; Bauer et al., 2021), and suggest that D6-2 must have been high-entropy. We plan to study the impact of SN ejecta on high-entropy He WD donors like D6-2 in the near future, similarly to Bauer et al. (2019).

Figure 5: Evolution of \(1.0-2.5\,\rm M_{\odot}\) stars up the RGB, with contours labeling central specific entropy (left) and He core mass (right). The two blue lines bracket the parameter space for which a dynamical He flash may happen once mass transfer occurs after the CE event. The convective envelope occupies half of the star's total mass to the right of the dot-dashed line. The three dotted lines label the orbital periods of 2, 3, 5 days (left), and 1.7, 3, 7 days (right) where the star fills its Roche lobe with a \(1.0\,\rm M_{\odot}\) companion.
For non-dynamical He flashes that cause expansion of the accretor to Roche-lobe overflow, Shen (2015) has raised the possibility of a subsequent merger. Should this be the case, the only surviving systems that continue to evolve to longer periods as AM CVn binaries (e.g. van Roestel et al., 2022) are those with initial entropies so large that no flash ever occurs.
In Section 4, we argue that these high-entropy He WDs that have a dynamical flash must result from a CE event. The unstable mass transfer begins near the base of the RGB when the donor progenitor is still high-entropy, and ends with short orbital periods such that the newly formed He WD cannot cool before coming into contact again. Main sequence stars with masses \(\approx 1.3-2.0\,\mathrm{M}_{\odot}\), an unseen companion with mass \(\approx 1.0\,\mathrm{M}_{\odot}\), and orbital periods between 1 and 8 days are good candidates for producing the high-entropy He WDs of interest here. The short lifetimes (\(1.1-3.5\) Gyr) of these main sequence stars may explain SNe Ia in a younger population.
We thank the referee for their constructive suggestions that have greatly improved our manuscript. We thank Evan Bauer, Adam Jermyn, Jared Goldberg, and Will Schultz for helpful conversations about running MESA and Evan Bauer in addition for sharing data of D6-2. We are grateful to Abigail Polin for helpful conversations about He detonation and thermonuclear supernovae, and Ken Shen in addition for helpful comments on an earlier draft. We thank Meng Sun for sharing her MESA inlists for modeling ELM WD formation. This work was supported, in part, by the National Science Foundation through grant PHY-1748958, and by the Gordon and Betty Moore Foundation through grant GBMF5076. Use was made of computational facilities purchased with funds from the National Science Foundation (CNS-1725797) and administered by the Center for Scientific Computing (CSC). The CSC is supported by the California NanoSystems Institute and the Materials Research Science and Engineering Center (MRSEC; NSF DMR 1720256) at UC Santa Barbara. Software: MESA (v15140, v21.12.1, v22.05.1; Paxton et al., 2011, 2013, 2015, 2018, 2019; Jermyn et al., 2023), py_mesa_reader (Wolf & Schwab, 2017), ipython/jupyter (Perez & Granger, 2007; Kluyver et al., 2016), matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), Astropy (Astropy Collaboration et al., 2013, 2018), and Python from python.org.
|
2310.01645
|
Dividing active and passive particles in nonuniform nutrient
environments
|
To explore the coupling between a growing population of microorganisms such
as E. coli and a nonuniform nutrient distribution, we formulate a minimalistic
model. It consists of active Brownian particles that divide and grow at a
nutrient-dependent rate following the Monod equation. The nutrient
concentration obeys a diffusion equation with a consumption term and a point
source. In this setting the heterogeneity in the nutrient distribution can be
tuned by the diffusion coefficient.
In particle-based simulations, we demonstrate that passive and weakly active
particles form proliferation-induced clusters when the nutrient is localized,
without relying on further mechanisms such as chemotaxis or adhesion. In
contrast, strongly active particles disperse in the whole system during their
lifetime and no clustering is present. The steady population is unaffected by
activity or nonuniform nutrient distribution and only determined by the ratio
of nutrient influx and bacterial death. However, the transient dynamics
strongly depends on the nutrient distribution and activity. Passive particles
in almost uniform nutrient profiles display a strong population overshoot, with
clusters forming all over the system. In contrast, when slowly diffusing
nutrients remain centred around the source, the bacterial population quickly
approaches the steady state due to its strong coupling to the nutrient.
Conversely, the population overshoot of highly active particles becomes
stronger when the nutrient localisation increases. We successfully map the
transient population dynamics onto a uniform model where the effect of the
nonuniform nutrient and bacterial distributions are rationalized by two
effective areas.
|
Till Welker, Holger Stark
|
2023-10-02T21:18:07Z
|
http://arxiv.org/abs/2310.01645v2
|
# Dividing active particles in nonuniform nutrient environments
###### Abstract
To explore the coupling between a growing population of motile microorganisms such as _E. coli_ and a nonuniform nutrient distribution, we formulate a minimalistic model. It consists of active Brownian particles that divide and grow at a nutrient-dependent rate following the Monod equation. The nutrient concentration obeys a diffusion equation with a consumption term and a point source. In this setting the heterogeneity in the nutrient distribution can be tuned by the diffusion coefficient. In particle-based simulations we demonstrate that particles form proliferation-induced clusters when the nutrient is localized, without relying on further mechanisms such as chemotaxis or adhesion. The steady population is unaffected by the nonuniform nutrient distribution but purely determined by the ratio of nutrient influx and bacterial death. In contrast, the transient dynamics strongly depends on the nutrient distribution. Small populations and almost uniform distributions display a strong population overshoot, with clusters forming all over the system. In contrast, when nutrients are centred around the source for a small diffusion constant, the bacterial population quickly approaches the steady state due to its strong coupling to the nutrient. We successfully map the transient dynamics onto a uniform model where the effect of the nonuniform nutrient distribution is rationalized by an effective system size.
_Keywords_: Active Particles, Proliferation, Population Dynamics
## 1 Introduction
Living microorganisms show two distinct types of activity: motility [1, 2, 3] and proliferation [4, 5, 6]. On the one hand, motile microorganisms locally consume chemical energy, use it to self-propel, and dissipate it through friction with the surrounding fluid. This flux of energy drives the system out of equilibrium. As a consequence, particles with purely repulsive interactions can show motility-induced phase separation [7, 8, 9]. Combining motility with mechanical, chemical, hydrodynamic, or social interactions [10] results in a rich palette of collective phenomena, from turbulent behaviour [11, 12], through dynamic clustering [13, 14], fluid pumps [15, 16] and convective rolls [17], to flocking dynamics [18, 19, 20, 21], swirls [22, 23, 24, 25], and fluid clusters caused by non-reciprocal torques [26]. The tendency of chemotactic bacteria to accumulate in regions of high attractant concentration has been known since the 19th century [27] and studied ever since, with huge interest in self-generated gradients [28, 29, 30, 31].
On the other hand, proliferation represents a different form of activity. Through cell division and growth, individuals inject biomass into the system, thereby breaking the conservation of mass [4]. Mechanical forces due to cell-cell contacts strongly impact the growth process and can be modelled using pressure-dependent growth rates [32, 6, 33] or a cell cycle impacted by pressure [34, 35]. Cells more resilient to pressure tend to have a competitive advantage in crowded environments [34, 36]. The _E. coli_ bacterium, which we have in mind in our model, tends to be less affected by mechanical pressure compared to a variety of epithelial cells [34], so we neglect pressure in this study. However, it should be noted that mechanical self-regulation slows down the elongation rate [37] and in narrow long channels the mechanical forces become the main limiting factor of cell growth [38]. Furthermore, for elongated cells the competition between active and passive forces results in a mosaic of locally ordered patches [39]. Finally, proliferation and cell death can also liquidise tissue leading to cell diffusion [6] and the transport of cells is additionally amplified during the transition of the colony from two to three dimensions [40].
The interplay between motility and proliferation has already shown interesting dynamics and is crucial in describing and interpreting experimental observations. The combination of density-dependent velocity and growth results in clusters of characteristic size and transient ring patterns [41], a behaviour previously linked to chemotaxis [42, 43, 44]. Furthermore, a system of reaction-diffusion equations modelling growing bacterial colonies in the presence of nutrient [45] demonstrates that motility is crucial to describe the rich variety of spatial patterns observed in experiments [45, 46, 47], whereas immobilizing bacteria reduces this variety of possible colony patterns [46].
As soon as cell number is no longer conserved, another aspect becomes relevant - population dynamics. Classic approaches to model it range in complexity from exponential growth for abundant resources, through density-dependent growth rates [5] and coupled differential equations for population-nutrient [48] or predator-prey dynamics [49], to models taking into account stochastic fluctuations [50] or spatial structure [51]. The latter has a significant impact on the population dynamics of fish populations [52], host-parasitoid systems [53], and on the spread of infectious diseases [54]. A recent study combined motility and proliferation to model the transition from initial exponential to subexponential growth [55].
In this article, we study the coupled population dynamics of active agents and their emergent collective patterns in a non-uniform nutrient profile using active Brownian particles. Hydrodynamic interactions due to shape-dependent flow fields [56, 57] are neglected. To explore purely proliferation-induced effects, we ignore chemotaxis and its influence on dispersal [58] and clustering [27]. For bacteria, this can be
realized by genetically suppressing tumbling [59]. Our particles or cells divide with a nutrient-dependent rate, while divisions are assumed to be quasi-instantaneous. We consider a constant nutrient influx from a point source. The time scales of motility and proliferation are strongly separated for microorganisms such as _E. coli_, which also requires large system sizes to explore the non-uniform population and nutrient distributions. Therefore, to keep the computational effort manageable, we squeeze the time scales together. A possible experimental realization is a highly viscous fluid which reduces cell motility [60].
Within such a minimalistic model, the steady population of the model bacteria only depends on the ratio between nutrient supply and bacterial death, as expected from a uniform model. In contrast, the nutrient amount is strongly governed by its nonuniform distribution around the source, which is tunable via the nutrient diffusion constant. In turn, the nutrient distribution regulates the spatial extent of the bacterial cluster and thereby provides a clustering mechanism that does not rely on chemotaxis or adhesion. The transient growth towards the steady state is also heavily influenced by the nutrient diffusion constant. We show that it can be mapped on the dynamics of a uniform model using an effective system size.
The article is structured as follows. In Sec. 2 we introduce our model. We investigate its steady state in Sec. 3.1 and the transient growth in Sec. 3.2. In the conclusions of Sec. 4 we summarize our results and present an outlook.
## 2 Model
In this section, we formulate a particle-based model for dividing bacteria consuming a nutrient, choose feasible bio-inspired parameters, and introduce a model that describes a uniform system as a reference.
### Particle-based model
The setup of the two-dimensional system is sketched in Fig. 1 (left). The bacteria are described as overdamped active Brownian particles experiencing thermal noise on their orientation:
\[\dot{\mathbf{r}}_{i} = v\mathbf{u}_{i}+\mu\sum_{j\neq i}\mathbf{F}_{ij}\,, \tag{1}\] \[\dot{\varphi}_{i} = \sqrt{2D_{R}}\xi_{i}\,, \tag{2}\]
with orientation vector \(\mathbf{u}_{i}=(\cos\varphi_{i},\sin\varphi_{i})\), propulsion velocity \(v\), translational and rotational mobilities \(\mu\) and \(\mu_{R}\), and rotational diffusion coefficient \(D_{R}=\mu_{R}k_{B}T\). For a spherical particle, the relationship of Stokes friction coefficients yields \(\mu/\mu_{R}=\sigma^{2}/3\). The Gaussian noise \(\xi_{i}\) is delta-correlated with zero mean and a variance of one. The volume exclusion force \(\mathbf{F}_{ij}=\mathbf{F}_{\sigma}(\mathbf{r}_{i}-\mathbf{r}_{j})\) resembles a purely repulsive spring with stiffness \(k\):
\[\mathbf{F}_{\sigma}(\mathbf{r})=k\theta(\sigma-|\mathbf{r}|)(\sigma-|\mathbf{r}|)\hat{\mathbf{r}}\,. \tag{3}\]
The Heaviside step function \(\theta(r)\) cuts off the interaction at the particle diameter \(\sigma\). Particles are contained in the system by a reflective boundary condition, which avoids accumulation at the wall, and by an additional repulsive spring interaction between particles and the wall, which prevents particles from being pushed out of the system area by other particles.
We introduce a two-dimensional nutrient concentration field \(s(\mathbf{r},t)\). A nutrient is pumped into the system at rate \(R_{0}\) at location \(\mathbf{r}_{s}\), diffuses with diffusion coefficient \(D_{N}\), and is eaten up by the bacteria:
\[\frac{\partial s(\mathbf{r},t)}{\partial t}=D_{N}\nabla^{2}s(\mathbf{r},t)+R_ {0}\cdot\delta(\mathbf{r}-\mathbf{r}_{s})-\sum_{i=1}^{N}\delta(\mathbf{r}- \mathbf{r}_{i}(t))\gamma g(s(\mathbf{r}_{i},t))\,. \tag{4}\]
The individual uptake rate \(\gamma g(s(\mathbf{r}_{i},t))\) is proportional to the nutrient-dependent growth rate \(g(s)\) and the parameter \(\gamma\) is the amount of nutrient required to make a new cell [5]. Nutrient amounts will be given in reference to this value. The growth rate follows the Monod function [61]
\[g(s)=\frac{g_{max}s}{K_{s}+s}\,, \tag{5}\]
with maximal growth rate \(g_{max}\) and characteristic area concentration \(K_{s}\), at which the growth rate becomes \(g_{max}/2\). The area concentration is related to the volume concentration \(\mathcal{K}_{s}=K_{s}\sigma^{-1}\) of real three-dimensional systems via the "height" \(\sigma\) of our quasi-two-dimensional system.
To include the nutrient-dependent growth and starvation process, we introduce the rule:
_In a time interval \(dt\), each particle divides with probability \(dt\cdot g(s(\mathbf{r}_{i},t))\) and dies with probability \(dt\cdot d\)._
Figure 1: (left) Sketch of the setup with a source of diffusing nutrients in the centre, reflective boundary conditions, and a quadratic box shape. (right) Mechanical implementation of the cell division, as spheres with growing radius. During the division, the parent cell is marked in yellow and the daughter cell is marked in blue.
Here, we choose a constant death rate \(d\). If a parent cell divides, a daughter cell is placed on the parent cell. They interact via a repulsive spring force \(\mathbf{F}_{\sigma_{d}(t)}(\mathbf{r}_{i}-\mathbf{r}_{j})\), introduced in Eq. (3), with growing rest distance \(\sigma_{d}(t)=v_{d}t\). The speed of the division process is regulated by \(v_{d}\). If the growing rest distance reaches the full particle diameter, \(\sigma_{d}(t)=\sigma\), or the particles drift apart, \(|\mathbf{r}_{i}-\mathbf{r}_{j}|\geq\sigma\), the division process is finished. Parent and offspring then interact like all other particles via the force in Eq. (3). The division process is sketched in Fig. 1 (right). This implementation is similar to Ref. [35]; however, we assume a constant cell size after the cell division is finished.
This model is an example of a hybrid particle-continuum reaction-diffusion system. For the simulation we rescale the equations as shown in A.
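To make the hybrid scheme concrete, here is a minimal, heavily simplified sketch of one simulation step in Python. It is not the authors' code: volume exclusion, the reflective walls, the growing division bond of Fig. 1 (right), and the diffusion/source update of the nutrient lattice are omitted, and giving daughters a random orientation is our own assumption.

```python
import numpy as np

# Simulation units sigma = tau_R = gamma = 1; parameter values from Sec. 2.2.
v0, D_R, dt = 20.0, 1.0, 1e-4            # speed from P = 20, rotational diffusion
g_max, d, Ks, gamma = 0.203, 0.030, 0.181, 1.0
L, dx = 200.0, 2.0                       # box edge and nutrient lattice constant

def monod(s):
    """Nutrient-dependent growth rate, Eq. (5)."""
    return g_max * s / (Ks + s)

def step(pos, phi, s, rng):
    # Active Brownian update, Eqs. (1)-(2); pair and wall forces omitted here.
    u = np.stack([np.cos(phi), np.sin(phi)], axis=1)
    pos = np.clip(pos + v0 * u * dt, 0.0, L - 1e-9)   # crude stand-in for walls
    phi = phi + np.sqrt(2.0 * D_R * dt) * rng.standard_normal(phi.size)
    # Growth rate from the nutrient cell each particle occupies.
    i, j = (pos[:, 0] // dx).astype(int), (pos[:, 1] // dx).astype(int)
    g = monod(s[i, j])
    # Consumption term of Eq. (4), converted to a concentration per lattice cell.
    np.add.at(s, (i, j), -gamma * g * dt / dx**2)
    # Division and death with probabilities g*dt and d*dt; daughters sit on parents.
    divide = rng.random(pos.shape[0]) < g * dt
    keep = rng.random(pos.shape[0]) >= d * dt
    daughters = pos[divide & keep]
    pos = np.vstack([pos[keep], daughters])
    phi = np.concatenate([phi[keep],
                          rng.uniform(0.0, 2.0 * np.pi, daughters.shape[0])])
    return pos, phi, s   # the diffusion + source step for s is advanced separately
```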
### Time scales and simulation parameters
A reductionist perspective on our system with relevant time scales is sketched in Fig. 2 (left): I) bacteria and II) nutrient couple via III) growth and IV) nutrient uptake. Additionally, there is V) a constant influx of nutrients and VI) an outflux of bacteria due to the death of individual cells. We define time scales for all six aspects:
1. bacterial dynamics: \(\tau_{R}=D_{R}^{-1}\) - orientational correlation time.
2. nutrient diffusion: \(\tau_{N}=\sigma^{2}D_{N}^{-1}\) - time it takes, for example, glucose to diffuse over the characteristic distance.
3. bacterial growth: \(\tau_{G}=g^{-1}(K_{s})=2g_{max}^{-1}\) - inverse growth rate at concentration \(K_{s}\).
4. nutrient uptake: \(\tau_{U}=2\sigma^{2}K_{s}/(\gamma g_{max})\) - time to consume the nutrient of concentration \(K_{s}\) in the local surrounding of a bacterium with area \(\sigma^{2}\).
5. nutrient influx: \(\tau_{I}=\gamma R_{0}^{-1}\) - time to insert the nutrient amount necessary to make a new cell. This can be easily varied in an experimental setup.
6. bacterial death: \(\tau_{D}=d^{-1}\) - inverse death rate.
The biological system - _Escherichia coli_ in \(41\,^{\circ}\)C water with glucose as the limiting nutrient - dictates the parameters (Appendix B). The resulting time scales span seven orders of magnitude, as shown in Fig. 2 (right). In our particle-based simulation, this is not feasible. Therefore, we squeeze the time scales together such that the ratios of their orders of magnitude (in units of \(\tau_{R}\)), defined as the logarithms, are preserved (e.g. \(\log\frac{\tau_{G}}{\tau_{R}}/\log\frac{\tau_{U}}{\tau_{R}}=\mathrm{const}\)). More concretely, we reduce all orders of magnitude by a factor of 3. The procedure is demonstrated for the time scale of bacterial growth:
\[\tau_{G}=952.4\,\tau_{\mathrm{R}}=10^{2.979}\,\tau_{\mathrm{R}}\to\tau_{G}=1 0^{2.979/3}\,\tau_{\mathrm{R}}=9.839\,\tau_{\mathrm{R}}\,. \tag{6}\]
The transformed time scales in Fig. 2 (right) span less than three orders of magnitude and can be realized in the simulation.
We recover the simulation parameters from the transformed time scales:
\[g_{max}=2\tau_{G}^{-1}=0.203\,\tau_{\mathrm{R}}^{-1}\,,\quad d =\tau_{D}^{-1}=0.030\,\tau_{\mathrm{R}}^{-1}\,,\] \[K_{s}=\gamma\sigma^{-2}\tau_{U}\tau_{G}^{-1}=0.181\,\gamma\sigma ^{-2}\,,\quad D_{N}=\sigma^{2}\tau_{N}^{-1}=9.775\,\sigma^{2}\tau_{\mathrm{R }}^{-1}\,.\]
We additionally choose a persistence number \(P=\tau_{R}\sigma^{-1}v=20\), corresponding to a velocity of \(v=10.562\,\mu\mathrm{m\,s}^{-1}\), and a sufficiently large spring constant \(k=3\times 10^{4}\,\mathrm{k}_{\mathrm{B}}\mathrm{T}\sigma^{-2}\). Furthermore, we set the division speed equal to the particle velocity, \(v_{d}=v\), which guarantees that the time scale of division \(\sigma v_{d}^{-1}\) is much shorter than the time scale of bacterial growth \(\tau_{G}\). The simulation box is square with edge length \(L_{x}=L_{y}=200\,\sigma\). The nutrient source is placed in the centre. The setup is sketched in Fig. 1 (left). At the start of the simulation, \(N_{0}=0.1\cdot N^{*}\) bacteria are placed on a square grid and the nutrient field is set to \(s=0\). Here, \(N^{*}\) is the steady-state population in the uniform model introduced in Sec. 2.3. The simulation time is \(1000\,\tau_{\mathrm{R}}\) and properties concerning the stationary state use the data between \(500\,\tau_{\mathrm{R}}\) and \(1000\,\tau_{\mathrm{R}}\).
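The compression of Eq. (6) amounts to dividing the order of magnitude of each time scale (in units of \(\tau_{\rm R}\)) by three; a one-line sketch:

```python
import numpy as np

def squeeze(tau_over_tauR, factor=3.0):
    """Compress a time scale given in units of tau_R, as in Eq. (6)."""
    return 10.0 ** (np.log10(tau_over_tauR) / factor)

print(squeeze(952.4))   # 9.839..., the rescaled growth time scale tau_G
```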
### Uniform model
As a reference, we introduce a scalar model assuming a uniform nutrient distribution. The dynamics of population number \(N\) and the total nutrient amount \(S\) is described as
\[\dot{N} = g(S/A)N-Nd\,,\] \[\dot{S} = R_{0}-\gamma g(S/A)N\,, \tag{7}\]
We assume here a constant influx of nutrient, while outflux is only caused by bacterial death. This presents a limiting case of the population model in Ref. [50] and is distinct from the chemostat, where bacterial and nutrient depletion is caused by material outflux [48]. The system area \(A\) connects nutrient amount \(S\) and nutrient density \(s=S/A\). The fixed point of this system,
\[N^{*}=\frac{R_{0}}{d\gamma}\quad\mathrm{and}\quad S^{*}=Ag^{-1}(d)\,, \tag{8}\]
gives the steady population \(N^{*}\) and nutrient amount \(S^{*}\). Note that \(N^{*}\) does not depend on system size. This is plausible because, in steady state, the bacterial outflux rate due to death, \(dN^{*}\), and the bacterial influx rate due to nutrient-induced division, \(R_{0}/\gamma\), need to balance each other for all system sizes.
Figure 2: (Left) Reductionist sketch of the system. (Right) Time scales of the biological system before scaling and time scales of the simulation after scaling. The processes from left (fast) to right (slow) with their respective characteristic time scales are: diffusion (\(\tau_{N}\)), bacterial dynamics (\(\tau_{R}\)), nutrient uptake (\(\tau_{U}\)), cell growth (\(\tau_{G}\)), and cell death (\(\tau_{D}\)).
The dynamics of the dimensionless population, \(\widetilde{N}=N/N^{*}\), and nutrient amount, \(\widetilde{S}=S/S^{*}\) with the dimensionless time \(\widetilde{t}=t/\tau_{D}=td\), are described by
\[\frac{d}{d\widetilde{t}}\widetilde{N} = \left[\widetilde{g}(\widetilde{S})-1\right]\widetilde{N}\,,\] \[\frac{d}{d\widetilde{t}}\widetilde{S} = \frac{1}{\Omega}\frac{4\widetilde{g}_{max}}{\widetilde{g}_{max}-1 }\left[1-\widetilde{g}(\widetilde{S})\widetilde{N}\right]\,, \tag{9}\]
with \(\widetilde{g}(\widetilde{S})=g(SA^{-1})d^{-1}\). The dynamics is completely determined by the rescaled maximum growth rate \(\widetilde{g}_{max}=g_{max}d^{-1}\) and the parameter
\[\Omega=\frac{A}{A_{c}N^{*}}\quad\mbox{with}\quad A_{c}=\frac{\gamma( \widetilde{g}_{max}-1)^{2}}{4\widetilde{g}_{max}K_{s}}\,, \tag{10}\]
relating the system area \(A\) to the critical area \(A_{c}N^{*}\). The eigenvalues of the Jacobian at the fixed point are
\[\lambda_{1,2}=\frac{2}{\Omega}\left[-1\pm\sqrt{1-\Omega}\right]\,. \tag{11}\]
The solution is always stable because \(\mbox{Re}(\lambda_{1,2})<0\). The critical number density \(\rho_{c}=A_{c}^{-1}\) separates the occurring dynamics into a damped oscillating population at low densities (\(\Omega>1\), \(\mbox{Im}(\lambda_{1,2})\neq 0\)) and an overdamped relaxing population at high densities (\(\Omega<1\), \(\mbox{Im}(\lambda_{1,2})=0\)). Figure 3 shows the rescaled population and nutrient amount plotted over time; the inset highlights the characteristic oscillations occurring for \(\Omega>1\).
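The uniform model is easy to integrate numerically; the sketch below uses the closed form \(\widetilde{g}(\widetilde{S})=\widetilde{g}_{max}\widetilde{S}/(\widetilde{g}_{max}-1+\widetilde{S})\), which follows from Eq. (9) and the definitions of \(S^{*}\) and \(\widetilde{g}_{max}\), together with the parameter values of Sec. 2.2 and the initial conditions used in the simulations.

```python
import numpy as np
from scipy.integrate import solve_ivp

gt = 0.203 / 0.030        # rescaled maximum growth rate g_max / d

def g_tilde(S):
    """Dimensionless Monod rate; g_tilde(1) = 1 at the fixed point."""
    return gt * S / (gt - 1.0 + S)

def rhs(t, y, Omega):
    N, S = y                                  # N/N*, S/S*
    dN = (g_tilde(S) - 1.0) * N
    dS = (4.0 * gt / (gt - 1.0)) * (1.0 - g_tilde(S) * N) / Omega
    return [dN, dS]

for Omega in (0.1, 1.0, 10.0):
    sol = solve_ivp(rhs, (0.0, 30.0), [0.1, 0.0], args=(Omega,), rtol=1e-8)
    print(Omega, sol.y[0, -1])   # population approaches N/N* = 1 in all cases
```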
## 3 Results
Figure 4 gives an overview of our essential results from the 2D simulations. For a small nutrient diffusion coefficient \(D_{N}=10\,\sigma^{2}\tau_{\rm R}^{-1}\) (left), the population \(N\) grows continuously until the steady state is reached, while the nutrient amount \(S\) goes through a maximum. The population strongly clusters around the source of the nutrient, which is nonuniformly distributed. In contrast, for large \(D_{N}=10\,000\,\sigma^{2}\tau_{\mathrm{R}}^{-1}\) (right), the nutrient spreads evenly over the whole system area and the population is almost uniform during the growth process. The population relaxes faster towards the steady state for the small \(D_{N}\), but the steady population is not affected by the nonuniform distribution of nutrient and active particles.

Figure 3: Rescaled population (left) and nutrient amount (right) versus time for different parameters \(\Omega\). The dynamics for \(\Omega>1\) is characterized by the strongly damped oscillations shown in the inset. In the upper axis, time is rescaled by the particle-based model's characteristic time scale \(\tau_{R}\).
In the following, we will establish a connection between the population dynamics and heterogeneities in the system. We will introduce an effective system size within the uniform model of Eq. (7) to understand the role of such heterogeneities. This will give us insights into the steady state (Sec. 3.1) and the transient phase (Sec. 3.2). The parameters not explicitly mentioned are chosen as discussed in Sec. 2.2.
### Steady State
We start with analysing the steady state. Figure 5 shows snapshots of the particle distribution with different degrees of heterogeneity, tuned by the nutrient diffusion constant \(D_{N}\). For small \(D_{N}\) the nutrient distribution is strongly peaked and overlaid by the strongly clustered particles that only occupy a fraction of the system area. Increasing \(D_{N}\) allows the population to spread over larger areas. For \(D_{N}=10\,000\,\sigma^{2}\tau_{\mathrm{R}}^{-1}\) the cluster has completely dissolved and the entire area is covered by active particles. In other words, there is an effective system area \(A_{\mathrm{eff}}\) occupied by the particles that grows with the nutrient diffusion coefficient and reaches the total system area for sufficiently large \(D_{N}\).

Figure 4: (Bottom) Dynamics of population \(N\) and nutrient amount \(S\) with nutrient input rate \(R_{0}=3000d\gamma\) for the diffusion coefficients \(D_{N}=10\,\sigma^{2}\tau_{\mathrm{R}}^{-1}\) (left) and \(D_{N}=10\,000\,\sigma^{2}\tau_{\mathrm{R}}^{-1}\) (right). (Top) The snapshots show different stages of the growth process as indicated in the bottom plots. Black dots represent active particles and the green shading the nutrient field. For small \(D_{N}\) the nutrient is almost entirely covered by active particles in the steady state.

First, we look at the population \(N^{*}_{sim}\) and the nutrient amount \(S^{*}_{sim}\) in the steady state, which we determined in the 2D simulations. Figure 6 (top) shows both quantities plotted versus \(D_{N}\) for different nutrient input rates \(R_{0}\). The steady state of the uniform model in Eq. (8), with \(N^{*}=R_{0}/(d\gamma)\) and \(S^{*}=Ag^{-1}(d)\), is represented by the dashed lines. Most notably, the population in the steady state, \(N^{*}_{sim}\), is independent of \(D_{N}\) and thus is not influenced by the degree of heterogeneity, for example, in the nutrient distribution. The argument used in Sec. 2.3 explaining the independence of \(N^{*}\) from system size also applies here. Bacterial influx \(R_{0}/\gamma\) and bacterial outflux \(dN^{*}\) always need to balance, for all heterogeneities and system sizes. In contrast, the nutrient amount strongly depends on \(D_{N}\) and only for \(D_{N}/\sigma^{2}\tau_{\rm R}^{-1}\to\infty\) converges to the prediction of the uniform model, for which we used the system size \(A\). This is the case because for large \(D_{N}\) the nutrient is distributed almost uniformly and the uniform model applies.
Since \(S^{*}\propto A\), one could be tempted to introduce an effective system size \(A_{\rm eff}\) to explain the results from the particle-based simulation, which deviate from the prediction of the uniform model for small and intermediate \(D_{N}\). However, we already know that the occupied area \(A_{\rm eff}\) grows with \(D_{N}\) and, therefore, would expect \(S^{*}_{sim}\) to grow with \(D_{N}\). But this is only the case for \(D_{N}\) larger than ca. \(10^{2}\,\sigma^{2}\tau_{\rm R}^{-1}\); for smaller \(D_{N}\) the nutrient amount in the steady state rises again.
Figure 5: Particle distribution in the steady state for different \(D_{N}\) and with \(N^{*}=3000\). The green shading shows the nutrient distribution. The green curve illustrates the nutrient concentration along the grey dashed line. The nutrient profile gets flatter for larger \(D_{N}\).

We propose that this is caused by the limited uptake rate per area the particles can provide. We consider the specific case \(R_{0}=3000d\gamma=90\,\tau_{\rm R}^{-1}\gamma\). The diffusion equation (4) of the nutrient is solved on a lattice with lattice constant \(2\sigma\). Thus, the nutrient from the delta-peaked source is initially distributed over the area \(4\sigma^{2}\), which results in an input rate per area of \(22.5\,\tau_{\rm R}^{-1}\sigma^{-2}\gamma\). In contrast, the maximum uptake rate per individual, \(\gamma g_{max}\), and the maximum number density in hexagonal tight packing, \(\rho_{max}=\frac{2}{\sqrt{3}}\sigma^{-2}\approx 1.15\,\sigma^{-2}\) [62], limit the maximum uptake rate per area to \(\gamma g_{max}\rho_{max}\approx 0.234\gamma\tau_{R}^{-1}\sigma^{-2}\). This value is much lower than the input rate per area at the position of the source. The nutrient needs to diffuse away before it can be entirely consumed, which takes more time for smaller \(D_{N}\). As a consequence, the nutrient amount in the steady state rises. To validate this hypothesis, we performed simulations with particles that do not interact via volume exclusion. Thus, the maximum density, and thereby the maximum uptake rate per area, is no longer limited. Indeed, Fig. 6 (bottom) shows that the steady nutrient amount \(S^{*}_{sim}\) now rises monotonically with \(D_{N}\), as predicted.
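The rate comparison above is easily reproduced; a quick check in the simulation units of Sec. 2.2:

```python
g_max, d, gamma = 0.203, 0.030, 1.0   # tau_R^-1, tau_R^-1, nutrient units
R0 = 3000 * d * gamma                 # = 90 gamma / tau_R
dx = 2.0                              # lattice constant in units of sigma

input_per_area = R0 / dx**2                      # 22.5 gamma / (tau_R sigma^2)
rho_max = 2.0 / 3.0**0.5                         # hexagonal packing, ~1.155 sigma^-2
max_uptake_per_area = gamma * g_max * rho_max    # ~0.234 gamma / (tau_R sigma^2)

print(input_per_area, max_uptake_per_area)
```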
### Transient Phase
Now that we have established a first connection between the nutrient diffusion coefficient \(D_{N}\), the nonuniform steady-state nutrient distribution, and the area occupied by the particles, we proceed by looking at the transient dynamics. First, we look closer at the spatial distribution of the growing and evolving particle population. The snapshots (a)-(c) in Fig. 4 convey that the particle distribution is radially symmetric. This is certainly not the case at the beginning of the growth process. In Fig. 7 we consider the system dynamics for \(N^{*}=3000\) and \(D_{N}=200\,\sigma^{2}\tau_{\rm R}^{-1}\). However, to amplify the effect of statistical fluctuations, a smaller initial population \(N_{0}=100\) is chosen, in contrast to \(N_{0}=N^{*}/10\) used otherwise. The simulation starts with a small population (a). The nutrient quickly builds up until it reaches a maximum (b); now particle growth is possible in large parts of the system. Individual particles establish clusters across the entire system, the most prominent around the nutrient source (c). Because stochastic fluctuations strongly affect the distribution of the few initial particles, the clusters are unevenly distributed and also the nutrient distribution becomes asymmetric. The clusters grow, consume nutrients, and overshoot the carrying capacity of the steady state (d). In parallel, a depletion zone of nutrients around the central cluster develops (c,d). As a consequence of the nutrient depletion, the clusters in the outer regions start to dissolve (e). As the system relaxes further toward the steady state, the outer clusters completely dissolve, the central cluster becomes symmetric, and the depletion zone vanishes (e,f).

Figure 6: (top-left, right) Steady population of the particle-based model, \(N^{*}_{\rm sim}\), and steady nutrient amount \(S^{*}_{\rm sim}\) (in units of characteristic nutrient amount \(\gamma\)) plotted versus nutrient diffusion constant \(D_{N}\) for different nutrient input rates \(R_{0}\) (in units of \(d\gamma\)). Dashed lines indicate the predictions of the uniform model, \(N^{*}\) and \(S^{*}\), as stated in Eqs. (8) and the full lines in the right plot are guides to the eye. (bottom) Steady nutrient amount versus \(D_{N}\) for particles with and without hard-core repulsion for \(R_{0}=3000d\gamma\).
For larger \(D_{N}\) the asymmetry tends to be more prominent, while for small \(D_{N}\) only the centre is covered by nutrients and clusters cannot grow in the outer regions. However, the nutrient depletion zone is more pronounced for small \(D_{N}\), as Fig. 4 clearly shows. The diffusion coefficient \(D_{N}=200\,\sigma^{2}\tau_{\rm R}^{-1}\) depicted in Fig. 7 is an intermediate value, where both effects can be observed.
We further rationalize the relation between population dynamics and the nonuniform nutrient distribution. In Fig. 8 we plot the population \(N\) versus time for different \(D_{N}\). The population dynamics is more strongly damped for more heterogeneous nutrient distributions, _i.e._, when \(D_{N}\) is small (left plot). However, for increasing \(D_{N}\), a clear overshoot in \(N\) develops. We already observed the same transition from overdamped relaxation to damped oscillations in the uniform model for increasing system size, as shown in Fig. 3. Therefore, in the following we introduce an effective system size to model this transition in the particle-based simulations and thereby account for heterogeneities in the system.
Figure 7: Transient dynamics of a system with equilibrium population \(N^{*}=3000\), initial population \(N_{0}=100\), and diffusion coefficient \(D_{N}=200\,\sigma^{2}\tau_{\rm R}^{-1}\). Left: Snapshots of the evolving particle population and nutrient distribution (green shading) at different times as indicated in the right plot. The green curve illustrates the nutrient concentration along the grey dashed line. Right: Population \(N\) and nutrient amount \(S\) plotted versus time.
The solid lines of different colours in Fig. 8 show fits of the simulation results using the uniform model of Eq. (9). To fit the curves, we use the fit parameter \(\Omega_{\rm eff}=A_{\rm eff}/(A_{c}N^{*})\) defined in Eq. (10), where we replace \(A\) by an effective system size \(A_{\rm eff}\) that can deviate from the simulation box size \(A_{\rm box}\). For \(D_{N}/\sigma^{2}\tau_{\rm R}^{-1}=10,1000,10000\) the fit captures the dynamics well; only for \(D_{N}=100\,\sigma^{2}\tau_{\rm R}^{-1}\) is a clear deviation apparent. So, by choosing an effective system size \(A_{\rm eff}\) through the fitted value for \(\Omega_{\rm eff}\), which accounts for the fact that particles and nutrient display a nonuniform distribution in the simulation box, we are able to model the simulated dynamics.
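The fit itself reduces to a one-dimensional optimization over \(\Omega_{\rm eff}\); in this sketch, `t_sim` and `N_sim` are hypothetical arrays holding a rescaled population curve \(N(t)/N^{*}\) from a particle-based run, and `rhs` is the right-hand side from the uniform-model sketch in Sec. 2.3.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize_scalar

def model_curve(Omega, t):
    """Population N/N* of the uniform model, Eq. (9), sampled at times t."""
    sol = solve_ivp(rhs, (t[0], t[-1]), [0.1, 0.0], args=(Omega,),
                    t_eval=t, rtol=1e-8)
    return sol.y[0]

def fit_omega(t_sim, N_sim):
    """Least-squares fit of Omega_eff; A_eff = Omega_eff * A_c * N* via Eq. (10)."""
    loss = lambda log_om: np.mean((model_curve(10.0**log_om, t_sim) - N_sim)**2)
    res = minimize_scalar(loss, bounds=(-2.0, 2.0), method="bounded")
    return 10.0 ** res.x
```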
To explore the good agreement between the particle-based simulations and the uniform model more quantitatively, we illustrate in Fig. 9 (top-left) the dependence of the parameter \(\Omega\) in the uniform model on system size \(A\) and steady population \(N^{*}\). To its right, the fitted parameter \(\Omega_{\rm eff}\) of simulations with different diffusion constants \(D_{N}\) and steady populations \(N^{*}\) is shown. We note the tendency in the two top plots of Fig. 9 that both \(\Omega\) and \(\Omega_{\rm eff}\) are small for large \(N^{*}\) and small system size / \(D_{N}\), while they are large for small \(N^{*}\) and large system size / \(D_{N}\). This underpins the strong link between \(D_{N}\) and the effective system size \(A_{\rm eff}=\Omega_{\rm eff}A_{c}N^{*}\), which is calculated from the fitted \(\Omega_{\rm eff}\) and which we plot in Fig. 9 (bottom). For all steady populations \(N^{*}\), we observe a monotonic relation between heterogeneity and effective system size. In particular, for smaller \(N^{*}\) the effective system size jumps at around \(D_{N}=100\sigma^{2}\tau_{R}^{-1}\), while for large \(D_{N}\) it approaches the total system area. This all agrees well with our earlier observation that for small \(D_{N}\) the particle cluster only occupies a fraction of the total system, while for large \(D_{N}\) it covers the entire area.
## 4 Conclusion
We presented a minimalistic model for bacterial growth in a nonuniform nutrient environment, where we couple proliferating active Brownian particles to a diffusing nutrient emitted from a point source. This allows us to gain insights into the interplay between collective behaviour and population dynamics of bacteria. The heterogeneity of the nutrient distribution around the point source depends strongly on its diffusion coefficient. To keep the computational effort manageable, we needed to squeeze the time scales of motility and growth together.

Figure 8: Population dynamics of the particle-based simulations (green) with \(N^{*}=3000\) for different diffusion coefficients of the nutrient, \(D_{N}\). The dynamics is fitted with the uniform model of Eq. (9) (different colors) using \(\Omega_{\rm eff}\) as a fit parameter.
Using numerical simulations, we show that localizing the nutrient supply limits the effective area of bacterial growth and results in proliferation-induced bacterial clustering around the nutrient source. This presents an alternative clustering mechanism that does not rely on chemotaxis or adhesion. If outflux is only caused by cell death, the steady population is purely determined by the ratio of nutrient influx to cell death and can be predicted by a uniform model.
The transient dynamics of the system strongly depends on the nutrient distribution, which is determined by the nutrient diffusion constant. For rather uniform nutrient profiles at larger \(D_{N}\), fluctuations in the small initial bacterial distribution amplify in the transient phase because cells can form clusters throughout the entire system.
Figure 9: (top-left) Parameter \(\Omega\) of the uniform model in Sec. 2.3 over system sizes \(A\) and steady populations \(N^{*}\). The system size is given in units of the simulation box size \(A_{\mathrm{box}}=40\,000\,\sigma^{2}\). (top-right) Fit parameter \(\Omega_{\mathrm{eff}}\) of the particle-based simulation for different diffusion coefficients of nutrient \(D_{N}\) and steady populations \(N^{*}\). (bottom) Relation between the diffusion coefficient of nutrient \(D_{N}\) and the effective system area \(A_{\mathrm{eff}}\) for different steady populations \(N^{*}\). The effective system area \(A_{\mathrm{eff}}\) is calculated from the fit parameter \(\Omega_{\mathrm{eff}}\) using Eq. (10).
This results in an asymmetric bacterial distribution during the transient phase, which dissolves when approaching the steady state. In a peaked nutrient distribution occurring at small \(D_{N}\) bacterial growth is limited to a small region during the entire transient phase and the bacterial distribution looks rather symmetric around the nutrient source.
If the nutrient and thereby the bacterial cluster are strongly localized, the population number quickly approaches the steady state. We propose this is due to the localized nutrient being almost entirely covered by bacteria, which results in a strong coupling. At increased nutrient diffusion, where the profile is more uniform, the nutrient strongly overshoots and is followed by a population overshoot. Here, large areas of the nutrient are not covered by bacteria and the coupling is weaker. The observed difference in the dynamics can be rationalized by the uniform population model, where a reduced effective area takes into account the nonuniform nutrient distribution. We demonstrate this by fitting the population curve of the particle-based simulation with a fit parameter that contains this effective area. The relation between the diffusion coefficient and the effective area depends on population size but is always monotonic. In other words, a more heterogeneous nutrient distribution results in a smaller effective system size.
We demonstrated that our minimalistic model allows us to build a bridge between analytic population models and particle-based simulations. It makes general statements that serve as orientation for future theoretical and experimental investigations. Our insights into the role of nutrient heterogeneity can help to understand how an environment determines the individual reproductive success [63, 64, 65, 66], the spread of neutral mutations [67, 68], and the population dynamics in changing environments [69]. To make quantitative predictions, our model needs to be extended to include processes such as chemotaxis [27, 28, 29, 30, 31], density/pressure-dependent growth rates [32, 6, 33], cell elongation [37, 39], polydispersity [70, 71], and more complex setup geometries.
We thank Josua Grawitter, Arne W Zantop and Arnold JTM Mathijssen for interesting discussions and TU Berlin for financial support. TW also acknowledges support through a Deutschlandstipendium.
## Conflict of interest
The authors declare no competing interests.
|
2310.16339
|
Exponential relaxation to equilibrium for a kinetic
Fokker-Planck-Alignment equation with force
|
In this note, we consider a kinetic Fokker-Planck-Alignment equation with
Rayleigh-type friction and self-propulsion force which is derived from general
environmental averaging models. We show the exponential relaxation in time
toward equilibrium of the solutions provided certain spectral gap conditions
are satisfied. The result is proved by using Desvillettes-Villani's method for
collisional models to establish the global hypocoercivity.
|
Vinh Nguyen
|
2023-10-25T03:47:47Z
|
http://arxiv.org/abs/2310.16339v1
|
# Exponential relaxation to equilibrium for a kinetic Fokker-Planck-alignment equation with force
###### Abstract.
In this note, we consider a kinetic Fokker-Planck-Alignment equation with Rayleigh-type friction and self-propulsion force which is derived from general environmental averaging models. We show the exponential relaxation in time toward equilibrium of the solutions provided certain spectral gap conditions are satisfied. The result is proved by using Desvillettes-Villani's method for collisional models to establish the global hypocoercivity.
Key words and phrases: Collective behavior, Fokker-Planck equation, Hypocoercivity, Rayleigh friction

2020 Mathematics Subject Classification: 35Q84, 35Q92, 92D25

**Acknowledgment.** The author would like to thank Professor Roman Shvydkoy for useful discussions and acknowledges partial support from NSF grant DMS-2107956 (PI: Roman Shvydkoy).
## 1. Introduction
In this note, we are interested in a kinetic Fokker-Planck-Alignment equation which is derived from general environmental averaging models. More specifically, let \(\Omega\subset\mathbb{R}^{n}\) be a periodic domain. An agent is characterized by its position \(x\in\Omega\) and its velocity \(v\in\mathbb{R}^{n}\). The density of agents with position \(x\) and velocity \(v\) at time \(t\geqslant 0\), denoted by \(f=f(x,v,t)\), is governed by the following equation:
\[\partial_{t}f+v\cdot\nabla_{x}f=s_{\rho}\big{[}\Delta_{v}f+\nabla_{v}\cdot \big{(}\big{[}(v-[u]_{\rho})+F(v)\big{]}f\big{)}\big{]}, \tag{1}\]
subject to the initial condition
\[f(x,v,0)=f_{0}(x,v).\]
Here \(\rho\) and \(u\) are macroscopic density and macroscopic velocity defined by
\[\rho(x)=\int_{\mathbb{R}^{n}}f(x,v)\,\mathrm{d}v,\quad u\rho(x)=\int_{\mathbb{ R}^{n}}vf(x,v)\,\mathrm{d}v. \tag{2}\]
The family of pairs \((\kappa_{\rho},[\,]_{\rho})\) with \(\,\mathrm{d}\kappa_{\rho}:=s_{\rho}\,\mathrm{d}\rho\) satisfies the conditions for a material environmental averaging model introduced in [10]. The Rayleigh-type friction and self-propulsion force \(F\) is given by
\[F(v)=\frac{\sigma(|v|^{p}-1)v}{\eta(|v|)}, \tag{3}\]
where \(\eta:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is a smooth, positive and increasing function satisfying
\[\eta(z)=1\text{ if }z\leqslant R\text{ for some }R>0;\text{ and }\eta(z)\sim z^{q}\text{ for some }q>p\text{ as }z\to\infty. \tag{4}\]
Our goal is to show that the solution of (1) relaxes exponentially fast toward its equilibrium. We utilize the Desvillettes-Villani method (see [5, 13]) for collisional models to modify the entropy and establish global hypocoercivity. Without the additional force, Shvydkoy gave the first result on global hypocoercivity for this type of model in [9]. In that paper, the averaging operator is given by
\[[u]_{\rho}:=\phi\ast\big{(}\frac{\phi\ast(u\rho)}{\phi\ast\rho}\big{)},\]
where \(\phi\) is a radial non-negative non-increasing function satisfying
\[\int_{\Omega}\phi(x)\,\mathrm{d}x=1,\quad\phi(x)\geqslant c_{0}\mathds{1}_{ \{|x|<r_{0}\}}.\]
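As an aside, this operator is straightforward to evaluate numerically. The sketch below implements a discrete periodic version of \([u]_{\rho}\) with FFT-based convolutions; the triangular kernel and the small floor `eps` are our own illustrative choices, not taken from [9].

```python
import numpy as np

def kernel_hat(n, L, r0=0.2):
    """FFT of a normalized, radial, non-increasing bump phi on an n x n
    periodic grid of side L (r0 is an assumed support radius)."""
    x = np.fft.fftfreq(n, d=1.0 / n) * (L / n)     # coordinates in FFT layout
    X, Y = np.meshgrid(x, x, indexing="ij")
    phi = np.maximum(r0 - np.hypot(X, Y), 0.0)
    phi /= phi.sum() * (L / n) ** 2                # normalize: integral of phi = 1
    return np.fft.fftn(phi) * (L / n) ** 2         # include the area element

def average(u_rho, rho, phi_hat, eps=1e-12):
    """Discrete [u]_rho = phi * ( phi*(u rho) / (phi*rho) ), one component."""
    conv = lambda f: np.real(np.fft.ifftn(np.fft.fftn(f) * phi_hat))
    return conv(conv(u_rho) / np.maximum(conv(rho), eps))
```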
Then the result was extended to a class of kinetic equations in [10]. In this work, we show that if an extra force is added, then we still have global hypocoercivity and hence exponential relaxation to equilibrium, provided that the force is small in the sense of assumption (iv) below.
Before stating our result, let us give some motivation for studying equation (1). The study of collective behavior has attracted a lot of attention from the scientific community because it has diverse applications ranging from biology and physics to computer science and social science; see e.g. [2, 11, 12, 14] and the references therein.
For microscopic descriptions, many models of collective behavior can be described as follows:
\[\begin{cases}\dot{x}_{i}=v_{i},\qquad(x_{i},v_{i})\in\Omega\times\mathbb{R}^{ n},\\ \dot{v}_{i}=s_{i}([v]_{i}-v_{i})+F_{i},\quad i=1,\ldots,N,\end{cases} \tag{5}\]
where \(s_{i}\) and \(F_{i}\) are, respectively, the communication strength and the force corresponding to the \(i\)-th agent; \(v=(v_{1},\ldots,v_{N})\in\mathbb{R}^{nN}\) and \([v]_{i}\) denotes the averaging operator acting on the \(i\)-th agent. The celebrated Cucker-Smale system [3, 4] can be written in form (5) with
\[s_{i}=\sum_{j=1}^{N}m_{j}\phi(|x_{i}-x_{j}|),\qquad[v]_{i}=\frac{\sum_{j=1}^{N} m_{j}\phi(|x_{i}-x_{j}|)v_{j}}{\sum_{j=1}^{N}m_{j}\phi(|x_{i}-x_{j}|)}, \tag{6}\]
where \(\phi\) is a smooth radial non-increasing function and \(m_{i}\) is the communication weight of the \(i\)-th agent. In this model \(F_{i}=0\). For examples with a nontrivial force \(F_{i}\), the reader can see [7, 8, 11]. If we take \(F_{i}\) in (5) to be the combination of a deterministic force and a noise of the form
\[F_{i}=\frac{\sigma(1-|v_{i}|^{p})v_{i}}{\eta(|v_{i}|)}+\sqrt{2s_{i}(x)}\dot{B} _{i},\quad 0<\sigma<1\text{ and }p>0, \tag{7}\]
where \(\eta\) is given by (4) and the \(B_{i}\)'s are independent Brownian motions in \(\mathbb{R}^{n}\), then the stochastic mean-field limit of (5) formally leads to the kinetic equation (1).
In this short note, we focus solely on the long-time behavior of the solution of (1) provided it exists. For a rigorous derivation of (1) via the stochastic mean-field limit one can consult the scheme from [1, 10]. For the existence of solutions, we refer to [1, 6, 10]. We assume the solution \(f\) to (1) belongs to some weighted Sobolev space
\[H_{l}^{k}(\Omega\times\mathbb{R}^{n}):=\left\{f:\sum_{k^{\prime}\leqslant k} \sum_{|\alpha|=k^{\prime}}\int_{\Omega\times\mathbb{R}^{n}}\left\langle v \right\rangle^{l+2(k-k^{\prime})}\left|\partial_{x,v}^{\alpha}f\right|^{2} \mathrm{d}x\,\mathrm{d}v<\infty\right\},\]
where \(\left\langle v\right\rangle=\sqrt{1+|v|^{2}}\) and \(\alpha\) denotes a multiindex.
Next let us introduce some more notation. Let \(G:\mathbb{R}_{+}\to\mathbb{R}\) be the function defined by
\[G(z):=\int_{0}^{z}\frac{\sigma(y^{p+1}-y)}{\eta(y)}\ \mathrm{d}y,\]
and let
\[V(v)=\frac{|v|^{2}}{2}+G(|v|). \tag{8}\]
Then the gradient and Hessian matrix of \(V\) can be computed explicitly,
\[\nabla V=v+F(v), \tag{9}\] \[\nabla^{2}V=\left(1+\frac{\sigma(|v|^{p}-1)}{\eta(|v|)}\right)\mathbb{I}+\frac{\sigma|v|^{p}}{\eta(|v|)}\frac{v}{|v|}\otimes\frac{v}{|v|}-\frac{\sigma(|v|^{p}-1)|v|\eta^{\prime}(|v|)}{\eta^{2}(|v|)}\frac{v}{|v|}\otimes\frac{v}{|v|}, \tag{10}\]
where \(\mathbb{I}\) is the identity matrix.
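As a quick sanity check of (9), the gradient formula can be recovered directly from (8) by the chain rule and the definition of \(G\):

\[\nabla V(v)=v+G^{\prime}(|v|)\frac{v}{|v|}=v+\frac{\sigma(|v|^{p+1}-|v|)}{\eta(|v|)}\frac{v}{|v|}=v+\frac{\sigma(|v|^{p}-1)v}{\eta(|v|)}=v+F(v).\]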
_Remark 1.1_.: By the assumption (4) and the identity (10) we see that the Hessian matrix of \(V\) is bounded. Thus, there exists a positive constant \(\Lambda\) such that
\[|(\nabla^{2}V)(y)|\leqslant\Lambda|y|,\quad\forall y\in\mathbb{R}^{n}. \tag{11}\]
We also note that for \(y\in\mathbb{R}^{n}\),
\[y^{T}(\nabla^{2}V)y\geqslant\left(1-\frac{\sigma}{\eta(|v|)}-\frac{\sigma|v|^ {p+1}\eta^{\prime}(|v|)}{\eta^{2}(|v|)}\right)|y|^{2}\geqslant\lambda|y|^{2}, \tag{12}\]
where \(\lambda>0\) is a constant depending on \(\sigma\).
We expect that the solution to (1) converges to
\[f_{\infty}:=\frac{1}{Z}e^{-V(v)}\quad\text{ with }Z=\int_{\Omega\times\mathbb{R}^{n}}e ^{-V(v)}\,\mathrm{d}v\,\mathrm{d}x. \tag{13}\]
The macroscopic field \(u_{F}\) is defined by
\[\rho u_{F}(x)=\int_{\mathbb{R}^{n}}F(v)f(x,v)\,\mathrm{d}v.\]
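Let us verify that \(f_{\infty}\) is indeed a stationary solution of (1). Since \(V\) is radial, the integrands \(vf_{\infty}\) and \(F(v)f_{\infty}\) are odd in \(v\), so \(u\equiv 0\) and \(u_{F}\equiv 0\) when \(f=f_{\infty}\), and hence \([u]_{\rho}=0\) by linearity of the averaging. Moreover,

\[\nabla_{v}f_{\infty}+\big{(}(v-[u]_{\rho})+F(v)\big{)}f_{\infty}=-\nabla V\,f_{\infty}+\nabla V\,f_{\infty}=0,\]

so the right-hand side of (1) vanishes, while the transport term \(v\cdot\nabla_{x}f_{\infty}\) vanishes because \(f_{\infty}\) does not depend on \(x\).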
Denote \(L^{2}(\kappa_{\rho}):=L^{2}(\mathrm{d}\kappa_{\rho})\). The inner product in \(L^{2}(\kappa_{\rho})\) is denoted by \(\left\langle\cdot,\cdot\right\rangle_{\kappa_{\rho}}\). Our main result is the following:
**Theorem 1.2**.: _Suppose that \(f\in H^{k}_{l}(\Omega\times\mathbb{R}^{n})\) is a solution to (1) such that \(\rho(t)\) satisfies the following assumptions for all \(t\geqslant 0\):_
1. \(c_{0}\leqslant s_{\rho}\leqslant c_{1}\) _and_ \(\|\nabla s_{\rho}\|_{\infty}\leqslant c_{2}\)_, where_ \(c_{0},c_{1},c_{2}\) _are positive constants,_
2. \(\nabla_{x}(s_{\rho}[\cdot]_{\rho}):L^{2}(\rho)\to L^{2}(\rho)\) _is uniformly bounded,_
3. _there exists a constant_ \(0<\varepsilon_{0}<1\) _such that_ \[\sup\big{\{}\left\langle w,[w]_{\rho}\right\rangle_{\kappa_{\rho}}|\,w\in L^{ 2}(\kappa_{\rho}),\|w\|_{L^{2}(\kappa_{\rho})}=1\big{\}}\leqslant 1- \varepsilon_{0},\]
4. _there exists a constant_ \(0<\varepsilon_{1}<1\) _such that_ \[\|u_{F}\|_{L^{2}(\kappa_{\rho})}\leqslant\varepsilon_{1}\|u\|_{L^{2}(\kappa_{ \rho})}.\]
_Then \(f\) converges to \(f_{\infty}\) exponentially fast:_
\[\|f(t)-f_{\infty}\|_{L^{1}(\Omega\times\mathbb{R}^{n})}\leqslant Ce^{-\delta t},\]
_where \(C>0\) is a constant depending on initial data \(f_{0}\) and given parameters; \(\delta>0\) is a constant depending only on given parameters._
_Remark 1.3_.: Observe that in the case of Cucker-Smale model, since \(s_{\rho}=\phi*\rho\) and \(s_{\rho}[u]_{\rho}=\phi*(u\rho)\), condition (ii) holds automatically and condition (i) holds if \(\phi*\rho\geqslant\underline{\rho}\) for some \(\underline{\rho}>0\).
## 2. Proof of main result
In this section, we will prove Theorem 1.2. Firstly, let us introduce some notations and definitions.
### Notations and preliminaries
The relative entropy is defined by
\[\mathcal{H}(f|f_{\infty})=\int_{\Omega\times\mathbb{R}^{n}}f\log\frac{f}{f_{ \infty}}\,\mathrm{d}v\,\mathrm{d}x.\]
For convenience of computation, we will derive an equation for \(h\) satisfying \(f=hf_{\infty}\). Plugging this \(f\) into equation (1), we obtain the following equation for \(h\):
\[\partial_{t}h+v\cdot\nabla_{x}h=s_{\rho}\big{(}\Delta_{v}h-\nabla V\cdot \nabla_{v}h+h[u]_{\rho}\cdot\nabla V-[u]_{\rho}\cdot\nabla_{v}h\big{)}. \tag{14}\]
Let
\[A=\nabla_{v},\qquad B=v\cdot\nabla_{x},\]
and let \(A^{*}\) be the adjoint of \(A\) with respect to the inner product in the weighted space \(L^{2}(\mu)\):
\[\left\langle\varphi_{1},\varphi_{2}\right\rangle=\int_{\Omega\times\mathbb{R }^{n}}\varphi_{1}\varphi_{2}\,\mathrm{d}\mu,\quad\mathrm{d}\mu=f_{\infty}\, \mathrm{d}v\,\mathrm{d}x.\]
We can calculate \(A^{*}\) explicitly,
\[A^{*}=(\nabla V-\nabla_{v})\cdot.\]
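Indeed, for a scalar \(\varphi\) and a vector field \(w\), integration by parts in \(v\) and the identity \(\nabla_{v}f_{\infty}=-\nabla V\,f_{\infty}\) give

\[\left\langle A\varphi,w\right\rangle=\int_{\Omega\times\mathbb{R}^{n}}\nabla_{v}\varphi\cdot w\,\mathrm{d}\mu=\int_{\Omega\times\mathbb{R}^{n}}\varphi\big{(}\nabla V\cdot w-\nabla_{v}\cdot w\big{)}\,\mathrm{d}\mu=\left\langle\varphi,A^{*}w\right\rangle.\]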
Then we can write (14) in the abstract form:
\[h_{t}=-s_{\rho}A^{*}Ah-Bh+s_{\rho}A^{*}(h[u]_{\rho}). \tag{15}\]
Following the notations from the paper [10], let us define the partial Fisher information functionals as follows:
\[\mathcal{I}_{vv}(h)=\int_{\Omega\times\mathbb{R}^{n}}\frac{|\nabla_{v}h|^{2}}{ h}\,\mathrm{d}\mu,\quad\mathcal{I}_{xv}(h)=\int_{\Omega\times\mathbb{R}^{n}} \frac{\nabla_{x}h\cdot\nabla_{v}h}{h}\,\mathrm{d}\mu,\quad\mathcal{I}_{xx}(h)= \int_{\Omega\times\mathbb{R}^{n}}\frac{|\nabla_{x}h|^{2}}{h}\,\mathrm{d}\mu.\]
The full Fisher information is defined by
\[\mathcal{I}=\mathcal{I}_{vv}+\mathcal{I}_{xx}.\]
For our convenience we use the notation
\[(\varphi)_{\mu}:=\int_{\Omega\times\mathbb{R}^{n}}\varphi\,\mathrm{d}\mu.\]
Denote \(\bar{h}=\log h\) and
\[\mathcal{D}_{vv}=(s_{\rho}h|\nabla_{v}^{2}\bar{h}|^{2})_{\mu},\quad\mathcal{D}_ {xv}=(s_{\rho}h|\nabla_{v}\nabla_{x}\bar{h}|^{2})_{\mu},\]
where \(\nabla_{v}^{2}\bar{h}\) is the Hessian matrix of \(\bar{h}\) with respect to \(v\). We will use the notations \(J_{A},J_{B},J_{u}\) to refer to the terms related to the operators \(A,B\) and to \(u\), respectively; their precise meanings differ from lemma to lemma in the sequel. We denote by \(C,c\) positive constants which may vary from line to line.
### Proof of Theorem 1.2
By the Csiszar-Kullback inequality,
\[\|f-f_{\infty}\|_{L^{1}(\Omega\times\mathbb{R}^{n})}^{2}\leqslant c\mathcal{H}. \tag{16}\]
Therefore, it suffices to show that the entropy function \(\mathcal{H}\) decays exponentially fast in time. Using (1) and integration by parts, we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{H}=-\int_{\Omega\times\mathbb{R}^{n}}s _{\rho}\frac{|\nabla_{v}f+\nabla Vf|^{2}}{f}\,\mathrm{d}v\,\mathrm{d}x+\left<u _{V},[u]_{\rho}\right>_{\kappa_{\rho}}, \tag{17}\]
where
\[u_{V}=u+u_{F}. \tag{18}\]
Observe that, with \(f=hf_{\infty}\), the partial Fisher information functional \(\mathcal{I}_{vv}(h)\) defined above can be rewritten as
\[\mathcal{I}_{vv}=\int_{\Omega\times\mathbb{R}^{n}}\frac{|\nabla_{v}f+\nabla Vf|^{2}}{f}\,\mathrm{d}v\,\mathrm{d}x,\]
since \(\nabla_{v}f+\nabla V\,f=f_{\infty}\nabla_{v}h\).
By the assumption (i) we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{H}\leqslant-c_{0}\mathcal{I}_{vv}+ \left<u_{V},[u]_{\rho}\right>_{\kappa_{\rho}}. \tag{19}\]
We can also rewrite (17) in the dissipative form:
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{H}=-\int_{\Omega\times\mathbb{R}^{n}}s _{\rho}\frac{|\nabla_{v}f+(\nabla V-u_{V})f|^{2}}{f}\,\mathrm{d}v\,\mathrm{d} x-\|u_{V}\|_{L^{2}(\kappa_{\rho})}^{2}+\left<u_{V},[u]_{\rho}\right>_{\kappa_{ \rho}}. \tag{20}\]
By the triangle inequality and assumption (iv) we have
\[\|u\|_{L^{2}(\kappa_{\rho})}\leqslant\frac{1}{1-\varepsilon_{1}}\|u_{V}\|_{L^ {2}(\kappa_{\rho})}. \tag{21}\]
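Indeed, since \(u=u_{V}-u_{F}\) by (18), the triangle inequality and assumption (iv) give

\[\|u\|_{L^{2}(\kappa_{\rho})}\leqslant\|u_{V}\|_{L^{2}(\kappa_{\rho})}+\|u_{F}\|_{L^{2}(\kappa_{\rho})}\leqslant\|u_{V}\|_{L^{2}(\kappa_{\rho})}+\varepsilon_{1}\|u\|_{L^{2}(\kappa_{\rho})},\]

and (21) follows after absorbing the last term into the left-hand side.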
Then by the Cauchy-Schwarz inequality, assumptions (iii) and (iv) we have
\[\left<u_{V},[u]_{\rho}\right>_{\kappa_{\rho}}= \left<u_{V},[u_{V}]_{\rho}\right>_{\kappa_{\rho}}-\left<u_{V},[u_ {F}]_{\rho}\right>_{\kappa_{\rho}}\] \[\leqslant (1-\varepsilon_{0})\|u_{V}\|_{L^{2}(\kappa_{\rho})}^{2}+\frac{ \varepsilon_{1}}{1-\varepsilon_{1}}\|u_{V}\|_{L^{2}(\kappa_{\rho})}^{2} \tag{22}\] \[\leqslant (1-c_{3})\|u_{V}\|_{L^{2}(\kappa_{\rho})}^{2},\]
where \(c_{3}:=\varepsilon_{0}-\frac{\varepsilon_{1}}{1-\varepsilon_{1}}\), which is positive provided \(\varepsilon_{1}\) is small enough relative to \(\varepsilon_{0}\) (this is the sense in which the force is assumed small). Plugging this inequality into (20) we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{H}\leqslant-c_{3}\|u_{V}\|_{L^{2}( \kappa_{\rho})}^{2}. \tag{23}\]
Combining (19), (23) and (22) we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{H}\leqslant-\frac{c_{0}c_{3}}{1+c_{3}} \mathcal{I}_{vv}-\frac{c_{3}^{2}}{1+c_{3}}\|u_{V}\|_{L^{2}(\kappa_{\rho})}^{2} \leqslant-c\mathcal{I}_{vv}-c\|u_{V}\|_{L^{2}(\kappa_{\rho})}^{2}, \tag{24}\]
where \(c>0\) depending on \(\varepsilon_{0},\varepsilon_{1},c_{0}\).
By (12), \(f_{\infty}\) satisfies a logarithmic Sobolev inequality, see [13]. Thus, we have
\[\mathcal{H}\leqslant c\mathcal{I}. \tag{25}\]
We have the following three estimates on the time derivative of partial Fisher information functionals. Their proofs will be presented in the next subsection.
**Lemma 2.1**.: _We have_
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{vv}(h)\leqslant-2\mathcal{D}_{vv}-\lambda c_{0}\mathcal{I}_{vv}-2\mathcal{I}_{xv}+c\|u\|_{L^{2}(\kappa_{\rho})}^{2}, \tag{26}\]
_where \(c\) is a positive constant depending on \(c_{0},c_{1},\lambda,\Lambda\)._
**Lemma 2.2**.: _We have_
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{xv}\leqslant c\mathcal{I}_{vv}-\frac{1}{2}\mathcal{I}_{xx}+2\mathcal{D}_{vv}+\mathcal{D}_{xv}+c\|u\|_{L^{2}(\kappa_{\rho})}^{2}, \tag{27}\]
_where \(c\) is dependent on \(c_{0},c_{1},c_{2},\lambda,\Lambda\)._
**Lemma 2.3**.: _We have_
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{xx}(h)\leqslant c\mathcal{I}_{vv}-\mathcal{D}_{xv}+c\|u\|_{L^{2}(\kappa_{\rho})}^{2},\]
_where \(c\) is a constant depending on \(\lambda,\Lambda\) and the parameters in the assumption (i), (ii)._
Choose \(\varepsilon>0\) small enough so that, defining
\[\tilde{\mathcal{I}}=\mathcal{I}_{vv}+\varepsilon\mathcal{I}_{xv}+\frac{ \lambda c_{0}}{c}\mathcal{I}_{xx}, \tag{28}\]
we have \(\mathcal{I}\sim\tilde{\mathcal{I}}\). Combining the three lemmas above and assumption (iv), we have
\[\frac{\mathrm{d}}{\mathrm{d}t}\tilde{\mathcal{I}}\leqslant-\lambda c_{0}\mathcal{I}_{vv}-\frac{\varepsilon}{2}\mathcal{I}_{xx}+C\|u\|_{L^{2}(\kappa_{\rho})}^{2}\leqslant-\lambda c_{0}\mathcal{I}_{vv}-\frac{\varepsilon}{2}\mathcal{I}_{xx}+C\|u_{V}\|_{L^{2}(\kappa_{\rho})}^{2}. \tag{29}\]
From (24), (29) and (25) we can choose a constant \(\gamma\) such that
\[\frac{\mathrm{d}}{\mathrm{d}t}\big{(}\tilde{\mathcal{I}}+\gamma\mathcal{H} \big{)}\lesssim-\mathcal{I}\leqslant-\delta(\tilde{\mathcal{I}}+\gamma\mathcal{ H}). \tag{30}\]
Thus, by Gronwall's inequality we obtain
\[\tilde{\mathcal{I}}+\gamma\mathcal{H}\leqslant(\tilde{\mathcal{I}}_{0}+\gamma \mathcal{H}_{0})e^{-\delta t}\leqslant c\mathcal{I}_{0}e^{-\delta t}. \tag{31}\]
Then, by (16), \(\|f(t)-f_{\infty}\|_{L^{1}(\Omega\times\mathbb{R}^{n})}\leqslant Ce^{-\delta t/2}\), which concludes the theorem after renaming \(\delta/2\) as \(\delta\).
### Proof of three technical lemmas
In this subsection, we will give the proofs of three lemmas mentioned previously.
Proof of Lemma 2.1.: Let us rewrite \(\mathcal{I}_{vv}\) in the form
\[\mathcal{I}_{vv}=(\nabla_{v}h\cdot\nabla_{v}\bar{h})_{\mu}.\]
By chain rule and equation (15) we get
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{vv}= \,2(\nabla_{v}h_{t}\cdot\nabla_{v}\bar{h})_{\mu}-(|\nabla_{v}\bar{h }|^{2}h_{t})_{\mu}:=J_{A}+J_{B}+J_{u},\]
where
\[J_{A} =-2(s_{\rho}\nabla_{v}A^{*}Ah\cdot\nabla_{v}\bar{h})_{\mu}+(s_{ \rho}|\nabla_{v}\bar{h}|^{2}A^{*}Ah)_{\mu},\] \[J_{B} =-2(\nabla_{v}Bh\cdot\nabla_{v}\bar{h})_{\mu}+(|\nabla_{v}\bar{h }|^{2}Bh)_{\mu},\] \[J_{u} =2(s_{\rho}\nabla_{v}A^{*}([u]_{\rho}h)\cdot\nabla_{v}\bar{h})_{ \mu}-(s_{\rho}|\nabla_{v}\bar{h}|^{2}A^{*}([u]_{\rho}h))_{\mu}.\]
For notational convenience we will use the Einstein summation convention in the sequel.
We firstly consider the term \(J_{A}\). Using the identity
\[\partial_{v_{i}}(A^{*}Ah)=A^{*}Ah_{v_{i}}+\nabla V_{v_{i}}\cdot\nabla_{v}h,\]
\(J_{A}\) equals
\[-2(s_{\rho}A^{*}Ah_{v_{i}}\bar{h}_{v_{i}})_{\mu}-2(s_{\rho}(\nabla V_{v_{i}} \cdot\nabla_{v}h)\bar{h}_{v_{i}})_{\mu}+(s_{\rho}|\nabla_{v}\bar{h}|^{2}A^{*} Ah)_{\mu}=:J_{A}^{1}+J_{A}^{2}+J_{A}^{3}.\]
By (12) we have
\[J_{A}^{2}=-2(s_{\rho}h^{-1}(\nabla_{v}h)^{T}\nabla^{2}V\nabla_{v}h)_{\mu} \leqslant-2\lambda(s_{\rho}h^{-1}\nabla_{v}h\cdot\nabla_{v}h)_{\mu}.\]
Then the assumption (i) in Theorem 1.2 implies that
\[J_{A}^{2}\leqslant-2\lambda c_{0}\mathcal{I}_{vv}.\]
By switching \(A^{*}\) in \(J^{1}_{A},J^{3}_{A}\) we can write
\[J^{1}_{A}+J^{3}_{A}= -2(s_{\rho}Ah_{v_{i}}\cdot A\bar{h}_{v_{i}})_{\mu}+(s_{\rho}A(| \nabla_{v}\bar{h}|^{2})\cdot Ah)_{\mu}\] \[= -2(s_{\rho}hA\bar{h}_{v_{i}}\cdot A\bar{h}_{v_{i}})_{\mu}-2(s_{ \rho}\bar{h}_{v_{i}}Ah\cdot A\bar{h}_{v_{i}})_{\mu}+2(s_{\rho}\bar{h}_{v_{i}}A \bar{h}_{v_{i}}\cdot Ah)_{\mu}\] \[= -2(s_{\rho}hA\bar{h}_{v_{i}}\cdot A\bar{h}_{v_{i}})_{\mu}=-2 \mathcal{D}_{vv}.\]
Combining the above estimates we obtain
\[J_{A}\leqslant-2\mathcal{D}_{vv}-2\lambda c_{0}\mathcal{I}_{vv}. \tag{32}\]
For the term \(J_{B}\), plugging \(B=v\cdot\nabla_{x}\) into \(J_{B}\) we have
\[J_{B}=-2(\nabla_{x}h\cdot\nabla_{v}\bar{h})_{\mu}-2((v\cdot\nabla_{x}h_{v_{i} })\bar{h}_{v_{i}})_{\mu}+(|\nabla_{v}\bar{h}|^{2}v\cdot\nabla_{x}h)_{\mu}.\]
Using the identity \(\bar{h}_{v_{i}}=h_{v_{i}}h^{-1}\) and integration by parts, we get
\[2((v\cdot\nabla_{x}h_{v_{i}})\bar{h}_{v_{i}})_{\mu}=\big{(}v\cdot\nabla_{x}|h _{v_{i}}|^{2}h^{-1}\big{)}_{\mu}=\big{(}|h_{v_{i}}|^{2}h^{-2}v\cdot\nabla_{x}h \big{)}_{\mu}=(|\nabla_{v}\bar{h}|^{2}v\cdot\nabla_{x}h)_{\mu}.\]
Substituting this into \(J_{B}\) we obtain
\[J_{B}=-2\mathcal{I}_{xv}. \tag{33}\]
For the last term \(J_{u}\), we have
\[J_{u}= \,2(s_{\rho}\nabla_{v}A^{*}([u]_{\rho}h)\cdot\nabla_{v}\bar{h})_{ \mu}-(s_{\rho}|\nabla_{v}\bar{h}|^{2}A^{*}([u]_{\rho}h))_{\mu}\] \[= \,2(s_{\rho}\nabla_{v}(\nabla V\cdot[u]_{\rho}h-[u]_{\rho}\cdot \nabla_{v}h)\cdot\nabla_{v}\bar{h})_{\mu}-(s_{\rho}\nabla_{v}|\nabla_{v}\bar{ h}|^{2}\cdot[u]_{\rho}h)_{\mu}\] \[= \,2(s_{\rho}\nabla^{2}V([u]_{\rho}h)\cdot\nabla_{v}\bar{h})_{\mu} +2(s_{\rho}(\nabla V\cdot[u]_{\rho})\nabla_{v}h\cdot\nabla_{v}\bar{h})_{\mu}- 2(s_{\rho}\nabla_{v}^{2}h([u]_{\rho})\cdot\nabla_{v}\bar{h})_{\mu}\] \[-2(s_{\rho}\nabla_{v}^{2}\bar{h}(\nabla_{v}\bar{h})\cdot[u]_{\rho }h)_{\mu}\] \[= \,:J^{1}_{u}+J^{2}_{u}+J^{3}_{u}+J^{4}_{u}.\]
Plugging
\[\bar{h}_{v_{i}v_{j}}=h^{-1}h_{v_{i}v_{j}}-h^{-2}h_{v_{i}}h_{v_{j}}\]
into \(J^{4}_{u}\) we get
\[J^{4}_{u}= -2(s_{\rho}h^{-1}h_{v_{i}v_{j}}\bar{h}_{v_{j}}[u_{i}]_{\rho}h)_{ \mu}+2(s_{\rho}h^{-2}h_{v_{i}}h_{v_{j}}\bar{h}_{v_{j}}[u_{i}]_{\rho}h)_{\mu}\] \[= -2(s_{\rho}\nabla_{v}^{2}h([u]_{\rho})\cdot\nabla_{v}\bar{h})_{ \mu}+2(s_{\rho}|\nabla_{v}\bar{h}|^{2}\nabla_{v}h\cdot[u]_{\rho})_{\mu}\] \[= \,J^{3}_{u}+2(s_{\rho}|\nabla_{v}\bar{h}|^{2}\nabla_{v}h\cdot[u]_ {\rho})_{\mu}.\]
Therefore,
\[J^{2}_{u}+J^{3}_{u}+J^{4}_{u}= \,2(s_{\rho}(\nabla V\cdot[u]_{\rho}h)\nabla_{v}\bar{h}\cdot \nabla_{v}\bar{h})_{\mu}-2(s_{\rho}|\nabla_{v}\bar{h}|^{2}\nabla_{v}h\cdot[u]_{ \rho})_{\mu}+2J^{4}_{u}\] \[= \,2(s_{\rho}A^{*}([u]_{\rho}h)|\nabla_{v}\bar{h}|^{2})_{\mu}+2J^{4 }_{u}\] \[= \,2(s_{\rho}h[u]_{\rho}\cdot A(|\nabla_{v}\bar{h}|^{2}))_{\mu}+2J^ {4}_{u}\] \[= \,4(s_{\rho}h[u]_{\rho}\cdot\nabla_{v}^{2}\bar{h}(\nabla_{v}\bar{h }))_{\mu}+2J^{4}_{u}=0.\]
Thus,
\[J_{u}=2(s_{\rho}\nabla^{2}V([u]_{\rho}h)\cdot\nabla_{v}\bar{h})_{\mu}=2(s_{\rho}\nabla^{2}V([u]_{\rho})\cdot\nabla_{v}h)_{\mu}\leqslant 2\Lambda c_{1}\|[u]_{\rho}\|_{L^{2}(\kappa_{\rho})}\sqrt{\mathcal{I}_{vv}}\qquad\text{(by (i) and (11))}.\]
Hence, by the boundedness of the averaging \([\cdot]_{\rho}\) on \(L^{2}(\kappa_{\rho})\) and Young's inequality,
\[J_{u}\leqslant\lambda c_{0}\mathcal{I}_{vv}+c\|u\|_{L^{2}(\kappa_{\rho})}^{2}. \tag{34}\]
Combining (32), (33) and (34) yields the conclusion of the lemma.
Proof of Lemma 2.2.: Computing the derivative of \(\mathcal{I}_{xv}\) with respect to \(t\) we get
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{xv}(h)=(\nabla_{x}h_{t}\cdot\nabla_{v }\bar{h})_{\mu}+(\nabla_{x}\bar{h}\cdot\nabla_{v}h_{t})_{\mu}-(h_{t}\nabla_{v }\bar{h}\cdot\nabla_{x}\bar{h})_{\mu}=:J_{A}+J_{B}+J_{u},\]
where
\[J_{A}= -(\nabla_{x}(s_{\rho}A^{*}Ah)\cdot\nabla_{v}\bar{h})_{\mu}-( \nabla_{x}\bar{h}\cdot\nabla_{v}(s_{\rho}A^{*}Ah))_{\mu}+(s_{\rho}A^{*}Ah\nabla _{v}\bar{h}\cdot\nabla_{x}\bar{h})_{\mu}=:J_{A}^{1}+J_{A}^{2}+J_{A}^{3},\] \[J_{B}= -(\nabla_{x}(v\cdot\nabla_{x}h)\cdot\nabla_{v}\bar{h})_{\mu}-( \nabla_{x}\bar{h}\cdot\nabla_{v}(v\cdot\nabla_{x}h))_{\mu}+((v\cdot\nabla_{x} h)\nabla_{v}\bar{h}\cdot\nabla_{x}\bar{h})_{\mu}:=J_{B}^{1}+J_{B}^{2}+J_{B}^{3},\] \[J_{u}= (\nabla_{x}(s_{\rho}A^{*}([u]_{\rho}h))\cdot\nabla_{v}\bar{h})_{ \mu}+(\nabla_{x}\bar{h}\cdot\nabla_{v}(s_{\rho}A^{*}([u]_{\rho}h)))_{\mu}-(s_{ \rho}A^{*}([u]_{\rho}h)\nabla_{v}\bar{h}\cdot\nabla_{x}\bar{h})_{\mu}.\]
Let us firstly estimate \(J_{A}\). Switching \(A^{*}\) and using the identity
\[\nabla_{v}h_{x_{i}}=\bar{h}_{x_{i}}\nabla_{v}h+h\nabla_{v}\bar{h}_{x_{i}}\]
we have
\[J_{A}^{1} =-(s_{\rho}A^{*}Ah_{x_{i}}\bar{h}_{v_{i}})_{\mu}-((s_{\rho})_{x_{i }}A^{*}Ah\bar{h}_{v_{i}})_{\mu}=-(s_{\rho}\nabla_{v}h_{x_{i}}\cdot\nabla_{v} \bar{h}_{v_{i}})_{\mu}-((s_{\rho})_{x_{i}}\nabla_{v}h\cdot\nabla_{v}\bar{h}_{ v_{i}})_{\mu}\] \[=-(s_{\rho}h\nabla_{v}\bar{h}_{x_{i}}\cdot\nabla_{v}\bar{h}_{v_{ i}})_{\mu}-(s_{\rho}\bar{h}_{x_{i}}\nabla_{v}h\cdot\nabla_{v}\bar{h}_{v_{i}})_{ \mu}-\left(\frac{(s_{\rho})_{x_{i}}}{s_{\rho}^{1/2}}\frac{\nabla_{v}h}{h^{1/2} }\cdot s_{\rho}^{1/2}h^{1/2}\nabla_{v}\bar{h}_{v_{i}}\right)_{\mu}.\]
In view of assumption (i) in Theorem 1.2,
\[J_{A}^{1}\leqslant-(s_{\rho}h\nabla_{v}\bar{h}_{x_{i}}\cdot\nabla_{v}\bar{h}_ {v_{i}})_{\mu}-(s_{\rho}\bar{h}_{x_{i}}\nabla_{v}h\cdot\nabla_{v}\bar{h}_{v_{ i}})_{\mu}+c\sqrt{\mathcal{I}_{vv}\mathcal{D}_{vv}},\]
where \(c>0\) is a constant depending on \(c_{0},c_{2}\). Next let us consider \(J_{A}^{2}\). Since
\[\partial_{v_{i}}(A^{*}Ah)=A^{*}Ah_{v_{i}}+\nabla V_{v_{i}}\cdot\nabla_{v}h\text { and }\nabla_{v}h_{v_{i}}=h\nabla_{v}\bar{h}_{v_{i}}+\bar{h}_{v_{i}}\nabla_{v}h,\]
we have
\[J_{A}^{2}= -(s_{\rho}\bar{h}_{x_{i}}A^{*}Ah_{v_{i}})_{\mu}-(s_{\rho}\bar{h}_ {x_{i}}\nabla V_{v_{i}}\cdot\nabla_{v}h)_{\mu}\] \[= -(s_{\rho}\nabla_{v}\bar{h}_{x_{i}}\cdot\nabla_{v}h_{v_{i}})_{\mu }-(s_{\rho}\bar{h}_{x_{i}}\nabla V_{v_{i}}\cdot\nabla_{v}h)_{\mu}\] \[= -(s_{\rho}h\nabla_{v}\bar{h}_{x_{i}}\cdot\nabla_{v}\bar{h}_{v_{i} })_{\mu}-(s_{\rho}\bar{h}_{v_{i}}\nabla_{v}\bar{h}_{x_{i}}\cdot\nabla_{v}h)_{ \mu}-(s_{\rho}\nabla_{x}\bar{h}\cdot(\nabla^{2}V)(\nabla_{v}h))_{\mu}.\]
Then
\[J_{A}^{1}+J_{A}^{2} \leqslant-(s_{\rho}\nabla_{x}\bar{h}\cdot(\nabla^{2}V)(\nabla_{v}h))_{\mu}-2(s_{\rho}h\nabla_{v}\bar{h}_{x_{i}}\cdot\nabla_{v}\bar{h}_{v_{i}})_{\mu}-(s_{\rho}Ah\cdot A(\nabla_{v}\bar{h}\cdot\nabla_{x}\bar{h}))_{\mu}+c\sqrt{\mathcal{I}_{vv}\mathcal{D}_{vv}}\] \[\leqslant-(s_{\rho}\nabla_{x}\bar{h}\cdot(\nabla^{2}V)(\nabla_{v}h))_{\mu}+2\sqrt{\mathcal{D}_{vv}\mathcal{D}_{xv}}+c\sqrt{\mathcal{I}_{vv}\mathcal{D}_{vv}}-J_{A}^{3}\] \[\leqslant c_{1}\Lambda\sqrt{\mathcal{I}_{vv}\mathcal{I}_{xx}}+2\sqrt{\mathcal{D}_{vv}\mathcal{D}_{xv}}+c\sqrt{\mathcal{I}_{vv}\mathcal{D}_{vv}}-J_{A}^{3}.\]
Thus, combining all the terms of \(J_{A}\) and applying Young's inequality, we obtain
\[J_{A} \leqslant c_{1}\Lambda\sqrt{\mathcal{I}_{vv}\mathcal{I}_{xx}}+2\sqrt{\mathcal{D}_{vv}\mathcal{D}_{xv}}+c\sqrt{\mathcal{I}_{vv}\mathcal{D}_{vv}} \tag{35}\] \[\leqslant c\mathcal{I}_{vv}+\frac{1}{4}\mathcal{I}_{xx}+\frac{3}{2}\mathcal{D}_{vv}+\mathcal{D}_{xv}.\]
Now we consider \(J_{B}\). We have
\[J_{B}^{2}= -(\nabla_{x}\bar{h}\cdot\nabla_{x}h)_{\mu}-(\bar{h}_{x_{i}}v_{j}h_ {x_{j}v_{i}})_{\mu}\] \[= -\mathcal{I}_{xx}+(\bar{h}_{x_{i}x_{j}}v_{j}h_{v_{i}})_{\mu}\] \[= -\mathcal{I}_{xx}+(h_{x_{i}x_{j}}v_{j}\bar{h}_{v_{i}})_{\mu}-(\bar {h}_{x_{i}}\bar{h}_{x_{j}}v_{j}h_{v_{i}})_{\mu}=-\mathcal{I}_{xx}-J_{B}^{1}-J_{B}^{3}.\]
In the last row we used the identity
\[\bar{h}_{x_{i}x_{j}}=h^{-1}h_{x_{i}x_{j}}-\bar{h}_{x_{i}}\bar{h}_{x_{j}}.\]
It follows that
\[J_{B}=-\mathcal{I}_{xx}. \tag{36}\]
Lastly let us examine \(J_{u}\). We have
\[J_{u}= \,((s_{\rho})_{x_{i}}A^{*}([u]_{\rho}h)\bar{h}_{v_{i}})_{\mu}+(s_{\rho}A^{*}(([u]_{\rho})_{x_{i}}h)\bar{h}_{v_{i}})_{\mu}+(s_{\rho}A^{*}([u]_{\rho}h_{x_{i}})\bar{h}_{v_{i}})_{\mu}\] \[+(s_{\rho}\bar{h}_{x_{i}}A^{*}([u]_{\rho}h_{v_{i}}))_{\mu}+(s_{\rho}\nabla_{x}\bar{h}\cdot(\nabla^{2}V)([u]_{\rho}h))_{\mu}-(s_{\rho}h[u]_{\rho}\cdot\nabla_{v}(\nabla_{v}\bar{h}\cdot\nabla_{x}\bar{h}))_{\mu}\] \[= \,(h(s_{\rho}[u]_{\rho})_{x_{i}}\cdot\nabla_{v}\bar{h}_{v_{i}})_{\mu}+(s_{\rho}h[u]_{\rho}\bar{h}_{x_{i}}\cdot\nabla_{v}\bar{h}_{v_{i}})_{\mu}+(s_{\rho}h\nabla_{v}\bar{h}_{x_{i}}\cdot[u]_{\rho}\bar{h}_{v_{i}})_{\mu}\] \[+(s_{\rho}\nabla_{x}\bar{h}\cdot(\nabla^{2}V)([u]_{\rho}h))_{\mu}-(s_{\rho}h[u]_{\rho}\cdot\nabla_{v}(\nabla_{v}\bar{h}\cdot\nabla_{x}\bar{h}))_{\mu}\] \[=: J_{u}^{1}+J_{u}^{2}+J_{u}^{3}+J_{u}^{4}+J_{u}^{5}.\]
Since
\[J_{u}^{2}+J_{u}^{3}=(s_{\rho}h[u]_{\rho}\cdot\nabla_{v}(\nabla_{x}\bar{h}\cdot \nabla_{v}\bar{h}))_{\mu}=-J_{u}^{5},\]
we get
\[J_{u}=J_{u}^{1}+J_{u}^{4}.\]
By the assumption (ii) in Theorem 1.2,
\[J_{u}^{1}=(h(s_{\rho}[u]_{\rho})_{x_{i}}\cdot\nabla_{v}\bar{h}_{v_{i}})_{\mu} \leqslant c\|u\|_{L^{2}(\kappa_{\rho})}\sqrt{\mathcal{D}_{vv}}.\]
For \(J_{u}^{4}\) we use the assumption (i) and (11) to get
\[J_{u}^{4} =(s_{\rho}\nabla_{x}\bar{h}\cdot(\nabla^{2}V)([u]_{\rho}h))_{\mu}\] \[\leqslant c\|u\|_{L^{2}(\kappa_{\rho})}\sqrt{\mathcal{I}_{xx}},\]
where \(c\) is a constant depending on \(c_{1},\Lambda\). Hence, by Young's inequality we obtain
\[J_{u}\leqslant\frac{1}{4}\mathcal{I}_{xx}+\frac{1}{2}\mathcal{D}_{vv}+c\|u\|_{ L^{2}(\kappa_{\rho})}^{2}. \tag{37}\]
Combining the three estimates (35), (36) and (37) implies the conclusion of this lemma.
Proof of Lemma 2.3.: Computing the derivative of \(\mathcal{I}_{xx}(h)\) with respect to \(t\) we get
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{xx}(h)=2(\nabla_{x}h_{t}\cdot\nabla _{x}\bar{h})_{\mu}-(|\nabla_{x}\bar{h}|^{2}h_{t})_{\mu}=:J_{A}+J_{B}+J_{u},\]
where
\[J_{A}= \,-2(\nabla_{x}(s_{\rho}A^{*}Ah)\cdot\nabla_{x}\bar{h})_{\mu}+(s_ {\rho}|\nabla_{x}\bar{h}|^{2}A^{*}Ah)_{\mu},\] \[J_{B}= \,-2(\nabla_{x}(v\cdot\nabla_{x}h)\cdot\nabla_{x}\bar{h})_{\mu}+ (|\nabla_{x}\bar{h}|^{2}v\cdot\nabla_{x}h)_{\mu},\] \[J_{u}= 2(\nabla_{x}(s_{\rho}A^{*}([u]_{\rho}h))\cdot\nabla_{x}\bar{h}) _{\mu}-(s_{\rho}|\nabla_{x}\bar{h}|^{2}A^{*}([u]_{\rho}h))_{\mu}.\]
For \(J_{A}\) we have
\[J_{A}=-2((s_{\rho})_{x_{i}}Ah\cdot A\bar{h}_{x_{i}})_{\mu}-2(s_{\rho}Ah_{x_{i}} \cdot A\bar{h}_{x_{i}})_{\mu}+(s_{\rho}A|\nabla_{x}\bar{h}|^{2}\cdot Ah)_{\mu} =:J_{A}^{1}+J_{A}^{2}+J_{A}^{3}.\]
By the assumption (i) in Theorem 1.2,
\[J_{A}^{1}=-2\Big{(}\frac{(s_{\rho})_{x_{i}}}{s_{\rho}^{1/2}}\frac{\nabla_{v}h}{h^{1/2}}\cdot s_{\rho}^{1/2}h^{1/2}\nabla_{v}\bar{h}_{x_{i}}\Big{)}_{\mu}\leqslant c\sqrt{\mathcal{I}_{vv}\mathcal{D}_{xv}}.\]
Using the identity \(\nabla_{v}h_{x_{i}}=h\nabla_{v}\bar{h}_{x_{i}}+\bar{h}_{x_{i}}\nabla_{v}h\), we have
\[J_{A}^{2}=-2(s_{\rho}h\nabla_{v}\bar{h}_{x_{i}}\cdot\nabla_{v}\bar{h}_{x_{i}})_{ \mu}-2(s_{\rho}\bar{h}_{x_{i}}\nabla_{v}h\cdot\nabla_{v}\bar{h}_{x_{i}})_{\mu}= -2\mathcal{D}_{xv}-J_{A}^{3}.\]
Therefore,
\[J_{A}\leqslant c\sqrt{\mathcal{I}_{vv}\mathcal{D}_{xv}}-2\mathcal{D}_{xv}. \tag{38}\]
We have \(J_{B}=0\) because
\[-2(\nabla_{x}(v\cdot\nabla_{x}h)\cdot\nabla_{x}\bar{h})_{\mu}=-2((v\cdot\nabla_{x}h_{x_{i}})h_{x_{i}}h^{-1})_{\mu}=-\big{(}(v\cdot\nabla_{x}|\nabla_{x}h|^{2})h^{-1}\big{)}_{\mu}=-(|\nabla_{x}\bar{h}|^{2}v\cdot\nabla_{x}h)_{\mu}.\]
For \(J_{u}\), we have
\[J_{u}= \,2(h(s_{\rho}[u]_{\rho})_{x_{i}}\cdot\nabla_{v}\bar{h}_{x_{i}})_{ \mu}+2(s_{\rho}h\bar{h}_{x_{i}}[u]_{\rho}\cdot\nabla_{v}\bar{h}_{x_{i}})_{\mu}-(s_ {\rho}\nabla_{v}(|\nabla_{x}\bar{h}|^{2})\cdot[u]_{\rho}h)_{\mu}\] \[= \,2(h(s_{\rho}[u]_{\rho})_{x_{i}}\cdot\nabla_{v}\bar{h}_{x_{i}})_{\mu}\] \[\leqslant c\|u\|_{L^{2}(\kappa_{\rho})}\sqrt{\mathcal{D}_{xv}}\qquad \text{(by the assumption (ii) in Theorem 1.2).}\]
Combining all the estimates for \(J_{A},J_{B}\) and \(J_{u}\) we get
\[\frac{\mathrm{d}}{\mathrm{d}t}\mathcal{I}_{xx}(h)\leqslant c\sqrt{\mathcal{D}_{ xv}\mathcal{I}_{vv}}-2\mathcal{D}_{xv}+c\|u\|_{L^{2}(\kappa_{\rho})}\sqrt{\mathcal{D}_{ xv}}.\]
Then by Young's inequality, the lemma is derived.
|
2310.09219
|
"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in
LLM-Generated Reference Letters
|
Large Language Models (LLMs) have recently emerged as an effective tool to
assist individuals in writing various types of content, including professional
documents such as recommendation letters. Though bringing convenience, this
application also introduces unprecedented fairness concerns. Model-generated
reference letters might be directly used by users in professional scenarios. If
underlying biases exist in these model-constructed letters, using them without
scrutinization could lead to direct societal harms, such as sabotaging
application success rates for female applicants. In light of this pressing
issue, it is imminent and necessary to comprehensively study fairness issues
and associated harms in this real-world use case. In this paper, we critically
examine gender biases in LLM-generated reference letters. Drawing inspiration
from social science findings, we design evaluation methods to manifest biases
through 2 dimensions: (1) biases in language style and (2) biases in lexical
content. We further investigate the extent of bias propagation by analyzing the
hallucination bias of models, a term that we define to be bias exacerbation in
model-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs-
ChatGPT and Alpaca, we reveal significant gender biases in LLM-generated
recommendation letters. Our findings not only warn against using LLMs for this
application without scrutinization, but also illuminate the importance of
thoroughly studying hidden biases and harms in LLM-generated professional
documents.
|
Yixin Wan, George Pu, Jiao Sun, Aparna Garimella, Kai-Wei Chang, Nanyun Peng
|
2023-10-13T16:12:57Z
|
http://arxiv.org/abs/2310.09219v5
|
# "_Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters
###### Abstract
Large Language Models (LLMs) have recently emerged as an effective tool to assist individuals in writing various types of content, including professional documents such as recommendation letters. Though bringing convenience, this application also introduces unprecedented fairness concerns. Model-generated reference letters might be directly used by users in professional scenarios. If underlying biases exist in these model-constructed letters, using them without scrutinization could lead to direct societal harms, such as sabotaging application success rates for female applicants. In light of this pressing issue, it is imminent and necessary to comprehensively study fairness issues and associated harms in this real-world use case. In this paper, we critically examine gender biases in LLM-generated reference letters. Drawing inspiration from social science findings, we design evaluation methods to manifest biases through \(2\) dimensions: (1) _biases in language style_ and (2) _biases in lexical content_. We further investigate the extent of bias propagation by analyzing the _hallucination bias_ of models, a term that we define to be bias exacerbation in model-hallucinated contents. Through benchmarking evaluation on \(2\) popular LLMs- Chat-GPT and Alpaca, we reveal significant gender biases in LLM-generated recommendation letters. Our findings not only warn against using LLMs for this application without scrutinization, but also illuminate the importance of thoroughly studying hidden biases and harms in LLM-generated professional documents.
## 1 Introduction
LLMs have emerged as helpful tools to facilitate the generation of coherent long texts, enabling various use cases of document generation (Sallam, 2023; Osmanovic-Thunstrom et al., 2023; Stokel-Walker, 2023; Hallo-Carrasco et al., 2023). Recently, there has been a growing trend to use LLMs in the creation of professional documents, including recommendation letters. The use of ChatGPT for assisting reference letter writing has been a focal point of discussion on social media platforms1 and reports by major media outlets2.
Footnote 1: See, for example, the discussion on Reddit [https://shorturl.at/eegv6](https://shorturl.at/eegv6)
Footnote 2: For example, see the article published in the Atlantic [https://shorturl.at/f1mW3](https://shorturl.at/f1mW3).
However, the widespread use of automated writing techniques without careful scrutiny can entail considerable risks. Recent studies have shown that Natural Language Generation (NLG) models are gender biased (Sheng et al., 2019, 2020; Dinan et al., 2020; Sheng et al., 2021; Bender et al., 2021) and therefore pose a risk of harming minorities when used in sensitive applications (Sheng et al., 2021; Ovalle et al., 2023; Prates et al., 2018). Such biases might also infiltrate the application of automated reference letter generation and cause substantial societal harm, as research in social sciences (Madera et al., 2009; Khan et al., 2021) unveiled how biases in professional documents lead to diminished career opportunities for gender minority groups. We posit that _inherent gender biases in LLMs manifest in the downstream task of reference letter generation_. As an example, Table 1 demonstrates reference letters generated by ChatGPT for candidates with popular male and female names. The model manifests the stereotype of men being agentic (e.g., natural leader) and women being communal (e.g., well-liked member).
In this paper, we systematically investigate gender biases present in reference letters generated by LLMs under two scenarios: (1) Context-Less Generation (CLG), where the model is prompted to produce a letter based solely on simple descriptions of the candidate, and (2) Context-Based Generation (CBG), in which the model is also given the candidate's personal information and experience in the prompt. CLG reveals inherent biases towards simple gender-associated descriptors, whereas CBG simulates how users typically utilize LLMs to facilitate letter writing. Inspired by social science literature, we investigate \(3\) aspects of biases in LLM-generated reference letters: (1) _bias in lexical content_, (2) _bias in language style_, and (3) _hallucination bias_. We construct the first comprehensive testbed with metrics and prompt datasets for identifying and quantifying biases in the generated letters. Furthermore, we use the proposed framework to evaluate and unveil significant gender biases in recommendation letters generated by two recently developed LLMs: ChatGPT (OpenAI, 2022) and Alpaca (Taori et al., 2023).
Our findings emphasize a haunting reality: the current state of LLMs is far from being mature when it comes to generating professional documents. We hope to highlight the risk of potential harm when LLMs are employed in such real-world applications: even with the recent transformative technological advancements, current LLMs are still marred by gender biases that can perpetuate societal inequalities. This study also underscores the urgent need for future research to devise techniques that can effectively address and eliminate fairness concerns associated with LLMs.3
Footnote 3: Code and data are available at: [https://github.com/uclanlp/biases-llm-reference-letters](https://github.com/uclanlp/biases-llm-reference-letters)
## 2 Related Work
### Social Biases in NLP
Social biases in NLP models have been an important field of research. Prior works have defined two major types of harms and biases in NLP models: allocational harms and representational harms (Blodgett et al., 2020; Barocas et al., 2017; Crawford, 2017). Researchers have studied methods to evaluate and mitigate the two types of biases in Natural Language Understanding (NLU) (Bolukbasi et al., 2016; Dev et al., 2022; Dixon et al., 2018; Bordia and Bowman, 2019; Zhao et al., 2017, 2018; Sun and Peng, 2021) and Natural Language Generation (NLG) tasks (Sheng et al., 2019, 2021; Dinan et al., 2020; Sheng et al., 2021).
Among previous works, Sun and Peng (2021) proposed to use the Odds Ratio (OR) (Szumilas, 2010) as a metric to measure gender biases in items with large frequency differences or highest saliency for females and males. Sheng et al. (2019) measured biases in NLG model generations conditioned on certain contexts of interest. Dhamala et al. (2021) extended the pipeline to use real prompts extracted from Wikipedia. Several approaches (Sheng et al., 2020; Gupta et al., 2022; Liu et al., 2021; Cao et al., 2022) studied how to control NLG models for reducing biases. However, it is unclear if they can be applied in closed API-based LLMs, such as ChatGPT.
### Biases in Professional Documents
Recent studies in NLP fairness (Wang et al., 2022; Ovalle et al., 2023) point out that some AI fairness works fail to discuss the source of the biases investigated, and suggest considering both the social and technical aspects of AI systems. Inspired by this, we ground the bias definitions and metrics in our work on related social science research. Previous works in social science (Cugno, 2020; Madera et al., 2009; Khan et al., 2021; Liu et al., 2009; Madera et al., 2019) have revealed the existence and dangers
\begin{table}
\begin{tabular}{p{113.8pt} p{227.6pt}} \hline \hline
**Prompt** & **Paragraph Sampled from Generated Reference Letter** \\ \hline “Generate a reference letter for **Kelly**, a 22 year old **female** student at UCLA” & “Beyond her exceptional academic performance, Kelly possesses excellent **interpersonal skills** that make her a pleasure to work with. She is an active and engaged **participant** in group projects, demonstrating exceptional **teamwork** and collaboration skills. Her ability to effectively **communicate** her ideas, both orally and in written form, is commendable. Kelly’s strong **interpersonal skills** have made her a respected and **well-liked member** of our academic community.” \\ \hline “Generate a reference letter for **Joseph**, a 22 year old **male** student at UCLA” & “Joseph’s commitment to personal growth extends beyond the classroom. He actively engages in extracurricular activities, such as volunteering for community service projects and participating in **engineering-related** clubs and organizations. These experiences have allowed Joseph to cultivate his **leadership skills**, enhance his ability to work in diverse teams, and develop a **well-rounded personality**. His enthusiasm and dedication have had a positive impact on those around him, making him a **natural leader** and **role model** for his peers.” \\ \hline \hline \end{tabular}
\end{table}
Table 1: We prompt ChatGPT to generate a recommendation letter for Kelly, an applicant with a popular female name, and Joseph, with a popular male name. We sample a particular paragraph describing Kelly and Joseph’s traits. We observe that Kelly is described as a warm and likable person (e.g. well-liked member) whereas Joseph is portrayed with more leadership and agentic mentions (e.g. a natural leader and a role model).
of gender biases in the language styles of professional documents. Such biases might lead to harmful gender differences in application success rate (Madera et al., 2009; Khan et al., 2021). For instance, Madera et al. (2009) observed that biases in gendered language in letters of recommendation result in a higher residency match rate for male applicants. These findings further emphasize the need to study gender biases in LLM-generated professional documents. We categorize major findings in previous literature into \(3\) types of gender biases in language styles of professional documents: _biases in language professionalism_, _biases in language excellency_, and _biases in language agency_.
**Bias in language professionalism** states that male candidates are considered more "professional" than females. For instance, Trix and Psenka (2003) revealed the gender schema where women are seen as less capable and less professional than men. Khan et al. (2021) also observed more mentions of personal life in letters for female candidates. Gender biases in this dimension will lead to biased information on candidates' professionalism, therefore resulting in unfair hiring evaluation.
**Bias in language excellency** states that male candidates are described using more "excellent" language than female candidates in professional documents (Trix and Psenka, 2003; Madera et al., 2009, 2019). For instance, Dutt et al. (2016) points out that female applicants are only half as likely than male applicants to receive "excellent" letters. Naturally, gender biases in the level of excellency of language styles will lead to a biased perception of a candidate's abilities and achievements, creating inequality in hiring evaluation.
**Bias in language agency** states that women are more likely to be described using _communal_ adjectives in professional documents, such as delightful and compassionate, while men are more likely to be described using _agentic_ adjectives, such as leader or exceptional (Madera et al., 2009, 2019; Khan et al., 2021). Agentic characteristics include speaking assertively, influencing others, and initiating tasks. Communal characteristics include concern for the welfare of others, helping others, accepting others' direction, and maintaining relationships (Madera et al., 2009). Since agentic language is generally perceived as more hirable than a communal language style (Madera et al., 2009, 2019; Khan et al., 2021), _bias in language agency_ might further lead to biases in hiring decisions.
### Hallucination Detection
Understanding and detecting hallucinations in LLMs have become an important problem (Mundler et al., 2023; Ji et al., 2023; Azamfirei et al., 2023). Previous works on hallucination detection proposed three main types of approaches: Information Extraction-based, Question Answering (QA)-based and Natural Language Inference (NLI)-based approaches. Our study utilizes the NLI-based approach (Kryscinski et al., 2020; Maynez et al., 2020; Laban et al., 2022), which uses the original input as context to determine the entailment with the model-generated text. To do this, prior works have proposed document-level NLI and sentence-level NLI approaches. Document-level NLI (Maynez et al., 2020; Laban et al., 2022) investigates entailment between full input and generation text. Sentence-level NLI (Laban et al., 2022) chunks original and generated texts into sentences and determines entailment between each pair. However, little is known about whether models will propagate or amplify biases in their hallucinated outputs.
## 3 Methods
### Task Formulation
We consider two different settings for reference letter generation tasks. (1) _Context-Less Generation (CLG)_: prompting the model to generate a letter based on minimal information, and (2) _Context-Based Generation (CBG)_: guiding the model to generate a letter by providing contextual information, such as a personal biography. The CLG setting better isolates biases influenced by input information and acts as a lens to examine underlying biases in models. The CBG setting aligns more closely with the application scenarios: it simulates a user scenario where the user would write a short description of themselves and ask the model to generate a recommendation letter accordingly.
### Bias Definitions
We categorize gender biases in LLM-generated professional documents into two types: Biases in Lexical Content, and Biases in Language Style.
#### 3.2.1 Biases in Lexical Content
Biases in lexical content can be manifested by harmful differences in the most salient components of LLM-generated professional documents. In this work, we measure biases in lexical content through evaluating _biases in word choices_. We define biases in word choices to be the salient frequency differences between wordings in male and female documents. We further dissect our analysis into _biases in nouns_ and _biases in adjectives_.
**Odds Ratio** Inspired by previous work (Sun and Peng, 2021), we propose to use the Odds Ratio (OR) (Szumilas, 2010) for qualitative analysis of biases in word choices. Take the analysis of adjectives as an example. Let \(a^{m}=\{a_{1}^{m},a_{2}^{m},...a_{M}^{m}\}\) and \(a^{f}=\{a_{1}^{f},a_{2}^{f},...a_{F}^{f}\}\) be the sets of all adjectives in male documents and female documents, respectively. For an adjective \(a_{n}\), we first count its occurrences in male documents \(\mathcal{E}^{m}(a_{n})\) and in female documents \(\mathcal{E}^{f}(a_{n})\). Then, we can calculate the OR of adjective \(a_{n}\) to be its odds of existing in the male adjectives list divided by its odds of existing in the female adjectives list:
\[\frac{\mathcal{E}^{m}(a_{n})}{\sum_{a_{i}^{m}\neq a_{n},\,i\in\{1,...,M\}}\mathcal{E}^{m}(a_{i}^{m})}\Bigg{/}\frac{\mathcal{E}^{f}(a_{n})}{\sum_{a_{i}^{f}\neq a_{n},\,i\in\{1,...,F\}}\mathcal{E}^{f}(a_{i}^{f})}.\]
Larger OR means that an adjective is more likely to exist, or more _salient_, in male letters than female letters. We then sort adjectives by their OR in descending order, and extract the top and last adjectives, which are the most salient adjectives for males and for females respectively.
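For concreteness, this computation can be sketched in a few lines of Python; the function and the example counts below are illustrative placeholders rather than part of our released code.

```python
# Minimal sketch of the odds-ratio (OR) saliency ranking; counts are placeholders.
from collections import Counter

def odds_ratio(word, male_counts, female_counts):
    """Odds of `word` among male-letter adjectives over its odds among female-letter ones."""
    m, f = male_counts[word], female_counts[word]
    m_rest = sum(male_counts.values()) - m
    f_rest = sum(female_counts.values()) - f
    return (m / m_rest) / (f / f_rest)

male_adjs = Counter({"respectful": 30, "humble": 22, "warm": 5, "emotional": 2})
female_adjs = Counter({"respectful": 8, "humble": 9, "warm": 28, "emotional": 18})

# Restrict to adjectives observed for both genders so the ratio stays finite.
shared = set(male_adjs) & set(female_adjs)
ranked = sorted(shared, key=lambda w: odds_ratio(w, male_adjs, female_adjs), reverse=True)
print(ranked)  # head: most salient in male letters; tail: most salient in female letters
```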
#### 3.2.2 Biases in Language Style
We define biases in language style as significant stylistic differences between LLM-generated documents for different gender groups. For instance, we can say that bias in language style exists if the language in model-generated documents for males is significantly more positive or more formal than that for females. Given two sets of model-generated documents for males \(D_{m}=\{d_{m,1},d_{m,2},...\}\) and females \(D_{f}=\{d_{f,1},d_{f,2},...\}\), we can measure the extent to which a given text conforms to a certain language style \(l\) by a scoring function \(S_{l}(\cdot)\). Then, we can measure biases in language style through t-testing on language style differences between \(D_{m}\) and \(D_{f}\). Biases in language style \(b_{lang}\) can therefore be mathematically formulated as:
\[b_{lang}=\frac{\mu(S_{l}(d_{m}))-\mu(S_{l}(d_{f}))}{\sqrt{\frac{std(S_{l}(d_{ m}))^{2}}{|D_{m}|}+\frac{std(S_{l}(d_{f}))^{2}}{|D_{f}|}}}, \tag{1}\]
where \(\mu(\cdot)\) and \(std(\cdot)\) represent the sample mean and standard deviation. Since \(b_{lang}\) is a t-test statistic, bias is indicated when its associated \(p\)-value falls below the significance threshold. Following the bias aspects in social science that are discussed in Section 2.2, we establish \(3\) aspects to measure biases in language style: _(1) Language Formality, (2) Language Positivity, and (3) Language Agency_.
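Concretely, \(b_{lang}\) is a Welch (unpooled-variance) two-sample t-statistic over per-document style scores; a minimal sketch, with randomly generated placeholder scores standing in for real classifier outputs:

```python
# Minimal sketch of the language-style bias test of Eq. (1); scores are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
male_scores = rng.uniform(0.70, 1.00, size=200)    # e.g. fraction of formal sentences per letter
female_scores = rng.uniform(0.60, 0.95, size=200)

# Welch's t-test matches the unpooled-variance form of b_lang above.
t_stat, p_value = stats.ttest_ind(male_scores, female_scores, equal_var=False)
print(f"b_lang = {t_stat:.3f}, p = {p_value:.2e}")
```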
**Biases in Language Formality** Our method uses language formality as a proxy to reflect the level of language professionalism. We define biases in _Language Formality_ to be statistically significant differences in the percentage of formal sentences in male and female-generated documents. Specifically, we conduct statistical t-tests on the percentage of formal sentences in documents generated for each gender and report the significance of the difference in formality levels.
**Biases in Language Positivity** Our method uses positive sentiment in language as a proxy to reflect the level of excellency in language. We define biases in _Language Positivity_ to be statistically significant differences in the percentage of sentences with positive sentiments in generated documents for males and females. Similar to analysis for biases in language formality, we use statistical t-testing to construct the quantitative metric.
**Biases in Language Agency** We propose and study _Language Agency_ as a novel metric for bias evaluation in LLM-generated professional documents. Although widely observed and analyzed in social science literature Cugno (2020); Madera et al. (2009); Khan et al. (2021), biases in language agency have not been defined, discussed or analyzed in the NLP community. We define biases in language agency to be statistically significant differences in the percentage of agentic sentences in generated documents for males and females, and again report the significance of biases using t-testing.
Figure 1: Visualization of the proposed Context-Sentence Hallucination Detection Pipeline.
### Hallucination Bias
In addition to directly analyzing gender biases in model-generated reference letters, we propose to separately study biases in model-hallucinated information for the CBG task. Specifically, we want to find out if LLMs tend to hallucinate biased information in their generations, beyond the factual information provided in the original context. We define _Hallucination Bias_ to be the harmful propagation or amplification of bias levels in model hallucinations.
**Hallucination Detection** Inspired by previous works (Maynez et al., 2020; Laban et al., 2022), we propose and utilize _Context-Sentence NLI_ as a framework for hallucination detection. The intuition behind this method is that the source knowledge reference should entail the entirety of any generated information in faithful and hallucination-free generations. Specifically, given a context \(C\) and a corresponding model-generated document \(D\), we first split \(D\) into sentences \(\{S_{1},S_{2},\dots,S_{n}\}\) as hypotheses. We use the entirety of \(C\) as the premise and establish premise-hypothesis pairs \(\{(C,S_{1}),(C,S_{2}),\dots,(C,S_{n})\}\). Then, we use an NLI model to determine the entailment between each premise-hypothesis pair. Generated sentences in non-entailment pairs are considered hallucinated information. The detected hallucinated information is then used for hallucination bias evaluation. A visualization of the hallucination detection pipeline is demonstrated in Figure 1.
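A minimal sketch of this pipeline is shown below; the NLI checkpoint, the RoBERTa-style `</s></s>` premise-hypothesis separator, and the label names are illustrative assumptions rather than our exact configuration.

```python
# Minimal sketch of Context-Sentence NLI hallucination detection.
# Assumptions: the roberta-large-mnli checkpoint, its uppercase labels, and
# the </s></s> separator used to encode (premise, hypothesis) pairs.
import nltk
from transformers import pipeline

nltk.download("punkt", quiet=True)  # newer NLTK may also need "punkt_tab"
nli = pipeline("text-classification", model="roberta-large-mnli")

def hallucinated_sentences(context, letter):
    """Return generated sentences NOT entailed by the biography context."""
    flagged = []
    for sent in nltk.sent_tokenize(letter):
        # premise = full context C, hypothesis = generated sentence S_i
        pred = nli(f"{context}</s></s>{sent}", truncation=True)[0]
        if pred["label"] != "ENTAILMENT":  # neutral/contradiction => hallucination
            flagged.append(sent)
    return flagged
```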
**Hallucination Bias Evaluation** In order to measure gender bias propagation and amplification in model hallucinations, we utilize the same \(3\) quantitative metrics as in the evaluation of Biases in Language Style: Language Formality, Language Positivity, and Language Agency. Since our goal is to investigate if information in model hallucinations demonstrates the same level or a higher level of gender biases, we conduct statistical t-testing to reveal significant harmful differences in language styles between only the hallucinated content and the full generated document. Taking language formality as an example, we conduct a t-test on the percentage of formal sentences in the detected hallucinated contents and the full generated document, respectively. For male documents, _bias propagation_ exists if the hallucinated information does not demonstrate significant differences in levels of formality, positivity, or agency. _Bias amplification_ exists if the hallucinated information demonstrates significantly higher levels of formality, positivity, or agency than the full document. Similarly, for female documents, _bias propagation_ exists if hallucination is not significantly different in levels of formality, positivity, or agency. _Bias amplification_ exists if hallucinated information is significantly lower in its levels of formality, positivity, or agency than the full document.
## 4 Experiments
We conduct bias evaluation experiments on two tasks: _Context-Less Generation_ and _Context-Based Generation_. In this section, we first briefly introduce the setup of our experiments. Then, we present an in-depth analysis of the method and results for the evaluation on CLG and CBG tasks, respectively. Since CBG's formulation is closer to real-world use cases of reference letter generation, we place our research focus on CBG task, while conducting a preliminary exploration on CLG biases.
### Experiment Setup
**Model Choices** Since experiments on CLG act as a preliminary exploration, we only use ChatGPT as the model for evaluation. To choose the best models for experiments on the CBG task, we investigate the generation qualities of four LLMs: ChatGPT (OpenAI, 2022), Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), and StableLM (AI, 2023). While ChatGPT can always produce reasonable reference letter generations, other LLMs sometimes fail to do so, outputting unrelated content. In order to only evaluate valid reference letter generations, we define and calculate the _generation success rate_ of LLMs using criteria-based filtering. Details on generation success rate calculation and behavior analysis can be found in Appendix B. After evaluating LLMs' generation success rates on the task, we choose to conduct further experiments using only ChatGPT and Alpaca for letter generations.
### Context-Less Generation
Analysis on CLG evaluates biases in model generations when given minimal context information, and acts as a lens to interpret underlying biases in models' learned distribution.
#### 4.2.1 Generation
Prompting (Brown et al., 2020; Sun and Lai, 2020) steers pre-trained language models with task-specific instructions to generate task outputs without task fine-tuning. In our experiments, we design simple descriptor-based prompts for CLG analysis. We have attached the full list of descriptors in Appendix C.1, which shows the three axes (name/gender, age, and occupation) and corresponding specific descriptors (e.g. Joseph, 20, student) that we iterate through to query model generations. We then formulate the prompt by filling descriptors of each axis in a prompt template, which we have attached in Appendix C.2. Using these descriptors, we generated a total of \(120\) CLG-based reference letters. Hyperparameter settings for generation can be found in Appendix A.
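For illustration, the prompt-construction loop can be as simple as the following sketch; the descriptor values are stand-ins for the full lists in Appendix C.1, and the template paraphrases the one in Appendix C.2.

```python
# Minimal sketch of descriptor-based CLG prompt construction; values are stand-ins.
from itertools import product

names = [("Kelly", "female"), ("Joseph", "male")]  # name/gender axis
ages = [20, 22, 35]                                # age axis
occupations = ["student", "chef", "artist"]        # occupation axis

TEMPLATE = "Generate a reference letter for {name}, a {age} year old {gender} {occupation}."

prompts = [
    TEMPLATE.format(name=n, gender=g, age=a, occupation=o)
    for (n, g), a, o in product(names, ages, occupations)
]
print(len(prompts), prompts[0])
```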
#### 4.2.2 Evaluation: Biases in Lexical Content
Since only \(120\) letters were generated for the preliminary CLG analysis, running statistical analysis on biases in lexical content or word choices might lack significance, as we calculate OR for one word at a time. To mitigate this issue, we calculate OR for words belonging to gender-stereotypical traits, instead of for single words. Specifically, we implement the traits as \(9\) lexicon categories: Ability, Standout, Leadership, Masculine, Feminine, Agentic, Communal, Professional, and Personal. Full lists of the lexicon categories can be found in Appendix F.3. An OR score that is greater than \(1\) indicates higher odds for the trait to appear in generated letters for males, whereas an OR score that is below \(1\) indicates the opposite.
#### 4.2.3 Result
Table 2 shows experiment results for biases in lexical content analysis on the CLG task, which reveal significant and harmful associations between gender and gender-stereotypical traits. Most male-stereotypical traits -- Ability, Standout, Leadership, Masculine, and Agentic -- have higher odds of appearing in generated letters for males. Female-stereotypical traits -- Feminine, Communal, and Personal -- also demonstrate the same trend to have higher odds of appearing in female letters. Evaluation results on CLG unveil significant underlying gender biases in ChatGPT, driving the model to generate reference letters with harmful gender-stereotypical traits.
### Context-Based Generation
Analysis on CBG evaluates biases in model generations when provided with certain context information. For instance, a user can input personal information such as a biography and prompt the model to generate a full letter.
#### 4.3.1 Data Preprocessing
We utilize personal biographies as context information for the CBG task. Specifically, we further preprocess and use WikiBias (Sun and Peng, 2021), a personal biography dataset with scraped demographic and biographic information from Wikipedia. Our data augmentation pipeline aims at producing an anonymized and gender-balanced biography dataset as context information for reference letter generation, to prevent pre-existing biases. Details on preprocessing implementations can be found in Appendix F.1. We denote the biography dataset after preprocessing as _WikiBias-Aug_, statistics of which can be found in Appendix D.
#### 4.3.2 Generation
**Prompt Design** Similar to the CLG experiments, we use prompting to obtain LLM-generated professional documents. Different from CLG, CBG provides the model with more context information in the form of personal biographies in the input. Specifically, we use biographies in the preprocessed _WikiBias-Aug_ dataset as contextual information. Templates used to prompt different LLMs are attached in Appendix C.3. Generation hyperparameter settings can be found in Appendix A.
**Generating Reference Letters** We verbalize biographies in the _WikiBias-Aug_ dataset with the designed prompt templates and query LLMs with the combined information. Upon filtering out unsuccessful generations with the criterion defined in Section 4.1, we get \(6,028\) successful generations for ChatGPT and \(4,228\) for Alpaca.
#### 4.3.3 Evaluation: Biases in Lexical Content
Given our aim to investigate biases in nouns and adjectives as lexical content, we first extract words
\begin{table}
\begin{tabular}{l l} \hline \hline
**Trait Dimension** & **CLG Saliency** \\ \hline
**Ability** & **1.08** \\
**Standout** & **1.06** \\
**Leadership** & **1.07** \\
**Masculine** & **1.25** \\
**Feminine** & _0.85_ \\
**Agentic** & **1.18** \\
**Communal** & _0.91_ \\
**Professional** & \(1.00\) \\
**Personal** & _0.84_ \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results on Biases in Lexical Content for CLG. Bolded and Italic numbers indicate traits with higher odds of appearing in male and female letters, respectively.
of the two lexical categories in professional documents. To do this, we use the Spacy Python library [10] to match and extract all nouns and adjectives in the generated documents for males and females. After collecting words in documents, we create a noun dictionary and an adjective dictionary for each gender to further apply the odds ratio analysis.
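A minimal sketch of this extraction step is given below (it assumes the small English spaCy model, installable via `python -m spacy download en_core_web_sm`).

```python
# Minimal sketch of noun/adjective dictionary construction with spaCy.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

def extract_lexicons(documents):
    """Build noun and adjective occurrence dictionaries for a set of letters."""
    nouns, adjectives = Counter(), Counter()
    for doc in nlp.pipe(documents):
        for tok in doc:
            if tok.pos_ == "NOUN":
                nouns[tok.lemma_.lower()] += 1
            elif tok.pos_ == "ADJ":
                adjectives[tok.lemma_.lower()] += 1
    return nouns, adjectives
```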
#### 4.3.4 Evaluation: Biases in Language Style
In accordance with the definitions of the three types of gender biases in the language style of LLM-generated documents in Section 3.2.2, we implement three corresponding metrics for evaluation.
**Biases in Language Formality** For evaluation of biases in language formality, we first classify the formality of each sentence in generated letters, and calculate the percentage of formal sentences in each generated document. To do so, we apply an off-the-shelf language formality classifier from the Transformers Library that is fine-tuned on Grammarly's Yahoo Answers Formality Corpus (GYAFC) [18]. We then conduct statistical t-tests on formality percentages in male and female documents to report significance levels.
**Biases in Language Positivity** Similarly, for evaluation of biases in language positivity, we calculate and conduct t-tests on the percentage of positive sentences in each generated document for males and females. To do so, we apply an off-the-shelf language sentiment analysis classifier from the Transformers Library that was fine-tuned on the SST-2 dataset [2].
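A minimal sketch of the per-letter scoring that feeds these t-tests; the formality checkpoint name and its label strings are assumptions (any GYAFC-style formality classifier would serve), while the sentiment pipeline defaults to an SST-2 fine-tuned model.

```python
# Minimal sketch of per-document style scoring; checkpoint/label names are assumptions.
import nltk
from transformers import pipeline

nltk.download("punkt", quiet=True)
formality = pipeline("text-classification", model="s-nlp/roberta-base-formality-ranker")
sentiment = pipeline("sentiment-analysis")  # defaults to an SST-2 fine-tuned model

def fraction_with_label(letter, clf, label):
    sents = nltk.sent_tokenize(letter)
    hits = sum(1 for s in sents if clf(s)[0]["label"] == label)
    return hits / max(len(sents), 1)

# Per-letter inputs to the gender t-tests, e.g.:
# fraction_with_label(letter, formality, "formal")
# fraction_with_label(letter, sentiment, "POSITIVE")
```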
**Language Agency Classifier** Along similar lines, for evaluation of biases in language agency, we conduct t-tests on the percentage of agentic sentences in each generated document for males and females. Implementation-wise, since language agency is a novel concept in NLP research, no previous study has explored means to classify agentic and communal language styles in texts. We use ChatGPT to synthesize a language agency classification corpus and use it to fine-tune a transformer-based language agency classification model. Details of the dataset synthesis and classifier training process can be found in Appendix F.2.
#### 4.3.5 Result
**Biases in Lexical Content** Table 3 shows results for biases in lexical content on ChatGPT and Alpaca. Specifically, we show the top \(10\) salient adjectives and nouns for each gender. We first observe that both ChatGPT and Alpaca tend to use gender-stereotypical words in the generated letters (e.g. "respectful" for males and "warm" for females). To produce more interpretable results, we run WEAT score analysis with two sets of gender-stereotypical traits: i) male and female popular names (WEAT (MF)) and ii) career and family-related words (WEAT (CF)), full word lists of which can be found in Appendix F.4. WEAT takes two lists of words (one for male and one for female) and verifies whether they have a smaller embedding distance with female-stereotypical traits or
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Model** & **Aspect** & **Male** & **Female** & **WEAT(MF)** & **WEAT(CF)** \\ \hline \multirow{2}{*}{**ChatGPT**} & **Nouns** & man, father, ages, actor, thinking, colleague, **faint**, expert, adaptation, integrity & actress, mother, perform, beauty, trailblazer, force, woman, adaptability, **delight**, icon & 0.393 & 0.901 \\ \cline{2-6} & **Adj** & respectful, broad, humble, past, generous, charming, proud, reputable, authentic, kind & **warm, emotional**, indelible, unnotech, weekly, stunning, multi, environmental, temporary, amazing & 0.493 & 0.535 \\ \hline \multirow{2}{*}{**Alpaca**} & **Nouns** & actor, listeners, fellowship, man, entertainment, needs, collection, **thinker**, **knack**, **master** & actress, **grace**, consummate, chops, none, beauty, game, **consideration**, future, up & 0.579 & 0.419 \\ \cline{2-6} & **Adj** & classic, motivated, reliable, non, punctual, biggest, **political**, orange, **prolific**, dependable & impeccable, beautiful, inspiring, illustrious, organizational, prepared, responsible, highest, ready, remarkable & 1.009 & 0.419 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Qualitative evaluation results on ChatGPT and Alpaca for biases in lexical content. Red: agentic words, Orange: professional words, Brown: standout words, Purple: feminine words, Blue: communal words, Pink: personal words, Gray: agentic words. WEAT(MF) and WEAT(CF) indicate WEAT scores with Male/Female Popular Names and Career/Family Words, respectively.
male-stereotypical traits. A positive WEAT score indicates that the female words correlate with female-stereotypical traits (and the male words with male-stereotypical traits); a negative WEAT score indicates that the female words are more correlated with male-stereotypical traits (and the male words with female-stereotypical traits). To target words that potentially demonstrate gender stereotypes, we identify and highlight words that could be categorized within the nine lexicon categories in Table 2, and run the WEAT test on these identified words. The WEAT score results reveal that the most salient words in male and female documents are significantly associated with gender-stereotypical lexicons.
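As a reference for how the WEAT scores above can be computed, here is a minimal numpy sketch of the WEAT effect size following Caliskan et al. (2017). The embedding function `embed` is a hypothetical stand-in for any pretrained word-embedding lookup, since the paper does not specify the embedding model in this section.

```python
# Sketch: WEAT effect size between target word lists X, Y and attribute
# word lists A, B (e.g. salient female/male words vs. family/career words).
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, embed):
    # s(w, A, B): mean similarity with attribute set A minus attribute set B.
    return (np.mean([cosine(embed(w), embed(a)) for a in A])
            - np.mean([cosine(embed(w), embed(b)) for b in B]))

def weat_effect_size(X, Y, A, B, embed):
    # Positive: X-words lean toward A-attributes; negative: toward B-attributes.
    s_x = [association(x, A, B, embed) for x in X]
    s_y = [association(y, A, B, embed) for y in Y]
    return (np.mean(s_x) - np.mean(s_y)) / np.std(s_x + s_y)
```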
**Biases in Language Style** Table 4 shows results for biases in language style on ChatGPT and Alpaca. T-test results reveal gender biases in the language styles of documents generated by both models, showing that male documents score significantly higher than female documents in all three aspects: language formality, positivity, and agency. Interestingly, our experimental results align well with social science findings on biases in language professionalism, language excellency, and language agency in human-written reference letters.
To unravel biases in model-generated letters in a more intuitive way, we manually select a few snippets from ChatGPT's generations that showcase biases in language agency. Each pair of grouped texts in Table 5 is sampled from the 2 generated letters for male and female candidates with the same original biography information. After preprocessing by gender swapping and name swapping, the original biography was transformed into separate input information for two candidates of opposite genders. We observe that even when provided with the exact same career-related information despite name and gender, ChatGPT still generates reference letters with significantly biased levels of language agency for male and female candidates. When describing female candidates, ChatGPT uses communal phrases such as "great to work with", "communicates well", and "kind". On the contrary, the model tends to describe male candidates as being more agentic, using narratives such as "a standout in the industry" and "a true original".
### Hallucination Bias
#### 4.4.1 Hallucination Detection
We use the proposed Context-Sentence NLI framework for hallucination detection. Specifically, we implement an off-the-shelf RoBERTa-Large-based NLI model from the Transformers Library that was fine-tuned on a combination of four NLI datasets: SNLI Bowman et al. (2015), MNLI Williams et al. (2018), FEVER-NLI Thorne et al. (2018), and ANLI R1, R2, R3 Nie et al. (2020). We then identify bias exacerbation in model hallucination along the same three dimensions as in Section 4.3.4, through t-testing on the percentage of formal, positive, and agentic sentences in the hallucinated content compared to the full generated letter.
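A minimal sketch of the Context-Sentence NLI check is given below. The checkpoint name and the entailment-label index are assumptions that should be verified against the model card; the paper only specifies an off-the-shelf RoBERTa-Large NLI model fine-tuned on SNLI, MNLI, FEVER-NLI, and ANLI, and the example biography is illustrative.

```python
# Sketch: flag a generated sentence as hallucinated when the biography
# context (premise) does not entail it (hypothesis).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NAME = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(NAME)
model = AutoModelForSequenceClassification.from_pretrained(NAME)

def is_hallucinated(context: str, sentence: str, threshold: float = 0.5) -> bool:
    inputs = tokenizer(context, sentence, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    entailment_prob = probs[0].item()  # index 0 = entailment for this model (verify)
    return entailment_prob < threshold

bio = "Kelly is a 30-year-old chef who trained in Lyon."
print(is_hallucinated(bio, "Kelly has won three Michelin stars."))  # likely True
```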
#### 4.4.2 Result
As shown in Table 6, both ChatGPT and Alpaca demonstrate significant hallucination biases in language style. Specifically, ChatGPT hallucinations are significantly more formal and more positive for male candidates, whereas significantly less agentic for female candidates. Alpaca hallucinations
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Model** & **Bias Aspect** & **t-statistic** & **p-value** \\ \hline \multirow{3}{*}{**ChatGPT**} & **Formality** & \(1.48\) & **0.07\({}^{*}\)** \\ \cline{2-4} & **Positivity** & \(5.93\) & **1.58e-09\({}^{***}\)** \\ \cline{2-4} & **Agency** & \(10.47\) & **1.02e-25\({}^{***}\)** \\ \hline \multirow{3}{*}{**Alpaca**} & **Formality** & \(3.04\) & **1.17e-03\({}^{***}\)** \\ \cline{2-4} & **Positivity** & \(1.47\) & **0.07\({}^{*}\)** \\ \cline{2-4} & **Agency** & \(8.42\) & **2.45e-17\({}^{***}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Quantitative evaluation results for Biases in Language Styles. P-values with significance under 0.1 are bolded and starred, where \({}^{*}p<0.1\), \({}^{**}p<0.05\) and \({}^{***}p<0.01\).
\begin{table}
\begin{tabular}{l l} \hline \hline
**Gender** & **Generated Text** \\ \hline Female & She is great to work with, communicates well with collaborators and fans, and always brings an exceptional level of enthusiasm and passion to her performances. \\ Male & His commitment, skill, and unique voice make him a standout in the industry, and I am truly excited to see where his career will take him next. \\ \hline Female & She takes pride in her work and is able to collaborate well with others. \\ Male & He is a true original, unafraid to speak his mind and challenge the status quo. \\ \hline Female & Her kindness and willingness to help others have made a positive impact on many. \\ Male & I have no doubt that his experience in the food industry will enable him to thrive in any culinary setting. \\ \hline \hline \end{tabular}
\end{table}
Table 5: Selected sections of generated letters, grouped by candidates with the same original biography information. Agentic descriptions and communal descriptions are highlighted in blue and red, respectively.
are significantly more positive for male candidates, whereas significantly less formal and agentic for females. This reveals significant gender bias propagation and amplification in LLM hallucinations, pointing to the need to further study this harm.
To further unveil hallucination biases in a straightforward way, we also manually select snippets from the hallucinated parts of ChatGPT's generations. Each pair of grouped texts in Table 7 is selected from two generated letters for male and female candidates given the same original biography information. Hallucinations in the female reference letters use communal language, describing the candidate as having an "easygoing nature" and being "a joy to work with". Hallucinations in the male reference letters, in contrast, use evidently agentic descriptions of the candidate, such as "natural talent", with direct mentions of "professionalism".
## 5 Conclusion and Discussion
Given our findings that gender biases do exist in LLM-generated reference letters, there are many avenues for future work. One of the potential directions is mitigating the identified gender biases in LLM-generated recommendation letters. For instance, an option to mitigate biases is to instill specific rules into the LLM or prompt during generation to prevent outputting biased content. Another direction is to explore broader areas of our problem statement, such as more professional document categories, demographics, and genders, with more language style or lexical content analyses. Lastly, reducing and understanding the biases with hallucinated content and LLM hallucinations is an interesting direction to explore.
The emergence of LLMs such as ChatGPT has brought about novel real-world applications such as reference letter generation. However, fairness issues might arise when users directly use LLM-generated professional documents in professional scenarios. Our study benchmarks and critically analyzes gender bias in LLM-assisted reference letter generation. Specifically, we define and evaluate biases in both Context-Less Generation and Context-Based Generation scenarios. We observe that when given insufficient context, LLMs default to generating content based on gender stereotypes. Even when detailed information about the subject is provided, they tend to employ different word choices and linguistic styles when describing candidates of different genders. Moreover, we find that LLMs propagate and even amplify harmful gender biases in their hallucinations.
We conclude that AI-assisted writing should be employed judiciously to prevent reinforcing gender stereotypes and causing harm to individuals. Furthermore, we wish to stress the importance of building a comprehensive policy of using LLM in real-world scenarios. We also call for further research on detecting and mitigating fairness issues in LLM-generated professional documents, since understanding the underlying biases and ways of reducing them is crucial for minimizing potential harms of future research on LLMs.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Model** & **Hallucination Bias Aspect** & **Gender** & **p-value** \\ \hline \multirow{6}{*}{**ChatGPT**} & \multirow{2}{*}{**Formality**} & F & \(1.00\) \\ & & M & **1.28e-14\({}^{***}\)** \\ \cline{2-4} & \multirow{2}{*}{**Positivity**} & F & \(1.00\) \\ & & M & **8.28e-09\({}^{***}\)** \\ \cline{2-4} & \multirow{2}{*}{**Agency**} & F & **3.05e-12\({}^{***}\)** \\ & & M & \(1.00\) \\ \hline \multirow{6}{*}{**Alpaca**} & \multirow{2}{*}{**Formality**} & F & **4.20e-180\({}^{***}\)** \\ & & M & \(1.00\) \\ \cline{2-4} & \multirow{2}{*}{**Positivity**} & F & \(0.99\) \\ & & M & **6.05e-11\({}^{***}\)** \\ \cline{2-4} & \multirow{2}{*}{**Agency**} & F & **4.28e-10\({}^{***}\)** \\ & & M & \(1.00\) \\ \hline \hline \end{tabular}
\end{table}
Table 6: Results for hallucination bias analysis. We conduct t-tests on the alternative hypotheses that {positivity, formality, agency} in male hallucinated content is greater than in the full letter, whereas the same metrics in female hallucinated content are lower than in the full letter. P-values with significance \(<0.1\) are bolded and starred, where \({}^{*}p<0.1\), \({}^{**}p<0.05\) and \({}^{***}p<0.01\).
\begin{table}
\begin{tabular}{l l} \hline \hline
**Gender** & **Hallucinated Part** \\ \hline Female & Her positive attitude, easygoing nature and collaborative spirit make her a true joy to be around, and have earned her the respect and admiration of everyone she works with. \\ Male & Jordan’s outstanding reputation was established because of his unwavering dedication and natural talent, which allowed him to become a representative for many organizations. \\ \hline Female & Her infectious personality and positive attitude make her a joy to work with, and her passion for comedy is evident in everything she does. \\ Male & His natural comedic talent, professionalism, and dedication make him an asset to any project or performance. \\ \hline \hline \end{tabular}
\end{table}
Table 7: Selected sections from hallucinations in generated letters, grouped by candidates with the same original biography. Agentic descriptions are highlighted in blue and communal descriptions are in red.
### Limitations
We identify some limitations of our study. First, due to the limited amount of datasets and previous literature on minority groups and additional backgrounds, our study was only able to consider binary gender when analyzing biases. We do stress, however, the importance of extending our study to fairness issues for other gender minority groups in future work. In addition, our study primarily focuses on reference letters to narrow the scope of analysis. We recognize that there is a large space of professional documents now possible due to the emergence of LLMs, such as resumes, peer evaluations, and so on, and encourage future researchers to explore fairness issues in other categories of professional documents. Additionally, due to cost and compute constraints, we were only able to experiment with the ChatGPT API and 3 other open-source LLMs. Future work can build upon our investigative tools and extend the analysis to more gender and demographic backgrounds, professional document types, and LLMs. We believe in the importance of highlighting the harms of using LLMs for these applications; these tools can act as great writing assistants or first drafts of a document, but they should be used with caution, as biases and harms are evident.
### Ethics Statement
The experiments in this study incorporate LLMs that were pre-trained on a wide range of text from the internet and have been shown to learn or amplify biases from this data. In our study, we seek to further explore the ethical considerations of using LLMs within professional documents through the representative task of reference letter generation. Although we were only able to analyze a subset of the representative user base of LLMs, our study uncovers noticeable harms and areas of concern when using these LLMs for real-world scenarios. We hope that our study adds an additional layer of caution when using LLMs for generating professional documents, and promotes the equitable and inclusive advancement of these intelligent systems.
## Acknowledgements
We thank UCLA-NLP+ members and anonymous reviewers for their invaluable feedback. The work is supported in part by CISCO, NSF 2331966, an Amazon Alexa AI gift award and a Meta SRA. KC was supported as a Sloan Fellow.
|
2305.15486
|
SPRING: Studying the Paper and Reasoning to Play Games
|
Open-world survival games pose significant challenges for AI algorithms due
to their multi-tasking, deep exploration, and goal prioritization requirements.
Despite reinforcement learning (RL) being popular for solving games, its high
sample complexity limits its effectiveness in complex open-world games like
Crafter or Minecraft. We propose a novel approach, SPRING, to read the game's
original academic paper and use the knowledge learned to reason and play the
game through a large language model (LLM). Prompted with the LaTeX source as
game context and a description of the agent's current observation, our SPRING
framework employs a directed acyclic graph (DAG) with game-related questions as
nodes and dependencies as edges. We identify the optimal action to take in the
environment by traversing the DAG and calculating LLM responses for each node
in topological order, with the LLM's answer to final node directly translating
to environment actions. In our experiments, we study the quality of in-context
"reasoning" induced by different forms of prompts under the setting of the
Crafter open-world environment. Our experiments suggest that LLMs, when
prompted with consistent chain-of-thought, have great potential in completing
sophisticated high-level trajectories. Quantitatively, SPRING with GPT-4
outperforms all state-of-the-art RL baselines, trained for 1M steps, without
any training. Finally, we show the potential of games as a test bed for LLMs.
|
Yue Wu, Shrimai Prabhumoye, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Tom Mitchell, Yuanzhi Li
|
2023-05-24T18:14:35Z
|
http://arxiv.org/abs/2305.15486v3
|
# SPRING: GPT-4 Out-performs RL Algorithms by Studying Papers and Reasoning
###### Abstract
Open-world survival games pose significant challenges for AI algorithms due to their multi-tasking, deep exploration, and goal prioritization requirements. Despite reinforcement learning (RL) being popular for solving games, its high sample complexity limits its effectiveness in complex open-world games like Crafter or Minecraft. We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM). Prompted with the LaTeX source as game context and a description of the agent's current observation, our SPRING framework employs a directed acyclic graph (DAG) with game-related questions as nodes and dependencies as edges. We identify the optimal action to take in the environment by traversing the DAG and calculating LLM responses for each node in topological order, with the LLM's answer to the final node directly translating to environment actions. In our experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment. Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories. Quantitatively, SPRING with GPT-4 outperforms all state-of-the-art RL baselines, trained for 1M steps, without any training. Finally, we show the potential of games as a test bed for LLMs.
## 1 Introduction
Open-world survival games like Minecraft Fan et al. (2022) and Crafter Hafner (2021) pose significant challenges for AI algorithms due to a combination of factors: procedural generation requires strong generalization; a diverse action space requires multi-task capabilities; a technology tree requires long-term planning and deep exploration; and diverse, conflicting objectives require goal prioritization. In particular, Crafter is designed for efficient simulation and fast iteration. Similar to Minecraft, Crafter features key challenges such as multi-tasking and exploration with a deep and wide tech tree, requiring the agent to craft multiple tools and interact with multiple objects to survive in the game.
Reinforcement learning (RL) has been the go-to approach for game-based problems, with numerous successes in games like Go Silver et al. (2017), robotics Fu et al. (2020); Hafner et al. (2023) and various video games Vinyals et al. (2019); Schrittwieser et al. (2020); Badia et al. (2020); Hafner et al. (2023). While RL demonstrated impressive performance, it still suffers from certain limitations, such as high sample complexity and difficulty in incorporating prior knowledge. Such drawbacks make it exceptionally challenging to apply RL to diverse and complex open-world benchmarks like Crafter Hafner (2021) or Minecraft Fan et al. (2022). Addressing the benefits and drawbacks of RL is therefore crucial for achieving a sample-efficient solution.
On the other hand, large language models (LLMs) Brown et al. (2020); Smith et al. (2022); Chowdhery et al. (2022) have shown remarkable success when prompted for various tasks, including embodied planning and acting Ahn et al. (2022); Du et al. (2023); Wang et al. (2023); Shinn et al. (2023), QA or dialogue Ouyang et al. (2022); Bubeck et al. (2023), and general problem-solving Brown et al. (2020); Bubeck et al. (2023). Their unique planning Ahn et al. (2022), reasoning Shinn et al. (2023), and problem-solving Bubeck et al. (2023); Madaan et al. (2023) abilities make them a promising candidate for incorporating prior knowledge and in-context reasoning into game-based problems, particularly when it comes to addressing the aforementioned limitations of RL.
Hence, in this work, we study the possibility and reliability of LLMs for understanding and reasoning with human knowledge, in the setting of games. We consider a two-stage approach, SPRING (Figure 1): (1) **studying the paper**: the first stage reads the LaTeX source of the paper of Hafner (2021); and (2) **reasoning**: the second stage reasons about that knowledge through a QA framework to take an environment action. Note that the Crafter environment was released after the training data collection date of the GPT-3.5 and GPT-4 OpenAI (2023) models\({}^{2}\), so the environment is out-of-distribution (OOD) for them. We first use the LLM to extract prior knowledge from the LaTeX source code of the original paper by Hafner (2021). We then use a QA summarization framework similar to that of Wu et al. (2023), which produces QA dialogue on game mechanics. SPRING handles significantly more diverse contextual information than Wu et al. (2023), making use of all 17 action/interaction types and even information about desirable behaviors documented in the paper.
Footnote 2: GPT-3.5/4 training data ends in September 2021 according to OpenAI API
We focus on reading the relevant academic paper in the first stage of SPRING, by first deciding which paragraphs are relevant for playing the game. Then we extract key information through a series of questions such as "_Write all information helpful for the game in a numbered list._". In the second stage, we promote and regulate in-context chain-of-thought reasoning in LLMs to solve complex games. The reasoning module is a directed acyclic graph (DAG), with questions as nodes and dependencies as edges. For example, the question "_For each action, are the requirements met?_" depends on the question "_What are the top 5 actions?_", creating an edge from the latter to the former. For each environment step, we traverse the DAG, computing LLM answers for each node in the topological order of the graph. The final node of the DAG is a question about the best action to take, and the LLM answer for that question is directly translated into an environment action.
Qualitatively, our experiments show that LLMs, when prompted with consistent chain-of-thought, can execute sophisticated trajectories independently in the game. Quantitatively, SPRING's zero-shot performance with GPT-4 surpasses all state-of-the-art RL algorithms trained for 1M steps (Table 2).
Our contributions are as follows:
* SPRING is the first to tackle a competitive RL benchmark by explicitly extracting multiple interactions and tech-tree dependencies directly from an academic paper.
Figure 1: Overview of SPRING. The context string, shown in the middle column, is obtained by parsing the LaTeX source code of Hafner (2021). The LLM-based agent then takes input from a visual game descriptor and the context string. The agent uses questions composed into a DAG for chain-of-thought reasoning, and the last node of the DAG is parsed into action.
* We are the first to show SOTA performance in a challenging open-world game with a zero-shot LLM-based (GPT-4) policy.
* We study the quality of in-context "reasoning" induced by different prompts and propose a controlled chain-of-thought prompting through a DAG of questions for decision making.
## 2 Method
This section is structured as follows. We first describe how we generate the context from the LaTeX source code of Hafner (2021) in Section 2.1. Then we describe our SPRING framework and how we compute the action in Section 2.2.
**Problem Setting** Our goal is to show that LLMs can plan and act reasonably well in an environment where low-level control is less of a requirement. In the setting of Crafter, we define the states, \(s\), as samples from the state distribution \(S\). We are interested in creating a goal-conditioned policy \(\pi\) which maps state \(s\) to action \(a\in A\), \(\pi:S\to A\). Due to the use of an LLM, we further break the policy down into two parts: a descriptor \(\mathcal{D}\) which describes key aspects of the visual observation in plain text (\(d=\mathcal{D}(s)\)), and an LLM-based actor which takes the state description \(d\) and outputs an action \(a\).
In addition, we define \(\mathcal{S}^{j}_{\text{para}}\) to be the \(j^{\text{th}}\) paragraph in the LaTeX source of the environment paper Hafner (2021), and \(\mathcal{M}_{LLM}\) to be the LLM which takes a context string and a question string as input and outputs an answer to the question.
### Studying the paper: Context from LaTeX source
Similar to Wu et al. (2023), we compose gameplay-specific questions and then compute the LLM answer to the questions for each subsection in the LaTeX files. Since a considerable amount of the paper is irrelevant to the gameplay, we use a set of 2 questions \(Q_{\text{rel}}\)=["Would this paragraph help me succeed in this game", "Does this paragraph contain information on the game mechanics, or game strategies?"] to identify relevance, and a set of 4 questions \(Q_{\text{game}}\)=["Write all information helpful for the game in a numbered list.", "In plain text. List all objects I need to interact/avoid to survive in the game. Use "I would like to X object Y" in each step. Replace Y by the actual object, X by the actual interaction.", "Write all game objectives in a numbered list. For each objective, list its requirements.", "Write all actions as a numbered list. For each action, list its requirements."] to summarize gameplay- and action-space-relevant information. We add the prompt "do NOT answer in LaTeX." to all of \(Q_{\text{game}}\) to prevent the LLM from outputting the list in LaTeX format.
For a specific gameplay question \(q\in Q_{\text{game}}\), our goal is to compute \(C_{q}\), the answer to \(q\) conditioned on the paper. However, since the length of the paper exceeds the input length constraints of most LLMs, we have to break the paper down into individual paragraphs \(\mathcal{S}^{j}_{\text{para}}\). We provide an illustration of the process in Figure 2.
First, we filter the paragraphs for relevance and keep only paragraphs identified as relevant by at least one question from \(Q_{\text{rel}}\). We set \(P^{\text{rel}}_{q}\) to be the set of relevant paragraphs.
\[P^{\text{rel}}_{q}=\left\{\mathcal{S}^{j}_{\text{para}}|\exists q_{r}\in Q_{ \text{rel}}\ s.t.\ \mathcal{M}_{LLM}\left(\mathcal{S}^{j}_{\text{para}},q_{r}\right)=\text{`` Yes''}\right\} \tag{1}\]
Figure 2: **Paper Studying Module.** The 3-step approach for computing \(C_{q}\) from the LaTeX source code of Hafner (2021). First, as shown in the left column, for each paragraph we compute the LLM answer for all relevancy questions in \(Q_{\text{rel}}\), and keep only the relevant paragraphs. Second, as shown in the middle column, we compute the paragraph-level LLM answer to \(q\). Third, we summarize the answers into \(C_{q}\) with a summary prompt; we concatenate \(C_{q}\) across \(q\in Q_{game}\) and obtain \(C\).
Second, we compute the set, \(A_{q}\), of answers to \(q\) for each relevant paragraph from \(P_{q}^{\text{rel}}\), from the LaTeX source code.
\[A_{q}=\left\{\mathcal{M}_{LLM}\left(\mathcal{S}_{\text{para}},q\right):\mathcal{ S}_{\text{para}}\in P_{q}^{\text{rel}}\right\} \tag{2}\]
Third, to obtain the answer string \(C_{q}\) from the set \(A_{q}\), we query an LLM with a summarization prompt \(q_{\text{summarize}}=\) "Remove duplicate items."
\[C_{q}=\mathcal{M}_{LLM}\left(\mathtt{concat}(A_{q}),q_{\text{summarize}}\right) \tag{3}\]
Finally, we concatenate (with the linebreak character) all question-context pairs to form the context string \(C\) for SPRING.
\[C=\mathtt{concat}\left(\{\text{``Question: }q\text{ Answer: }C_{q}\text{"}|\forall q\in Q_{\text{game}}\}\right) \tag{4}\]
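As a compact summary of Eqs. (1)-(4), the following is a minimal sketch of the paper-studying stage. Here `llm(context, question)` is a hypothetical wrapper around any chat-completion API and is not part of the paper's released code; everything else mirrors the question sets and prompts defined above.

```python
# Sketch of Eqs. (1)-(4): relevance filtering, per-paragraph answering,
# and summarization into the final context string C.
def build_context(paragraphs, Q_rel, Q_game, llm):
    # Eq. (1): keep paragraphs that at least one relevance question accepts.
    relevant = [p for p in paragraphs
                if any(llm(p, q).strip().startswith("Yes") for q in Q_rel)]
    chunks = []
    for q in Q_game:
        # Eq. (2): answers to the gameplay question q, one per relevant paragraph.
        answers = [llm(p, q) for p in relevant]
        # Eq. (3): collapse the concatenated answers with a deduplication prompt.
        c_q = llm("\n".join(answers), "Remove duplicate items.")
        # Eq. (4): accumulate "Question: q Answer: C_q" pairs.
        chunks.append(f"Question: {q} Answer: {c_q}")
    return "\n".join(chunks)
```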
### Reasoning: QA-DAG for SPRING
For LLMs to be able to understand the gameplay, we first follow Du et al. (2023); Wang et al. (2023) to define a visual descriptor \(\mathcal{M}_{\text{desc}}\) which converts a state \(s\in S\) to a textual description \(d\) (Figure 3 a).
To achieve consistent chain-of-thought reasoning Wei et al. (2021) throughout hundreds of steps within one round of gameplay, we compose a fixed set of questions \(Q_{\text{act}}=\{q_{1},\dots,q_{a}\}\) to query
\begin{table}
\begin{tabular}{c l} \hline \hline Node & Question \\ \hline \multirow{2}{*}{\(q_{1}\)} & List objects in the current observation. For each object, briefly answer what resource it provides \\ & and its requirement. \\ \hline \multirow{2}{*}{\(q_{2}\)} & What was the last action taken by the player? \\ \hline \multirow{2}{*}{\(q_{3}\)} & For each object in the list, are the requirements met for the interaction? \\ \hline \multirow{2}{*}{\(q_{4}\)} & Did the last player action succeed? If not, why? \\ \hline \multirow{2}{*}{\(q_{5}\)} & List top 3 sub-tasks the player should follow. Indicate their priority out of 5. \\ \hline \multirow{2}{*}{\(q_{6}\)} & What are the requirements for the top sub-task? What should the player do first? \\ \hline \multirow{2}{*}{\(q_{7}\)} & List top 5 actions the player should take and the requirement for each action. Choose ONLY from \\ & the list of all actions. Indicate their priority out of 5. \\ \hline \multirow{2}{*}{\(q_{8}\)} & For each action in the list, are the requirements met? \\ \hline \multirow{2}{*}{\(q_{a}\)} & Choose the best executable action from above. \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of all 9 questions in \(Q_{\text{act}}\). The questions are designed to promote consistent chain-of-thought. Experimentally, we find the LLM robust to different phrasing of the questions.
Figure 3: **Reasoning.****(a)** The visual descriptor takes the last two gameplay screens as input, and outputs their descriptions in language (\(d^{t},d^{t-1}\)). **(b)** SPRING traverses a DAG of questions from Table 1 in topological order. Answer to the final question \(q_{a}\) is mapped to environment action using sub-string matching. **(c)** The LLM answer for each question (node) is conditioned on the previous 2 steps of observation, the context \(C\), and answers to the immediate parents of the current node.
the LLM at every step of the game, with question-question dependencies as \(D=\{(q_{u},q_{v})|q_{u},q_{v}\in Q_{\text{act}}\text{ and answering }q_{v}\text{ requires the answer of }q_{u}\}\). Note that the above specification forms a directed acyclic graph (DAG) with nodes \(Q_{\text{act}}\) and edges \(D\) (Figure 3 b).
For any question (node) \(q_{v}\in Q_{\text{act}}\), we compute the answer \(A^{t}_{q_{v}}\) for time step \(t\), conditioned on the gameplay context \(C\), most recent 2 steps of game description \(d^{t-1},d^{t}\), and answers to its dependencies (Figure 3 c).
\[A^{t}_{q_{v}}=\mathcal{M}_{LLM}\left(\mathtt{concat}\left(C,d^{t-1},d^{t}, \left\{A^{t}_{q_{u}}|(q_{u},q_{v})\in D\right\}\right),q_{v}\right) \tag{5}\]
Experimentally, we find that prompting the LLM with only the direct parents of a question greatly reduces the context length, and helps the LLM focus on the most relevant contextual information.
We traverse the DAG using a modified topological sort algorithm to compute the LLM answer for each question based on its topological order. Finally, we map the answer to the last node \(q_{a}\) directly to one of the 17 named actions in the environment with sub-string matching (\(a=A^{t}_{q_{a}}\)). We take the default action "Do" on sub-string matching failure.3
Footnote 3: We will release code for our agent at github.com/anonymous
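A minimal sketch of one SPRING step is given below, using Python's standard-library topological sort. The `llm` wrapper, the `questions`/`parents` structures, the `"q_a"` key, and the exact action strings are assumptions standing in for the actual prompts of Table 1 and Crafter's action names.

```python
# Sketch: traverse the question DAG in topological order (Eq. 5) and map the
# final answer to an environment action by sub-string matching.
from graphlib import TopologicalSorter  # Python 3.9+

# Assumed action list; Crafter has 17 named actions in total.
ACTIONS = ["noop", "move left", "move right", "move up", "move down", "do",
           "sleep", "place stone", "make wood pickaxe"]  # ...and so on

def spring_step(C, d_prev, d_now, questions, parents, llm):
    """questions: {node: question text}; parents: {node: [parent nodes]}."""
    answers = {}
    for q in TopologicalSorter(parents).static_order():
        # Condition each answer on the context, the last two observation
        # descriptions, and the answers of the node's immediate parents only.
        prompt = "\n".join([C, d_prev, d_now]
                           + [answers[p] for p in parents.get(q, [])])
        answers[q] = llm(prompt, questions[q])
    final = answers["q_a"].lower()  # "q_a" is the action-selection node
    # Default to "do" when no action name appears in the final answer.
    return next((a for a in ACTIONS if a in final), "do")
```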
## 3 Experiments and Results
We present our experiments as follows. First, we explain our experimental setup and baselines for our experiments. Then, we compare SPRING to popular RL methods on the Crafter benchmark. Finally, we conduct experiments and analysis on different pieces of our architecture to study the influence of each part over the in-context "reasoning" capabilities of the LLM.
### Experimental Details
The Crafter environment Hafner (2021) is a procedurally generated open-world survival game for benchmarking RL algorithms, with 22 achievements in a tech tree of depth 7. The environment is a grid world featuring a top-down observation and a discrete action space of size 17. The observation also shows the current inventory state of the player, including its health points, food, water, rest levels, and inventory. The game is inspired by Minecraft and features a similar get-to-diamond challenge. In comparison, Crafter captures many key research challenges of Minecraft in a simpler and faster environment, thus speeding up experiments and result collection.
**Environment Descriptor** The gameplay screen (top left of Fig. 3) consists of a 9 \(\times\) 9 grid (\(\{(i,j)\mid 1\leq i,j\leq 9\}\)). The top 7 rows consist of the local view of the world; each cell \((i,j)\) is associated with a pre-defined background (e.g., "grass", "water", "none") and possibly with an object "asset" (e.g., "tree", "health", "player"). The bottom 2 rows represent the agent status (e.g., "health") and item inventories, which include images of assets (e.g., "stone sword") and the number of each asset in the inventory.
Our environment descriptor accepts as input the gameplay screen and outputs a text description of the screen. We first create combinations of background and object (appearance) assets. Then we add number assets to recognize the quantity of inventory/status items. We match these combinations with the gameplay screen, using OpenCV (cv2) filters with a matching _threshold_ of \(0.9\). We disable the detector during nights, when observations are unreliable. Finally, for each \((i,j)\), we filter the matched combinations and select the one with the highest matching score. From this information, we can measure the distance and direction of each object relative to the player; simultaneously, we can count the agent status and inventory items.
The environment descriptor then obtains the set of objects in observation \(\mathcal{O}=\{(obj,dist,direction)\}\), the set of inventory items \(\mathcal{I}=\{(object,count)\}\), and the agent status \(\mathcal{H}=\{(attribute,value,max)\}\). Including only the closest object of each kind, we compose the observation description \(d\) as: "You see : - <obj> <dist> steps to your <direction>. Your status: <attribute>: <value>/<max>. Your inventory: - <object>: <count>". We describe direction of objects using "north","south","east","west".
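A minimal sketch of how the description string \(d\) can be assembled from the detector outputs is shown below; the function signature and example values are our assumptions, while the phrasing follows the template quoted above.

```python
# Sketch: compose the textual observation description d from the sets
# O (objects), H (status), and I (inventory) defined above.
def describe(objects, status, inventory):
    lines = ["You see:"]
    lines += [f"- {obj} {dist} steps to your {direction}"
              for obj, dist, direction in objects]
    lines.append("Your status:")
    lines += [f"- {attr}: {value}/{maximum}" for attr, value, maximum in status]
    lines.append("Your inventory:")
    lines += [f"- {item}: {count}" for item, count in inventory]
    return "\n".join(lines)

d = describe(objects=[("tree", 2, "north"), ("cow", 4, "west")],
             status=[("health", 9, 9), ("food", 7, 9)],
             inventory=[("wood", 3)])
print(d)
```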
Evaluation MetricsAgents in Crafter are evaluated primarily based on two metrics: reward and score. The game assigns a sparse \(+1\) reward each time the agent unlocks a new achievement in an
episode, and assigns reward of \(-0.1/0.1\) when the agent loses/gains one health point. The score metric Hafner (2021) is computed by aggregating the success rates for each achievement:
\[S=\exp\left(\frac{1}{N}\sum_{i=1}^{N}\ln\left(1+s_{i}\right)\right)-1,\]
where \(s_{i}\) is the agent's success rate on achievement \(i\) and \(N=22\) is the number of achievements. Note that RL agents only train on the reward, and SPRING does not require any training.
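For concreteness, the score metric above is the geometric mean of \(1+s_{i}\) over achievements, minus one; a direct transcription, under the assumption that success rates are given in \([0,1]\):

```python
# Sketch: Crafter score as the geometric mean of (1 + success rate) over the
# N = 22 achievements, minus one.
import numpy as np

def crafter_score(success_rates):
    s = np.asarray(success_rates, dtype=float)  # per-achievement rates in [0, 1]
    return np.exp(np.mean(np.log1p(s))) - 1.0

# An agent that always unlocks half of the achievements:
print(crafter_score([1.0] * 11 + [0.0] * 11))  # ~0.414, i.e. a score of ~41%
```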
**RL Baselines** We include results from popular actor-critic methods like PPO Schulman et al. (2017); DQN variants like Rainbow Hessel et al. (2018); intrinsically motivated methods like RND Burda et al. (2018), Plan2Explore Sekar et al. (2020), and EDE Jiang et al. (2022); LLM-assisted solutions like ELLM Du et al. (2023); and model-based methods like DreamerV2 Hafner et al. (2020) and DreamerV3 Hafner et al. (2023), which currently holds the state of the art.
**LLMs** For LLM access, we use GPT-3.5-turbo (OpenAI) and GPT-4 OpenAI (2023) from OpenAI's API.
### Overall Results
We compare the performance of RL baselines to SPRING with GPT-4 conditioned on the environment paper Hafner (2021) in Table 2.
SPRING out-performs the previous SOTA, including previous attempts at using LLMs for Crafter, by large margins, achieving an \(88\%\) relative improvement on game score and a \(5\%\) improvement in reward over the best-performing RL method Hafner et al. (2023). Since the model obtains knowledge from reading the paper, SPRING requires \(0\) training steps, while RL methods generally require millions of training steps.
We include a plot of unlock rate by task, comparing our method to popular RL baselines in Figure 4. SPRING assisted by prior knowledge out-performs RL methods by more than 10x on achievements like "Make Stone Pickaxe", "Make Stone Sword", and "Collect Iron", which are up to depth 5 down in the tech tree and significantly harder to reach through random exploration. For achievements "Eat Cow" and "Collect Drink", SPRING achieves perfect performance, whereas model-based RL framework like Dreamer-V3 has more than 5x lower unlock rate for "eat cow" since cows are moving and harder to reach through random exploration. Finally, we note that SPRING did not take the action "Place Stone", which can be reached easily by random exploration, since placing a stone was not discussed as beneficial for the agent in the paper Hafner (2021).
### Component Analysis
We study how the different aspects of the framework contribute to the behavior of the agent through a series of ablations as shown in Table 3.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Score & Reward & Training Steps \\ \hline Human Experts & \(50.5\pm 6.8\%\) & \(14.3\pm 2.3\) & N/A \\ \hline SPRING + paper (Ours) & \(\mathbf{27.3\pm 1.2\%}\) & \(\mathbf{12.3\pm 0.7}\) & \(\mathbf{0}\) \\ DreamerV3 Hafner et al. (2023) & \(14.5\pm 1.6\%\) & \(11.7\pm 1.9\) & 1M \\ ELLM Du et al. (2023) & N/A & \(6.0\pm 0.4\) & 5M \\ EDE Jiang et al. (2022) & \(11.7\pm 1.0\%\) & N/A & 1M \\ DreamerV2 Hafner et al. (2020) & \(10.0\pm 1.2\%\) & \(9.0\pm 1.7\) & 1M \\ PPO Schulman et al. (2017) & \(4.6\pm 0.3\%\) & \(4.2\pm 1.2\) & 1M \\ Rainbow Hessel et al. (2018) & \(4.3\pm 0.2\%\) & \(5.0\pm 1.3\) & 1M \\ Plan2Explore Sekar et al. (2020) & \(2.1\pm 0.1\%\) & \(2.1\pm 1.5\) & 1M \\ RND Burda et al. (2018) & \(2.0\pm 0.1\%\) & \(0.7\pm 1.3\) & 1M \\ Random & \(1.6\pm 0.0\%\) & \(2.1\pm 1.3\) & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of SPRING and popular RL algorithms in terms of game score, reward, and training steps. Results for SPRING are summarized over 5 independent trials. SPRING out-performs the previous SOTA in terms of all metrics. In addition, since SPRING gathers knowledge from reading the paper, it requires no training.
**Studying the LaTeX Paper** In the first 4 rows of Table 3, we investigate the contribution of gameplay context from the LaTeX paper toward performance of the agent. We report the performance of SPRING with no contextual information (w/o \(C\)) (row 4); SPRING conditioned on only the action descriptions and dependencies from Hafner (2021) Table F.1 (only question 4 from \(Q_{\text{game}}\)) (row 3); SPRING conditioned on the context manually modified to exclude the "crafting table" dependency for wooden_pickaxe by removing two corresponding lines from the context \(C\) (row 2); SPRING conditioned on the full context from the paper (row 1).
As expected, since the Crafter environment is OOD for GPT, the agent achieves performance similar to a random agent without any game context. When provided with only action descriptions and action dependencies, using only question 4 from \(Q_{\text{game}}\) in Section 2.1, SPRING achieves strong performance (\(67\%\) of the full framework's reward), comparable to DreamerV2 Hafner et al. (2020).
For the next piece of the experiment, we manually remove the "near crafting table" dependency for wooden_pickaxe from its context, which is required for 11 later achievements. SPRING with GPT-4 incurs a \(24\%\) performance drop. Interestingly, we find that the LLM has some ability to recover from the inaccurate context information. We observe that after failing to craft the wooden_pickaxe without a table, the agent instead tries to craft a wooden_sword first to maintain survival. Eventually, the agent was able to identify the missing requirement through trial and error after some unsuccessful attempts, and craft the wooden_pickaxe. However, the confusion delayed the agent's progress and therefore causes the performance gap with the agent conditioned on the full context (row 1).
Figure 4: Ability spectrum showing the unlocking percentages for all 22 achievements. Rainbow manages to drink water and forage for food. DreamerV3 collects coal, iron, stone, and forges more advanced tools and weapons. Since SPRING starts off with knowledge about the game, it achieves more than 10x higher unlock rate on previously hard-to-reach tasks like “Eat Plant”, “Make Stone Pickaxe”, “Make Stone Sword”, and “Collect Iron”.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Achievement Depth & Reward & Questions per Step \\ \hline SPRING + Full Paper & 6 & \(\mathbf{12.3\pm 0.7}\) & 9 \\ SPRING + Paper w/ modified \(C\) & 4 & \(9.4\pm 1.8\) & 9 \\ SPRING + Action Description & 4 & \(8.2\pm 0.2\) & 9 \\ SPRING + w/o \(C\) & 1 & \(0.5\pm 0.2\) & 9 \\ \hline SPRING + Full Paper & 6 & \(\mathbf{12.3\pm 0.7}\) & 9 \\ Step-by-step prompt + Full Paper & 5 & \(7.3\pm 4.4\) & 2 \\ QA w/o DAG + Full Paper & 4 & \(4.3\pm 3.9\) & 9 \\ w/o QA + Full Paper & 2 & \(2.4\pm 1.3\) & 1 \\ \hline SPRING + Full Paper & 6 & \(\mathbf{12.3\pm 0.7}\) & 9 \\ SPRING + Full Paper w/ GPT-3.5 & 2 & \(3.3\pm 2.9\) & 9 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Analysis on how different parts of SPRING contribute to its performance, comparing the max achievement depth in the tech tree, the reward, and the number of human-written questions in the prompt. Results are summarized over 5 independent trials. The first 4 rows study the necessity of prior knowledge from the context string \(C\). The middle 4 rows study different chain-of-thought prompting techniques. The last 2 rows study the role of LLMs. All three aspects are important for SPRING to achieve best reported performance.
**Reasoning** In the middle 4 rows of Table 3, we investigate the contribution of different prompting methods toward the performance of the model. Conditioned on the full context from the LaTeX paper, we report the performance of GPT-4 directly prompted to output the action using the last question \(q_{a}\) only (row 8); GPT-4 prompted with all questions from \(Q_{\text{act}}\) but in a list without the DAG dependencies \(D\) (row 7); GPT-4 prompted with "Let's think step-by-step" Kojima et al. (2022) about the next action, and then prompted to choose a permissible action with "Let's think step-by-step" followed by \(q_{a}\) again (row 6); and GPT-4 with SPRING (row 5).
Relative to our method, we observe that directly prompting the LLM for the action leads to an \(80\%\) performance drop, and therefore does not result in a meaningful agent. The popular chain-of-thought reasoning prompt "Let's think step-by-step" Kojima et al. (2022) achieves reasonable reward with a \(40\%\) drop, but with a high \(60.27\%\) standard deviation. Qualitatively, we observe that the LLM produces inconsistent outputs across time steps, due to the fact that the model's chain-of-thought is not directed or controlled through the prompt. Therefore, LLMs prompted with "Let's think step-by-step" alone cannot reliably follow a good policy. Constraining the chain-of-thought with the 9 questions from \(Q_{\text{act}}\) (Section 2.2) qualitatively improves the consistency of LLM outputs across time. However, we observe that the LLM often ignores earlier questions at later stages of QA when all previous questions are presented in a list, leading to random disagreements in answers. For example, the LLM may correctly identify that it needs a "wooden pickaxe" to mine the stone ahead in the first few questions, but forgets about the requirement later when it is prompted for actions. Quantitatively, the model performs \(65\%\) worse with \(90\%\) variance without the DAG. The introduction of the DAG eliminates this problem by reducing the QA context length to only a question's immediate parents.
Overall, SPRING achieves the best performance and a small \(6\%\) performance standard deviation, due to more consistent reasoning over time steps with better focus and fewer distractions.
**LLM** In the last two rows of Table 3, we show that the same architecture does not work well with GPT-3.5-turbo. We believe the observed \(73\%\) performance gap mainly comes from GPT-3.5-turbo's worse performance at following the fine-grained instructions in each of the questions, which are required for chain-of-thought reasoning with SPRING.
### Potential for Benchmarking LLMs
In Table 4, we compare popular publicly available LLMs including GPT-4 OpenAI (2023), GPT-3.5 (text-davinci-003) (OpenAI), Bard (Manyika), Claude (Anthropic), and Alpaca-30b Taori et al. (2023) under the same setting on Crafter, following the same step-by-step prompt as Section 3.3 and Table 3. We observe a clear separation in performance under our setting.
## 4 Related Work
**Policy Informed by Natural Language** In the instruction-following setting, step-by-step instructions have been used to generate auxiliary rewards when environment rewards are sparse. Goyal et al. (2019); Wang et al. (2019) use auxiliary reward-learning modules trained offline to predict whether trajectory segments correspond to natural language annotations of expert trajectories.
There have been many attempts to go beyond instruction following to learning from unstructured natural language Branavan et al. (2012); Goldwasser and Roth (2014); Zhong et al. (2021); Wang and Narasimhan (2021). Zhong et al. (2021); Wang and Narasimhan (2021) make use of special architectures to learn reasoning on grid worlds with template-generated instructions. However, the
\begin{table}
\begin{tabular}{l c c c} \hline \hline Method & Achievement Depth & Reward & Questions per Step \\ \hline Step-by-step prompt + GPT-4 & 5 & \(\mathbf{7.3\pm 4.4}\) & 2 \\ Step-by-step prompt + text-davinci-003 & 4 & \(4.5\pm 2.1\) & 2 \\ Step-by-step prompt + Bard & 0 & \(-0.9\pm 0\) & 2 \\ Step-by-step prompt + Claude & 1 & \(0.1\pm 0.1\) & 2 \\ Step-by-step prompt + Alpaca-30b & 1 & \(0.1\pm 0.1\) & 2 \\ Random & 1 & \(2.1\pm 1.3\) & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comparison of different LLMs under the same setting using the context \(C\) generated with text-davinci-003 following the same step-by-step prompt as Section 3.3 and Table 3.
model requires 200 million training samples from templates identical to the test environments. Such a training requirement limits the generalization of the model and causes performance loss even on slightly bigger grid worlds with identical mechanics.
Wu et al. (2023) proposes a summarization (Read) and reasoning (Reward) framework through QA prompting with an open-source QA LLM Tafjord and Clark (2021). The framework demonstrates the possibility of using real-world human-written manuals to improve RL performance on popular games, despite limiting the interaction types to only "hit". Our framework handles all 17 kinds of interactions available in the game. Moreover, our framework makes use of information on tech-tree dependencies and suggestions on desired policies extracted from the academic paper.
**LLMs for Planning** LLMs have shown promising results at high-level planning in indoor embodied manipulation environments. Huang et al. (2022); Ahn et al. (2022) primarily explore generating plans for embodied tasks, with limited action spaces and trajectory lengths. Song et al. (2022); Wu et al. (2022) enhance Ahn et al. (2022) with greater action diversity and real-time re-planning. However, many of the high-level plans lack executability and have to be post-processed to meet specific task requirements, thus limiting the generalization to complex open-world tasks. In addition, all prior works along this line operate on few-shot human/expert-generated demonstrations containing up to 17 trajectories to provide context for the LLMs, which requires more manual labor and may limit generalization to unseen scenarios. In comparison, our SPRING framework requires no demonstrations.
**LLMs for Open World Games** Compared to popular indoor manipulation tasks, planning in open-world game environments poses the following additional challenges. 1) **Long horizon.** Due to the way in-game achievements/technology progress, a successful gameplay can easily go beyond 200 steps Hafner (2021). 2) **Parallel objectives.** Open-world environments contain objectives that can be pursued in parallel and often require prioritization Wang et al. (2023). Therefore, open-world games are significantly more challenging than current indoor embodied manipulation environments.
Du et al. (2023) applies LLMs as high-level planners to assist RL exploration in Crafter. Wang et al. (2023); Yuan et al. (2023) use LLMs as high-level planners and goal selectors to control a low-level policy in Minecraft. Tsai et al. (2023) studies the capabilities of ChatGPT on text games. Notably, all prior works require expert- or human-generated example trajectories as context for the LLMs. Since the example trajectories do not cover all scenarios, all prior works may encounter unseen situations during evaluation, leading to an overall performance inferior to state-of-the-art RL algorithms Hessel et al. (2018); Guss et al. (2021); Hafner et al. (2023), trained without the use of LLMs. To our knowledge, we are the first to show an LLM (GPT-4) achieving performance surpassing the state-of-the-art RL algorithms in a challenging open-world game.
## 5 Limitations and Future Work
A primary limitation in using an LLM to support interaction with the environment is the need for object recognition and grounding. However, these limitations do not exist in environments that offer accurate object information, such as contemporary games Fan et al. (2022) and virtual reality worlds Kolve et al. (2017). While pre-trained visual backbones He et al. (2017) perform poorly on games, they have shown reasonable performance in environments closer to the real world Shridhar et al. (2020). In addition, with recent progress on visual-language models Bubeck et al. (2023); Driess et al. (2023); Liu et al. (2023); Zou et al. (2023), we believe there will be reliable and generalizable solutions to visual-language understanding in the foreseeable future. Future work could focus on addressing the requirement for a separate visual descriptor using large visual-language models.
## 6 Conclusions
In this work, we explore solving the Crafter Hafner (2021) RL benchmark using the latest LLMs by reading the source code of an academic paper about the benchmark. We study the quality of in-context "reasoning" and "planning" induced by different forms of prompts under the setting of the Crafter open-world environment. To enforce consistent planning and execution over hundreds of environment steps, we introduce SPRING, an innovative prompting framework for LLMs designed
to enable in-context chain-of-thought planning and reasoning. Quantitatively, SPRING with GPT-4 outperforms all state-of-the-art RL baselines, trained for 1M steps, without any training.
Our work demonstrates the reliability of LLMs for understanding and reasoning with human knowledge. We hope that our work points to a new way of integrating human prior knowledge into RL training through intrinsic rewards Wu et al. (2023), hierarchical RL Shu et al. (2017), or sub-goal planning Wang et al. (2023); Wu et al. (2023).
## Broader Impacts
Our research on LLMs holds potential for both positive and negative impacts. The benefits include a better understanding of the capabilities of LLMs and enhanced integration of prior knowledge, which could lead to advancements in various AI topics. However, the risks may involve reliance on computationally demanding models, game cheating or exploitation, and over-reliance on prior knowledge.
|
2306.12861
|
Photometric variability of the LAMOST sample of magnetic chemically
peculiar stars as seen by TESS
|
High-quality light curves from space missions have opened up a new window on
the rotational and pulsational properties of magnetic chemically peculiar (mCP)
stars and have fuelled asteroseismic studies. They allow the internal effects
of surface magnetic fields to be probed and numerous astrophysical parameters
to be derived with great precision. We present an investigation of the
photometric variability of a sample of 1002 mCP stars discovered in the LAMOST
archival spectra with the aims of measuring their rotational periods and
identifying interesting objects for follow-up studies. TESS photometry was
available for 782 mCP stars and was analysed using a Fourier two-term frequency
fit to determine the stars' rotational periods. The rotational signal was then
subtracted from the light curve to identify non-rotational variability. A
pixel-level blending analysis was performed to check whether the variability
originates in the target star or a nearby blended neighbour. We investigated
correlations between the rotational periods, fractional age on the main
sequence, mass, and several other observables. We present rotational periods
and period estimates for 720 mCP stars. In addition, we identified four
eclipsing binary systems that likely host an mCP star, as well as 25 stars with
additional signals consistent with pulsation (12 stars with frequencies above
10 d$^{-1}$ and 13 stars with frequencies below 10 d$^{-1}$). We find that more
evolved stars have longer rotation periods, in agreement with the assumption of
the conservation of angular momentum during main-sequence evolution. With our
work, we increase the sample size of mCP stars with known rotation periods and
identify prime candidates for detailed follow-up studies. This enables two
paths towards future investigations: population studies of even larger samples
of mCP stars and the detailed characterisation of high-value targets.
|
J. Labadie-Bartz, S. Hümmerich, K. Bernhard, E. Paunzen, M. E. Shultz
|
2023-06-22T13:16:38Z
|
http://arxiv.org/abs/2306.12861v1
|
# Photometric variability of the LAMOST sample of magnetic chemically peculiar stars as seen by TESS
###### Abstract
Context:High-quality light curves from space-based missions have opened up a new window on the rotational and pulsational properties of magnetic chemically peculiar (mCP) stars and have fuelled asteroseismic studies. They allow the internal effects of surface magnetic fields to be probed and numerous astrophysical parameters to be derived with great precision.
Aims:We present an investigation of the photometric variability of a sample of 1002 mCP stars discovered in the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) archival spectra with the aims of measuring their rotational periods and identifying interesting objects for follow-up studies.
Methods:Transiting Exoplanet Survey Satellite (TESS) data were available for 782 mCP stars and were analysed using a Fourier two-term frequency fit to determine the stars' rotational periods. The rotational signal was then subtracted from the light curve to identify additional non-rotational variability signals. A careful pixel-level blending analysis was performed to check whether the variability originates in the target star or a nearby blended neighbour. We investigated correlations between the observed rotational periods, fractional age on the main sequence, mass, and several other observables.
Results:We present rotational periods and period estimates for 720 mCP stars. In addition, we have identified four eclipsing binary systems that likely host an mCP star, as well as 25 stars with additional signals consistent with pulsation (12 stars with frequencies above 10 d\({}^{-1}\) and 13 stars with frequencies below 10 d\({}^{-1}\)). We find that more evolved stars have longer rotation periods, which is in agreement with the assumption of the conservation of angular momentum during the main-sequence evolution.
Conclusions:With our work, we increase the sample size of mCP stars with known rotation periods and identify prime candidates for detailed follow-up studies. This enables two paths towards future investigations: population studies of even larger samples of mCP stars and the detailed characterisation of high-value targets.
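As a rough illustration of the two-term Fourier fit named in the Methods, a minimal numpy sketch is given below; the simple grid search over trial periods is our assumption about the implementation, not a description of the authors' actual pipeline.

```python
# Sketch: least-squares fit of a two-term Fourier series to a light curve,
# and a naive period search over a grid of trial periods.
import numpy as np

def two_term_fourier_fit(t, flux, period):
    """Fit flux(t) = a0 + sum_{k=1,2} [ak cos(k w t) + bk sin(k w t)]."""
    w = 2.0 * np.pi / period
    design = np.column_stack([np.ones_like(t),
                              np.cos(w * t), np.sin(w * t),
                              np.cos(2 * w * t), np.sin(2 * w * t)])
    coeffs, *_ = np.linalg.lstsq(design, flux, rcond=None)
    return coeffs, design @ coeffs

def best_period(t, flux, trial_periods):
    # Pick the trial period whose two-term fit leaves the smallest residual.
    chi2 = [np.sum((flux - two_term_fourier_fit(t, flux, p)[1]) ** 2)
            for p in trial_periods]
    return trial_periods[int(np.argmin(chi2))]
```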
## 1 Introduction
The chemically peculiar (CP) stars of the upper main sequence constitute a significant fraction (about 10 per cent) of upper main-sequence stars and are encountered between spectral types early B and early F. Their defining characteristic is the presence of spectral peculiarities that indicate unusual elemental abundance patterns (e.g. Preston 1974; Maitzen 1984; Smith 1996; Gray & Corbally 2009; Ghazaryan et al. 2018), which are thought to originate from the interplay between radiative levitation and gravitational settling taking place in the calm outer layers of slowly rotating stars (atomic diffusion; e.g. Michaud 1970; Richer et al. 2000).
Following Preston (1974), the four main groups of CP stars are the CP1 stars (the metallic-line or Am/Fm stars), the CP2 stars (the magnetic Bp/Ap stars), the CP3 stars (the mercury-manganese or HgMn stars), and the CP4 stars (the He-weak stars). Although the observed abundance patterns within a group can vary considerably, each group is characterised by a distinct set of peculiarities. The CP1 stars exhibit under-abundances of Ca and Sc and over-abundances of iron-peak and heavier elements. The main characteristics of the CP2 stars are excesses of elements such as Si, Sr, Eu, or the rare-earth elements. Some of the most peculiar objects belong to this group, such as the extreme lanthanide star HD 51418 (Jones et al. 1974) or Przybylski's Star, HD 101065 (Przybylski 1966). The CP3 stars show enhanced lines of Hg and Mn and other heavy elements, whereas the CP4 stars possess anomalously weak He lines. Additional classes of CP stars have been proposed, for example the \(\lambda\) Bootis stars (Gray 1997; Paunzen 2004; Murphy & Paunzen 2017), which are characterised by unusually low surface abundances of iron-peak elements, or the He strong stars, which are early B stars that exhibit anomalously strong He lines in their spectra (Bidelman 1965; Morgan et al. 1978). As regards the strength of
chemical peculiarities, a continuous transition from chemically normal to CP stars is observed (Loden & Sundman 1987).
Several authors have divided the CP stars into a 'magnetic' and a 'non-magnetic' sequence (e.g. Preston 1974; Maitzen 1984). The former is made up of the CP2 and the He-peculiar stars (i.e. the CP4 and the He-strong stars), which possess strong and stable magnetic fields, while the latter encompasses, for example, the CP1, CP3, and \(\lambda\) Bootis stars. While this canonical view has been frequently challenged (cf. e.g. Hubrig et al. 2010, 2012; Kochukhov et al. 2013, and Hubrig et al. 2020 on the ongoing controversy about the presence of weak and tangled magnetic fields in CP3 stars), the groups of stars of the non-magnetic sequence certainly lack the strong and organised magnetic fields observed in the CP2 and He-peculiar stars, which can attain strengths of up to several tens of kilogauss (Babcock 1947; Auriere et al. 2007). For convenience, CP2 and He-peculiar stars are generally referred to as magnetic chemically peculiar (mCP) stars - a convention that we adhere to in this study.
While the origin of their magnetic fields is still a matter of some controversy (Moss 2004), evidence has been collected in favour of the fossil field theory (e.g. Braithwaite & Spruit 2004), according to which the magnetic field is a relic of the interstellar magnetic field that was 'frozen' into the stellar plasma during star formation. Alternatively, a fossil field may be generated during a merger event (e.g. Tutukov & Fedorova 2010; Schneider et al. 2019).
Magnetic CP stars show a non-uniform surface distribution of chemical elements, which is associated with the presence of the magnetic field and manifests itself in the formation of spots and patches of enhanced or depleted element abundance. Flux is redistributed in these 'chemical spots' (line and continuum blanketing; e.g. Wolff & Wolff 1971; Molnar 1973; Lanz et al. 1996; Shulyak et al. 2010; Krticka et al. 2013), and mCP stars show strictly periodic light, spectral, and magnetic variations with the rotation period, which are satisfactorily explained by the oblique-rotator model (the magnetic axis is oblique to the rotation axis; Stibbs 1950). According to convention, photometrically variable mCP stars are referred to, after their bright prototype, as \(\alpha^{2}\) Canum Venaticorum (ACV) variables (Samus et al. 2017). Rotation periods generally range from about 0.5 days to decades, with a peak at \(\sim\)2 days (e.g. Renson & Manfroid 2009; Bernhard et al. 2015a).
In addition to the rotational light changes, mCP stars can also exhibit pulsational variability. For a long time, the only proven form of pulsational variability among these objects was observed in the so-called rapidly oscillating Ap stars (Kurtz 1982), which exhibit variability in the period range 5-20 min (high-overtone, low-degree, and non-radial pulsation modes). With the advent of ultra-precise space photometry, additional pressure modes (p modes) and gravity modes (g modes) associated with \(\gamma\) Doradus (e.g. Kaye et al. 1999; Guzik et al. 2000) and \(\delta\) Scuti (e.g. Breger 2000) pulsations were identified in a number of CP2 stars (e.g. Balona et al. 2011; Cunha et al. 2019; Holdsworth et al. 2021). In general, the high-quality light curves (LCs) from space-based missions open up the possibility for the asteroseismic characterisation of mCP stars, which in turn allows the internal effects of surface magnetic fields to be probed and numerous astrophysical parameters to be derived with great precision (Briquet et al. 2012; Buyschaert et al. 2018).
The most up-to-date collection of CP stars is the General Catalogue of CP Stars (Renson & Manfroid 2009), which was published more than a decade ago. It lists about 3500 mCP stars or candidates (\(\sim\)2000 confirmed mCP stars and \(\sim\)1500 candidates) and is still one of the main resources regularly employed in investigations of mCP stars. However, in recent years, several studies have published new samples of these objects (e.g. Hummerich et al. 2018; Scholz et al. 2019; Sikora et al. 2019a). Of note is the study of Hummerich et al. (2020, hereafter Paper 1), who published a sample of 1002 mCP stars, thereby significantly enlarging the total known number of these objects.
Here we present our efforts to characterise the photometric variability of the sample of mCP stars published in Paper 1 using photometric time-series observations from the NASA Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2015). Our paper is structured as follows. The employed data sources and methods are described in Sect. 2. In Sect. 3 we present and discuss our results. Special emphasis is placed on the eclipsing binary (EB) systems and pulsator candidates in our sample, which are studied in detail in Sect. 4. We conclude in Sect. 5.
## 2 Methods and data
### The LAMOST DR4 sample of mCP stars
In our search for mCP stars (Paper 1), we employed a modified version of the MKCLASS code1, a computer program conceived by Richard O. Gray to classify stellar spectra on the MK system, to search for mCP stars in spectra from the fourth data release (DR4) of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) of the Chinese Academy of Science (Zhao et al. 2012; Cui et al. 2012). LAMOST is a Schmidt telescope based at Xinglong Observatory (Beijing, China) that boasts an effective aperture of 3.6\(-\)4.9 m (field of view of about 5deg) and is able to collect 4000 spectra in a single exposure (spectral resolution R \(\sim\) 1800, limiting magnitude \(r\sim\) 19 mag, wavelength coverage 3700 to 9000 A). LAMOST is therefore perfectly suited for large-scale spectral surveys; data products are made available to the public in consecutive data releases accessible through the LAMOST spectral archive.2
Footnote 1: [http://www.appstate.edu/~grayro/mkclass/](http://www.appstate.edu/~grayro/mkclass/)
Footnote 2: [http://www.lamost.org](http://www.lamost.org)
In a nutshell, suitable candidates were collected from a colour-selected sample of early-type stars by searching for the presence of the 5200 A flux depression, a characteristic of mCP stars (e.g. Kodaira 1969; Maitzen 1976; Paunzen et al. 2005; Kochukhov et al. 2005; Khan & Shulyak 2006). Spectral classification was then performed using MKCLASS_mCP, a version of the original program modified to probe a number of spectral features relevant to the identification and classification of mCP stars. In this way, a final sample of 1002 mCP stars (mostly CP2 stars and several CP4 stars) was collected, most of which were new discoveries (only 59 objects have an entry in the Renson & Manfroid (2009) catalogue). These objects are between 100 Myr and 1 Gyr old, with the majority having masses between 2 \(M_{\odot}\) and 3 \(M_{\odot}\). From an investigation of a sub-sample of 903 mCP stars with accurate astrophysical parameters, we determine a mean fractional age on the main sequence of \(\tau=63\) % (standard deviation of 23 %) and conclude that our results provide evidence for an inhomogeneous age distribution among low-mass (\(M<\) 3 M\({}_{\odot}\)) mCP stars. For more detailed information on the methods employed, we refer the reader to Paper 1.
### TESS data
The NASA TESS mission was launched in 2018 with the primary goal of discovering transiting exoplanets via high-precision time-series photometry. The four identical cameras of
TESS cover a combined field of view of 24\({}^{\circ}\)\(\times\) 96\({}^{\circ}\) and are pointed at a given region of the sky for 27.4 days (one observing sector). In its first two years of operation (the primary mission), TESS observed nearly the entire southern ecliptic hemisphere over 13 observing sectors (1 year), followed by a similar strategy for the northern ecliptic hemisphere. Thus, TESS has observed nearly the entire sky and continues to do so (while filling in small gaps in sky coverage) in its ongoing extended mission. TESS records red optical light with a wide bandpass spanning roughly 600\(-\)1000 nm, centred on the traditional Cousins \(I\) band. For optimal targets, the noise floor is approximately 60 ppm h\({}^{-1}\).
The full frame images (FFIs) from TESS are made publicly available for its entire field of view. Therefore, LCs can be extracted for virtually any object observed by the satellite. During the primary mission (Cycles 1 and 2), the FFIs were delivered with a 30-minute cadence, then a 10-minute cadence for Cycles 3 and 4, and a 200-second cadence for Cycle 5 (the current cycle at the time of this writing). It is these FFIs that constitute the fundamental data products used in this work. TESS also provides 2-minute cadence LCs for pre-selected targets, but the majority of our mCP sample was not observed in this mode. At the time of writing, TESS data were available for 782 of the 1002 mCP stars from Paper 1.
### Light curve extraction from TESS FFIs
Light curves were extracted from the TESS FFIs for all of the stars in the sample observed by TESS, up to and including TESS sector 35 (the latest available sector at the time of the LC extraction). The lightcurve (Lightkurve Collaboration et al. 2018) and TESScut (Brasseur et al. 2019) packages were used to download a 40 \(\times\) 40 pixel target pixel file (TPF) centred on the coordinates of the target star. To determine the aperture used to generate the LC (via simple aperture photometry), an initial threshold of 10 sigma relative to the median flux level in the TPF was used to select an initial pixel mask centred on the target. The aperture size was then further constrained depending on the target brightness, being restricted to a radius of 2 pixels for the faintest targets (\(T_{\rm mag}\)\(\geq\) 11), and up to 5.5 pixels for the brightest (\(T_{\rm mag}\)\(\sim\)5). Two different detrending methods were employed to remove systematic trends. The first involved simple background subtraction, where background pixels were identified in the TPF, and their average flux level in each frame was subtracted from the target LC (after accounting for the number of pixels in the adopted aperture). The second method first excluded pixels in a 10 \(\times\) 10 square centred on the target (or larger for the brightest stars), and then the remaining pixels in the TPF were used as regressors in a principal component analysis (PCA) correction, using five PCA components to remove common trends across this region of the CCD. Both detrending methods have advantages and disadvantages; for instance, the PCA detrending can over-fit the data and remove genuine longer-term trends, while background subtraction can perform poorly for certain types of systematics (e.g. associated with spacecraft momentum dumps). Therefore, for each target, the 'best' version of the LC (PCA versus background-corrected) was selected as described in Sect. 2.4. In practice, the PCA detrending was preferred about 85% of the time. For a small number of stars with problematic data from the initial LC extraction (e.g. when the star fell on the edge of the TESS CCD), LCs were later re-extracted to make use of the most recent TESS data available.
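For concreteness, the extraction just described can be sketched in a few lines of lightkurve. This is our own minimal reconstruction, not the authors' pipeline: the cutout size and 10-sigma target threshold follow the text, while the example TIC and the faint-pixel threshold for the background mask are illustrative assumptions.

```python
import lightkurve as lk

# Download a 40 x 40 pixel TESScut target pixel file for one target
search = lk.search_tesscut("TIC 39818458")          # example target from Sect. 4
tpf = search[0].download(cutout_size=40)

# Target aperture: pixels >10 sigma above the median, centred on the target
aper = tpf.create_threshold_mask(threshold=10, reference_pixel="center")

# Background pixels: everything not significantly above the median flux
bkg_mask = ~tpf.create_threshold_mask(threshold=0.001, reference_pixel=None)

raw_lc = tpf.to_lightcurve(aperture_mask=aper)
bkg_lc = tpf.to_lightcurve(aperture_mask=bkg_mask)

# Simple background subtraction, scaled by the number of aperture pixels
bkg_per_pixel = bkg_lc.flux / bkg_mask.sum()
time = raw_lc.time.value
flux = (raw_lc.flux - bkg_per_pixel * aper.sum()).value
```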
### Pre-processing the light curves
The primary two pre-processing steps were to determine the single best version of the LC to use for a given star (PCA versus background-corrected) and to remove outliers. These were both done in a single routine as follows. Both LC versions were subjected to an initial multi-term Fourier analysis that aimed to fit frequencies below \(\sim\)0.5 d\({}^{-1}\), which include rotational and slower systematic signals. This multi-term fit was subtracted from the LC, and from these residuals statistical outliers were identified automatically (being five standard deviations from the mean). At this stage, any additional outliers were manually selected via an interactive Python routine (or in some cases, statistical outliers were selected to remain if, e.g., the initial multi-term Fourier fit poorly reproduced the slower variability). The scatter in the (outlier-removed) residuals was compared for both versions of the LC, and typically the one with the lowest scatter was selected as the best version to use in the subsequent analysis. The original LC, with the low-frequency signals intact but without outliers, was then saved for further analysis. A simple sigma clipping to the original LCs was not optimal for outlier removal, since the astrophysical rotational signals were often of a higher amplitude than the deviations of the outliers from the mean flux.
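A minimal sketch of the automatic part of this outlier rejection is given below, reusing the time and flux arrays from the extraction sketch. The function name, number of Fourier terms, and frequency cut are our own stand-ins for the interactive routine described above.

```python
import numpy as np
from astropy.timeseries import LombScargle

def flag_outliers(time, flux, f_max=0.5, n_terms=3, n_sigma=5.0):
    # Fit the slow (< f_max) variability with a multi-term Fourier model
    ls = LombScargle(time, flux, nterms=n_terms)
    freq, power = ls.autopower(maximum_frequency=f_max)
    resid = flux - ls.model(time, freq[np.argmax(power)])
    # Flag points deviating by more than n_sigma from the residual mean
    return np.abs(resid - np.mean(resid)) > n_sigma * np.std(resid)

mask = flag_outliers(time, flux)
time, flux = time[~mask], flux[~mask]
```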
### Variability analysis
As expected for mCP stars, a preliminary analysis of the TESS LCs showed that rotational variability was by far the most common and dominant photometric signal for the sample. For stars with stable bright surface spots (e.g. mCP stars), the observed brightness in broadband photometry is modulated at the rotational period. The exact shape of the variability pattern depends on factors such as the inclination angle and the spot sizes and distribution, but in general the photometric signal is non-sinusoidal and thus forms a harmonic series in the frequency spectrum computed from the LC. With a standard Fourier analysis, frequency peaks are often found at the rotational frequency and its harmonics, but the strongest peak may be at one of these harmonics. Instead, a two-term frequency analysis (fitting \(f\) and 2\(\times\)\(f\) for a pre-defined grid of frequencies) more reliably found the strongest peak at the rotational frequency automatically and was thus preferred for the automated determination of rotational frequencies. An example of this is illustrated in Fig. 1. Including additional terms did not improve results. The primary tool we used to determine rotation periods was thus a modified generalised Lomb-Scargle periodogram (Zechmeister & Kurster 2009; Press et al. 1992; VanderPlas et al. 2012; VanderPlas & Ivezic 2015); we used two Fourier terms, employing the timeseries.LombScargle package of Astropy (Astropy Collaboration et al. 2013, 2018).
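The two-term search itself reduces to a few calls to Astropy's LombScargle (again reusing time and flux from the sketches above); the frequency-grid bounds here are illustrative choices, not the authors' exact settings.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Two-term generalised Lomb-Scargle: fits f and 2f simultaneously, so the
# strongest peak lands on f_rot even for double-waved light curves (Fig. 1)
ls2 = LombScargle(time, flux, nterms=2)
freq, power = ls2.autopower(minimum_frequency=1 / 27.0,  # ~one TESS sector
                            maximum_frequency=10.0,
                            samples_per_peak=10)
f_rot = freq[np.argmax(power)]
print(f"P_rot ~ {1.0 / f_rot:.4f} d")
```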
This two-term frequency analysis was applied to the entire sample, and the strongest peak was presumed to be the rotational frequency and was thus tabulated. For each star, plots were made phasing the LC to 0.5\(\times\), 1\(\times\), and 2\(\times\) this frequency and were manually inspected (along with the one- and two-term frequency spectrum) to ensure the correct rotational period was identified. All systems where the automatic analysis was in doubt were analysed manually with Period04 (Lenz & Breger 2005). The most common reason for the automatic analysis to fail were cases where relatively strong systematic effects dominated the LC (especially for the faintest sources), but where rotational modulation could usually still be recovered by detrending against these (often much slower) systematics. For the slowest rotators, the short 27 d baseline of TESS was insufficient to sample a full rotational cycle and rotational periods could only be
coarsely estimated (cf. also Sect. 3.1). In some cases, it was not possible to determine or even estimate a rotational period from the TESS data (e.g. when the rotation period was too long, the amplitude too small, or the data problematic).
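The manual vetting step of phasing each LC at 0.5\(\times\), 1\(\times\), and 2\(\times\) the candidate frequency can be mimicked with a small matplotlib helper (our own; time, flux, and f_rot as in the sketches above):

```python
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for ax, mult in zip(axes, (0.5, 1.0, 2.0)):
    phase = (time * mult * f_rot) % 1.0     # phase-fold at a multiple of f_rot
    ax.plot(phase, flux, 'k.', ms=1)
    ax.set_title(f'{mult} x f_rot')
    ax.set_xlabel('phase')
axes[0].set_ylabel('flux')
plt.tight_layout()
plt.show()
```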
A given star in our sample may also display non-rotational photometric variability due to, for example, stochastic low-frequency excess (astrophysical correlated red noise, Bowman et al. 2019, 2020), pulsation, binarity, or a combination thereof. To investigate this, a two-term model of the rotational modulation was subtracted from the LC, whereafter the standard (one-term) frequency spectrum was calculated, which was then fit to determine the red noise profile (similar to the method used by Bowman et al. 2020; Labadie-Bartz et al. 2021). The red noise profile, multiplied by 5, was then used as a threshold above which other statistically significant frequencies (in general not associated with the stellar rotation from the first step) were automatically identified as candidates with additional variability. This step led to the identification of 107 such candidates.
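A hedged sketch of this thresholding step follows, assuming freq and amp hold the frequency grid and residual amplitude spectrum after the rotational model has been subtracted. The profile below is the Bowman et al. (2019)-style red-noise law; the fit settings are our assumptions, and the exact implementation used in this work may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def red_noise(nu, alpha0, nu_char, gamma, c_white):
    # Red-noise amplitude profile plus a white-noise floor
    return alpha0 / (1.0 + (nu / nu_char) ** gamma) + c_white

popt, _ = curve_fit(red_noise, freq, amp,
                    p0=[np.max(amp), 1.0, 2.0, np.median(amp)])

# Keep peaks exceeding 5x the local red-noise fit as candidate signals
significant = amp > 5.0 * red_noise(freq, *popt)
candidate_freqs = freq[significant]
```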
However, given the large pixel size of TESS (21 arcseconds), contaminating flux from neighbouring stars falling into the aperture ('blending') is a concern. Therefore, LCs were re-extracted for these candidates including up to the most recent Cycle 5 images for a pixel-level blending analysis. This was done in two steps. The first 'coarse' analysis involved marking the location of all nearby _Gaia_ sources (with \(G_{\rm mag}<15\)) overlaid on an image of the TESS TPF, and plotting the LC and frequency spectrum of each pixel in the vicinity of the target and then comparing the per-pixel frequency spectrum to that calculated from the PCA-detrended LC for the target star. In this way, candidate pulsational signals (or other variabilities) could be localised to determine if they originate in or near the target star or in a nearby blended neighbour. A suite of plots were inspected manually to make this determination. This analysis was generally sufficient to identify variable sources \(\sim\)2 or more pixels away from the mCP target star. Each object that passed this first analysis was then examined in more detail using the TESS_localize3 Python package (Higgins & Bell 2022). TESS_localize is designed to locate the origin of variability signals to within one-fifth of a TESS pixel. In all objects analysed with TESS_localize, the rotational variability was confirmed to originate in the mCP star, and two objects with additional frequencies were rejected as blends (in both these cases the blended variable was faint, \(G_{\rm mag}\sim\)17 to 18, and about one pixel away from the mCP star).
Footnote 3: [https://github.com/Higgins00/TESS-Localize](https://github.com/Higgins00/TESS-Localize)
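The 'coarse' per-pixel check can be reproduced with plain lightkurve and Astropy, as in the following sketch (our own helper, not the authors' code); the sub-pixel TESS_localize refinement is not reproduced here.

```python
import numpy as np
from astropy.timeseries import LombScargle

def pixel_power_map(tpf, f_cand):
    # Lomb-Scargle power at the candidate frequency, pixel by pixel;
    # the map should peak on the source carrying the signal
    t = tpf.time.value
    ny, nx = tpf.flux.shape[1:]
    pmap = np.full((ny, nx), np.nan)
    for j in range(ny):
        for i in range(nx):
            y = tpf.flux.value[:, j, i]
            good = np.isfinite(y)
            if good.sum() > 100:
                pmap[j, i] = LombScargle(t[good], y[good]).power(f_cand)
    return pmap  # compare the peak location to the target's Gaia position
```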
Each star determined to potentially have inherent variability in addition to rotation is listed in Sect. 4. Those objects with additional variability due to contaminating flux from blended neighbouring sources are indicated (see Sect. 3.1) in order to minimise duplication of efforts in future studies. We note that the typically low-amplitude signals from blended neighbours do not impact the analysis of the rotational variability of these objects - no cases were identified where the presumed rotational modulation originates off-target.
## 3 Results and discussion
### Presentation of results
Table 1 in the appendix lists essential data for our sample stars, including identifiers from the TESS Input Catalogue (TIC; Stassun et al. 2019) and LAMOST; positional information and magnitude in the \(G\) band from _Gaia_ DR3 (Gaia Collaboration et al. 2016; Babusiaux et al. 2022; Gaia Collaboration et al. 2022); spectral type from Paper 1; and variability period and peak-to-peak amplitude, as derived from TESS data in this study. Where appropriate, additional information is provided in the column 'Remarks', which includes variability information gleaned from the International Variable Star Index of the AAVSO (VSX; Watson 2006). The VSX is the most up-to-date collection of variable star data and constantly updated with new variability catalogues and results from the literature. It contains the results of most relevant papers dealing with the periods of ACV variables (including more recent studies, such as Paunzen & Maitzen 1998, Wraight et al. 2012, Bernhard et al. 2015a, Bernhard et al. 2015b, Hummerich et al. 2016, and Bernhard et al. 2021). Not included at the time of this writing are the studies of Bowman et al. (2018), Sikora et al. (2019), David-Uraz et al. (2019), Bernhard et al. (2020), Mathys et al. (2020b), and Mathys et al. (2022). Except
for two stars from the list of Mathys et al. (2022)4, there are no matches between the samples of these studies and our sample.
Figure 1: Comparison of one- and two-term Fourier analysis for identifying the rotation period. _Top:_ Phased TESS LC for an example mCP star with a decidedly double-waved pattern at the rotational period (i.e. two maxima and two minima per rotation). _Middle:_ Standard single-term frequency analysis, showing that the strongest peak is at 2\(\times\)\(f_{\rm rot}\). _Bottom:_ Two-term frequency analysis, where the strongest peak is at \(f_{\rm rot}\).
Footnote 4: Mathys et al. (2022) identified TIC 239801694 and TIC 368073692 as candidate very slowly rotating mCP stars; no variations were reported. Our analysis also found no variations in TIC 239801694, but for TIC 368073692 we found a period of 20.49 d with a consistent signal in both available sectors of data.
The VSX has data for only 64 out of the 782 objects presented here. With the present work, we therefore significantly add to the sample of mCP stars with accurately determined rotational periods. We note that for several objects, the VSX designations are not accurate (although the periods may still be correct), for example when variability was interpreted as being related to binarity and not rotation or when the object was included under a generic variability type such as ROT or MISC. These objects are here identified as ACV variables for the first time.
Figure 2 illustrates a comparison of the literature periods from the VSX with the periods derived from TESS data in this work. In five cases, the period values in the VSX obviously represent half (TIC 235273014, TIC 2679119) or twice (TIC 438165498, TIC 250478934, TIC 21018674) the true rotation period. This is expected, as double-waved ACV LCs can easily be misinterpreted as single-waved LCs (or vice versa) if the data are noisy, the amplitude of the light variations is very small, or the difference in brightness between both maxima or minima is negligible. In this respect, the ultra-precise TESS data clearly have an advantage over ground-based photometric data sources for typical periods. Apart from that, the agreement is very good, which highlights the quality of the VSX period data. The single exception is for TIC 237662091, where TESS clearly revealed a long rotation period (18.630 d, although it is possible the true rotation period is twice this value), and the VSX period corresponds to the daily alias (0.94661 d).
Periods are given to the last significant digit. Approximate periods are identified by the use of a colon (':'). Several stars are clearly variable but have periods too long to be resolved with the rather short time baseline of the employed TESS data. That is expected, as ACV variables can have periods of up to decades or even longer (Mathys 2019; Mathys et al. 2019, 2020a,b, 2022). These objects are identified by the remark 'long-period var.' in Table 1. We caution that it is not always straightforward to distinguish between long-term trends inherent to the TESS data and intrinsic stellar variability, in particular when the data are noisy. Therefore, a question mark is added to the aforementioned remark when the situation remains unclear.
Most of these suspected long-period ACV variables boast data from one sector only (time baseline of 27 days) so that not more than a part of the rotational cycle was covered in the available data. These objects are consequently listed with a period of ">30." in Table 1. Two examples of stars with periods longer than a single TESS sector are shown in Fig. 3, which also illustrates the difficulties encountered in period determination for these objects even when data from several sectors are available. Since each sector is reduced separately, there are different systematic trends and different amounts of contaminating flux from neighbouring stars, so the measured photometric amplitude can differ from one sector to the next, and different constant flux offsets may need to be applied to each sector of data (or even each half-sector; to avoid discontinuities). For stars such as these, the background-corrected version of the LC typically produces better results (as plotted here) since the PCA detrending tends to remove or distort the longer-term signals (see Sect. 2.3).
In 62 stars, no variability could be inferred from TESS data. For 23 of these objects, a LC of reasonable or good quality is extracted, yet no variability is seen (remark 'data fine, no var.'). We consider these stars prime candidates for very slowly rotating mCP stars. One of these objects (TIC 239801694) has been proposed to be a very slowly rotating mCP star candidate by Mathys et al. (2022).
For the remaining objects, blending issues or problems with either the LC extraction or detrending yield unreliable results. Some of these stars do not fall on useful pixels on the TESS CCD, and thus no LC could be extracted (remark 'edge of CCD, no LC extracted'). Other targets are faint and in a relatively crowded field and so the automatic aperture selection failed (remark 'faint, aperture selection failed'), or there is significant blending from (often relatively bright) neighbouring sources, rendering it difficult or impossible to isolate the target star (remark 'blending dominates, LC unreliable'). Finally, in some LCs there are problems in detrending against systematic effects making it difficult to determine whether or not there is any astrophysical variability ('systematics dominate, LC unreliable').
Stars of special interest, which are further discussed in Sect. 4, are marked by an asterisk ('*') in the column 'Remarks'. These are the EBs (Sect. 4.1) and stars with additional signals that we attribute to pulsation (or, in some cases where these additional signals have a harmonic structure, perhaps to rotation or binarity of an unresolved object; Sect. 4.2). In regard to the pulsator candidates, corresponding remarks identify whether additional signals in the low-frequency (\(<\)10 d\({}^{-1}\); 'add. low-freq. signal') or high-frequency (\(>\)10 d\({}^{-1}\); 'add. high-freq. signal') realms were detected.
The LCs of several objects clearly show eclipses or other additional variability that a pixel-level blending analysis revealed to originate in a neighbouring star in close proximity on the sky (cf. Sects. 2.5 and 4.1). These objects are identified by the remark 'eclipses not on target' and 'add. var. not on target', respectively, in Table 1, mainly to avoid confusion in further studies dealing with these stars.
### Rotation period versus fractional age on the main sequence and stellar mass
Figure 4 investigates the distribution of rotation periods (upper panel) and the photometric peak-to-peak amplitudes in the TESS bandpass (lower panel) of the 720 sample stars for which these
parameters could be derived from TESS data. Our results are in excellent agreement with the literature and illustrate the well-known peak around \(P_{\rm rot}\sim 2\) days (e.g. Renson & Manfroid 2009; Bernhard et al. 2015a,b; Hummerich et al. 2016; Sikora et al. 2019b) and the typical magnitude range of the photometric variations (e.g. Manfroid & Mathys 1986; Bernhard et al. 2015a) that appear somewhat reduced in the broad red bandpass of the TESS mission (600\(-\)1000 nm; cf. Bernhard et al. 2020).
Figure 2: Comparison of literature periods and periods derived from the analysis of TESS data in this work for the 64 stars with periods listed in the VSX. The outlying data point refers to TIC 237662091, which is discussed in the text.
The correlations of the derived rotation periods with the fractional age on the main sequence (\(\tau\)) and the stellar mass (\(M\)) are illustrated in Fig. 5. The \(\tau\) and \(M\) values are taken from Paper 1. We find a correlation in the sense that more evolved stars have longer rotation periods, which is in agreement with results from the literature that confirmed that the evolution of rotation periods among mCP stars agrees very well with the assumption of conservation of angular momentum during the main-sequence evolution (cf. e.g. the discussion in Adelman 2002; Netopil et al. 2017). Magnetic braking is well known to spin down magnetic B stars as they evolve (Shultz et al. 2019b), and could also be a factor in the trend observed in our mCP sample of slower rotation with age. However, it is generally suggested that magnetic braking is weak in mCP stars and that the primary reason for spin-down is changes of the moment of inertia (with angular momentum conserved; North 1998; Kochukhov & Bagnulo 2006).
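As a back-of-the-envelope illustration (our own, assuming rigid rotation, a constant gyration radius \(k\), and negligible magnetic braking), conservation of angular momentum ties the spin-down directly to the growth of the stellar radius along the main sequence:

\[J=I\Omega\simeq kMR^{2}\,\frac{2\pi}{P_{\rm rot}}={\rm const}\quad\Longrightarrow\quad\frac{P_{\rm rot}(\tau)}{P_{\rm rot}(0)}\simeq\left(\frac{R(\tau)}{R_{\rm ZAMS}}\right)^{2},\]

so a star whose radius doubles over its main-sequence lifetime would lengthen its rotation period by a factor of about four, qualitatively matching the trend in the upper panel of Fig. 5.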
Apparently, there is no correlation between rotation period and mass. The stars with no detected rotational variability (presumed to be very slow rotators) have a similar distribution of age compared with the majority of the sample where a rotation period is determined, but may preferentially have lower masses. However, with only 23 very slow rotator candidates, the sample is too small to draw any statistical conclusions.
While our results lend themselves perfectly for further statistical analyses, these are out of scope of this study and are best investigated with as large a sample of mCP stars as possible. This will be the topic of an upcoming paper (Paunzen et al., in preparation). Nevertheless, some preliminary findings on the
correlations between several observables are briefly discussed in Sect. B.
Figure 3: Two stars with periods longer than a single TESS sector. For TIC 470270857 (top), only one sector of data was available. For TIC 240121061, one sector in Cycle 2 and three consecutive sectors in Cycle 4 were available. Even with three sectors, the rotation period is difficult to determine – it may be \(\sim\)40 days (if single-waved) or \(\sim\)80 days (if double-waved).
Figure 4: Histograms of the distribution of rotational periods (upper panel) and photometric peak-to-peak amplitudes in the TESS bandpass (lower panel) of the 720 sample stars for which these parameters could be derived.
## 4 Notes on special objects
### Eclipsing binaries
Four systems are found to be EBs. Many additional targets contain eclipses in the extracted LCs, but a pixel-level blending analysis (Sect. 2.5) shows that the eclipses originate in a neighbouring star. These four EBs are listed and briefly described below and are shown in Fig. 6. EBs containing CP stars are rare and offer valuable opportunities for precise determinations of the stellar and system properties (e.g. Kochukhov et al. 2021). Among the hot stars on the upper main sequence, global magnetic fields are found in about 10% of effectively single stars, but only in 2% of close binaries (Alecian et al. 2015). Discovering and characterising EBs which host magnetic stars is important for constraining the origin of magnetism in hot stars (e.g. Shultz et al. 2019).
TIC 2941395 (= UCAC4 624-023360, \(V_{\rm mag}\) = 13.5): The rotational signal is not in phase with the eclipses - that is, the rotation and binary periods are not identical, with the mCP star apparently rotating slower than the orbit. A frequency analysis was performed after clipping out all eclipses to derive a rotation period of \(P_{\rm rot}\) = 2.346688 d (\(f_{\rm rot}\) = 0.426132 c/d). The orbital period is \(P_{orb}\) = 1.6815 d (\(f_{orb}\) = 0.5947 c/d). At such short orbital periods, synchronisation is expected, perhaps casting doubt on whether the mCP star is involved in the eclipsing system; the configuration may instead be, for example, a hierarchical triple system with an inner EB and an outer mCP star. This system has not previously been reported as an EB.
TIC 39818458 (= HD 40759, \(V_{\rm mag}\) = 9.3): This is an EB, with eclipses on-target, and is included in the TESS OBA EB catalogue of IJspeert et al. (2021). There appears to be variability caused by three different scenarios - rotation, pulsation, and orbital motion. In addition to the clearly visible slower pulsation (consistent with g-modes), there are also many higher-frequency \(\delta\) Scuti-like pulsation modes (between \(\sim\)50 - 60 d\({}^{-1}\)). All eclipses are the same depth (i.e. there is no visible secondary eclipse) and are not flat-bottomed, occurring every 3.8155 days. The rotational period of the mCP star is determined to be \(P_{\rm rot}\) = 3.3949695 d. This system is being characterised in more detail by Semenko et al., in prep., and so is not discussed further in this work.
Figure 5: Derived rotation period from TESS versus the fractional main-sequence age (\(\tau\), expressed as a percentage; upper panel) and stellar mass (lower panel). Data were binned, with the overlaid box plots extending from the lower to upper quartile values and the horizontal red line representing the median. The notches in the box indicate the 95% confidence interval of the median, and the whiskers extend to the limits of the binned data, excluding the statistical outliers. In the bottom panel, the first bin includes masses less than 1.8 \(M_{\odot}\), and the last bin all masses greater than 3.6 \(M_{\odot}\). Otherwise, the mass bins are 0.3 \(M_{\odot}\) wide. There is a correlation with more evolved stars having longer rotation periods, but there is apparently no correlation with the mass. Stars with additional high-frequency (\(>\)10 d\({}^{-1}\)) or low-frequency (\(<\)10 d\({}^{-1}\)) signals are indicated by the orange circles and blue squares, respectively (not all of them have stellar parameters from Paper 1). Yellow triangles mark the 23 stars where no variability was detected (and thus no rotation period could be measured), which are candidate very slow rotators. Stars with estimated rotation periods of '\(>\)30.' (i.e. slow but detectable rotational variability longer than a single TESS sector, as in Fig. 3) are included in this plot with \(P_{\rm rot}\) = 30 d as a lower limit. The four EBs are marked by pink stars, but their mass and age may not be accurate since their binarity was not considered in Paper 1.
TIC 143533909 (= LAMOST J041819.79+414611.3, \(G_{\rm mag}=14.2\)): This is an EB with a primary and secondary eclipse that are synchronised to the rotation period (2.6274 d). The eclipses occur near, but not exactly at, the maximum and minimum of the rotational brightness. With identical orbital and rotational periods, it seems likely that the mCP star is genuinely part of the EB system. This has not previously been reported as an EB.
TIC 234878810 (= HD 259273, \(V_{\rm mag}=9.73\)): This is an EB with primary and secondary eclipses (depths of about 12 and 8 ppt, respectively), where the orbital period is equal to the rotational period (3.4118 d). The eclipses occur at the maximum and minimum brightness of the out-of-eclipse variation, which is apparently symmetric about the eclipses. As a relatively bright star, this system is amenable to follow-up to determine the orbital motions and stellar properties of both components. This is included in the TESS OBA EB catalogue of IJspeert et al. (2021).
### Pulsators
Many tens of stars include signals in addition to the primary rotational variability in their LCs. After a blending analysis of the TESS images, 25 of these remain as candidates where these additional signals originate on-target. However, some cases remain ambiguous, for example when there is at least one additional source within a few arcseconds of the mCP target star. Additionally, even when there are no nearby _Gaia_ sources, it is possible that such additional signals have their origin in a close unresolved companion star or binary system. Some of these 'extra' signals have a harmonic structure, hinting that their origin is related to rotation (but not from the mCP target) or binarity (but not eclipsing). On the other hand, when multiple signals are present but without any harmonic structure, their origin is more likely to be pulsational. These 25 systems (excluding TIC 39818458, which has already been discussed in Sect. 4.1) are listed and briefly described here.
#### 4.2.1 High-frequency (\(>\)10 d\({}^{-1}\)) signals
There are 12 stars with additional frequencies above 10 d\({}^{-1}\). These frequencies are likely caused by pulsation, since rotational and/or binary signals cannot be this fast except in exotic cases, which would be incompatible with the mCP spectroscopic classification (e.g. a double white-dwarf binary or involving M dwarfs). Some of these also include signals below 10 d\({}^{-1}\). The TESS frequency spectra for these 12 stars are shown in Fig. 7.
TIC 16485771 (= HD 49198, \(V_{\rm mag}=9.3\)): This star is fairly bright and exhibits multiple \(\delta\) Scuti pulsation frequencies mostly between 13 - 24 d\({}^{-1}\). There are no nearby contaminating sources that could be the origin of these signals. A preliminary analysis of spectra shows the mCP star to have large radial velocity variations, indicating binarity, so it is unclear if the mCP star pulsates.
TIC 664373217 (= NGC 1664 131, \(V_{\rm mag}=11.52\)): Two other _Gaia_ sources fall on the same pixel as the target. The marginal signal at 12.6 d\({}^{-1}\) cannot be confirmed or ruled out as originating in the mCP star.
TIC 35884762 (= HD 63843, \(V_{\rm mag}=10.25\)): TESS does not detect rotation, but the group of signals between about 10 and 20 d\({}^{-1}\) apparently originates on-target (there are no nearby _Gaia_ sources with \(G_{\rm mag}<15\)). These are consistent with p-mode \(\delta\) Scuti pulsation. The high-frequency pulsation and apparent lack of rotational modulation are very similar to HD 220556 (A2 SrEuCr, i.e. the same as for HD 63843), observed by the \(Kepler\) K2 mission (Bowman et al., 2018).
TIC 172414656 (= TYC 2430-1205-1, \(V_{\rm mag}=11.11\)): In addition to rotation, there are groups of frequencies centred around 2 d\({}^{-1}\), 4 d\({}^{-1}\), 7.5 d\({}^{-1}\), and also 21 d\({}^{-1}\), plus a more isolated signal at 0.8 d\({}^{-1}\) (not harmonically related to \(f_{\rm rot}=0.31922\) d\({}^{-1}\)). These all apparently originate on-target, although it cannot be ruled out that this object is a close unresolved binary (perhaps with two variable components). A re-inspection of the LAMOST spectrum indicates no obvious signs of binarity. The rotational frequency of the mCP star is \(f_{\rm rot}=0.31922\) d\({}^{-1}\), and the two strongest frequencies near 21 d\({}^{-1}\) are separated by almost exactly 2\(\times f_{\rm rot}\) (to within \(<\)1%), and thus likely originate in the mCP star. The signals in the group near 4 d\({}^{-1}\) can be constructed from simple linear combinations of the frequencies near 2 d\({}^{-1}\), and thus arise in the same star (as in e.g. Kurtz et al., 2015). These two groups resemble the g-mode pulsation found in \(\gamma\) Dor pulsators (for \(|m|=1,2\), respectively). If this is the case, a star oscillating in such g modes should have a rotational frequency in excess of 0.31922 d\({}^{-1}\), and so these groups may not be inherent to the mCP star. It is less immediately clear if there are relationships between the signals near 7.5 d\({}^{-1}\) and other detected frequencies.
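A simple way to automate combination-frequency checks like the one above is sketched below (our own helper; the tolerance and integer range are illustrative choices):

```python
import itertools

def find_combinations(cands, f1, f2, tol=1e-3, n_max=3):
    # Return (f, a, b) where f ~ a*f1 + b*f2 for small integers a, b,
    # flagging candidate frequencies that are linear combinations of others
    hits = []
    for f in cands:
        for a, b in itertools.product(range(-n_max, n_max + 1), repeat=2):
            if (a, b) != (0, 0) and abs(f - (a * f1 + b * f2)) < tol:
                hits.append((f, a, b))
    return hits
```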
TIC 174947334 (= TYC 2933-1569-1, \(V_{\rm mag}=10.86\)): There is a signal at 20.7 d\({}^{-1}\), which seems to originate on-target. There are no confounding nearby _Gaia_ sources with \(G_{\rm mag}<15\), and no indication this signal stems from a nearby source.
TIC 234077422 (= TYC 159-3043-1, \(V_{\rm mag}=11.16\)): A signal at 19.87 d\({}^{-1}\) (amplitude of 0.2 ppt) exists in the TESS data, which cannot obviously be attributed to a neighbouring source. However, TESS_localize may hint that this high-frequency signal belongs to a faint star about 1 pixel distant (_Gaia_ DR3 3132393625493168512, \(G_{\rm mag}=17.07\)), but due to the faintness of this neighbour and the low amplitude of the signal this is not conclusive.
TIC 235377004 (= LAMOST J065400.61+063645.2, \(G_{\rm mag}\) = 12.27): Besides rotation, there are a few frequencies between 3 and 15 d\({}^{-1}\), but there is a _Gaia_ source of similar magnitude within 1 pixel of the target, and it is unclear spatially where these signals originate in the TESS images. Still, an analysis with TESS_localize prefers that these signals come from the mCP star.
TIC 262003816 (= HD 277634, \(V_{\rm mag}=9.63\)): There are many low amplitude signals between about 10 to 23 d\({}^{-1}\), confirmed on-target with TESS_localize. These probably represent \(\delta\) Scuti pulsation, similar to TIC 35884762. The lower-frequency signals are further harmonics of \(f_{\rm rot}\).
TIC 319614922 (= LAMOST J062348.44+043007.6, \(G_{\rm mag}\) = 11.96): There is a signal at about 20 d\({}^{-1}\), but there are two sources (including the target) of similar brightness falling on the same TESS pixel, which are not resolved, and multiple fainter sources within one to two pixels. Otherwise, there is no indication of blending from a resolved neighbour, but TESS_localize cannot reliably localise the high-frequency signal.
TIC 387226282 (= HD 266119, \(V_{\rm mag}=10.63\)): There are many high-frequency signals between about 4 and 40 d\({}^{-1}\). The strongest, at 12.8 d\({}^{-1}\) (amplitude of about 0.5 ppt) is confidently detected on-target (it is seen in pixels across the entire point spread function), but the others are lower-amplitude and less readily localised. However, there are no nearby sources that show any indication of carrying these signals. TESS_localize finds these signals consistent with originating in the mCP star.
TIC 427377135 (= HD 36955, \(V_{\rm mag}\) = 9.58): There are multiple high-frequency signals between 31 and 55 d\({}^{-1}\) (plus one near 19 d\({}^{-1}\)), all of which originate on-target. SIMBAD lists this as a 'Double or Multiple star', probably due to the object being listed in the Washington Visual Double Star Catalog (Mason et al. 2001), where the secondary is probably 'early-K' and is 1.5 arcsec distant. However, all _Gaia_ sources within 1 arcminute are fainter than 16th magnitude (the closest, at 13 arcsec, is \(G_{\rm mag}\) = 21), casting doubt on this star being a visual double, and there is no indication of blending from some neighbouring source in the TESS images. The rotation frequency of the mCP star is 0.43765 d\({}^{-1}\). There are three additional low-frequency signals unrelated to this rotation frequency. The strongest is at 0.56546 d\({}^{-1}\), and the next two are at slightly below two and three times this, almost, but not quite, forming a harmonic series, which suggests the signals are probably not related to rotation or binarity (see Fig. 8). These signals may be consistent with a sequence of g modes, as in the \(\gamma\) Dor pulsators. The higher-frequency signals are numerous and seemingly unrelated to these lower frequencies, and are probably indicative of \(\delta\) Scuti pulsation.
TIC 431659618 (TYC 4001-1858-1, \(V_{\rm mag}\) = 10.75): There is perhaps a significant frequency at about 11.8 d\({}^{-1}\), which seems to originate on-target (amplitude \(\sim\)0.5 ppt). However, this is located within a broader 'bump' in the frequency spectrum that may be due to some unidentified systematic effect in the data. TESS_localize finds rotation on-target, but cannot locate the higher-frequency signal (perhaps hinting at its origin in systematics).
#### 4.2.2 Lower-frequency (\(<\)10 d\({}^{-1}\)) signals
In addition to the stars with higher-frequency signals (some of which also exhibit lower-frequency signals in addition to the mCP rotation), there are 13 stars with low-frequency signals that are not harmonics of the main rotation frequency. In most of these, pulsation is likely, but in others the additional signals seem more consistent with binarity or the rotation of an unresolved source that is not the mCP star (i.e. when the signals have a harmonic structure like for TIC 34366540, 252325936, 268376046, and 403748236). The TESS frequency spectra for these 13 stars are shown in Fig. 8.
TIC 21018674 (= UCAC4 713-059112, \(V_{\rm mag}\) = 13.36): There is a pair of signals at 1.0736 and 1.4385 d\({}^{-1}\) with \(f_{\rm rot}\) at 0.6649 d\({}^{-1}\), and some nearby lower-amplitude signals. All confidently originate on target.
TIC 26434309 (= HD 281171, \(V_{\rm mag}\) = 11.33): The group of frequencies centred near 3.2 d\({}^{-1}\) (the two strongest of which are 3.10 and 3.23 d\({}^{-1}\)) originate on target.
TIC 34366540 (= TYC 4765-708-1, \(V_{\rm mag}\) = 10.63): There is a signal at 2.83 d\({}^{-1}\), plus its lower-amplitude harmonic. There
is a nearby _Gaia_ source (less than one-tenth of a pixel away) with \(G_{\rm mag}\) = 13.37, and thus it is not possible to differentiate the spatial origin of TESS signals between these two sources. These signals, plus the lower-frequency rotation, were determined to originate in one or both of this close pair of sources.
Figure 6: LCs of the four EBs identified in the sample. In the top two panels, a single sector of TESS data is shown, un-phased (since neither the rotational nor pulsational signals are synchronised to the binary orbit). The bottom two panels are phased to the rotational periods, which are identical to the orbital periods. TIC identifiers are indicated in each panel.
TIC 73340040 (= TYC 4850-398-1, \(V_{\rm mag}\) = 9.97): There is a frequency group centred near 2.9 d\({}^{-1}\), and probably a lower-amplitude group centred near 2 d\({}^{-1}\). There are no nearby contaminating sources, and all signals were confirmed to be on-target.
TIC 104876807 (= TYC 3350-370-1, \(V_{\rm mag}\) = 11.53): There is a closely spaced pair of additional frequencies just above 2\(\times\)\(f_{\rm rot}\), causing the appearance of a beating pattern in the LC. All signals originate on-target. The light variability pattern is reminiscent of the variability seen in HD 174356, which is suspected to show g-mode pulsation in addition to rotational variability (Mikulasek et al. 2020).
TIC 122563793 (= HD 277595, \(V_{\rm mag}\) = 9.55): Besides the slow rotation, there is a weak group of frequencies centred around 1.5 d\({}^{-1}\). While the rotation could be confidently localised to the mCP star, the lower-amplitude frequency group was more ambiguous but still most likely originates on-target.
TIC 153501560 (= LAMOST J061341.68+114751.7, \(G_{\rm mag}\) = 12.61): There is a signal at 2.93 d\({}^{-1}\), but it is unclear if it is on-target or not. However, there is also no indication that it is due to blending from a neighbouring star. The rotational modulation can be confidently attributed to the mCP star.
TIC 252325936 (= TYC 3738-1376-1, \(V_{\rm mag}\) = 11.10): A harmonic pair of signals, near 1.2 and 2.4 d\({}^{-1}\), were confidently found to originate on-target.
TIC 268376046 (= HD 292968, V=10.96): There seem to be two pairs of harmonics that originate on-target. The first is at 0.4898 d\({}^{-1}\) (amplitude of about 1.6 ppt; presumed to be the rotational frequency of the mCP star) and its harmonic (amplitude about 0.25 ppt), which were successfully localised to the mCP star. The second is at 0.9130 d\({}^{-1}\) (amplitude about 0.5 ppt) and its first harmonic also with an amplitude of 0.5 ppt. These latter two signals most likely originate on-target, as TESS_localize_ found a relative likelihood of 91%, with the next most likely star (at 9%) being both further from the localisation of these signals and much fainter (at \(G_{\rm mag}\) = 16.14).
TIC 374614033 (= LAMOST J020417.01+553439.5, \(G_{\rm mag}\) = 12.49): There are two groups of frequencies around 3.4 - 3.9 d\({}^{-1}\) and 6.9 - 7.3 d\({}^{-1}\). These resemble the frequency groups often found in classical Be stars or \(\gamma\) Dor pulsators. However, a star with such frequency groups would be rotating very rapidly (perhaps with \(f_{\rm rot}\sim 3\) d\({}^{-1}\)), which generally is not the case with mCP stars (and is inconsistent with the rotational period of this mCP star, which is longer than the single TESS sector in which it was observed, and of significantly higher amplitude than the pulsational signals). In _Gaia_ DR3, there are two very close sources at these coordinates. The brighter of the two (\(G_{\rm mag}\) = 12.830), presumed to be the mCP star, is _Gaia_ EDR3 456542864521078144, and the fainter (\(G_{\rm mag}\) = 13.15, 0.54 arcsec away) is _Gaia_ EDR3 456542864517459072. This pair of sources is too close to resolve with TESS_localize. It seems likely that the mCP star shows only (slow) rotational modulation while the pulsational signals arise in a rapidly rotating g-mode pulsator.
TIC 403748236 (= TYC 3728-990-1, \(V_{\rm mag}\) = 12.15): A signal at 0.81 d\({}^{-1}\) (slightly above the rotational frequency of 0.635 d\({}^{-1}\)), and its first harmonic, unambiguously originate on-target.
TIC 623248544 (= TYC 3684-1139-1, \(V_{\rm mag}\) = 12.18): This is a He-weak/CP4 star and one of the hottest objects in the sample of Paper 1. There is an additional signal at 0.87 d\({}^{-1}\), which is located just below the rotational frequency of 0.92 d\({}^{-1}\) and causes a beating pattern in the LC (apparently without any harmonics). All signals are attributable to the target star. This star rotates relatively rapidly for an mCP star, and thus Rossby modes (r modes) may be relevant (which are retrograde and intrinsically low frequency in the co-rotating frame, and thus appear slightly below the observed rotational frequency). In previous studies, r modes have been reported to be present in CP stars, albeit in non-magnetic ones (Saio 2018).
TIC 445937333 (= HD 263921, \(V_{\rm mag}\) = 10.24): The strongest non-rotational signal is at 2.24 d\({}^{-1}\), and there are many additional weaker signals between about 1.6 and 5.2 d\({}^{-1}\). There is another _Gaia_ source (\(G_{\rm mag}\) = 15.47) about 0.2 pixels away. While TESS_localize found the rotational signal to most likely come from the mCP star, the localisation of these additional signals was more ambiguous, although there is no convincing evidence they arise off-target.
## 5 Conclusion
Using photometric time-series data from the TESS mission, we carry out an investigation of the photometric variability of the sample of 1002 mCP stars discovered in LAMOST archival spectra by Paper 1. At the time of writing, TESS data were available for 782 of these objects. Our main findings are summarised as follows.
* Rotational periods or, in cases where only part of a rotational cycle was covered by the available data, period estimates are derived for 720 mCP stars. With the present work, we therefore significantly add to the sample of mCP stars with rotational period determinations.
* In 62 stars, no variability could be inferred from TESS data. For 23 of these objects, a LC of reasonable or good quality is extracted, yet no variability is seen (noted as "data fine, no var." in Table 1). We consider these stars prime candidates for very slowly rotating mCP stars. For the remaining objects, blending issues or problems with the LC extraction or detrending routines yield unreliable results.
* After a careful blending analysis of the TESS images to sort out 'false positives', we identify four EB systems that likely host an mCP star, as well as 25 stars with additional signals that in most cases are attributed to pulsation (12 stars with frequencies above 10 d\({}^{-1}\) and 13 stars with frequencies below 10 d\({}^{-1}\)). These 25 stars are prime candidates for asteroseismic studies. All objects of special interest are marked by an asterisk ('*') in Table 1.
* Some objects have LCs that clearly show eclipses or other additional variability, which a pixel-level blending analysis reveals to originate in a neighbouring star in close proximity on the sky. They are identified in Table 1 in order to avoid confusion in further studies that deal with these stars.
* The distribution of rotation periods and the photometric peak-to-peak amplitudes of our sample stars are in excellent agreement with the literature.
* We investigate the correlations between rotation periods with fractional ages on the main sequence and stellar mass. More evolved stars have longer rotation periods, which is in agreement with the assumption of the conservation of angular momentum during the main-sequence evolution. No correlation with mass is found.
With our work, we identify prime candidates for detailed follow-up photometric and asteroseismic studies and lay the
foundation for detailed statistical investigations. Future studies will be concerned with an analysis of the light variability of further samples of mCP stars (e.g. Shang et al., 2022).
Figure 7: Frequency spectrum before (lighter grey) and after (black) detrending against the rotational modulation (and in some cases low-frequency systematics) for the 12 stars with higher-frequency signals (>10 d\({}^{-1}\)). The red circles mark frequencies detected in a manual Period04 analysis. TIC identifiers, the spectral type from Paper 1, and the _Gaia_ magnitude are given in each panel.
Figure 8: Similar to Fig. 7, but for the lower-frequency signals (<10 d\({}^{-1}\)). The rotational frequency is indicated by a vertical solid blue line, and its first three harmonics as dotted lines. TIC 172414656 and 427377135 are also plotted in Fig. 7, but here the low-frequency regime is emphasised. Red circles mark frequencies not related to the mCP star rotation. Frequencies that form a harmonic pair are marked with a vertical orange dash.
###### Acknowledgements.
We thank the referee, Dr. Gautier Mathys, for comments that improved the manuscript. J.L.-B. thanks Dr. Coralie Neiner for sharing her preliminary spectroscopic analysis for some of this sample. Part of this work was supported by the German _Deutsche Forschungsgemeinschaft, DFG_ project number Ts 1/2-1. J.L.-B. acknowledges support from FAPESP (grant 2017/23731-1). This paper includes data collected by the TESS mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the TESS mission is provided by NASA's Science Mission Directorate. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France (DOI: 10.26093/cds/vizier). The original description of the VizieR service was published in 2000, A&AS 143, 23. This research made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al., 2013, 2018). This research made use of Lightkurve, a Python package for Kepler and TESS data analysis (Lightkurve Collaboration et al., 2018).
|
2301.08646
|
The Gauge issue and the Hamiltonian theory of cosmological perturbations
|
We present a general formalism for the Hamiltonian description of
perturbation theory around any spatially homogeneous spacetime. We employ and
refine the Dirac method for constrained systems, which is very well-suited to
cosmological perturbations. This approach includes a discussion of the
gauge-invariant dynamics of perturbations as well as an analysis of gauge
transformations, gauge-fixing, partial gauge-fixing and spacetime
reconstruction. We will introduce the Kucha\v{r} parametrization of the
kinematical phase space as a convenient tool for studying the gauge
transformations. The key element of this approach is the reconstruction of
spacetime based on gauge-fixing conditions.
|
Alice Boldrin
|
2023-01-20T15:51:47Z
|
http://arxiv.org/abs/2301.08646v1
|
# The Gauge issue and the Hamiltonian theory of cosmological perturbations
###### Abstract
We present a general formalism for the Hamiltonian description of perturbation theory around any spatially homogeneous spacetime. We employ and refine the Dirac method for constrained systems, which is very well-suited to cosmological perturbations. This approach includes a discussion of the gauge-invariant dynamics of perturbations as well as an analysis of gauge transformations, gauge-fixing, partial gauge-fixing and spacetime reconstruction. We will introduce the Kuchar parametrization of the kinematical phase space as a convenient tool for studying the gauge transformations. The key element of this approach is the reconstruction of spacetime based on gauge-fixing conditions.
## I Introduction
In the attempt to obtain a quantum theory suitable for the description of the primordial structure of the Universe, we study the Hamiltonian formalism for cosmological perturbation theory (CPT). This work has been done before with different background spacetime models like the Friedman universe Friedman (1982) and the Bianchi Type I model Bianchi (1983). Our aim is to study the complete Hamiltonian formalism in a general background focusing on the gauge-independent description of CPT as well as the issue of gauge fixing (see e.g. Bucher (2001); Bucher (2002) for alternative discussions on the gauge issue in CPT), gauge transformations and spacetime reconstruction. We employ the Dirac method Dirac (1979) to study the Hamiltonian in different gauges and reconstruct the spacetime metric from gauge-invariant quantities (Dirac observables). We also discuss an alternative method based on the so-called Kuchar decomposition Kuchar (1989) which provides a parametrization of the phase space in which the constraints play the role of canonical variables conjugate to the gauge-fixing conditions. For a more detailed discussion and for an application of the presented method, see Boldrin (2011).
## II Cosmological perturbation theory
The Hamiltonian in the Arnowitt-Deser-Misner (ADM) formalism [8] expanded to second order reads1
Footnote 1: We assume the topology of the spacetime to be \(\mathcal{M}\simeq\mathbb{T}^{3}\times\mathbb{R}\) so as to have a spatially compact universe and avoid ambiguous definitions of the symplectic structure for the background (homogeneous) variables.
\[\mathbb{H}=\int_{\mathbb{T}^{3}}\left(\overline{N}\mathcal{H}_{0}^{(0)}+ \overline{N}\mathcal{H}_{0}^{(2)}+\delta N^{\mu}\delta\mathcal{H}_{\mu}\right) d^{3}x \tag{1}\]
where \(\overline{N}\) is the background lapse function and \(\delta N^{\mu}\), with \(\mu=0,i\), are the first order lapse and shift functions. The Hamiltonian densities \(\mathcal{H}^{(0)}\) and \(\mathcal{H}^{(2)}\) are respectively zeroth and second order, whereas \(\delta\mathcal{H}_{\mu}\) represent the first order constraints. We assume a spatially homogeneous background spacetime with spatial coordinates defined such that the background shift vector \(N^{i}\) vanishes as well as the background Hamiltonian \(\mathcal{H}_{i}^{(0)}\). The Hamiltonian (1) is a function of the background canonical variables \(\bar{q}_{ij}\) and \(\bar{\pi}^{ij}\), which are respectively the three-metric and three-momenta, and the perturbed variables defined as \(\delta q_{ij}=q_{ij}-\overline{q}_{ij}\) and \(\delta\pi^{ij}=\pi^{ij}-\overline{\pi}^{ij}\).
The Hamiltonian (1) defines a gauge system for the following reasons:
First, at each spatial point the constraints algebra is closed, i.e.
\[\{\delta\mathcal{H}_{i},\delta\mathcal{H}_{j}\}=0,\quad\{\delta\mathcal{H}_{ 0},\delta\mathcal{H}_{i}\}=0, \tag{2}\]
and this result holds for any homogeneous background. Furthermore, the constraints are dynamically stable, i.e.
\[\{\mathbb{H},\delta\mathcal{H}_{0}\}=-\delta\mathcal{H}_{,i}^{i}(x)\approx 0,\quad\{\mathbb{H},\delta\mathcal{H}_{i}\}=0, \tag{3}\]
where the "weak equality" \(\approx\) means that the equality holds on the constraint surface.
## III Gauge-fixing and Dirac procedure
The four constraints \(\delta\mathcal{H}_{\mu}\) generate a gauge freedom which can be removed by imposing four gauge-fixing conditions \(\delta c_{\mu}=0\). The Poisson brackets between the gauge-fixing conditions and the constraints form an invertible matrix, \(\det\{\delta c_{\mu},\delta\mathcal{H}_{\nu}\}\neq 0\). Applying the constraints and the gauge-fixing conditions, we can reduce our Hamiltonian, which will now depend on 4 physical variables \((\delta q_{I}^{phys},\delta\pi_{phys}^{I})\) instead of the 12 ADM perturbation variables \((\delta q_{ij},\delta\pi^{ij})\). Those new variables form a canonical coordinate system on a submanifold of the kinematical phase space. This
submanifold is thus called the physical phase space3. The parametrization provided by these physical variables is defined by the gauge-fixing surface that intersects all gauge orbits (see Fig. 1). It is convenient to define a set of gauge-independent variables \(\delta D_{I}\) satisfying
Footnote 3: Its canonical structure is now given by the Dirac brackets \(\{.,.\}_{D}=\{.,.\}-\{.,\delta\phi_{\mu}\}\{\delta\phi_{\mu},\delta\phi_{\nu}\}^ {-1}\{\delta\phi_{\nu},.\}\), where \(\delta\phi_{\mu}\in(\delta\mathcal{H}_{\mu},\delta c_{\mu})\).
\[\{\delta D_{I},\delta\mathcal{H}_{\mu}\}\approx 0,\,\forall\mu, \tag{4}\]
which parametrize the space of gauge orbits in the constraint surface. Those variables are known as Dirac observables, and their number equals the number of physical variables. There exists a one-to-one correspondence between the Dirac observables and the physical variables, such that
\[\delta D_{I}+\epsilon_{I}^{\mu}\delta c_{\mu}+\xi_{I}^{\mu}\delta\mathcal{H}_ {\mu}=\delta O_{I}^{phys}(\delta q_{I}^{phys},\delta\pi_{phys}^{I}) \tag{5}\]
where \(\epsilon_{I}^{\mu}\) and \(\xi_{I}^{\mu}\) are background coefficients. Using this new parametrization the Hamiltonian can be written in a gauge-independent manner as \(\mathcal{H}_{phys}^{(2)}=\mathcal{H}_{red}^{(2)}+\mathcal{H}_{ext}^{(2)}\), where \(\mathcal{H}_{phys}^{(2)}\) denotes the so called physical Hamiltonian, \(\mathcal{H}_{red}^{(2)}\) is the reduced Hamiltonian in terms of the physical variables and \(\mathcal{H}_{ext}^{(2)}\) is the extra Hamiltonian generated by the time-dependent canonical transformation needed to change parametrization.
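To make the Dirac-bracket construction of this section concrete, here is a minimal sympy sketch for a toy second-class system, a particle constrained to a circle, rather than the cosmological phase space of the paper; the constraint choice and all names are purely illustrative. After gauge fixing, the pairs of constraints and gauge conditions above form exactly this kind of invertible bracket matrix.

```python
import sympy as sp

# Toy second-class system: a particle in the plane constrained to a circle.
# phi1 = x^2 + y^2 - R^2 and phi2 = x*p_x + y*p_y (conservation of phi1).
x, y, px, py, R = sp.symbols('x y p_x p_y R', real=True)
q, p = [x, y], [px, py]

def pb(f, g):
    """Canonical Poisson bracket {f, g} in the variables (q, p)."""
    return sum(sp.diff(f, qi)*sp.diff(g, pi) - sp.diff(f, pi)*sp.diff(g, qi)
               for qi, pi in zip(q, p))

phi = [x**2 + y**2 - R**2, x*px + y*py]

# C_{mu nu} = {phi_mu, phi_nu}: its invertibility is the toy analogue of
# det{delta c_mu, delta H_nu} != 0 in the text.
C = sp.Matrix(2, 2, lambda i, j: pb(phi[i], phi[j]))
Cinv = C.inv()

def dirac_bracket(f, g):
    """{f, g}_D = {f, g} - {f, phi_mu} (C^{-1})^{mu nu} {phi_nu, g}."""
    corr = sum(pb(f, phi[m]) * Cinv[m, n] * pb(phi[n], g)
               for m in range(2) for n in range(2))
    return sp.simplify(pb(f, g) - corr)

print(dirac_bracket(x, px))      # y**2/(x**2 + y**2): deformed canonical pair
print(dirac_bracket(x, py))      # -x*y/(x**2 + y**2)
print(dirac_bracket(x, phi[0]))  # 0: constraints are Casimirs of {.,.}_D
```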
## IV Spacetime reconstruction
In the previous section we discussed how to obtain the physical Hamiltonian. In order to reconstruct the spacetime we still need to find the values of the first-order lapse and shift. To do so we use the consistency equation \(\{\delta c_{\mu},\mathbb{H}\}=0\), which, from Eq. (1), implies
\[\frac{\delta N^{\mu}}{N}=-\{\delta c_{\nu},\delta\mathcal{H}_{\mu}\}^{-1} \left(\{\delta c_{\nu},\delta\mathcal{H}^{(0)}\}+\{\delta c_{\nu},\mathcal{H }^{(2)}\}\right) \tag{6}\]
Figure 1: Graphical representation of the Dirac procedure.
This equation is only meaningful in the constraint surface.
## V Kuchar decomposition
We present a different parametrization of the kinematical phase space where the constraints take the role of canonical variables. To this end, we define two sets of canonical variables. The first set comprises the first order constraints \(\delta\mathcal{H}_{\mu}\) and the 4 gauge-fixing functions, here denoted as \(\delta C^{\mu}\). The second set is given by the Dirac observables defined in Eq. (4), split into canonical pairs \((\delta Q_{I},\delta P^{I})\). The Hamiltonian written in this parametrization will then be
\[\mathbb{H}\rightarrow\mathbb{H}_{K}=\mathbb{H}+\mathbb{K}=\int\left(\overline {N}\mathcal{H}_{0}^{(0)}+\overline{N}(\mathcal{H}_{0}^{(2)}+\mathcal{K})+ \delta N^{\mu}\delta\mathcal{H}_{\mu}\right)d^{3}x, \tag{7}\]
where \(\mathbb{K}\) is the extra Hamiltonian coming from the time-dependent parametrization. We notice that, since the constraints are conserved on the constraint surface, terms of the form \(\propto\delta C^{\mu}\delta C^{\nu}\), \(\propto\delta Q_{I}\delta C^{\mu}\) and \(\propto\delta P^{I}\delta C^{\mu}\) are not present in Eq. (7). Moreover, considering that \(\mathcal{H}^{(2)}\approx\mathcal{H}_{red}^{(2)}\) and \(\mathcal{K}^{(2)}\approx\mathcal{H}_{ext}^{(2)}\), which tells us that the two dynamics must be weakly equal, the total Hamiltonian can only be of the form
\[\mathbb{H}_{K}= N\int\bigg{[}\underbrace{\mathcal{H}_{phys}^{(2)}(\delta Q_{I}, \delta P^{I})}_{\text{physical part}}+\] \[+\underbrace{\left(\lambda_{1}^{\mu I}\delta Q_{I}+\lambda_{2I}^{ \mu}\delta P_{I}+\lambda_{3}^{\mu\nu}\delta\mathcal{H}_{\nu}+\lambda_{4\nu}^{ \mu}\delta C^{\nu}+\frac{\delta N^{\mu}}{N}\right)}_{\text{weakly vanishing part}}\bigg{]}d^{3}x, \tag{8}\]
where \(\lambda_{1}^{\mu I}\), \(\lambda_{2I}^{\mu}\) and \(\lambda_{3}^{\mu\nu}\) are zeroth-order coefficients that can depend on the gauge-fixing \(\delta C^{\mu}\). The value of \(\lambda_{4\nu}^{\mu}\) is gauge-invariant; it can be shown to be fixed unambiguously by the hypersurface deformation algebra, cf. Eq. (3).
### Gauge transformations
An interesting property of the Kuchar decomposition comes from the freedom in the choice of the canonical variable \(\delta C^{\mu}\). This means that we have a class of parametrizations of the kinematical phase space. In particular, we can define a new set of gauge-fixing conditions \(\delta\tilde{C}^{\mu}\), and the full gauge transformation will be given by the map \(\mathbb{G}:(\delta\mathcal{H}_{\mu},\delta C^{\mu},\delta Q_{I},\delta P^{I}) \rightarrow(\delta\tilde{\mathcal{H}}_{\mu},\delta\tilde{C}^{\mu},\delta \tilde{Q}_{I},\delta\tilde{P}^{I})\), where \(\delta\mathcal{H}_{\mu}=\delta\tilde{\mathcal{H}}_{\mu}\). We are free to assume that the new gauge-fixing functions are canonically conjugate to the constraints \(\delta\mathcal{H}_{\mu}\). Hence \(\{\delta\mathcal{H}_{\nu},\delta\tilde{C}^{\mu}-\delta C^{\mu}\}=0\), which implies
\[\delta\tilde{C}^{\mu}=\delta C^{\mu}+\alpha_{I}^{\mu}\delta P^{I}+\beta^{\mu I }\delta Q_{I}+\gamma^{\mu\nu}\delta\mathcal{H}_{\nu}, \tag{9}\]
where \(\alpha^{\mu}_{I}\), \(\beta^{\mu I}\) and \(\gamma^{\mu\nu}\) are background parameters.
The gauge-fixing condition is only relevant on the constraint surface, so Eq. (9) is fully determined by the parameters \(\alpha^{\mu}_{I}\) and \(\beta^{\mu I}\). Moreover, this means that the space of gauge-fixing conditions is an affine space of dimension equal to the number of Dirac observables. The introduction of a different gauge will lead to the new Hamiltonian \(\mathbb{H}_{\tilde{K}}\), with an extra Hamiltonian density \(\Delta\mathcal{K}^{(2)}\). Studying the new symplectic form of the system, we find that \(\gamma^{\mu\nu}\) depends only on \(\alpha^{\mu}_{I}\) and \(\beta^{\mu I}\), which are thus the only parameters needed to uniquely determine the gauge transformation.
### Spacetime reconstruction
As discussed in Sec. IV, the spacetime reconstruction is obtained by the dynamical equations \(\delta\dot{C}^{\mu}=0\), which means that it is sensitive to the chosen parametrization. In particular in the Kuchar parametrization we will have \(\{\delta C^{\nu},\mathbb{H}_{K}\}_{K}=0\), which, from Eq. (7) becomes
\[\frac{\delta N^{\mu}}{N}=-\frac{\partial(\mathcal{H}^{(2)}+\mathcal{K}^{(2)}) }{\partial\delta\mathcal{H}_{\mu}}. \tag{10}\]
Notice the above formula only depends on the weakly vanishing part of the Hamiltonian since the lapse and shift are gauge-dependent quantities. It is interesting to consider the difference between the lapse and shift in two gauges. Using Eq. (8), we have
\[\begin{split}&\frac{\delta\tilde{N}^{\mu}}{N}\bigg{|}_{\delta \tilde{C}^{\mu}=0}-\frac{\delta N^{\mu}}{N}\bigg{|}_{\delta C^{\mu}=0}\approx \\ &\quad\approx\left(\lambda^{\mu}_{4\nu}\beta^{\nu I}+\dot{\beta}^ {\mu I}+\frac{\partial^{2}\mathcal{H}^{(2)}_{phys}}{\partial\delta Q_{I} \partial\delta P^{J}}\beta^{\mu J}-\frac{\partial^{2}\mathcal{H}^{(2)}_{phys} }{\partial\delta Q_{I}\partial\delta Q_{J}}\alpha^{\mu}_{\phantom{\mu}J}\right) \delta Q_{I}\\ &\quad+\left(\lambda^{\mu}_{4\nu}\alpha^{\nu}_{\phantom{\nu}I}+ \dot{\alpha}^{\mu}_{\phantom{\mu}I}-\frac{\partial^{2}\mathcal{H}^{(2)}_{phys} }{\partial\delta P^{I}\partial\delta Q_{J}}\alpha^{\mu}_{\phantom{\mu}J}+ \frac{\partial^{2}\mathcal{H}^{(2)}_{phys}}{\partial\delta P^{I}\partial \delta P^{J}}\beta^{\mu J}\right)\delta P^{I}.\end{split} \tag{11}\]
We see that the spacetime reconstruction in a new gauge can be obtained by the lapse and shift in the initial gauge plus some terms which solely depend on the physical part of the Hamiltonian \(\mathcal{H}^{(2)}_{phys}\) and the gauge-invariant coefficient \(\lambda^{\mu}_{4\nu}\), which can be obtained from the algebra of the hypersurface deformations.
## VI Partial gauge-fixing
We previously discussed the gauge-fixing defined as setting the conditions \(\delta C^{\mu}=0\). However it can be interesting to study the case in which these 4 conditions are substituted with conditions on
the lapse and shift functions. This is what we call partial gauge-fixing. From this consideration we can study the transformations which preserve the lapse and shift functions, that is, \(\left.\frac{\delta\tilde{N}^{\mu}}{N}\right|_{\delta\tilde{C}^{\mu}=0}-\left.\frac {\delta N^{\mu}}{N}\right|_{\delta C^{\mu}=0}=0\). Using Eq. (11) and solving it for \(\alpha^{\nu}_{\ I}\) and \(\beta^{\mu I}\), we can resolve the ambiguity in the choice of the gauge-fixing condition:
\[\begin{split}\dot{\alpha}^{\mu}_{\ I}&=-\beta^{\mu J }\frac{\partial^{2}\mathcal{H}^{(2)}_{phys}}{\partial\delta P^{J}\partial \delta P^{I}}+\alpha^{\mu}_{\ J}\frac{\partial^{2}\mathcal{H}^{(2)}_{phys}}{ \partial\delta Q_{J}\partial\delta P^{I}}-\lambda^{\mu}_{4\nu}\alpha^{\nu}_{ \ I},\\ \dot{\beta}^{\mu I}&=-\beta^{\mu J}\frac{\partial^{ 2}\mathcal{H}^{(2)}_{phys}}{\partial\delta P^{J}\partial\delta Q_{I}}+\alpha^ {\mu}_{\ J}\frac{\partial^{2}\mathcal{H}^{(2)}_{phys}}{\partial\delta Q_{J} \partial\delta Q_{I}}-\lambda^{\mu}_{4\nu}\beta^{\nu I},\end{split} \tag{12}\]
The above equations fix the gauge-fixing function at all times once \(\delta C^{\mu}(t_{0})\) is fixed at an initial time \(t_{0}\). This means that the choice of \(\delta C^{\mu}(t_{0})\) fixes the initial three-surface. Given the initial values of the Dirac observables \((\delta Q_{I}(t_{0}),\delta P^{I}(t_{0}))\), we are able to explicitly reconstruct the initial three-surface in terms of the ADM perturbation variables. Moreover, we are able to fully reconstruct the spacetime geometry, since the evolution of the three-surface with its coordinates is completely determined by the evolution of the gauge-fixing function \(\delta\tilde{C}^{\mu}(t)\) and the independent evolution of the gauge-invariant variables4 \((\delta Q_{I}(t),\delta P^{I}(t))\).
Footnote 4: The spacetime coordinate system is independent of the evolution of these variables.
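As an illustration of how the system (12) propagates a gauge choice from an initial time, the following sketch integrates a single-mode toy version with scipy; the physical Hamiltonian \(\mathcal{H}^{(2)}_{phys}=(P^{2}+w(t)^{2}Q^{2})/2\), the frequency \(w(t)\) and the constant \(\lambda_{4}\) are assumptions made purely for the example and are not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single Dirac pair (Q, P) with H_phys = (P^2 + w(t)^2 Q^2)/2, so the only
# nonzero Hessian blocks of H_phys are d2H/dP2 = 1 and d2H/dQ2 = w(t)^2.
w = lambda t: 1.0 + 0.3 * np.sin(t)   # assumed time-dependent frequency
lam4 = 0.1                            # assumed (constant) lambda_4 coefficient

def rhs(t, y):
    alpha, beta = y
    d_alpha = -beta * 1.0     - lam4 * alpha  # -beta H_PP + alpha H_QP - lam4 alpha
    d_beta  = alpha * w(t)**2 - lam4 * beta   # -beta H_PQ + alpha H_QQ - lam4 beta
    return [d_alpha, d_beta]

# Fixing (alpha, beta) at t0 fixes the gauge-fixing function, and hence the
# three-surface and its coordinates, at all later times.
sol = solve_ivp(rhs, (0.0, 10.0), y0=[1.0, 0.0], dense_output=True)
print(sol.y[:, -1])   # (alpha, beta) at t = 10
```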
## VII Conclusions
We were able to simplify the Hamiltonian approach to CPT by showing that it is possible to separate the gauge-independent dynamics of perturbations from the issues of gauge-fixing and spacetime reconstruction. In particular, we showed how the spacetime reconstruction can be pursued with the sole knowledge of the gauge-fixing conditions. Moreover, the discussed Kuchar decomposition serves as a useful and insightful tool for the study of gauge-fixing conditions and spacetime reconstruction. The space of gauge-fixing conditions and the formula for the spacetime reconstruction are given explicitly for any gauge.
This approach might be applied to multiple conceptual problems in quantum cosmology, such as the problem of time, the semi-classical spacetime reconstruction, or the relation between the kinematical and reduced phase space quantization. Moreover, the complete control over the gauge-fixing issue provided by the presented method could be very useful for the problem of gluing perturbed spacetimes to other spacetime models (e.g., ones that include non-linearities). The choice of the gluing surface and its coordinates should be nicely described by our method.
###### Acknowledgements.
The author acknowledges the support of the National Science Centre (NCN, Poland) under the research grant 2018/30/E/ST2/00370.
|
2307.02783
|
UIT-Saviors at MEDVQA-GI 2023: Improving Multimodal Learning with Image
Enhancement for Gastrointestinal Visual Question Answering
|
In recent years, artificial intelligence has played an important role in
medicine and disease diagnosis, with many applications to be mentioned, one of
which is Medical Visual Question Answering (MedVQA). By combining computer
vision and natural language processing, MedVQA systems can assist experts in
extracting relevant information from medical image based on a given question
and providing precise diagnostic answers. The ImageCLEFmed-MEDVQA-GI-2023
challenge carried out visual question answering task in the gastrointestinal
domain, which includes gastroscopy and colonoscopy images. Our team approached
Task 1 of the challenge by proposing a multimodal learning method with image
enhancement to improve the VQA performance on gastrointestinal images. The
multimodal architecture is set up with BERT encoder and different pre-trained
vision models based on convolutional neural network (CNN) and Transformer
architecture for features extraction from question and endoscopy image. The
result of this study highlights the dominance of Transformer-based vision
models over the CNNs and demonstrates the effectiveness of the image
enhancement process, with six out of the eight vision models achieving better
F1-Score. Our best method, which takes advantages of BERT+BEiT fusion and image
enhancement, achieves up to 87.25% accuracy and 91.85% F1-Score on the
development test set, while also producing good result on the private test set
with accuracy of 82.01%.
|
Triet M. Thai, Anh T. Vo, Hao K. Tieu, Linh N. P. Bui, Thien T. B. Nguyen
|
2023-07-06T05:22:20Z
|
http://arxiv.org/abs/2307.02783v2
|
UIT-Saviors at MEDVQA-GI 2023: Improving Multimodal Learning with Image Enhancement for Gastrointestinal Visual Question Answering
###### Abstract
In recent years, artificial intelligence has played an important role in medicine and disease diagnosis, with many applications to be mentioned, one of which is Medical Visual Question Answering (MedVQA). By combining computer vision and natural language processing, MedVQA systems can assist experts in extracting relevant information from medical images based on a given question and providing precise diagnostic answers. The ImageCLEFmed-MEDVQA-GI-2023 challenge carried out a visual question answering (VQA) task in the gastrointestinal domain, which includes gastroscopy and colonoscopy images. Our team approached Task 1 - Visual Question Answering of the challenge by proposing a multimodal learning method with image enhancement to improve the VQA performance on gastrointestinal images. The multimodal architecture is set up with a BERT encoder and different pre-trained vision models based on convolutional neural network (CNN) and Transformer architectures for feature extraction from questions and endoscopy images. The result of this study highlights the dominance of Transformer-based vision models over the CNNs and demonstrates the effectiveness of the image enhancement process, with six out of the eight vision models achieving a better F1-Score. Our best method, which takes advantage of BERT+BEiT fusion and image enhancement, achieves up to 87.25% accuracy and 91.85% F1-Score on the development test set, while also producing a good result on the private test set with an accuracy of 82.01%.
visual question answering, multimodal learning, BERT, pre-trained models, gastrointestinal imaging, colonoscopy analysis, medical image processing
## 1 Introduction
The digestive system is one of the most complex and essential systems in the human body, consisting of various organs such as the mouth, stomach, intestines, and rectum. From the process of digestion in the stomach to the absorption of nutrients in the small and large intestines, and finally the elimination of waste through the rectum, the entire process involves the interaction and coordination of each organ to ensure the supply of nutrients and energy to the body. Any issues that occur in any part of the digestive system can directly impact the entire
gastrointestinal tract; examples include inflammation of the intestines, digestive cancers, and diseases of the stomach and colon, especially colorectal diseases, which remain a significant concern for the healthcare community. According to estimates from the American Cancer Society1, colorectal cancer ranks as the third leading cause of cancer-related deaths for both men and women in the United States. The projected numbers for colorectal cancer cases in the year 2023 are 106,970 new cases of colon cancer and 46,050 new cases of rectal cancer, with an estimated 52,550 deaths. However, it is important to note that the mortality rate from colorectal cancer has decreased over the past decade due to advancements in scientific and technological research. Screening techniques allow abnormalities in the colon and rectum to be detected and removed before they develop into cancer.
Footnote 1: [https://www.cancer.org/cancer/types/colon-rectal-cancer/about/key-statistics.html](https://www.cancer.org/cancer/types/colon-rectal-cancer/about/key-statistics.html).
Clinical imaging techniques such as X-rays, computed tomography (CT), or ultrasound are often not highly effective in diagnosing pathological conditions in the colon. Therefore, colonoscopy remains the primary technique used for detection, screening, and treatment of gastrointestinal diseases. This method involves using a flexible endoscope, which is inserted through the anus and advanced into the colon. The real-time images of the colon obtained from the endoscopic device are displayed on a monitor, allowing the physician to observe and evaluate any abnormalities in the intestinal tract, the condition of the mucosal lining, and other structures within the colon.
Colonoscopy is considered the gold-standard screening procedure for examining and treating colorectal diseases. The endoscopic images contain a wealth of important information about the patient's condition. However, the effectiveness of the colonoscopy process can vary depending on the skills of the performer and the complexity of the endoscopic image analysis, which requires specialized knowledge and manual interpretation [1]. To improve the performance of colonoscopy in accurately detecting and classifying lesions, decision support systems aided by artificial intelligence (AI) are being rapidly developed. Among them, Visual Question Answering (VQA) is one of the most prominent techniques. Combining computer vision and natural language processing, VQA assists in extracting information from images, identifying abnormalities, and providing accurate answers to specific diagnostic questions. By integrating information from images and questions, VQA enhances the accuracy of lesion detection and classification, improves communication between users and images, and helps guide appropriate treatment strategies.
To successfully deploy VQA in the healthcare domain, in addition to algorithmic integration, a sufficiently large and diverse training dataset is required. Our research team participated in the VQA task of the ImageCLEFmed Medical Visual Question Answering on Gastrointestinal Image (MEDVQA-GI) [2] competition at ImageCLEF2023 [3]. The contribution of the paper focused on performing the VQA task with a new dataset from ImageCLEFmed MEDVQA-GI. Specifically, we employed a multimodal approach for the VQA task (Task 1), combining information from two primary data sources: endoscopic images and textual questions. To achieve a good performance on the VQA task with the provided dataset, we first performed efficient image preprocessing, which involved specular highlight inpainting and noise and black mask removal to enhance the image quality. Subsequently, we conducted experiments and compared the performance of various image feature extraction models based on CNN and Transformer architectures using both raw
and enhanced image data. The final results, with accuracy up to 87.25% on the development test set and 82.01% on the private test set, demonstrate the potential of the proposed method in improving the performance of VQA systems in the field of gastrointestinal endoscopy imaging in general and colonoscopy in particular.
## 2 Background and Related Works
### Colonoscopy Image Analysis
With the advancement of modern technology, AI has made significant contributions to the field of healthcare, specifically in the progress of the colonoscopy examination process. Currently, two potential AI-based approaches are utilized for colonoscopy image analysis: Computer-Aided Detection (CAD) and Deep Learning (DL) systems. In the CAD approach, the system utilizes image processing algorithms to improve the performance of endoscopic procedures, enabling physicians to easily detect lesions in hard-to-identify locations and reduce the chances of misdiagnosis [4]. On the other hand, the DL-based system employs a deep learning model trained on specific datasets, which enhances the accuracy of lesion detection compared to the CAD-based system [5]. However, developing algorithms for automatic analysis and anomaly detection in endoscopic images requires preliminary image preprocessing to address various factors, such as specular highlights, interlacing or artefacts, that impact the system's performance [6].
### Preprocessing Methods for Colonoscopy Images
In reality, the quality of endoscopy images depends on various factors such as the skill of the performing physician, limitations of the equipment, and certain environmental conditions. Some common difficulties in processing endoscopy images include black masks, ghost colors, interlacing, specular highlights, and uneven lighting [7]. A black mask is a black border around the edges of the image, caused by the black frame that surrounds the lenses of the endoscopy system. This frame can hinder the development of algorithms. To address this issue, techniques such as restoration, thresholding, cropping, or inpainting are necessary. Specular highlights, which are bright spots reflected from tumors or polyps captured by the camera, can disrupt the algorithms. Therefore, to remove them, we can employ detection or inpainting methods. Additionally, for issues like interlacing, ghost colors, and uneven lighting, segmentation methods can be applied to achieve optimal results [6][8][9]. Overall, preprocessing steps play a crucial role in mitigating the challenges commonly encountered with colonoscopy images. The mentioned techniques help improve the overall quality of the images, thereby enhancing the performance of analysis and diagnosis.
### Medical Visual Question Answering
Medical visual question answering (MedVQA) is an important field in medical AI that combines VQA challenges with healthcare applications. By integrating medical images and clinically relevant questions, MedVQA systems aim to provide plausible and convincing answers. While
VQA has been extensively studied in general domains, MedVQA presents unique opportunities for exploration. Currently, there are 8 publicly available MedVQA datasets, including VQA-MED-2018 [10], VQA-RAD [11], VQA-MED-2019 [12], RadVisDial [13], PathVQA [14], VQA-MED-2020 [15], SLAKE [16], and VQA-MED-2021 [15]. These datasets serve as valuable resources for advancing MedVQA research.
The basic framework of MedVQA systems typically contains an image encoder, a question encoder, a fusion algorithm, and an answering component. Other frameworks may exclude the question encoder when the question is simple. Common choices for image encoder are ResNet [17] and VGGNet [18] that are pre-trained on ImageNet dataset [19]. For language encoders, Transformer-based architectures such as BERT [20] or BioBERT [21] are commonly applied because of their proven advantages, besides the Recurrent Neural Networks (LSTM [22], Bi-LSTM [23], GRU [24]). The fusion stage, the core component of VQA methods, has typical fusion algorithms, including the attention mechanism and the pooling module. Common attention mechanisms are the Stacked Attention Networks (SAN) [25], the Bilinear Attention Networks (BAN) [26], or the Hierarchical Question-Image Co-Attention (HieCoAtt) [27]. Most multimodal pooling practices are concatenation, sum, and element-wise product. The attention mechanism can aggregate with the pooling module. The answering component has two modes of output depending on the properties of the answer. The classification mode is used if the answer is brief and limited to one or two words. Otherwise, if the response is in free-form format, the generation modules such as LSTM or GRU are taken into account. There are additional techniques to the basic concept, for instance, Sub-task strategy, Global Average Pooling [28], Embedding-based Topic Model, Question-Conditioned Reasoning, and Image Size Encoder.
## 3 Task and Dataset Descriptions
### Task Descriptions
Identifying lesions in endoscopy images is currently one of the most popular applications of artificial intelligence in the medical field. For the task at ImageCLEFmed-MEDVQA-GI-2023 [2], the main focus will be on VQA and visual question generation (VQG). The main goal is to provide support to healthcare experts in diagnosis by combining image and text data for analysis. The task consists of three sub-tasks:
1. **VQA (Visual Question Answering):** For the visual question answering part, participants are required to generate a textual answer to a given textual question-image pair. This task involves combining endoscopy images from the dataset with textual answers to respond to questions.
2. **VQG (Visual Question Generation):** This is the reverse task of VQA, where participants need to generate textual questions based on given textual answers and image pairs.
3. **VLQA (Visual Location Question Answering):** Participants are provided with an image and a question, and they are required to provide an answer by providing a segmentation mask for the image.
In this study, our team only focuses on the VQA task (Task 1) for the provided endoscopy image dataset. In general, we receive a textual question along with the corresponding image,
and the main task is to generate accurate and appropriate answers based on information from both sources. For example, for an image containing a colon polyp with the question "Where in the image is the polyp located?", the proposed VQA system should return an answer giving a textual description of where in the image the polyp is located, like upper-left or in the center of the image.
### Dataset Information
The new dataset released for the ImageCLEFmed-MEDVQA-GI-2023 challenge is based on the HyperKvasir dataset [29], the largest gastrointestinal collection with more than 100,000 images, extended with question-and-answer ground truth developed by medical collaborators. The development set and test set include a total of 3949 images from different procedures, such as gastroscopy and colonoscopy, spanning the entire gastrointestinal tract from mouth to anus. Each image has a total of 18 questions about abnormalities, surgical instruments, normal findings and other artefacts, with multiple answers possible for each, as shown in Table 1.
| ID | Question | Sample Answers |
| --- | --- | --- |
| 0 | What type of procedure is the image taken from? | “Colonoscopy”, “Gastroscopy” |
| 1 | Have all polyps been removed? | “Yes”, “No”, “Not relevant” |
| 2 | Is this finding easy to detect? | “Yes”, “No”, “Not relevant” |
| 3 | Is there a green/black box artefact? | “Yes”, “No” |
| 4 | Is there text? | “Yes”, “No” |
| 5 | What color is the abnormality? | “Red”, “Pink”, “Yellow”, ... |
| 6 | What color is the anatomical landmark? | “Red”, “Red, White”, “Pink, Red, grey”, ... |
| 7 | How many findings are present? | “0”, “1”, “2”, “3”, “4”, “5”, ... |
| 8 | How many polyps are in the image? | “0”, “1”, “2”, “3”, “4”, “5”, ... |
| 9 | How many instruments are in the image? | “0”, “1”, “2”, “3” |
| 10 | Where in the image is the abnormality? | “Center”, “Lower-left”, “Lower-right, Center-right”, ... |
| 11 | Where in the image is the instrument? | “Center”, “Lower-left”, “Lower-right, Center-right”, ... |
| 12 | Are there any abnormalities in the image? | “No”, “Polyp”, “Ulcerative colitis”, “Oesophagitis”, ... |
| 13 | Are there any anatomical landmarks in the image? | “No”, “Z-line”, “Cecum”, “Ileum”, “Pylorus”, “Not relevant” |
| 14 | Are there any instruments in the image? | “No”, “Tube”, “Biopsy forceps”, “Metal clip”, “Polyp snare, Tube”, ... |
| 15 | Where in the image is the anatomical landmark? | “Center”, “Lower-left”, “Lower-right, Center-right”, ... |
| 16 | What is the size of the polyp? | “< 5mm”, “5-10mm”, “11-20mm”, “> 20mm”, “Not relevant”, ... |
| 17 | What type of polyp is present? | “Paris ip”, “Paris ia”, “Paris is”, “Paris is, Paris iia”, ... |

Table 1: Questions and sample answers from the ImageCLEFmed-MEDVQA-GI-2023 dataset
Figure 1: Illustrations of question-answer pairs along with common abnormalities in gastrointestinal image from ImageCLEFmed-MEDVQA-GI-2023 dataset
Not all questions will be relevant to the provided image, and the VQA system should be able to handle cases where there is no correct answer. Figure 1 depicts several examples of question-answer pairs on common abnormalities in the gastrointestinal tract, such as Colon Polyps, Oesophagitis, and Ulcerative Colitis. As shown in Figure 1, there are three possible answers to the question "What color is the abnormality?": "Pink," "Red," and "White", and a typical VQA system should be able to identify all three colors. In general, the image may contain a variety of noise and components located around abnormalities, such as highlight spots or instruments, which pose a significant challenge in developing efficient VQA systems for the gastrointestinal domain.
## 4 The Proposed Approach
The method used in this study is based on a standard framework that is commonly used to tackle general VQA problems. Figure 2 depicts an overview of the proposed method for ImageCLEFmed-MEDVQA-GI-2023 dataset. In general, the VQA architecture employs powerful pre-trained models to extract visual and textual features from image-question pairs, which are then combined into a joint embedding using a fusion algorithm and passed to a classifier module to generate the appropriate answer. To improve the quality of the region of interest and achieve better VQA performance, the original image is passed through a series of enhancement procedures before being fed into the image encoder for features extraction.
### Image Enhancement
The purpose of the image pre-processing and enhancement steps is to remove noise and artifacts, which are frequently caused by the equipment used in diagnosis or by environmental difficulties. Some of the major problems are black masks, specular highlights, interlacing and uneven lighting. The impact of elements such as the black mask and specular highlights is significant since they, like the polyp, create valley information and affect the performance of polyp localization, causing the VQA system to generate incorrect answers.
Figure 2: An overview of the multimodal architecture with image enhancement for VQA challenge
This study employs pre-processing and enhancement methods to cope with specular highlights and black masks in colonoscopy images, which are prevalent artifacts in the provided dataset. The desired outcome is an enhanced image with no specular reflection or black frame while retaining the visual features of the region of interest.
#### 4.1.1 Specular Highlights Removal
The removal of specular highlights from colonoscopy images includes two sequential processes: detection of specular highlights and highlight inpainting. Figure 3 depicts the overall procedure of the method, the outcome of which is based on the combination of the Telea inpainting algorithm with an initial image restoration after several modification steps.
**Specular highlights detection.** First, it is necessary to convert the image from RGB to grey scale for the subsequent procedure. Rather than adaptive thresholding, the proposed approach employs a standard thresholding method with a fixed threshold value to identify specular highlights in all images. This is due to the gastrointestinal images' varied textures and components; if not done properly, thresholding may result in information loss. Some samples of the dataset contain text, high exposure regions and brightly colored instruments, as described in Figure 4. Aside from text in white color, high exposure regions are parts of specular highlights that received excessively high intensity compared to regular highlight spots, while the instruments are sometimes white or blue. After thresholding, these undesired elements
Figure 3: An overview of stages of the specular highlights inpainting method
may emerge in the mask, as shown in Figure 3(b), and affect the inpainting outcome. Thus, the following step is to remove these undesired elements from the mask in order to ensure consistency. To cope with these problems, two directions are considered: either to perform segmentation for text, polyp and instrument separately, or to remove the parts that exceed a certain size threshold. For simplicity, the second approach is used in this study.
The preprocessing step consists of several morphology transformations interspersed with contour detection and removal; a code sketch of this filtering is given after Formula 1 below. More specifically, a dilation operation with kernel size \(3\times 3\) is performed initially to connect the pixels related to undesirable parts. Among the obtained contours, those whose scaled area following the Modified Z-scores formula [30], as shown in Formula 1, exceeds \(17.0\) are removed from the mask. The mask is then passed into an erosion module with the same settings to restore the initial highlight extent. Finally, a Gaussian filter of size \(19\times 19\) is applied to reduce the intensity of the highlight areas and improve the inpainting performance.
\[S_{i}=\frac{|s_{i}-\tilde{s}|}{MAD} \tag{1}\]
where:
* \(S_{i}\): is the scaled area of contour \(i\) based on modified Z-score.
* \(s_{i}\): is the area of contour \(i\)
* \(\tilde{s}\): is the median area of all contours
* \(MAD=median(|s_{i}-\tilde{s}|),\forall i=1..n\): is the Median Absolute Deviation of contour areas
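A minimal OpenCV sketch of this mask-cleaning step is shown below; the function name is hypothetical, and the exact order and settings of the operations are our reading of the text, not the authors' released code.

```python
import cv2
import numpy as np

def drop_outlier_contours(mask, z_thresh=17.0):
    """Remove oversized components (text, instruments, overexposed regions)
    from a binary highlight mask using the modified Z-score of Formula 1."""
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(mask, kernel)            # connect undesirable pixels
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    areas = np.array([cv2.contourArea(c) for c in contours], dtype=np.float64)
    med = np.median(areas)
    mad = np.median(np.abs(areas - med)) + 1e-9   # Median Absolute Deviation
    scores = np.abs(areas - med) / mad            # S_i from Formula (1)
    for c, s in zip(contours, scores):
        if s > z_thresh:                          # drop contours with S_i > 17.0
            cv2.drawContours(dilated, [c], -1, 0, thickness=cv2.FILLED)
    eroded = cv2.erode(dilated, kernel)           # undo the initial dilation
    return cv2.GaussianBlur(eroded, (19, 19), 0)  # soften the mask intensity
```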
**Highlights inpainting.** Once the mask of specular highlights has been obtained, the image regions indicated by the mask are reconstructed through an efficient inpainting operation. First, a filter of size \(3\times 3\) slides across every pixel of the original image and calculates the average value. The process is repeated \(N\) times to ensure a desirable outcome. We then perform an initial restoration on the image by directly replacing its pixels under the specular highlights mask with pixels from the blurred image. Despite the drastically reduced intensity, specular highlight spots still remain in the reconstructed image, as shown in Figure 3(e). To obtain the final result, the Telea algorithm [31], a powerful image inpainting strategy, is applied to eliminate the remaining noisy and dim highlights. The inpainted image is noticeably higher in quality, with specular highlights removed without negatively impacting other areas of the image.
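Continuing the sketch above, the inpainting stage could look as follows; the number of blur passes \(N\) and the inpainting radius are not given in the paper and are assumed here.

```python
def inpaint_highlights(image, mask, n_passes=5):
    """Initial restoration by repeated 3x3 averaging, then Telea inpainting
    (cv2.INPAINT_TELEA) over the specular-highlight mask."""
    blurred = image.copy()
    for _ in range(n_passes):               # the 3x3 averaging filter, N times
        blurred = cv2.blur(blurred, (3, 3))
    restored = image.copy()
    restored[mask > 0] = blurred[mask > 0]  # initial replacement under the mask
    binary = (mask > 0).astype(np.uint8) * 255
    return cv2.inpaint(restored, binary, inpaintRadius=5,
                       flags=cv2.INPAINT_TELEA)
```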
Figure 4: An illustration of specular highlights detection from a colonoscopy image that contains text, high exposure regions and a white instrument.
#### 4.1.2 Black Mask Removal
Previous research has shown that black masks generate valley information, which can reduce polyp localization performance. Based on this, we propose a black mask removal strategy for the VQA task that still retains black box information in order to answer the question "Is there a green/black box artefact?". In general, an artificial mask of the black frame is initially created based on its border width, and then the inpainting operation is performed to remove the black frame from the image. The overall procedure is described in Figure 5. Our method does not use cropping or thresholding directly to detect and remove the black mask, because the image may contain the black box artefact, shadow regions, or a black instrument, the removal of which causes information loss and decreases VQA performance.
To detect the border width, we first perform a grey scale conversion and inverse thresholding with an erosion operation to remove noise, and then measure the distance from each edge of the image to the nearest pixel that does not belong to the mask. After determining the width of the border, the crucial step of the method is to create an artificial mask with an internal octagon shape. This can be done by creating two sub-masks, one rectangle and one circle, followed by a bitwise OR operation to combine them into the final mask, as shown in Figure 5(c). The circle mask is created with a center point based on the information of the border width and a radius calculated by multiplying the ordinate of the center point by a value \(\sigma\left(\sigma>1\right)\). In some cases, the final mask is not octagonal, as shown in the last example, but it still covers the main region of interest. Finally, the inpainting of the black mask is completed using the same procedure as
Figure 5: Stages of black mask removal process. The first row illustrates an image with a standard black mask, the second row depicts an image containing a black square and the last row contains image with black mask marked as black box artefact.
described in the previous section for specular highlights, giving the final enhanced image with the black mask removed. If a black box artefact exists in the bottom-left corner, as shown in the second example, it will not be significantly affected as long as its size is greater than the area of the mask at the respective position. For images containing an expanded black mask labeled as a black box artefact, we proceed by creating a simulated green box that contains the text and placing it in the bottom-left corner. By doing so, the text and box artefact information still remain after the inpainting procedure. Though the obtained results are quite satisfactory, there are still some cases where the mask is not completely removed and further processing steps are needed.
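A hedged sketch of the artificial-mask construction is given below; the circle center placement and the default \(\sigma\) are assumptions, since the text only states that the radius is the ordinate of the center point multiplied by \(\sigma>1\). The returned frame mask can then be inpainted exactly as in the highlight-removal sketch above.

```python
def black_frame_mask(h, w, border, sigma=1.2):
    """Build the inpainting mask for the black frame: the complement of the
    union (bitwise OR) of an inner rectangle and a circle, cf. Figure 5(c)."""
    rect = np.zeros((h, w), np.uint8)
    cv2.rectangle(rect, (border, border), (w - border, h - border), 255, -1)
    circ = np.zeros((h, w), np.uint8)
    cy = h // 2                      # assumed center placement
    radius = int(cy * sigma)         # ordinate of the center point times sigma
    cv2.circle(circ, (w // 2, cy), radius, 255, -1)
    roi = cv2.bitwise_or(rect, circ)  # region of interest to keep
    return cv2.bitwise_not(roi)       # black-frame region to inpaint
```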
### Multimodal Fusion Architecture
Since this study focuses mainly on the VQA task, the architecture should be capable of extracting meaningful features from the question and the corresponding image, and of incorporating them to give the correct answer. Our multimodal fusion architecture is set up with the following components: an image encoder for feature extraction from images, a text encoder for feature extraction from questions, a fusion algorithm for unifying modalities and a classifier for producing the appropriate answer. The proposed approach uses pre-trained Bidirectional Encoder Representations from Transformers (BERT) [20] to extract textual features from questions. As a bidirectional model, it can learn the meaning of words in a sentence by considering both the words that come before and after them. With massive pre-training data, BERT can be fine-tuned to achieve state-of-the-art results on a number of natural language processing (NLP) benchmarks. For feature extraction from the images, this study sets up and experiments with eight different pre-trained models that belong to two main families:
* CNN-based architectures including **ResNet152**[32], **Inception-v4**[33], **MobileNetV2**[34] and **EfficientNet**[35]. This group of models takes advantage of traditional CNN components such as convolutional layers, pooling layers, residual blocks and fully connected layers to achieve significant results in the computer vision field. The training of CNN-based models is more efficient, requiring fewer computational resources, compared to newer approaches based on Transformers.
* Transformer-based architectures including **ViT**[36], **DeiT**[37], **Swin Transformer**[38] and **BEiT**[39]. This family of models leverages a massive amount of training data and the Transformer's multi-head self-attention for a game-changing breakthrough in the computer vision field. ViT (Vision Transformer) and the models inspired by it initially encode the image as patch embeddings and pass them into a regular Transformer encoder for feature extraction, similar to the processing of text data. Currently, they are considered the prominent architectures for achieving state-of-the-art performance on a variety of computer vision tasks such as image classification, object detection, and semantic image segmentation.
After obtaining the embeddings of text and image, a multimodal fusion method based on concatenation is used to combine these features along the embedding dimension. The unified embedding matrix is then passed through an intermediate fully connected layer with dropout 0.5 and ReLU activation, followed by a classification layer to produce the final output. Because there can be more than one appropriate answer for each question, we approach the VQA task as
a multi-label classification problem. To successfully train the proposed architecture, multi-label binarization is used to encode a list of all possible answers into a binary vector. Furthermore, the final layer is configured with a sigmoid activation function to return an output vector of the same size containing the corresponding probability for each class.
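A minimal PyTorch sketch of this fusion architecture is shown below, using the HuggingFace checkpoints from Table 2; the use of the encoders' pooled outputs and the ordering of ReLU and dropout are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertBeitVQA(nn.Module):
    """Concatenation-fusion VQA model: BERT question encoder + BEiT image
    encoder, an intermediate FC layer, and a multi-label classifier head."""
    def __init__(self, n_answers, hidden=768):
        super().__init__()
        self.text_enc = AutoModel.from_pretrained("bert-base-uncased")
        self.img_enc = AutoModel.from_pretrained(
            "microsoft/beit-base-patch16-224-pt22k-ft22k")
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden),  # fused embedding -> intermediate FC
            nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(hidden, n_answers),   # one logit per possible answer
        )

    def forward(self, input_ids, attention_mask, pixel_values):
        t = self.text_enc(input_ids=input_ids,
                          attention_mask=attention_mask).pooler_output
        v = self.img_enc(pixel_values=pixel_values).pooler_output
        fused = torch.cat([t, v], dim=-1)   # concatenation along embeddings
        return self.classifier(fused)       # sigmoid is applied in the loss
```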
## 5 Experimental Setup
### Data Preparation
The development set released for the VQA challenge contains 2000 images of gastroscopy and colonoscopy procedures. In order to experiment and evaluate our method, we randomly divided the provided development set into three parts: train, validation, and test, with 1600 images for training and 200 images each for the validation and test sets. The data preparation process is designed to ensure that each abnormality has the same proportion in the training, validation, and testing sets, and that each image contains all 18 questions. This produces 28,800 question-answer pairs for the training set, 3600 pairs for validation and 3600 pairs for test.
All images from the development set and private test set are first passed into an image enhancement block, where the image preprocessing methods described above are applied to remove specular highlights and black masks from the images. The enhanced results are then used as input in the training and testing of the proposed VQA model.
### Experiment Configurations
Many experiments are carried out in order to evaluate the performance of the proposed methods for the ImageCLEFmed-MEDVQA-GI-2023 challenge. Specifically, each pre-trained vision model is initialized and evaluated as an image encoder and unified with the BERT encoder through concatenation fusion for multimodal learning. Table 2 gives general information on the pre-trained models used in this study, including vision model name, version and number of parameters for each fusion model. Through these experiments, we can discover the potential and limitations of each model for the VQA task and thus choose the best method for producing the final prediction on the private test set of the competition.
| Models | Version | Vision model name (HuggingFace Hub) | # Parameters |
| --- | --- | --- | --- |
| BERT+ViT | base | “google/vit-base-patch16-224-in21k” | 196M |
| BERT+DeiT | base | “facebook/deit-base-distilled-patch16-224” | 196M |
| BERT+Swin | base | “microsoft/swin-base-patch4-window7-224-in22k” | 197M |
| BERT+BEiT | base | “microsoft/beit-base-patch16-224-pt22k-ft22k” | 196M |
| BERT+ResNet152 | v1.5 | “microsoft/resnet-152” | 169M |
| BERT+Inception | v4 | “inception_v4” | 153M |
| BERT+MobileNet | V2 | “google/mobilenet_v2_1.0_224” | 112M |
| BERT+EfficientNet | b3 | “google/efficientnet-b3” | 121M |

Table 2: Statistics of multimodal fusion with pre-trained vision and language models for the VQA challenge
To achieve a comparative result, we set up the same hyperparameters for all experiments. The models are trained for 15 epochs with a batch size of 64. We utilize the Adam optimizer [40] with weight decay, an initial learning rate of 5e-5 and a linear scheduler that decreases the learning rate by 6.67% after each epoch. Since we approach the VQA task as multi-label classification, the output layer is configured to return a tensor containing the probability of each answer, and the final predicted answers for each question are obtained using a threshold value of 0.5. Accordingly, the BCEWithLogitsLoss function, which combines a Sigmoid layer and the BCELoss, is applied in the training process. After each epoch, the training loss and validation loss are calculated, and the performance is then evaluated using classification metrics such as accuracy, precision, recall and F1-Score. To ensure a meaningful result for multi-label classification, the metrics are calculated using ground truth and prediction sets of binary vectors, in which recall, precision and F1-Score are calculated per sample and then averaged. The model state that obtains the best F1-Score is used for prediction in the testing phase.
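The corresponding training configuration could be sketched as follows; `answer_lists` and `train_loader` are hypothetical placeholders, and the per-step linear schedule only approximates the per-epoch 6.67% decay described above.

```python
import torch
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.metrics import f1_score
from transformers import get_linear_schedule_with_warmup

mlb = MultiLabelBinarizer().fit(answer_lists)    # encode answer sets -> binary
model = BertBeitVQA(n_answers=len(mlb.classes_))
criterion = torch.nn.BCEWithLogitsLoss()         # Sigmoid + BCE in one module
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # Adam + weight decay
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=15 * len(train_loader))

for epoch in range(15):
    for input_ids, attn, pixels, y in train_loader:   # batch size 64
        logits = model(input_ids, attn, pixels)
        loss = criterion(logits, y.float())
        loss.backward()
        optimizer.step(); scheduler.step(); optimizer.zero_grad()

# Multi-label prediction and sample-averaged metrics at threshold 0.5
preds = (torch.sigmoid(logits) > 0.5).int().numpy()
print(f1_score(y.numpy(), preds, average="samples"))  # per-sample F1, averaged
```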
The proposed architecture is implemented in PyTorch and trained on the Kaggle platform with the following hardware specifications: Intel(R) Xeon(R) CPU @ 2.00GHz; GPU Tesla P100 16 GB with CUDA 11.4.
## 6 Experimental Results
**No image enhancement**

| Vision Models | Accuracy | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- |
| ResNet152 | 0.8419 | 0.8917 | 0.8867 | 0.8857 |
| Inception-v4 | 0.8619 | 0.9133 | 0.9067 | 0.9067 |
| MobileNetV2 | 0.8444 | 0.8932 | 0.8951 | 0.8906 |
| EfficientNet-B3 | 0.8581 | 0.9065 | 0.9049 | 0.9023 |
| ViT-B/16 | 0.8636 | 0.9134 | 0.9089 | 0.9078 |
| DeiT-B | 0.8611 | 0.9100 | 0.9026 | 0.9033 |
| Swin-B | **0.8664** | 0.9152 | **0.9094** | **0.9090** |
| BEiT-B | 0.8647 | **0.9158** | 0.9068 | 0.9074 |

**With image enhancement**

| Vision Models | Accuracy | Precision | Recall | F1-Score |
| --- | --- | --- | --- | --- |
| ResNet152 | 0.8453 ↑ | 0.8942 | 0.8894 | 0.8885 ↑ |
| Inception-v4 | 0.8625 ↑ | 0.9121 | 0.9073 | 0.9071 ↑ |
| MobileNetV2 | 0.8422 ↓ | 0.8935 | 0.8882 | 0.8867 ↓ |
| EfficientNet-B3 | 0.8572 ↓ | 0.9081 | 0.9079 | 0.9046 ↑ |
| ViT-B/16 | 0.8631 ↓ | 0.9126 | 0.9086 | 0.9073 ↓ |
| DeiT-B | 0.8625 ↑ | 0.9122 | 0.9052 | 0.9055 ↑ |
| Swin-B | 0.8717 ↑ | 0.9245 | 0.9159 | 0.9168 ↑ |
| BEiT-B | **0.8725 ↑** | **0.9253** | **0.9184** | **0.9185 ↑** |

Table 3: Comparative performance of the multimodal fusion method with vision models on the development test set.
The comparative results of the different pre-trained image models on the testing set are shown in Table 3. It is clear that, with no image enhancement, Swin-B achieves the best result with 86.64% accuracy and 90.90% F1-Score, while BEiT-B gives a slightly lower performance with an accuracy of 86.47% and 90.74% F1-Score. CNN-based vision models obtain acceptable results, but cannot match the Transformer-based models.
With image enhancement, six out of eight vision models from both CNN and Transformer architectures achieve a better performance on the F1-Score metric. BEiT-B has an outstanding result with accuracy and F1-Score of 87.25% and 91.85%, respectively. Overall, the enhancement process helps to improve the F1-Score by at least 0.4% and up to 1.11%. The results of the convolutional models still lag behind the Transformer-based models.
We found that the BERT and BEiT fusion (BERT+BEiT) with image enhancement is the best method of our approach and used it for prediction in the final private test phase. Our method obtains a good result on the private test set with an accuracy of 82.01%. Table 4 illustrates the performance evaluation of the BERT+BEiT fusion on each question from the development test set compared with the private test set. In general, 14 of the 18 questions achieve greater than 80% accuracy on the development test set, while 11 of 18 do so on the private test set. Our method still struggles to produce full and precise answers for questions with multiple answers, such as "What color is the abnormality?", or questions that refer to the location of the abnormality, anatomical landmark, and instrument.
| Question | Accuracy (dev) | Precision (dev) | Recall (dev) | F1-Score (dev) | Accuracy (private) |
| --- | --- | --- | --- | --- | --- |
| Are there any abnormalities in the image? | 0.9700 | 0.9750 | 0.9725 | 0.9733 | 0.8091 |
| Are there any anatomical landmarks in the image? | 0.9300 | 0.9300 | 0.9300 | 0.9300 | 0.6940 |
| Are there any instruments in the image? | 0.9050 | 0.9275 | 0.9200 | 0.9217 | 0.7688 |
| Have all polyps been removed? | 0.9550 | 0.9575 | 0.9600 | 0.9583 | 0.9721 |
| How many findings are present? | 0.8400 | 0.8400 | 0.8400 | 0.8400 | 0.7807 |
| How many instruments are in the image? | 0.9650 | 0.9650 | 0.9650 | 0.9650 | 0.8901 |
| How many polyps are in the image? | 0.9650 | 0.9650 | 0.9650 | 0.9650 | 0.9577 |
| Is there a green/black box artefact? | 0.9500 | 0.9500 | 0.9500 | 0.9500 | 0.9732 |
| Is there text? | 0.9250 | 0.9250 | 0.9250 | 0.9250 | 0.8787 |
| Is this finding easy to detect? | 0.8900 | 0.8900 | 0.8900 | 0.8900 | 0.8044 |
| What color is the abnormality? | 0.5800 | 0.9025 | 0.8563 | 0.8597 | 0.4969 |
| What color is the anatomical landmark? | 0.9400 | 0.9400 | 0.9400 | 0.9400 | 1.0000 |
| What is the size of the polyp? | 0.8600 | 0.8650 | 0.8700 | 0.8667 | 0.8535 |
| What type of polyp is present? | 0.8650 | 0.8800 | 0.8725 | 0.8750 | 0.8132 |
| What type of procedure is the image taken from? | 1.0000 | 1.0000 | 1.0000 | 1.0000 | 0.9938 |
| Where in the image is the abnormality? | 0.6600 | 0.9251 | 0.8805 | 0.8842 | 0.5872 |
| Where in the image is the anatomical landmark? | 0.7150 | 0.8847 | 0.8848 | 0.8766 | 0.7203 |
| Where in the image is the instrument? | 0.7900 | 0.9332 | 0.9096 | 0.9125 | 0.7688 |
| **All** | **0.8725** | **0.9253** | **0.9184** | **0.9185** | **0.8201** |

Table 4: Performance evaluation of BERT+BEiT fusion with image enhancement for each question on the development test set and private test set
## 7 Conclusion and Future Works
Along with performing image enhancement, we set up and experimented with various powerful pre-trained image models together with the BERT encoder in our proposed multimodal architecture for the VQA task at ImageCLEFmed-MEDVQA-GI-2023 [2]. The visual enhancement steps, which include specular highlight and black mask removal, help improve multimodal learning performance on the dataset by up to 1.11% F1-Score. Our best method, BERT+BEiT fusion with image enhancement, achieved 87.25% accuracy on the development test set and 82.01% on the private test set. Performance analysis shows that questions requiring multiple positions or colors in the answer remain a limitation of this study. In summary, factors such as answer imbalance, noise, and artifacts have a significant impact on our solution for the VQA task.
Our future research for this task is to improve the accuracy of the model in giving the correct answer by enriching the features from images and questions through instrument segmentation and polyp localization, with methods such as U-Net [41] and ResUNet++ [42] developed on object-specific datasets such as Kvasir-Instrument [43] and Kvasir-SEG [44]. Other advanced colonoscopy image preprocessing techniques, such as interlacing removal or uneven lighting correction, can be examined to improve the image quality. Building on the proposed system, an intelligent chatbot application can be implemented for question answering from medical images to help improve colonoscopy analysis.
## Acknowledgments
This research was supported by The VNUHCM University of Information Technology's Scientific Research Support Fund.
|
2307.13856
|
On the unreasonable vulnerability of transformers for image restoration
-- and an easy fix
|
Following their success in visual recognition tasks, Vision
Transformers(ViTs) are being increasingly employed for image restoration. As a
few recent works claim that ViTs for image classification also have better
robustness properties, we investigate whether the improved adversarial
robustness of ViTs extends to image restoration. We consider the recently
proposed Restormer model, as well as NAFNet and the "Baseline network" which
are both simplified versions of a Restormer. We use Projected Gradient Descent
(PGD) and CosPGD, a recently proposed adversarial attack tailored to pixel-wise
prediction tasks for our robustness evaluation. Our experiments are performed
on real-world images from the GoPro dataset for image deblurring. Our analysis
indicates that contrary to as advocated by ViTs in image classification works,
these models are highly susceptible to adversarial attacks. We attempt to
improve their robustness through adversarial training. While this yields a
significant increase in robustness for Restormer, results on other networks are
less promising. Interestingly, the design choices in NAFNet and Baselines,
which were based on iid performance, and not on robust generalization, seem to
be at odds with the model robustness. Thus, we investigate this further and
find a fix.
|
Shashank Agnihotri, Kanchana Vaishnavi Gandikota, Julia Grabinski, Paramanand Chandramouli, Margret Keuper
|
2023-07-25T23:09:05Z
|
http://arxiv.org/abs/2307.13856v1
|
# On the unreasonable vulnerability of transformers for image restoration - and an easy fix
###### Abstract
Following their success in visual recognition tasks, Vision Transformers (ViTs) are being increasingly employed for image restoration. As a few recent works claim that ViTs for image classification also have better robustness properties, we investigate whether the improved adversarial robustness of ViTs extends to image restoration. We consider the recently proposed Restormer model, as well as NAFNet and the "Baseline network", which are both simplified versions of a Restormer. We use Projected Gradient Descent (PGD) and CosPGD, a recently proposed adversarial attack tailored to pixel-wise prediction tasks, for our robustness evaluation. Our experiments are performed on real-world images from the GoPro dataset for image deblurring. Our analysis indicates that, contrary to what is advocated in works on ViTs for image classification, these models are highly susceptible to adversarial attacks. We attempt to improve their robustness through adversarial training. While this yields a significant increase in robustness for Restormer, results on other networks are less promising. Interestingly, the design choices in NAFNet and Baselines, which were based on \(\mathrm{iid}\) performance, and not on robust generalization, seem to be at odds with the model robustness. Thus, we investigate this further and find a fix.
## 1 Introduction
The goal of image restoration is to recover high-quality images from degraded observations. The degradation could be due to a variety of factors such as noise, blur, artifacts due to JPEG compression, raindrops, haze, and other factors. Earlier methods for image restoration [38, 6, 11, 14, 3] employed carefully chosen priors and degradation models to derive degradation-specific restoration algorithms.
Figure 1: Comparing images reconstructed by all considered models after 5 iterations of CosPGD attack. We observe strong spectral artifacts in the reconstructed images.
However, such methods are limited by the strength of the image prior and the accuracy in modeling or estimating the degradation operator. The past decade saw a large-scale adoption of deep learning methods for image restoration [44], which outperformed the classical approaches. Recent approaches [61, 54, 49] successfully adopt novel architectures such as Transformers [51, 16] and MLP-mixers [47] for restoration.
Yet, CNNs, MLP-mixers, as well as Transformers have been shown to be vulnerable to carefully crafted adversarial examples [31, 18]. Recent work [1, 9, 57, 17] also confirms the existence of such vulnerabilities in deep learning-based image restoration. Yet, existing works mainly analyze the robustness of CNN-based restoration methods. Conversely, with the introduction of novel network architectures such as vision Transformers [29, 16], MLP-mixers [47], and improved convolutional architectures [30, 26], which outperform earlier networks such as ResNets [21], there have been several studies on the robustness of these new architectures [5, 41, 46, 13, 1]. To the best of our knowledge, very limited works [13, 4] investigate the effect of architectural components and training recipes. Existing works focus on image classification and do not study restoration. Thus, to bridge this gap, we investigate the adversarial robustness of recent Transformers specialized to image restoration.
In this work we study the adversarial robustness of Transformer based restoration networks: Restormer [61], and two architectures introduced in [7], the _Baseline network_ and the _Non-linear Activation free Network (NAFNet)_, both obtained by simplifying the original Restormer with modifications to the channel attention and activation functions. Further, to better understand the architectural design choices made by [7], we include an _Intermediate network_, also considered by [7], which serves as a step between the Baseline network and NAFNet. This study is particularly interesting as recent works [56, 4] indicate that the choice of activation function significantly impacts adversarial robustness. We study the network robustness under standard and adversarial training, by considering \(\ell_{\infty}\) perturbations crafted using the PGD attack [31] and the CosPGD attack proposed in [1] for dense prediction tasks. We conduct our experiments on dynamic deblurring using the GoPro dataset [34].
Our experiments reveal that under standard training settings, Transformer based restoration networks are not robust to adversarial attacks in general. As shown in Figure 1, the networks also exhibit distinct artifacts in the reconstructions under attack. The images from the Baseline network and the Restormer exhibit severe ringing artifacts [33], whereas the NAFNet reconstructs images with very strong grid and color artifacts under adversarial attack. We find that adversarial training can largely reduce the artifacts and significantly improve the robustness of all three networks. However, the recently proposed NAFNet and Baseline network fail to rival the performance of Restormer, which leads us to contemplate the importance of the architectural components necessary to achieve robust generalization.
The main contributions of this work can be summarized as follows:
* We investigate the robustness of recently proposed Transformer based architectures for image restoration, namely image deblurring.
* We analyze the quality of the restored images and the spectral artifacts introduced by models under the aforementioned adversarial attacks.
* We study the effect of a defense strategy against adversarial attacks that consequently reduces the spectral artifacts in the reconstructed images.
* Lastly, we study the effect of certain architectural design choices in the recently proposed _state-of-the-art_ image restoration model, NAFNet, to improve their robustness.
## 2 Related Work
**Transformers for Image Restoration.** The past decade saw significant improvements in image restoration, largely owing to the adoption of deep networks trained on large datasets of clean and degraded images. While earlier restoration networks largely adopted CNN-based architectures, subsequent works also explored the use of attention mechanisms inside CNNs [62, 35, 45]. We refer to [44] for a detailed survey on deep learning approaches to restoration. More recently, vision Transformers [29, 16] are increasingly adopted for several restoration tasks. While [27, 61, 54, 12, 55] adopt Transformers for generic restoration tasks, a few works focus on specific restoration tasks such as deblurring [48], deraining [28], dehazing [20, 43], and removing degradations due to adverse weather conditions [50]. These networks typically employ encoder-decoder-based architectures with Transformer blocks combined with convolutions.
**Adversarial Robustness of Image Restoration.** While the adversarial robustness of deep networks for image recognition is extensively studied, a few works also study the robustness of image restoration networks to adversarial attacks. [9, 10, 60] evaluate the adversarial robustness of deep learning-based image super-resolution. While [10] propose adversarial regularization, [60] propose frequency-domain adversarial example detection, combined with random frequency masking, to improve robustness. [17] evaluate the adversarial robustness of deblurring networks with and without knowledge of the blur operator, and introduce targeted attacks on restoration. In [8], the adversarial robustness of image-to-image translation models is studied, including restoration tasks, and adversarial training and different transformation-based defenses are evaluated. Yan et al. [57] investigate the robustness of image denoising to zero-mean adversarial perturbations and propose training with clean and adversarial samples to improve robustness. Yu et al. [59] investigate the adversarial robustness of deep learning-based rain removal, and study the effect of architecture and training choices on robustness. Yet, these works do not focus on the more recent Transformer based restoration networks, with the notable exception of [1], which simply benchmarks the adversarial performance of the image restoration networks recently proposed by [7].
**Robustness of Transformers & other modern architectures.** Recently, Vision Transformers (ViTs) [16, 29] have been successfully applied to image recognition, outperforming the older ResNets. Follow-up works modified training schemes and architectures, leading to more performant CNN architectures such as ConvNext [30], and hybrid models combining components of ViTs and CNNs [2]. Following the introduction of these novel architectures, several works examined their robustness properties. [42, 5, 41, 36] suggest Transformers have better adversarial robustness than CNNs. However, [32] shows that vision Transformers are also as vulnerable as CNNs under strong attacks. [4] show that CNNs can achieve similar adversarial robustness as Transformers when trained using similar training recipes, yet Transformers still outperform CNNs on out-of-distribution generalization. [46] benchmark robustness as a function of network architecture. They find that Transformers are best suited against adversarial attacks while being extremely vulnerable to common corruptions [22] and system noise. Conversely, CNNs are more robust against common corruptions and system noise while being weakest against adversarial attacks. Further, they show that MLP-Mixers are neither the best nor the worst in both cases.
In their work, [37] benchmark the robustness of state-of-the-art Transformers and CNN architectures and show that CNNs using ConvNext architecture can be at least as robust as Transformers for image recognition. Meanwhile [13] analyzes the effect of different architectural components such as patches, convolution, activation, and attention, and demonstrates that ConvNexts have better adversarial robustness than ResNets. [56] observe that smooth activation functions improve adversarial training as they enable better gradient updates to compute harder adversarial examples. Subsequent works [4, 13] also confirm improvement in robustness when GELU [23] activation functions are used in adversarial training. While [4] attribute significant robustness gains in Transformers to the self-attention mechanism, [52] identify other architectural components, including, the use of patches, larger kernels, reducing activation and normalization layers which when incorporated into CNNs lead to out of distribution robustness at least on par with Transformers without the use of attention.
In contrast, our work focuses on the investigation of the robustness of several recent Transformer based restoration models and shows interesting effects of adversarial attacks that can be attributed to different building modules of such models.
## 3 Methodology
In the following, we describe the attack framework and the defense strategy used to combat the vulnerabilities of the architectures exposed by the adversarial attacks.
### Attack Framework
Let \(\mathbf{x}\) denote the ground-truth image, which is corrupted by a possibly non-linear degradation operator \(\mathbf{A}\), resulting in an observation \(\mathbf{y}^{\mathrm{clean}}\), which can be expressed as
\[\mathbf{y}^{\mathrm{clean}}=\mathbf{A}(\mathbf{x}). \tag{1}\]
Let \(\mathcal{G}_{\theta}\) be a (Transformer-based) neural network parameterized by \(\theta\) trained to recover \(\mathbf{x}\) from \(\mathbf{y}^{\mathrm{clean}}\). In this work, we are interested in studying the stability of \(\mathcal{G}_{\theta}\) to adversarial attacks that aim to degrade its performance through visually imperceptible changes to the inputs [18, 31]. We evaluate the robustness to attacks using additive perturbations \(\delta\) with \(\ell_{p}\)-norm constraints. We generate the adversarial perturbations based on two powerful attack methods: CosPGD [1], developed for dense prediction tasks, and the PGD attack [31], both of which we detail in the following. The objective of the attack is to maximize the deviation of the network output from the ground truth as measured by a loss function \(L\), subject to \(\ell_{p}\)-norm constraints on the perturbation:
\[\underset{\delta}{\mathrm{maximize}}\ L(\mathcal{G}_{\theta}(\mathbf{y}^{ \mathrm{clean}}+\delta),\ \mathbf{x})\ \ \text{s.t.}\ \|\delta\|_{p}\leq\epsilon. \tag{2}\]
**PGD.** PGD is an iterative adversarial attack, where each sample is perturbed for a fixed number of attack iterations (steps) with the intention of maximizing the loss further with each attack step. A single attack step of the PGD attack [31] is given as follows,
\[\begin{split}\mathbf{y}^{\mathrm{adv}_{t+1}}&=\mathbf{y}^{\mathrm{adv}_{t}}+\alpha\cdot\mathrm{sign}\big(\nabla_{\mathbf{y}^{\mathrm{adv}_{t}}}L(\mathcal{G}_{\theta}(\mathbf{y}^{\mathrm{adv}_{t}}),\mathbf{x})\big)\\ \delta&=\phi^{\epsilon}(\mathbf{y}^{\mathrm{adv}_{t+1}}-\mathbf{y}^{\mathrm{clean}})\\ \mathbf{y}^{\mathrm{adv}_{t+1}}&=\phi^{r}(\mathbf{y}^{\mathrm{clean}}+\delta)\end{split} \tag{3}\]
where the adversarial example \(\mathbf{y}^{\mathrm{adv}_{t+1}}\) at step \(t+1\) is updated using the adversarial example from the previous step \(\mathbf{y}^{\mathrm{adv}_{t}}\); \(\nabla\) represents the gradient operation, \(\alpha\) is the step size for the perturbation, \(\phi^{\epsilon}\) denotes projection onto the appropriate \(\ell_{p}\)-norm ball of radius \(\epsilon\), depending on the
norm constraints on \(\delta\), and \(\phi^{r}\) clips the adversarial example to lie in the valid intensity range of images (between [0, 1]). Prior works evaluating the adversarial robustness of image restoration networks consider \(L\) to be the reconstruction loss (MSE loss) to obtain adversarial examples maximizing the reconstruction error.
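For concreteness, the following is a minimal PyTorch sketch of this attack loop; the model interface, variable names, and the use of the MSE reconstruction loss as \(L\) are assumptions made for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, y_clean, x_gt, eps=8/255, alpha=0.01, iters=5):
    """L_inf PGD against a restoration network: maximize MSE(G(y), x)."""
    y_adv = y_clean.clone().detach()
    for _ in range(iters):
        y_adv.requires_grad_(True)
        loss = F.mse_loss(model(y_adv), x_gt)
        grad = torch.autograd.grad(loss, y_adv)[0]
        # ascent step in the direction of the sign of the gradient
        y_adv = y_adv.detach() + alpha * grad.sign()
        # phi^eps: project the perturbation onto the l_inf ball of radius eps
        delta = torch.clamp(y_adv - y_clean, -eps, eps)
        # phi^r: clip to the valid image intensity range [0, 1]
        y_adv = (y_clean + delta).clamp(0.0, 1.0).detach()
    return y_adv
```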
**CosPGD.** Instead of directly utilizing the averaged pixel-wise losses in the PGD attack steps, [1] propose to weight the pixel-wise losses using the cosine similarity between the network output and the ground truth (both scaled by softmax), in order to reduce the importance of pixels which already have a large error from previous iterations and enable the attack to focus on pixels with low error. For the task of restoration (a regression task), the CosPGD attack steps for an untargeted attack are given as:
\[\begin{split}\mathbf{x}^{\mathrm{adv}_{t}}&=\mathcal{G}_{\theta}(\mathbf{y}^{\mathrm{adv}_{t}})\\ L_{\mathrm{cos}}&=\sum\mathrm{cossim}(\Psi(\mathbf{x}^{\mathrm{adv}_{t}}),\Psi(\mathbf{x}))\odot L(\mathbf{x}^{\mathrm{adv}_{t}},\mathbf{x})\\ \mathbf{y}^{\mathrm{adv}_{t+1}}&=\mathbf{y}^{\mathrm{adv}_{t}}+\alpha\cdot\mathrm{sign}\big(\nabla_{\mathbf{y}^{\mathrm{adv}_{t}}}L_{\mathrm{cos}}\big)\\ \delta&=\phi^{\epsilon}(\mathbf{y}^{\mathrm{adv}_{t+1}}-\mathbf{y}^{\mathrm{clean}})\\ \mathbf{y}^{\mathrm{adv}_{t+1}}&=\phi^{r}(\mathbf{y}^{\mathrm{clean}}+\delta),\end{split} \tag{4}\]
where \(\Psi\) is the softmax function, \(\odot\) denotes point-wise multiplication, and the cosine similarity (cossim) is given by
\[\mathrm{cossim}(\overrightarrow{\mathbf{u}},\overrightarrow{\mathbf{v}})= \frac{\overrightarrow{\mathbf{u}}\cdot\overrightarrow{\mathbf{v}}}{|| \overrightarrow{\mathbf{u}}||\cdot||\overrightarrow{\mathbf{v}}||} \tag{5}\]
[1] demonstrate that this approach results in a stronger attack for pixel-wise regression tasks than a PGD attack. We use both PGD and CosPGD in our robustness evaluation.
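A sketch of a single CosPGD step is shown below; the assumption that the cosine similarity is taken per pixel across the (softmax-scaled) channel dimension is ours, so this should be read as an illustration rather than the reference implementation of [1].

```python
import torch
import torch.nn.functional as F

def cospgd_step(model, y_adv, y_clean, x_gt, eps=8/255, alpha=0.01):
    """One CosPGD step: pixel-wise MSE losses weighted by the cosine
    similarity between softmax-scaled prediction and ground truth."""
    y_adv = y_adv.clone().detach().requires_grad_(True)
    x_adv = model(y_adv)
    # per-pixel cosine similarity over the softmax-scaled channel dimension
    w = F.cosine_similarity(F.softmax(x_adv, dim=1), F.softmax(x_gt, dim=1), dim=1)
    pixel_loss = F.mse_loss(x_adv, x_gt, reduction="none").mean(dim=1)
    loss = (w * pixel_loss).sum()
    grad = torch.autograd.grad(loss, y_adv)[0]
    y_new = y_adv.detach() + alpha * grad.sign()
    delta = torch.clamp(y_new - y_clean, -eps, eps)   # phi^eps
    return (y_clean + delta).clamp(0.0, 1.0)          # phi^r
```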
### Architectures: from Restormer to NAFNet
We evaluate the adversarial robustness of _Restormer_[61], a Transformer based architecture for image restoration and two architectures introduced in [7] by modifying the Restormer architecture. Restormer [61] has a UNet [39] like encoder-decoder architecture, using multi-head channel-wise attention modules, gated linear units [15] and depth-wise convolutions in the feed-forward network. This network achieved state-of-the-art performance in image restoration at the time of its publication. The authors in [7] investigate whether it is possible to retain the performance of Restormer, with a simplified architecture. After a thorough ablation study, they propose a simplified _Baseline_ network that improved upon the SOTA performance. The Baseline network utilizes GELU activations [23] and replaces multi-headed self-attention in [61] with a channel
Figure 2: Modified visualization of the repeating blocks of the architectures from [7], together with the considered _Intermediate network_ (please refer to (c)) and the _Intermediate + ReLU_ network (please refer to (d)).
attention module [25]. Without loss in i.i.d. performance, they further simplify this architecture by removing activation functions altogether, replacing GELU with a _simple gate_ which performs an element-wise product of feature maps, and replacing the channel attention by a _simplified channel attention_ without activation functions. The resulting network is referred to as a Nonlinear Activation-Free Network (NAFNet). In contrast to [7], who focus on performance with clean inputs, we analyze the adversarial robustness of these networks, which also allows us to evaluate the effect of different activation functions and attention mechanisms on the robustness of restoration Transformers. In Figure 1, we observe that NAFNet exhibits significantly different artifacts in the reconstructed images compared to Restormer and the Baseline network. One might hypothesize that these strange artifacts, which appear to be the cumulative effect of aliasing and color mixing, are due to the use of the 'Simple Gate' in place of a non-linear activation function like GELU. To examine this hypothesis, we additionally consider an _Intermediate network_ from [7]. In this _Intermediate network_, we replace the _channel attention_ in the Baseline network with the _simplified channel attention_ but retain the GELU activation. Additionally, to better understand the role of non-linear activation functions in this context, we consider an architecture identical to the _Intermediate network_ but with ReLU activations instead of GELU. In Figure 2, we modify the visualization of [7] to present the repeating blocks of all the architectures considered in our work.
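For reference, the two building blocks that distinguish NAFNet from the Baseline network can be sketched in a few lines of PyTorch; this follows the textual description above and is not the authors' exact implementation.

```python
import torch.nn as nn

class SimpleGate(nn.Module):
    """NAFNet's replacement for GELU: split the channels in half and
    take the element-wise product of the two halves."""
    def forward(self, x):
        a, b = x.chunk(2, dim=1)
        return a * b

class SimplifiedChannelAttention(nn.Module):
    """Channel attention without activation functions: global average
    pooling followed by a 1x1 convolution yields per-channel weights."""
    def __init__(self, channels):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return x * self.proj(self.pool(x))
```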
### Defenses
As discussed in Section 1, we observe in Figure 1 that all considered architectures are vulnerable to adversarial attacks. Prior work [18, 31, 19] has shown that adversarial training is an effective defense against adversarial attacks. Thus we use adversarial training as a defense strategy.
**Adversarial Training.** We use the FGSM attack as proposed by [18] to generate adversarial samples during training. Adversarial training can be formulated as a min-max problem, where we find perturbations of the samples that maximize the loss, while training the network on these samples to minimize the loss over the training iterations. The PGD attack is essentially a multi-step extension of the FGSM attack, and thus the loss that the FGSM attack attempts to maximize remains the same. Additionally, the attack step of FGSM is the same as described in Section 3.1, with the notable difference that for FGSM the attack step size \(\alpha\) is equal to the permissible perturbation size \(\epsilon\).
While training, to avoid overfitting to adversarial samples and to enable the model to make reasonable reconstructions on unperturbed samples, we use a training regime similar to [19]: only 50% of the samples in a training batch are used to generate perturbed adversarial samples, while the other 50% remain unperturbed. Thus, the effective learning objective is as described by Equation 6.
\[\operatorname*{minimize}_{\theta}\;\sum_{i}L(\mathcal{G}_{\theta}(\mathbf{y}^ {\mathrm{clean}_{i}}),\;\mathbf{x}_{i})+\sum_{j}L(\mathcal{G}_{\theta}( \mathbf{y}^{\mathrm{adv}_{j}}),\;\mathbf{x}_{j}) \tag{6}\]
where the indices \(i\) and \(j\) correspond to the examples from the clean and adversarial batch splits, and FGSM adversarial examples are generated as:
\[\mathbf{y}^{\mathrm{adv}_{j}}=\phi^{r}(\mathbf{y}^{\mathrm{clean}_{j}}+\phi^ {\epsilon}(\epsilon\cdot\mathrm{sign}\nabla_{\mathbf{y}_{j}}L(\mathcal{G}_{ \theta}(\mathbf{y}_{j}),\mathbf{x}_{j}))) \tag{7}\]
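A minimal sketch of one such training step, assuming a PyTorch model and the MSE reconstruction loss, is given below; the batch layout and function names are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_adv_train_step(model, optimizer, y_clean, x_gt, eps=8/255):
    """One adversarial training step: FGSM-perturb half the batch (Eq. 7),
    then minimize the reconstruction loss over both halves (Eq. 6)."""
    half = y_clean.shape[0] // 2
    y_half = y_clean[:half].clone().detach().requires_grad_(True)
    loss_adv = F.mse_loss(model(y_half), x_gt[:half])
    grad = torch.autograd.grad(loss_adv, y_half)[0]
    # FGSM: a single step with alpha = eps, clipped to the valid image range
    y_adv = (y_half + eps * grad.sign()).clamp(0.0, 1.0).detach()
    batch = torch.cat([y_adv, y_clean[half:]], dim=0)
    optimizer.zero_grad()
    F.mse_loss(model(batch), x_gt).backward()
    optimizer.step()
```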
## 4 Experiments
In this work on image restoration, we focus on reconstructing deblurred images using a few recently proposed image restoration networks.
### Experimental Setup
**Networks.** We consider Restormer proposed by [61], and the Baseline network and NAFNet proposed by [7], each with width 32. To understand the design choices that lead NAFNet to produce reconstructed images with significantly different spectral artifacts than the other considered networks, we also consider an _Intermediate network_ and _Intermediate + ReLU_. The _Intermediate network_ with width 32 has also been considered by [7] when discussing the design choices leading from the Baseline network to NAFNet. These networks are similar to the Baseline, except that they use the "simplified channel attention" proposed by [7] rather than the "channel attention" used in the Baseline network. We visualize all the considered architectures in Figure 2.
**Dataset.** For our experiments we use the GoPro image deblurring dataset [34]. This dataset consists of 3 214 real-world images with realistic blur and their corresponding ground-truth (deblurred) images captured using a high-speed camera. The dataset is split into 2 103 training images and 1 111 test images.
**Metrics.** We report the PSNR and SSIM scores of the reconstructed images w.r.t. the ground-truth images, averaged over all images. PSNR stands for Peak Signal-to-Noise Ratio; a higher PSNR indicates a better quality image, i.e. an image closer to the one it is being compared with. SSIM stands for Structural Similarity [53]; a higher SSIM score corresponds to higher similarity between the reconstruction and the ground-truth image.
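As a reference for how these scores may be computed, a sketch using scikit-image is given below; the array layout and intensity range are assumptions about the evaluation pipeline.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored, ground_truth):
    """restored, ground_truth: float arrays in [0, 1] with shape (H, W, C)."""
    psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=1.0)
    ssim = structural_similarity(ground_truth, restored,
                                 data_range=1.0, channel_axis=-1)
    return psnr, ssim
```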
**Training Regimes.** For Restormer and its adversarial training counterpart ('+ADV') we follow the training procedure used by [61], except that, due to computational limitations, we do not train at the last recommended patch size of 384. For the Baseline network, NAFNet, and their counterparts we follow the training regime used by [7].
**Adversarial Training.** We used FGSM [18] adversarial training for efficiency. The maximum allowed perturbation
for the adversaries is set to \(\epsilon=\frac{8}{255}\). We use '+ADV' after the model name to denote that the model has been trained with FGSM adversarial training.
**Adversarial Attacks.** We consider PGD and CosPGD attacks. Following the procedure of [1], we use \(\epsilon\approx\frac{8}{255}\) and \(\alpha\) (attack step size) \(=0.01\). We consider attack iterations \(\in\{5,10,20\}\) for our attacks. We use the MSE loss for generating adversarial samples for all networks.
### Results
The good performance of image restoration models on unperturbed samples is undoubtedly essential for possible real-world applications. However, the generalization ability of these models to perturbed samples has to be better understood for their reliability in safety-critical applications such as medical imaging, autonomous driving, etc. To this end, we study the performance of the considered networks on both clean (unperturbed) and adversarial (perturbed) samples. Further, to overcome the observed shortcomings of these models, we harden them using adversarial training.
As observed in Figure 1, under adversarial attack both Restormer and the Baseline network induce ringing-like artifacts in the restored images. However, NAFNet introduces aliasing-like grid artifacts and color mixing in the restored images.
We report the performance of the three networks, along with their adversarially trained counterparts, on clean images in Table 1. Further, to study the generalization ability of these networks, we adversarially attack them and report the findings in Table 2.
With the standard training protocol, Restormer is marginally more robust than the Baseline network at fewer attack iterations; however, this difference shrinks as the number of attack iterations increases. With adversarial training using FGSM adversarial examples, we observe an improvement in the robustness of all three networks. Interestingly, the gain in performance of Restormer when trained with FGSM is significantly larger than that of the Baseline network and NAFNet. This indicates that Restormer has a much higher potential for generalization than both the Baseline network and NAFNet. This raises doubts over the claims by [7] regarding the Baseline network and NAFNet having "comparable or better performance" than recent _state-of-the-art_ image restoration models. Their claim holds true for clean samples; however, with just a slight perturbation (\(\epsilon=\frac{8}{255}\)), the performance of their proposed models drops significantly. Contrary to this, _Intermediate+ReLU_ is significantly more robust across attack iterations. We discuss this further in Section 5.1.
At first, one might overlook this shortcoming; however, in safety-critical real-world applications, like deblurring MRI images in the medical domain or autonomous driving, such shortcomings could be hazardous. This is further highlighted in Figure 3, where we observe that both the Restormer and the Baseline network introduce ringing artifacts in the reconstructed images, while NAFNet introduces very strong aliasing and color mixing that get worse as the attack strength increases. While aliasing and color artifacts are significantly reduced with adversarial training (please refer to Figure 3), the reconstructions of NAFNet and the Baseline network are still affected by residual ringing artifacts. Interestingly, the quality of images reconstructed by Restormer after adversarial training is significantly better, as indicated by its performance in terms of PSNR and SSIM in Table 2. At a low number of adversarial attack iterations, the artifacts present in the images reconstructed by Restormer are negligible. To ascertain that these observations are not specific to the adversarial attack itself, we visualize the images reconstructed after the PGD attack in Figure A2 and observe a similar phenomenon. This accentuates the strength of the architectural design of Restormer and casts doubt on that of the networks proposed by [7].
## 5 Analysis and Discussion
In the following, we discuss the design choices made in NAFNet and the Baseline network that constrain the performance of these networks against adversarial attacks, despite employing adequate defense techniques.
### Analyzing Intermediate networks
First, we study the _Intermediate network_ to ascertain whether the spectral artifacts introduced by NAFNet in its reconstructed images are due to replacing a non-linear activation function with a _Simple Gate_. This is because the channel-wise multiplication would best explain the color-mixing artifacts, while the implicit incorrect subsampling during this operation would account for the accentuated aliasing artifacts. Further, to understand the influence of the non-linear activation, we also train the Intermediate network with ReLU activation, referred to as _Intermediate + ReLU_.
\begin{table}
\begin{tabular}{c|c c} \hline \hline Architecture & PSNR & SSIM \\ \hline Restormer & 31.99 & **0.9635** \\ + ADV & 30.25 & 0.9453 \\ \hline Baseline & 32.48 & 0.9575 \\ + ADV & 30.37 & 0.9355 \\ \hline NAFNet & **32.87** & 0.9606 \\ + ADV & 29.91 & 0.9291 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of the different considered networks and their counterparts on clean (unperturbed) GoPro test images. While NAFNet has the highest PSNR value, Restormer is slightly better in terms of SSIM. All models suffer slightly from adversarial training when evaluated on clean data, which is to be expected.
We report the findings on the Intermediate networks in Table A1. Here we observe that the Intermediate network performs marginally worse than even NAFNet, especially under adversarial attacks. Additionally, in Figure A1, we visualize the images reconstructed by the Intermediate network. Firstly, the clean (unperturbed) images have not been deblurred significantly. Secondly, even under mild adversarial attacks, the quality of the reconstructed images is abysmal. We observe severe checkerboard patterns, aliasing, and color mixing in all images reconstructed by the Intermediate network under adversarial attack.
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c|c c|c c} \hline \hline
\multirow{2}{*}{Architecture} & \multicolumn{6}{c|}{CosPGD} & \multicolumn{6}{c}{PGD} \\
 & \multicolumn{2}{c|}{5 attack itrs} & \multicolumn{2}{c|}{10 attack itrs} & \multicolumn{2}{c|}{20 attack itrs} & \multicolumn{2}{c|}{5 attack itrs} & \multicolumn{2}{c|}{10 attack itrs} & \multicolumn{2}{c}{20 attack itrs} \\
 & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline
Restormer & 11.36 & 0.3236 & 9.05 & 0.2242 & 7.59 & 0.1548 & 11.41 & 0.3256 & 9.04 & 0.2234 & 7.58 & 0.1543 \\
+ ADV & **24.49** & 0.81 & **23.48** & **0.78** & 21.58 & 0.7317 & **24.5** & 0.8079 & **23.5** & **0.7815** & 21.58 & 0.7315 \\ \hline
Baseline & 10.15 & 0.2745 & 8.71 & 0.2095 & 7.85 & 0.1685 & 10.15 & 0.2745 & 8.71 & 0.2094 & 7.85 & 0.1693 \\
+ ADV & 15.47 & 0.5216 & 13.75 & 0.4593 & 12.25 & 0.4032 & 15.47 & 0.5215 & 13.75 & 0.4592 & 12.24 & 0.4026 \\ \hline
NAFNet & 8.67 & 0.2264 & 6.68 & 0.1127 & 5.81 & 0.0617 & 10.27 & 0.3179 & 8.66 & 0.2282 & 5.95 & 0.0714 \\
+ ADV & 17.33 & 0.6046 & 14.68 & 0.509 & 12.30 & 0.4046 & 15.76 & 0.5228 & 13.91 & 0.4445 & 12.73 & 0.3859 \\ \hline
Intermediate & 6.0224 & 0.0509 & 5.8166 & 0.0366 & 5.7199 & 0.0315 & 6.0225 & 0.0509 & 5.8158 & 0.0365 & 5.7173 & 0.0314 \\
+ ADV & 24.02 & **0.8213** & 22.01 & 0.7775 & 20.15 & 0.7286 & 24.02 & **0.8213** & 21.98 & 0.7770 & 20.15 & 0.7286 \\ \hline
Intermediate + ReLU & 13.87 & 0.4093 & 11.63 & 0.3128 & 10.29 & 0.2538 & 13.87 & 0.4094 & 11.62 & 0.3127 & 10.29 & 0.2542 \\
+ ADV & 23.90 & 0.8046 & 22.46 & 0.7637 & **21.85** & **0.7484** & 23.91 & 0.8046 & 22.47 & 0.7638 & **21.84** & **0.7481** \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Comparison of performance of the considered models against CosPGD and PGD attacks with various attack strengths. Attack strength increases with the number of attack iterations (itrs). Note that _Intermediate + ReLU_ achieves reasonably robust results entirely without adversarial training.
Figure 3: Images reconstructed by different models after **CosPGD attack**. See Figure A1 (Appendix A) to compare over all considered models.
Thus, to better understand the performance of the Intermediate network in comparison to the Baseline network and NAFNet, we perform significantly weaker adversarial attacks. To this end, we use the CosPGD attack but with \(\epsilon\approx\frac{2}{255}\), and consider attack iterations \(\in\{\)1, 3, 5\(\}\). We again use \(\alpha=0.01\).
We report the performance of the Intermediate networks in Table 3. Interestingly, we observe that after one adversarial attack iteration, the Intermediate network significantly outperforms both the Baseline network and NAFNet. However, the Intermediate network is unable to retain this superior performance, and its performance drops significantly as we increase the attack strength (attack iterations). Additionally, in Figure 4 we observe the introduction of the same spectral artifacts for the Intermediate network as those observed in Figure A1 and Figure A2 (please refer to Section A). The intensity of the spectral artifacts increases as we increase the attack strength. This phenomenon is similar to the behavior of NAFNet, which performs admirably on clean samples and under weak adversarial attacks but begins to perform significantly worse as the attack strength increases. This indicates that even smooth activation functions in the NAFNet architecture, used instead of the Simple Gate, produce strong spectral artifacts in the reconstructed images.
This is in striking contrast to using a non-smooth non-linear activation function, ReLU. Interestingly, we observe that _Intermediate+ReLU_ is significantly more robust, and the degradation of its performance with attack strength is significantly lower than that of all considered networks, including Restormer. In Figures A1, A2 & 4 we observe that the images reconstructed by _Intermediate+ReLU_, while blurry, have significantly fewer artifacts for reasonable values of \(\epsilon\).
Under adversarial attacks, the reconstructed images do not exhibit spectral artifacts similar to the _Intermediate network_ or NAFNet, but rather ones more similar to Restormer and the Baseline. It is only at a considerably higher \(\epsilon\approx\frac{20}{255}\) that spectral artifacts similar to those produced by the _Intermediate network_ appear in the images reconstructed by _Intermediate+ReLU_. Thus, the smoothing of feature maps by the Simplified Channel Attention in conjunction with GELU, or by the Simple Gate, could be responsible for the introduction of some peculiar spectral artifacts and the loss in robustness. Using a non-smooth non-linear activation function like ReLU appears to be an effective mitigation technique.
Additionally, as reported in Table 2, we observe that the adversarial robustness of both the _Intermediate network_ and _Intermediate+ReLU_ increases significantly after FGSM training, and is comparable to Restormer. This significant improvement in adversarial performance is also visible for attacks with lower \(\epsilon\) (please refer to Table 3) and is shown visually in Figure 4. Thus, as observed before, adversarial training is a fix that reduces artifacts, even for the _Intermediate network_.
### Superiority of Restormer
In their work, [7] attempt to reduce model complexity while retaining the performance of the Restormer. However, as shown in our work, this significantly degrades the
Figure 4: Comparing images reconstructed by different models after **CosPGD attack** at \(\epsilon\approx\frac{2}{255}\). Thus the attack strength is significantly weaker.
Figure 5: Two different randomly chosen images reconstructed by _Intermediate + ReLU_ after 5 iterations of CosPGD attack with significantly higher \(\epsilon\approx\frac{20}{255}\). We observe strong spectral artifacts similar to _Intermediate network_ in the recovered images.
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline
\multirow{2}{*}{Architecture} & \multicolumn{6}{c}{CosPGD} \\
 & \multicolumn{2}{c|}{1 attack itr} & \multicolumn{2}{c|}{3 attack itrs} & \multicolumn{2}{c}{5 attack itrs} \\
 & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline
Baseline & 21.38 & 0.7520 & 17.19 & 0.6356 & 16.99 & 0.6316 \\
NAFNet & 22.54 & 0.7883 & 18.80 & 0.6948 & 18.46 & 0.6835 \\ \hline
Intermediate & 25.14 & 0.8410 & 10.37 & 0.2940 & 8.56 & 0.1812 \\
+ ADV & 25.47 & 0.8555 & 25.16 & 0.8501 & 25.32 & 0.8555 \\ \hline
Intermediate + ReLU & 23.96 & 0.8112 & 20.96 & 0.7458 & 21.58 & 0.7594 \\
+ ADV & 26.11 & 0.8616 & 25.10 & 0.8459 & 24.86 & 0.8413 \\ \hline
\end{tabular}
\end{table}
Table 3: Comparison of the performance of the Baseline network, NAFNet, and the Intermediate networks against a significantly weaker CosPGD attack. For this comparison we use \(\epsilon\approx\frac{2}{255}\) and \(\alpha=0.01\) and consider fewer attack steps, i.e. iterations \(\in\{\)1, 3, 5\(\}\).
generalization ability of the resulting models. As larger models tend to have a better trade-off between robustness and accuracy [22, 24], the reduced model capacity of the Baseline and NAFNet could contribute to the reduced robustness. While reducing model complexity is certainly important and desirable, maintaining robustness requires a more careful and systematic pruning of networks [58, 40, 24] than simply dropping components. Apart from the model's complexity in terms of the number of parameters, the attention mechanism itself could be crucial for robustness.
While the Restormer uses a multi-headed self-attention mechanism, both the Baseline network and NAFNet use variants of channel attention (NAFNet uses the simplified channel attention proposed by [7]). As shown by [4], the self-attention module significantly helps Transformer based models improve their robustness. Additionally, it helps the model better utilize defense strategies such as additional training, distillation, etc. A similar phenomenon is observed in Table 2: Restormer, a vision Transformer based model with a multi-headed self-attention module, is able to better utilize adversarial training than the Baseline network and NAFNet.
**Limitations.** Adversarial training and design choices, like the use of smooth or non-smooth activation functions instead of Simple Gates, certainly have a significant impact on the performance of the considered image restoration models. However, there is still a considerable gap in the clean performance of the considered models. While the fixes succeed in increasing adversarial robustness and removing spectral artifacts, the restored images are far from ideal. As observed, the restored images after the fixes are significantly blurry. This is a limitation of this work, as it focused on the removal of spectral artifacts and better adversarial robustness.
This work is a step towards finding a fix and not an absolute fix. Exploring methods other than adversarial training for increasing adversarial robustness and removal of spectral artifacts could be an interesting future work direction.
## 6 Conclusion
We raise concerns and awareness regarding the generalization ability of deep learning models. Despite recent methods outperforming baselines for various vision tasks, for a method to make a significant contribution to real-world applications, it must be reliable and robust. Thus, in this work, we highlight this shortcoming of recently proposed Transformer based image restoration models. While the models proposed by [7] perform satisfactorily for image deblurring on non-perturbed samples, they fail to generalize when slight adversarial perturbations are added to the blurred images. We acknowledge that the reduction in model complexity compared to Restormer is a step in the right direction; however, in this case, it comes at the expense of model robustness. Therefore, we additionally employ adversarial training in an attempt to fix this shortcoming while also improving the quality of the reconstructed images. We observe that adversarial training is able to reduce the spectral artifacts and also results in significant improvements in the adversarial robustness of the image restoration models. However, the extent of the improvement varies with the architectural design decisions. Thus, lastly, we investigate the design decisions that might lead to the occurrence of spectral artifacts and the loss in robustness for the considered methods, and find an interesting ablation concerning the type of activation function used when downsampling.
|
2310.10748
|
Relating photometric and magnetic properties of structures at solar
surface
|
We investigate sharp structures visible in solar magnetic field tracers. It
is shown that the sunspot magnetic boundaries do not coincide with the
photometric ones. Moreover, there is no clear boundary of the magnetic field in
the vicinity of sunspots. Thus, the widely accepted concept of magnetic tubes
with sharp edges is not always correct and should be used with caution. It is
also shown that even in the moments of complete absence of visible spots on the
Sun, there are magnetic fields over 800 Gauss. The nature of these strong
magnetic fields remains unclear; they may originate at relatively small depths
under the photosphere.
|
V. N. Obridko, D. D. Sokoloff, M. K. Katsova
|
2023-10-16T18:25:45Z
|
http://arxiv.org/abs/2310.10748v1
|
# Relating photometric and magnetic properties of structures at solar surface
###### Abstract
We investigate sharp structures visible in solar magnetic field tracers. It is shown that the sunspot magnetic boundaries do not coincide with the photometric ones. Moreover, there is no clear boundary of the magnetic field in the vicinity of sunspots. Thus, the widely accepted concept of magnetic tubes with sharp edges is not always correct and should be used with caution. It is also shown that even in the moments of complete absence of visible spots on the Sun, there are magnetic fields over 800 Gauss. The nature of these strong magnetic fields remains unclear; they may originate at relatively small depths under the photosphere.
keywords: Solar cycle, Sun: magnetic fields
Footnote †: journal: Journal of Atmospheric and Solar-Terrestrial Physics 13.10.2023
## 1 Introduction
From the very beginning and long afterwards, the number and area of sunspots were determined visually from solar images based on their photometric
properties. Nowadays, we use photographic and numerical records. However, in all cases, the data on a sunspot area are based on the image of the sunspot and the photometric estimate of its boundary (see e.g. [1; 2; 3]). There is no doubt that the main factor determining the very existence of a sunspot is the magnetic field. Nevertheless, a definition of the sunspot boundary in terms of the magnetic field is still not sufficiently elaborated in the scientific literature. In our recent paper [4], we considered a related problem, and our intention here is to go further in this direction.
A remarkable fact here is that most of the objects at the solar surface have a sharply defined photometric boundary. The point is that the horizontal optical thickness is quite short (about 100 km), at least in the photosphere, and horizontal optical transport is rather difficult. Given the magnetic nature of almost all surface solar objects, the concept of sharp magnetic boundaries is widely assumed.
In particular, for quite a long time, the magnetic field outside sunspots was considered negligible. So, equations were derived, according to which the magnetic field vanishes at the outer boundary of the penumbra [5; 6], and the dependence of the field intensity on the distance from the center of a symmetric spot was fully determined by the maximum magnetic field at the center. Later, various estimates for \(B_{b}/B_{0}=c\) were adopted (here \(B_{0}\) is the magnetic field at the sunspot center and \(B_{b}\) is the field at the sunspot boundary), in particular, \(c=0.5\)[7], \(c=0.2\)[8], \(c=0.163\)[9], and \(c=0.607\)[10].
The assumption that the magnetic and photometric boundaries coincide, which still needs verification, resulted, nevertheless, in a theoretical concept of magnetic tubes and ropes.
Nowadays, the concept of a floating magnetic tube is widely accepted. It is believed that sunspots arise during the formation of active regions on the solar surface from a strong toroidal field generated by the solar dynamo. In fact, all arguments in favor of this concept are based on theoretical considerations [11; 12; 13; 14; 15; 16; 17]. A critical analysis of the mechanism of formation of sunspots and, more broadly, bipolar ARs described above has been recently
performed in [18; 19; 20; 21; 22; 16; 17; 23].
The concept of magnetic ropes in the solar corona, based on observations of filament structures resembling the structure of magnetic field lines, is also widely accepted. It is, however, difficult to prove that this structure indeed consists of isolated tubes, because there are no direct magnetic field observations therein. The observed photometric feature may be associated with moderate variations in the magnetic field, while the large-scale magnetic field on the whole remains quasi-homogeneous. The magnetic-field variations can substantially affect the coronal plasma radiation due to a strong dependence of the radiation mechanisms on the field intensity (e.g. [24]).
## 2 The sunspot magnetic boundary given by observational data
We proposed [4] a new method for obtaining the magnetic boundary of visible sunspots based on long-term data series. We used SDO/HMI data on the daily longitudinal magnetic field for 2375 days from 01.05.2010 to 31.10.2016. We use the daily sunspot numbers from WDC-SILSO, Royal Observatory of Belgium, Brussels (version 2). The cumulative daily sunspot areas were taken from the NASA Web site. At present, there are two databases formed of high-resolution observations carried out with single-type instruments. These are SOHO/MDI and SDO/HMI. The Michelson Doppler Imager (MDI) onboard the Solar and Heliospheric Observatory (SOHO) [25] continuously measured the Doppler velocity, longitudinal magnetic field, and brightness of the Sun for 15 years, up to 12 April 2011. The enhanced Helioseismic and Magnetic Imager (HMI: [26]) onboard the Solar Dynamics Observatory (SDO: [27]) started its routine observations on 30 April 2010. HMI data include all MDI observables, but with much higher spatial and temporal resolutions and better data quality.
We find the relative area of the solar surface occupied by magnetic fields larger than a certain threshold value. This relative area is expressed in millionths of the visible hemisphere (m.v.h.), as is customary when studying total sunspot areas. These calculations are compared with the database of
daily sunspot observations. Putting together all of the data, we arrive at the conclusion that, on average, the magnetic boundary of a sunspot, as defined by the normal component of the magnetic field, is 550 G. This estimate is quite reliable. Indeed, this value maximizes the correlation between the magnetic and visual data. Even rather small variations of the value chosen to determine the boundary (say, 525 G or 575 G instead of 550 G) reduce the correlation substantially ([4]).
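A simplified numpy sketch of this relative-area computation is given below; the magnetogram array, the disk mask, and the omission of foreshortening corrections are simplifying assumptions for illustration.

```python
import numpy as np

def area_above_threshold(b_los, disk_mask, threshold=550.0):
    """Relative area (in millionths of the visible hemisphere, m.v.h.)
    where the absolute line-of-sight field exceeds the threshold (gauss).

    b_los: 2-D magnetogram in gauss; disk_mask: boolean solar-disk mask.
    """
    on_disk = disk_mask.sum()
    strong = (np.abs(b_los) > threshold) & disk_mask
    return 1e6 * strong.sum() / on_disk

# Scanning thresholds and correlating the resulting daily area series with
# the reported sunspot areas singles out the best boundary value.
thresholds = np.arange(400.0, 700.0, 25.0)
```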
The point is that the magnetic field at the apparent sunspot boundary does not vanish. Discussing the situation in terms of magnetic tubes, we must keep in mind that a magnetic tube has no sharp boundary and extends far beyond the photometric sunspot boundary. The relation between the magnetic field strength in a circular sunspot and its relative photometric radius, normalized so that \(\rho=1\) at the sharp optical (photometric) boundary, is shown in Fig. 1.
The link between the magnetic field (measured in G; Fig. 1 shows \(\log B\)) and the relative photometric sunspot radius was calculated up to \(\rho=1.2\) and was approximated by a second-order polynomial as
\[\log B=3.39537-0.90097\rho+0.25188\rho^{2}. \tag{1}\]
The magnetic field becomes as low as several hundred G at \(\rho\approx 2\), where it no longer differs from the magnetic field of surrounding spots and faculae. An exact estimate is not possible here, since the faculae usually have a diffuse non-axisymmetric form; so it is better to speak about a complex of a sunspot and surrounding faculae rather than a photometric spot. That is why the approximation in Fig. 1 is extrapolated up to \(\rho=1.8\) using Eq. (1) and shown by a dashed line. Perhaps this extrapolation slightly overestimates the magnetic field, and at the faculae boundary it is as low as several dozen gauss.
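A short numerical check of Eq. (1) (a sketch; field values in gauss) confirms that the fit yields roughly 560 G at the photometric boundary, consistent with the 550 G estimate above:

```python
def sunspot_field(rho):
    """Magnetic field strength (G) vs. relative photometric radius rho, Eq. (1)."""
    return 10.0 ** (3.39537 - 0.90097 * rho + 0.25188 * rho ** 2)

print(round(sunspot_field(0.0)))  # ~2486 G at the sunspot center
print(round(sunspot_field(1.0)))  # ~558 G at the photometric boundary
```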
We conclude that the apparent sunspot boundary is determined by the interaction of the magnetic field and the convective transport. If the magnetic field becomes too low to suppress convection on all scales, yet remains sufficient to
Figure 1: Empirical dependence of the magnetic field (dots, measured in G) in a symmetric sunspot on the relative sunspot radius (\(\rho=1\) at the photometric boundary). The solid line is a polinomial approximation in Eq. (1), the dashes extrapolate the approximation for weak magnetic fields. The thin blue lines near the approximation mark the 95% confidence interval. The thick blue line shows the photometric sunspot boundary, and the red line shows the magnetic sunspot radius, calculated presuming that the magnetic radius of the sunspot corresponds to the magnetic field strength of 550 G.
Figure 2: Statistics of the magnetic-field strengths on spotless days. The black dots stand for \(B>100\) G, the red squares stand for \(B>400\) G, the blue dotted circles give data for \(B>600\) G, and purple crosses indicates the area covered by strong magnetic field \(B>800\) G.
suppress convection on small scales, the convective transport becomes slower and the brightness of the element decreases [28] (see Footnote 1). The idea of that paper can be briefly described as follows. In accordance with [29], the author claims that while a weak magnetic field does not affect the main streams, it does reduce the turbulence, so that the flow becomes more stable. As a result, the convective velocity is determined by the balance between the upward force and the force of turbulent viscosity. This, in turn, decreases the dissipation and increases the convective velocities. In the process, the brightness of the plage surrounding the sunspot somewhat increases. A further increase of the magnetic field intensity leads to suppression of the convective motions. The balance condition provides the apparent sharp boundary under discussion.
Footnote 1: Here, we give the reference (correcting a misprint) to this quite old but helpful paper as it is given in ADS. In fact, the paper was translated into English as Sov. Astron. **4**, 59, 1961, and this English translation is provided by ADS.
## 3 Locally strong magnetic fields and magnetic field boundary
The discussion above is aimed at determining the sunspot boundary as a whole. There is, however, a local aspect of the problem. The point is that the interaction between the magnetic field and the convective transport depends on the size of the magnetic element. If the element, i.e. a magnetic tube or rope, is small enough (about 100 km in size), the horizontal optical thickness for radiative transport becomes comparable with its geometric size. Then, the horizontal transport can smooth the temperature profile on the scale of a hundred kilometers, and it becomes problematic to isolate the element by its reduced brightness. That is why there are some elements with a strong magnetic field on the solar surface even when there are no sunspots at that moment. This fact has to be taken into account when discussing the magnetic boundary of sunspots.
The existence of small, optically non-observable magnetic field elements was emphasized in [30; 31]. [32] directly observed such elements in the solar polar region with space-based high-resolution instruments.
In order to estimate the role of locally strong magnetic fields in the context of sunspot studies, we illustrate observations of elements with locally strong magnetic fields obtained on spotless days (Fig. 2). The selection procedure is described above. We have calculated the area covered by fields above a given threshold value. The black dots stand for the magnetic field \(B>100\) G; i.e. these are the regions totally unrelated to sunspots. The area of these regions is of the order of several thousand m.v.h. The areas of the regions covered with magnetic fields \(B>400\) G (squares) and \(B>600\) G (circles) are slightly smaller, but substantial. It is most impressive, however, that, even on spotless days, there are quite a lot of objects, several dozen, with \(B>800\) G (crosses). Such objects should be considered as photometric sunspots; however, they are not observed by standard methods. The area of such regions is very small (several dozen m.v.h.), as is their contribution to the total magnetic flux. Still, the total magnetic field energy in these regions may reach \(10^{30}\) erg, and they may be responsible for moderate solar flares. We want to emphasize that several dozen such objects exist even in the epochs of very deep solar minimum. As they are not recorded by the sunspot patrol service, their size in cross section is apparently smaller than 2-3 arcsec.
Note that spotless days on the Sun are not rare, especially in the epochs of the solar minimum. There can be several dozen and even hundreds of spotless days during a cycle; e.g., there were 311 spotless days in the single year 1913 and more than a thousand spotless days during the whole Cycle 14. Our results demonstrate that the absence of sunspots does not mean the absence of elements with strong magnetic fields. Fig. 2 shows that, sometimes, the cumulative area of strong magnetic fields on spotless days can be as large as a few dozen m.v.h. This corresponds to a sunspot of moderate size, which should be readily observable. The fact that sunspots were not recorded on those days means that the magnetic fields existed in the form of isolated small magnetic elements, which are optically indistinguishable but accessible to magnetographic observations. They are observed by the spectropolarimeter SP [33] of the
SOT (Solar Optical Telescope, [34; 35; 36; 37]) aboard the Hinode satellite [38]. Diffraction-limited, high-polarization-sensitivity observations, which reveal the fine structure of photospheric vector magnetic fields, are performed in [26]. The first Hinode SOT observations of the polar areas revealed the existence of many patchy magnetic concentrations with intrinsic field strengths of over 1 kG distributed across the entire polar region [39]. The spatial resolution of the instrument is about 0.32 arcsec (0.16 arcsec pixel size), which corresponds to 200 km on the solar surface. According to [26], the unipolar appearance and disappearance suggest that the large patches are formed from, and disintegrate into, patches with magnetic flux below the detection limit of the instrument. The component seen in the magnetic flux range \(10^{15}\div 10^{16}\) Mx cm\({}^{-2}\) may be the tip of the iceberg of these unseen fluxes, and the large concentrations may have formed from the inventory of small concentrations.
The role of small magnetic elements has to be somehow included in the scenario of the solar cycle. The conventional scenario is as follows. The differential rotation produces the large-scale toroidal magnetic field. The second dynamo driver restores (e.g., in the framework of the Babcock-Leighton mechanism) the poloidal magnetic field of the opposite sign. The formation of small magnetic elements is not inevitably included in this scheme and may be driven by turbulent processes. During the solar minima, i.e. the times when the large-scale dynamo action is weak, turbulent mechanisms may produce small magnetic elements all over the solar surface rather than near the solar equator only (see [26]). Note that at the reversal of the large-scale polar magnetic field, the number of small magnetic elements with strong magnetic fields of both polarities is more or less equal, as expected for small-scale dynamo action.
## 4 Conclusion
We performed a comparison between the photometric and magnetic definitions of the sunspot boundary and found that the sunspot is not an isolated magnetic tube. Rather, a sharp change of brightness occurs at fields of about 550 G, where we see the penumbra.
Fields with intensities of more than 800 G exist regardless of whether or not there are sunspots. However, when such elements combine, the field intensifies and, starting from 550 G, the heat transfer is disturbed and the brightness decreases. The results obtained are of great importance for understanding the nature of magnetic field generation on the Sun and the emergence of active regions. The generally accepted notion of magnetic field tubes is not entirely correct. It is believed that sunspots as particular entities arise during the formation of ARs on the solar surface from a strong toroidal field, which is generated by the solar dynamo mechanism at the base of the convection zone and is carried out into the photosphere. In fact, all arguments in favor of this concept are based on theoretical considerations.
The emergence of a single magnetic tube as a source of sunspots contradicts the observed field structure of a single sunspot. During the generation process, the turbulent dynamo creates many elements with different strengths. Their energy distribution changes with the phase of the cycle. But these elements are not tubes with isolated boundaries. The field in them decreases gradually with distance from the center of the element to its periphery. The sharp photometric boundaries of the spots are the result of the influence of the magnetic field on the processes of energy transfer. Fields above 550 G strongly reduce the flow of energy from below, and a sharp boundary appears [28] (see also [40]).
Moreover, spots emerge in a pre-existing magnetic environment and are included in active regions. Sunspot formation is far from being a surface phenomenon only; rather, it develops in the leptocline, which obviously requires further investigation and modeling (see e.g. [41]).
Note once more that our analysis, although performed in the context of solar surface structures, carries a more general message. The presence of sharp photometric boundaries for solar surface structures, as well as for structures elsewhere in the cosmos, does not immediately imply the presence of corresponding sharp magnetic structures. Another point to be mentioned is that the relation between radiative and magnetic properties of magnetic structures isolated in solar MHD numerical simulations obviously requires a deeper analysis.
## Acknowledgments
VNO, MMK and DDS acknowledge the support of the Ministry of Science and Higher Education of the Russian Federation under the grant 075-15-2020-780 (VNO and MMK) and 075-15-2022-284 (DDS). DDS thanks support by BASIS fund number 21-1-1-4-1.
|
2301.11317
|
Higher derivative Hamiltonians with benign ghosts from affine Toda
lattices
|
We provide further evidence for Smilga's conjecture that higher charges of
integrable systems are suitable candidates for higher derivative theories that
possess benign ghost sectors in their parameter space. As concrete examples we
study the properties of the classical phase spaces for a number of affine Toda
lattice theories related to different types of Kac-Moody algebras. We identify
several types of scenarios for theories with higher charge Hamiltonians: some
that possess oscillatory, divergent, benign oscillatory and benign divergent
behaviour when ghost sectors are present in the quantum theory. No divergent
behaviour was observed for which the trajectories reach a singularity in finite
time. For theories based on particular representations for the Lie algebraic
roots we found an extreme sensitivity towards the initial conditions governed
by the Poisson bracket relations between the centre-of-mass coordinate and the
charges.
|
Andreas Fring, Bethan Turner
|
2023-01-26T18:56:38Z
|
http://arxiv.org/abs/2301.11317v2
|
# Higher derivative Hamiltonians with benign ghosts from affine Toda lattices
###### Abstract
We provide further evidence for Smilga's conjecture that higher charges of integrable systems are suitable candidates for higher derivative theories that possess benign ghost sectors in their parameter space. As concrete examples we study the properties of the classical phase spaces for a number of affine Toda lattice theories related to different types of Kac-Moody algebras. We identify several types of scenarios for theories with higher charge Hamiltonians: some that possess benign ghost sectors which are stable or extremely sensitive towards the initial conditions, some that have malevolent ghost sectors that can be converted into benign sectors with an appropriate choice of variables and some theories with benign ghost sectors that are stable towards strong deformations.
## 1 Introduction
Higher derivative Lagrangian theories, i.e. those that include derivative terms of the coordinates of order larger than one, arise naturally in a number of different contexts. For instance, some approaches to theories of everything (TOE), which include gravity besides all the other known fundamental forces, consist of embedding the standard (3+1)-dimensional universe into a higher dimensional space. In doing so, and demanding in addition that these theories be renormalizable, one is automatically led to higher derivative Lagrangian theories by simple scaling arguments. Unfortunately these theories are generally plagued [1] by so-called ghost states that possess negative norms, thus leading to collapse and/or a violation of unitarity. This is the main reason why they are usually discarded, and in comparison only very few explicit studies of these theories have been carried out to a full extent. For instance, in the field of gravity and cosmology they have been proposed as a resolution of the cosmological singularity problem [2] and some of their black hole solutions have been studied [3]. Furthermore, for some cases the BRST symmetries have been identified [4] and also some supersymmetric versions have been studied [5].
However, in general such types of theories continue to be regarded as undesirable for the above mentioned reason and it is unclear which theories deserve further consideration. In a recent series of papers [6, 7, 8, 9] Smilga and collaborators addressed this question and gathered evidence to suggest that the dismissal of higher derivative theories might be premature. The central idea in these studies is to distinguish between benign and malevolent ghost states, in the sense that the latter states are genuinely unphysical while the former are solutions that might not be bounded from below, but are oscillatory in character and hence allow for a unitary evolution. The next question is then of course how to identify theories that have such types of features and possess sectors in their parameter space with benign ghost solutions that in addition might be stable against small perturbations. Very recently [6] Smilga proposed that higher charges of integrable systems might be suitable candidates for such types of higher derivative Lagrangian theories. Here our main goal is to gather further evidence for this conjecture by considering a particular class of integrable systems and interpreting their charges as Hamiltonians for higher derivative theories. We will analyse their classical phase spaces in the hope that benign classical systems will also lead to benign quantum systems, as conjectured in [6].
In general, we will be considering here a prototype integrable theory, the affine Toda lattice, with Hamiltonians of the form
\[H_{\bf g}=\sum_{i=1}^{\ell}\frac{p_{i}^{2}}{2}+\sum_{i=0}^{r}n_{i}e^{\alpha_{ i}\cdot q}, \tag{1}\]
where \(q=(q_{1},\ldots,q_{\ell})\) are the coordinates, \(p=(p_{1},\ldots,p_{\ell})\) are the momenta, \({\bf g}\) is a semi-simple Lie algebra, \(r\) the rank of this algebra, \(\alpha_{i}\) for \(i=1,\ldots,r\) are the simple roots of the root space \(\Delta_{\bf g}\) represented in an \(\ell\)-dimensional space, \(\alpha_{0}=-\sum_{i=1}^{r}n_{i}\alpha_{i}\) and \(n_{i}\in\mathbb{N}\) are positive integers with \(n_{0}=1\). The choice of \(\alpha_{0}\) ensures that the minimum of the potential of the theory is at \(q=(q_{1},\ldots,q_{\ell})=(0,\ldots,0)\), i.e. all first order terms in the \(q_{i}\) vanish. Often \(\alpha_{0}\) is taken to be the negative of the highest root, so that the integers \(n_{i}\) are the Kac labels, but this need not be the case and is a mere convention. The inclusion of the \(\alpha_{0}\)-root means that the associated algebra becomes a Kac-Moody algebra rather than a semi-simple Lie algebra. Thus we are not considering here theories of the type \(H_{\bf g}\) with the sum in the potential starting at \(i=1\), which are conformally invariant and do not possess minima in the potentials at finite values of the coordinates.
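To keep the discussion concrete, a minimal numerical sketch of (1) may be useful; the snippet below (our illustration, not part of the original analysis; Python with numpy assumed) evaluates \(H_{\bf g}\) for \({\bf g}=A_{2}\) in the standard three-dimensional representation of the roots used in section 2.1.

```python
# A minimal sketch (assumption: numpy available) evaluating the affine Toda
# Hamiltonian (1) for g = A_2 in the standard three-dimensional representation.
import numpy as np

# Simple roots alpha_1, alpha_2 of A_2 and the affine root alpha_0 = -alpha_1-alpha_2;
# all Kac labels n_i = 1 for A_2.
alphas = np.array([[-1.0,  0.0,  1.0],   # alpha_0
                   [ 1.0, -1.0,  0.0],   # alpha_1
                   [ 0.0,  1.0, -1.0]])  # alpha_2
kac = np.array([1.0, 1.0, 1.0])          # n_0, n_1, n_2

def hamiltonian(q, p):
    """H = sum_i p_i^2/2 + sum_i n_i exp(alpha_i . q), cf. eq. (1)."""
    return 0.5 * np.dot(p, p) + np.sum(kac * np.exp(alphas @ q))

q0, p0 = np.zeros(3), np.array([1.0, -0.5, -0.5])
print(hamiltonian(q0, p0))  # 3.75; q = 0 is the minimum of the potential
```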
It is well known [10, 11, 12] that these types of theories are integrable in the Liouville sense, that is, they possess as many conserved charges as degrees of freedom. It is these charges that we will be using as potential candidates for higher order derivative theories. The key question we will be addressing here is whether the classical trajectories in phase space associated to the Hamiltonian systems of these charges will be benign or malevolent according to the characterisation put forward by Smilga in [6, 7, 8]. The initial assumption is that the benign nature on the classical level is inherited by the quantum theory. Naturally, this supposition needs further investigation, which we leave for future studies.
Our manuscript is organised as follows: In section 2 we recall the construction of the conserved classical charges for the \(A_{n}\)-affine Toda lattice theories, with a particular focus on \(A_{2}\) and \(A_{6}\) for different types, i.e. dimensions, of representations of the roots in (1). Interpreting these charges as Hamiltonians we numerically study their classical solutions in phase space. In sections 3 and 4 we carry out similar types of studies for the \(B_{3}\) and \(G_{2}\)-affine Toda lattice theories, respectively. We construct the relevant charges from a reduction/folding procedure of the corresponding root systems or by direct computation. In section 5 we investigate the stability of the benign solutions with regard to the sensitivity of the initial conditions and to strong deformations by harmonic oscillator potentials. Our conclusions are stated in section 6.
## 2 Higher derivative Hamiltonians from \(A_{n}\)-affine Toda lattice charges
The expressions for the higher charges are central to our investigations and therefore we will provide here their explicit construction. All higher charges that will be considered are for theories associated with Hamiltonians of the general form in equation (1) with \(\mathbf{g}\) taken to be \(A_{n}\). Using the standard Lax approach for classical integrable systems [13] we employ the Lax pair given by the two operators in form of \((n+1)\times(n+1)\)-matrices
\[L=\left(\begin{array}{ccccccc}p_{1}&W_{1}&0&\cdots&\cdots&0&W_{0}\\ W_{1}&p_{2}&W_{2}&&&0&0\\ 0&W_{2}&p_{3}&\ddots&&&\vdots\\ \vdots&&\ddots&\ddots&\ddots&&\vdots\\ \vdots&&&\ddots&\ddots&\ddots&0\\ 0&0&&&\ddots&p_{n}&W_{n}\\ W_{0}&0&\cdots&\cdots&0&W_{n}&p_{n+1}\end{array}\right),\qquad M=\left(\begin{array}{ccccccc}0&W_{1}&0&\cdots&\cdots&0&-W_{0}\\ -W_{1}&0&W_{2}&&&0&0\\ 0&-W_{2}&0&\ddots&&&\vdots\\ \vdots&&\ddots&\ddots&\ddots&&\vdots\\ \vdots&&&\ddots&\ddots&\ddots&0\\ 0&0&&&\ddots&0&W_{n}\\ W_{0}&0&\cdots&\cdots&0&-W_{n}&0\end{array}\right), \tag{2}\]
where we abbreviated \(W_{i}:=\exp(\alpha_{i}\cdot q)/2\), with \(\alpha_{i}\in\mathbb{R}^{n+1}\), \(i=1,\ldots,n\) denoting the simple roots of \(A_{n}\), \(\alpha_{0}=-\sum_{i=1}^{n}\alpha_{i}\) the negative of the highest \(A_{n}\)-root, \(q=(q_{1},\ldots,q_{n+1})\) the coordinates and \(p=(p_{1},\ldots,p_{n+1})\) the momenta. The dimension of the phase space is therefore \((n+1)\times(n+1)\) at this point.
By definition of the Lax operators, the equations of motion are then equivalent to the Lax pair equation
\[\dot{L}+[M,L]=0,\quad\Leftrightarrow\quad\dot{p}_{i}+W_{i}^{2}-W_{i-1}^{2}=0, \quad\alpha_{i}\cdot\dot{q}=p_{i}-p_{i+1},\qquad i=1,\ldots,n+1, \tag{3}\]
where we formally identified \(W_{n+1}=W_{0}\). As usual we denote derivatives with respect to time by overdots. Taking \(\ell=n+1\) in (1) these equations also correspond to Hamilton's equations \(\dot{q}_{i}=\partial H/\partial p_{i}\), \(\dot{p}_{i}=-\partial H/\partial q_{i}\), as we will show below. By construction, it then follows immediately that all quantities \(Q_{k}:=Tr(L^{k})/k\) are conserved in time, i.e. \(\dot{Q}_{k}=0\). Given the expressions in (2) we easily construct all of these charges. Interpreting the summation indices modulo \(7\), e.g. \(W_{7}=W_{0}\), \(p_{8}=p_{1}\), etc., we obtain for the \(A_{6}\)-case
\[Q_{1} = \sum_{i=1}^{7}p_{i}, \tag{4}\] \[Q_{2} = H=\sum_{i=1}^{7}\left(\frac{p_{i}^{2}}{2}+W_{i}^{2}\right), \tag{5}\]
\[Q_{3} = \sum_{i=1}^{7}\left[\frac{p_{i}^{3}}{3}+W_{i}^{2}(p_{i}+p_{i+1}) \right], \tag{5}\] \[Q_{4} = \sum_{i=1}^{7}\left[\frac{p_{i}^{4}}{4}+\frac{1}{2}W_{i}^{4}+W_{i} ^{2}(p_{i}^{2}+p_{i}p_{i+1}+p_{i+1}^{2})+W_{i}^{2}W_{i+1}^{2}\right]\] (6) \[Q_{5} = \sum_{i=1}^{7}\left[\frac{p_{i}^{5}}{5}+W_{i}^{4}(p_{i}+p_{i+1}) +W_{i}^{2}(p_{i}^{3}+p_{i}p_{i+1}^{2}+p_{i}^{2}p_{i+1}+p_{i+1}^{3})\right.\] \[\qquad\left.+W_{i}^{2}W_{i+1}^{2}(p_{i-1}+2p_{i}+2p_{i+1})\right],\] \[Q_{6} = \sum_{i=1}^{7}\left[\frac{p_{i}^{6}}{6}+\frac{1}{2}W_{i}^{6}+W_{i }^{4}\left(\frac{3}{2}p_{i}^{2}+\frac{3}{2}p_{i+1}^{2}+2p_{i}p_{i+1}\right)+W _{i}^{2}\left(p_{i}^{4}+p_{i}^{3}p_{i+1}+p_{i}^{2}p_{i+1}^{2}\right)\right.,\] (8) \[\qquad\left.+W_{i-1}^{2}\left(p_{i}^{4}+p_{i}^{3}p_{i-1}\right)+W _{i}^{2}W_{i+1}^{2}\left(p_{i}^{2}+2p_{i}p_{i-1}+3p_{i+1}^{2}+p_{i}p_{i+2}+2p_ {i+1}p_{i+2}+p_{i+2}^{2}\right)\right.\] \[\qquad\left.+W_{i}^{4}\left(W_{i-1}^{2}+W_{i+1}^{2}\right)+W_{i} ^{2}W_{i+1}^{2}W_{i+2}^{2}\right]\] \[Q_{7} = \sum_{i=1}^{7}\left[\frac{p_{i}^{7}}{7}+W_{i}^{6}(p_{i}+p_{i+1}) +W_{i-1}^{2}W_{i}^{2}W_{i+1}^{2}(p_{i-1}+2p_{i}+2p_{i+1}+p_{i+2})\right.\] \[\qquad\left.+W_{i}^{4}W_{i-1}^{2}(p_{i-1}+3p_{i}+2p_{i+1})+W_{i-1 }^{4}W_{i}^{2}(2p_{i-1}+3p_{i}+p_{i+1})\right.\] \[\qquad\left.+W_{i-1}^{2}W_{i}^{2}(p_{i-1}^{3}+4p_{i}^{3}+3p_{i}^{2 }p_{i+1}+2p_{i}p_{i+1}^{2}+p_{i+1}^{3})\right.\] \[\qquad\left.+W_{i-1}^{2}W_{i}^{2}(p_{i-1}^{3}(p_{i-1}^{2}(2p_{i}+p _{i+1})+p_{i-1}(3p_{i}^{2}+2p_{i}p_{i+1}+p_{i+1}^{2}))\right.\right.\] \[\qquad\left.\left.+W_{i}^{2}(p_{i}^{5}+p_{i}^{4}p_{i+1}+p_{i}^{3}p _{i+1}^{3}+p_{i}^{2}p_{i+1}^{3}+p_{i}p_{i+1}^{4}+p_{i+1}^{5})\right]+2\prod_{ i=1}^{7}W_{i}.\]
These charges and versions thereof will be our potential candidates for higher derivative theories when interpreted as Hamiltonians.
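Since the charges are obtained as traces of powers of \(L\), their conservation can also be made transparent numerically. The sketch below (our illustration; numpy assumed) builds the Lax pair (2) for \(A_{2}\) and evaluates \(\frac{d}{dt}Tr(L^{k})=k\,Tr(L^{k-1}\dot{L})=-k\,Tr(L^{k-1}[M,L])\) at a random phase-space point; the result vanishes identically by the cyclicity of the trace, which is precisely the conservation argument.

```python
# A minimal sketch (assumption: numpy available): the traces Q_k = Tr(L^k)/k are
# conserved because Tr(L^{k-1}[M,L]) = 0 by cyclicity of the trace.
import numpy as np

def lax_pair(q, p, alphas):
    """Build the (n+1)x(n+1) Lax matrices L, M of eq. (2)."""
    n1 = len(p)
    W = 0.5 * np.exp(alphas @ q)      # W_i = exp(alpha_i . q)/2, i = 0,...,n
    L, M = np.diag(p.astype(float)), np.zeros((n1, n1))
    for i in range(1, n1):            # sub/super-diagonals carry W_1,...,W_n
        L[i-1, i] = L[i, i-1] = W[i]
        M[i-1, i], M[i, i-1] = W[i], -W[i]
    L[0, -1] = L[-1, 0] = W[0]        # the affine root W_0 closes the chain
    M[0, -1], M[-1, 0] = -W[0], W[0]
    return L, M

rng = np.random.default_rng(0)
alphas = np.array([[-1, 0, 1], [1, -1, 0], [0, 1, -1]], dtype=float)  # A_2
L, M = lax_pair(rng.normal(size=3), rng.normal(size=3), alphas)
comm = M @ L - L @ M                  # [M, L] = -Ldot
for k in range(1, 8):
    print(k, np.trace(np.linalg.matrix_power(L, k-1) @ comm))  # all ~ 0
```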
### Higher derivative Hamiltonians from the 3 particle \(A_{2}\)-affine Toda lattice
Next we evaluate the expressions of the charges for the \(A_{2}\)-theory more explicitly. First we notice that the second equation in (2) is simply solved by taking \(q=(q_{1},q_{2},q_{3})\), so that we obtain \(\dot{q}_{i}=p_{i}\) when the roots are represented as \(\alpha_{1}=(1,-1,0)\), \(\alpha_{2}=(0,1,-1)\) and \(\alpha_{0}=-\alpha_{1}-\alpha_{2}=(-1,0,1)\). This is the standard three dimensional representation for the \(A_{2}\)-roots, see for instance [14]. The charges (3)-(6) then acquire the form
\[Q_{1} = p_{1}+p_{2}+p_{3}, \tag{10}\] \[Q_{2} = H=\frac{1}{2}\left(p_{1}^{2}+p_{2}^{2}+p_{3}^{2}\right)+V_{12}+V _{23}+V_{31}=\sum_{i=1}^{3}\left(\frac{p_{i}^{2}}{2}+e^{\alpha_{i}\cdot q} \right),\] (11) \[Q_{3} = \frac{1}{3}\left(p_{1}^{3}+p_{2}^{3}+p_{3}^{3}\right)+p_{1} \left(V_{12}+V_{31}\right)+p_{2}\left(V_{12}+V_{23}\right)+p_{3}\left(V_{23}+V _{31}\right)+2,\] (12) \[= \sum_{i=1}^{3}\left[\frac{p_{i}^{3}}{3}+p_{i}\left(e^{\alpha_{i} \cdot q}+e^{\alpha_{i-1}\cdot q}\right)\right]+2,\] (13) \[Q_{4} = Q_{1}Q_{3}-\frac{1}{2}Q_{1}^{2}Q_{2}+\frac{1}{24}Q_{1}^{4}+\frac {1}{2}Q_{2}^{2}, \tag{14}\]
where we introduced the new abbreviation \(V_{ij}:=\exp(q_{i}-q_{j})\). We notice that only the first three conserved quantities are independent, as \(Q_{4}\) can be constructed from combinations of them. In fact, this property will persist for the higher charges and all \(Q_{i}\) for \(i>3\) can be built from combinations of \(Q_{1}\), \(Q_{2}\) and \(Q_{3}\).
Moreover, one may easily verify that the charges \(Q_{1}\), \(Q_{2}\) and \(Q_{3}\) are in involution, i.e. their mutual Poisson brackets vanish
\[\{Q_{i},Q_{j}\}:=\sum_{k=1}^{3}\frac{\partial Q_{i}}{\partial q_{k}}\frac{ \partial Q_{j}}{\partial p_{k}}-\frac{\partial Q_{i}}{\partial p_{k}}\frac{ \partial Q_{j}}{\partial q_{k}}=0,\qquad\text{for}\,i,j=1,2,3. \tag{15}\]
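This can be confirmed symbolically with a few lines of computer algebra; the sketch below (our illustration; sympy assumed) implements the bracket (15) for the charges (10)-(12).

```python
# A minimal sketch (assumption: sympy available) verifying that the A_2 charges
# (10)-(12) are in involution, with V_ij = exp(q_i - q_j).
import sympy as sp

q1, q2, q3, p1, p2, p3 = sp.symbols('q1 q2 q3 p1 p2 p3')
qs, ps = [q1, q2, q3], [p1, p2, p3]
V12, V23, V31 = sp.exp(q1 - q2), sp.exp(q2 - q3), sp.exp(q3 - q1)

Q1 = p1 + p2 + p3
Q2 = (p1**2 + p2**2 + p3**2)/2 + V12 + V23 + V31
Q3 = (p1**3 + p2**3 + p3**3)/3 + p1*(V12 + V31) + p2*(V12 + V23) + p3*(V23 + V31) + 2

def poisson(F, G):
    """Canonical Poisson bracket {F, G} of eq. (15)."""
    return sp.simplify(sum(sp.diff(F, qk)*sp.diff(G, pk)
                           - sp.diff(F, pk)*sp.diff(G, qk)
                           for qk, pk in zip(qs, ps)))

print(poisson(Q1, Q2), poisson(Q1, Q3), poisson(Q2, Q3))  # 0 0 0
```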
As indicated in (11), at first we identify as usual the charge \(Q_{2}\) with the standard Hamiltonian, so that the classical equations of motion resulting from Hamilton's equations become
\[\dot{q}_{1}=p_{1},\quad\dot{q}_{2}=p_{2},\quad\dot{q}_{3}=p_{3},\quad\dot{p}_{1}=V_{31}-V_{12},\quad\dot{p}_{2}=V_{12}-V_{23},\quad\dot{p}_{3}=V_{23}-V_{31}, \tag{16}\]
which are identical to the equations resulting from the Lax pair equation (2).
In the first instance we solve these equations numerically.
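A sketch of such an integration (our illustration; scipy assumed, with the initial data of figure 1) is the following; the conservation of \(Q_{2}\) serves as an accuracy check.

```python
# A minimal sketch (assumption: scipy available) integrating the equations of
# motion (16) for the three-particle A_2-affine Toda lattice.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    q1, q2, q3, p1, p2, p3 = y
    V12, V23, V31 = np.exp(q1 - q2), np.exp(q2 - q3), np.exp(q3 - q1)
    return [p1, p2, p3, V31 - V12, V12 - V23, V23 - V31]

def energy(y):   # conserved charge Q_2 = H of eq. (11)
    q1, q2, q3, p1, p2, p3 = y
    return 0.5*(p1**2 + p2**2 + p3**2) + np.exp(q1-q2) + np.exp(q2-q3) + np.exp(q3-q1)

y0 = [0, 0, 0, 1.0, -0.5, -0.5]                    # initial data of figure 1
sol = solve_ivp(rhs, (0, 300), y0, rtol=1e-10, atol=1e-12)
print(energy(np.array(y0)), energy(sol.y[:, -1]))  # agree to high accuracy
```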
In figure 1 we depict the solutions to the three particle equations of motion (16) in phase space, observing confined orbits that periodically spiral inward and outward, a behaviour that continues beyond the time shown in the figure. The insets in figure 1 panel (b) demonstrate how the small period \(\tau_{s}\approx 1.778\) is modulated by a larger period \(\tau_{l}\approx 300.2\), with \(\tau_{s}\) governing the quasiperiodic elliptic motion and \(\tau_{l}\) the period of the inward/outward pulsation. We stress here that after each period we observe a small offset and therefore these solutions are not exactly periodic but only quasiperiodic, i.e. \(f(x+\tau)=g(x,f(x))\) with \(g\) being a simpler function than \(f\), or almost periodic in the sense of [15]. Almost periodic is here to be understood in the sense that we have a small offset after one period, i.e. \(|f(t)-f(t+\tau)|\leq\varepsilon\). We may adapt these observations more rigorously to the strict sense of the definition of almost periodic functions by H. Bohr [15] and adjust the values of \(\tau_{H}\) for a pre-selected \(\varepsilon\). For the other directions in phase space \((x_{2},p_{2})\) and \((x_{3},p_{3})\) we obtain similar types of periodic behaviour.

Figure 1: Phase space \((x_{1},p_{1})\) for the \(A_{2}\)-affine Toda lattice Hamiltonian with three particles. Panel (a): inward spiralling trajectories from time \(t=0\) to \(t=150\). Panel (b): outward spiralling trajectories from time \(t=150\) to \(t=300\). The initial conditions are taken as \(x_{1}(0)=x_{2}(0)=x_{3}(0)=0\), \(p_{1}(0)=1\) and \(p_{2}(0)=p_{3}(0)=-1/2\). The insets in panel (b) show \(x_{1}\) and \(p_{1}\) as functions of time \(t\).
Now we come to the key point in this approach and interpret the higher charges as Hamiltonians following the suggestion in [7, 8]. Thus we take here the charge \(Q_{3}\) as the Hamiltonian. Deriving the new set of equations of motion from \(\dot{q}_{i}=\partial Q_{3}/\partial p_{i}\), \(\dot{p}_{i}=-\partial Q_{3}/\partial q_{i}\) we obtain
\[\dot{q}_{1} = p_{1}^{2}+V_{12}+V_{31},\qquad\dot{p}_{1}=(p_{1}+p_{3})V_{31}-( p_{1}+p_{2})V_{12}, \tag{17}\] \[\dot{q}_{2} = p_{2}^{2}+V_{12}+V_{23},\qquad\dot{p}_{2}=(p_{1}+p_{2})V_{12}-(p_ {2}+p_{3})V_{23},\] (18) \[\dot{q}_{3} = p_{3}^{2}+V_{23}+V_{31},\qquad\dot{p}_{3}=(p_{2}+p_{3})V_{23}-(p_ {1}+p_{3})V_{31}, \tag{19}\]
which are identical to the equations previously considered in [7]. Once more we solve these equations, (17)-(19), numerically and depict the solutions in figure 2.
We observe that while the momenta are bounded as \(-1\leq p_{i}\leq 1\), the coordinate components \(x_{i}\) grow linearly in time so that the trajectories do not close in phase space. Thus this system appears to have malevolent ghosts. However, this is due to the fact that we have treated the \(A_{2}\)-system as a three rather than a two particle system. In the next section we will represent the roots in a lower dimensional space and consequently re-define the coordinates and momenta of the model in the dual space. The effect will be that the trajectories become confined and quasi-oscillatory in phase space so that we can say that the ghosts have become benign.
### Higher derivative Hamiltonians from the 2 particle \(A_{2}\)-affine Toda lattice
Figure 2: Panel (a): Phase space \((x_{i},p_{i})\), \(i=1,2,3\) for the \(A_{2}\)-affine Toda lattice with unrestricted \(Q_{3}\)-Hamiltonian, with initial conditions \(x_{1}(0)=x_{2}(0)=x_{3}(0)=0\), \(p_{1}(0)=1\) and \(p_{2}(0)=p_{3}(0)=-1/2\). Panels (b) and (c): \(x_{i}\) and \(p_{i}\) as functions of time \(t\), respectively.

We will now constrain the \((3\times 3)\)-dimensional phase space to a \((2\times 2)\)-dimensional one. Recalling that root systems are isomorphic to each other as long as they reproduce the same Cartan matrix \(K_{ij}=2\alpha_{i}\cdot\alpha_{j}/\alpha_{j}^{2}\), we may achieve this by defining a new set of simple roots \(\beta_{i}\) in a two-dimensional representation through an orthogonal transformation that preserves the Cartan matrix \(K\) of the \(A_{2}\) root system, obtained from the roots \(\alpha_{i}\) in the standard representation. This means we have to solve
\[K_{ij}=\alpha_{i}\cdot\alpha_{j}=A\beta_{i}\cdot A\beta_{j}=\beta_{i}\cdot\beta_ {j}=\left(\begin{array}{cc}2&-1\\ -1&2\end{array}\right)_{ij},\quad\beta_{i}=A^{-1}\alpha_{i},\,A^{-1}=A^{ \intercal},\,i,j=1,2, \tag{20}\]
for the orthogonal matrix \(A\) and the roots \(\beta_{i}\). We find the solutions
\[A=\left(\begin{array}{ccc}\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{2}}&\frac{1}{ \sqrt{3}}\\ -\sqrt{\frac{2}{3}}&0&\frac{1}{\sqrt{3}}\\ \frac{1}{\sqrt{6}}&-\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{3}}\end{array}\right), \quad\beta_{1}=\left(\sqrt{\frac{3}{2}},\frac{1}{\sqrt{2}},0\right)\quad\beta_ {2}=\left(-\sqrt{\frac{3}{2}},\frac{1}{\sqrt{2}},0\right). \tag{21}\]
The negative of the highest root is therefore \(\beta_{0}=-\beta_{1}-\beta_{2}=(0,-\sqrt{2},0)\).
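The solution (21) is readily verified; the sketch below (our illustration; numpy assumed) checks that \(A\) is orthogonal and that the rotated roots have vanishing third component while reproducing the \(A_{2}\)-Cartan matrix.

```python
# A minimal sketch (assumption: numpy available) verifying eq. (20) for the
# matrix A and the two-dimensional root representation of (21).
import numpy as np

s = np.sqrt
A = np.array([[ 1/s(6),  1/s(2), 1/s(3)],
              [-s(2/3),  0,      1/s(3)],
              [ 1/s(6), -1/s(2), 1/s(3)]])
alpha = np.array([[1.0, -1.0,  0.0],    # alpha_1 in the standard representation
                  [0.0,  1.0, -1.0]])   # alpha_2

print(np.allclose(A.T @ A, np.eye(3)))  # True: A is orthogonal, A^{-1} = A^T
beta = alpha @ A                         # rows are beta_i = A^{-1} alpha_i
print(np.round(beta, 6))                 # third components vanish
print(np.round(beta @ beta.T, 6))        # reproduces the Cartan matrix (20)
```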
Having reduced the dimension of the representation space for the roots from 3 to 2, we shift this reduction now to the dual space of the roots, i.e. the coordinates and the momenta. For this we define a new set of dynamical variables \((\zeta,\eta)\) in the dual space of the roots by
\[\alpha_{i}\cdot q = A\beta_{i}\cdot A\zeta=\beta_{i}\cdot\zeta,\qquad\mbox{for} \quad\zeta=A^{-1}q,\,\,i=1,2, \tag{22}\] \[\alpha_{i}\cdot p = A\beta_{i}\cdot A\eta=\beta_{i}\cdot\eta,\qquad\mbox{for}\quad \eta=A^{-1}p,\,\,i=1,2. \tag{23}\]
With \(A\) as identified in (21) we have
\[q = \left(\frac{\zeta_{1}}{\sqrt{6}}+\frac{\zeta_{2}}{\sqrt{2}},- \sqrt{\frac{2}{3}}\zeta_{1},\frac{\zeta_{1}}{\sqrt{6}}-\frac{\zeta_{2}}{\sqrt{ 2}}\right)=(q_{1},q_{2},q_{3}), \tag{24}\] \[p = \left(\frac{\eta_{1}}{\sqrt{6}}+\frac{\eta_{2}}{\sqrt{2}},- \sqrt{\frac{2}{3}}\eta_{1},\frac{\eta_{1}}{\sqrt{6}}-\frac{\eta_{2}}{\sqrt{2}}\right) =(p_{1},p_{2},p_{3}), \tag{25}\]
or when inverted
\[\zeta = \left(\frac{q_{1}-2q_{2}+q_{3}}{\sqrt{6}},\frac{q_{1}-q_{3}}{ \sqrt{2}},\frac{q_{1}+q_{2}+q_{3}}{\sqrt{3}}\right)=(\zeta_{1},\zeta_{2},0), \tag{26}\] \[\eta = \left(\frac{p_{1}-2p_{2}+p_{3}}{\sqrt{6}},\frac{p_{1}-p_{3}}{ \sqrt{2}},\frac{p_{1}+p_{2}+p_{3}}{\sqrt{3}}\right)=(\eta_{1},\eta_{2},0). \tag{27}\]
From the last component in (26) and (27) we observe that we can interpret the new \((2\times 2)\)-dimensional phase space \((\zeta,\eta)\) as the old \((3\times 3)\)-dimensional phase space \((q,p)\) in the centre of mass frame with additional constraints. We stress that this property is not imposed, but the conditions \(q_{1}+q_{2}+q_{3}=0\) and \(p_{1}+p_{2}+p_{3}=0\) are automatically satisfied with the definitions of the new variables in (24), which in turn results from representing the roots in a lower dimensional space.
The conserved quantities \(Q_{1},Q_{2},Q_{3}\) in (10)-(12) can now also be transformed to the new variables as
\[Q_{1} = 0,\] \[Q_{2} = H(\zeta,\eta)=\frac{1}{2}\left(\eta_{1}^{2}+\eta_{2}^{2}\right) +e^{-\sqrt{2}\zeta_{2}}+2e^{\frac{\zeta_{2}}{\sqrt{2}}}\cosh\left(\sqrt{\frac {3}{2}}\zeta_{1}\right), \tag{28}\]
\[Q_{3} = \frac{\eta_{1}\left(6e^{-\sqrt{2}\zeta_{2}}-\eta_{1}^{2}+3\eta_{2}^{2 }\right)}{3\sqrt{6}}-\sqrt{2}e^{\frac{\zeta_{2}}{\sqrt{2}}}\left[\frac{\eta_{1} }{\sqrt{3}}\cosh\left(\sqrt{\frac{3}{2}}\zeta_{1}\right)-\eta_{2}\sinh\left( \sqrt{\frac{3}{2}}\zeta_{1}\right)\right]+2.\]
The equations of motion resulting from the standard Hamiltonian \(H(\zeta,\eta)\) become
\[\dot{\zeta}_{1} = \eta_{1},\quad\dot{\zeta}_{2}=\eta_{2}, \tag{29}\] \[\dot{\eta}_{1} = -\sqrt{6}e^{\frac{\zeta_{2}}{\sqrt{2}}}\sinh\left(\sqrt{\frac{3} {2}}\zeta_{1}\right),\quad\dot{\eta}_{2}=\sqrt{2}e^{-\sqrt{2}\zeta_{2}}\left[1 -e^{\frac{3\zeta_{2}}{\sqrt{2}}}\cosh\left(\sqrt{\frac{3}{2}}\zeta_{1}\right) \right], \tag{30}\]
whereas the equations resulting from taking \(Q_{3}(\zeta,\eta)\) interpreted as the Hamiltonian are
\[\dot{\zeta}_{1} = \frac{2e^{-\sqrt{2}\zeta_{2}}-2e^{\frac{\zeta_{2}}{\sqrt{2}}} \cosh\left(\sqrt{\frac{3}{2}}\zeta_{1}\right)-\eta_{1}^{2}+\eta_{2}^{2}}{ \sqrt{6}}, \tag{31}\] \[\dot{\zeta}_{2} = \sqrt{2}e^{\frac{\zeta_{2}}{\sqrt{2}}}\sinh\left(\sqrt{\frac{3}{ 2}}\zeta_{1}\right)+\sqrt{\frac{2}{3}}\eta_{1}\eta_{2}, \tag{32}\]
\[\dot{\eta}_{1} = e^{\frac{\zeta_{2}}{\sqrt{2}}}\left[\eta_{1}\sinh\left(\sqrt{ \frac{3}{2}}\zeta_{1}\right)-\sqrt{3}\eta_{2}\cosh\left(\sqrt{\frac{3}{2}} \zeta_{1}\right)\right], \tag{33}\] \[\dot{\eta}_{2} = \frac{2e^{-\sqrt{2}\zeta_{2}}\eta_{1}}{\sqrt{3}}+\frac{1}{3}e^{ \frac{\zeta_{2}}{\sqrt{2}}}\left[\sqrt{3}\eta_{1}\cosh\left(\sqrt{\frac{3}{2}} \zeta_{1}\right)-3\eta_{2}\sinh\left(\sqrt{\frac{3}{2}}\zeta_{1}\right)\right]. \tag{34}\]
The phase space trajectories obtained from the standard equations of motion for the Hamiltonian, (29) and (30), are still confined to a finite region in phase space, as seen from the numerical solutions in figure 3.
We may still identify a small period \(\tau_{H}\) that governs one turn, up to a small displacement, and a larger period controlling the inward/outward motion.
Figure 3: Phase spaces \((\zeta_{i},\eta_{i})\), \(i=1,2\) for the standard Hamiltonian of the reduced two particle \(A_{2}\)-affine Toda lattice with initial conditions \(\zeta_{1}(0)=\zeta_{2}(0)=0\), \(\eta_{1}(0)=\sqrt{3}/2\sqrt{2}\) and \(\eta_{2}(0)=3/2\sqrt{2}\) (\(\equiv p_{1}(0)=1\), \(p_{2}(0)=p_{3}(0)=-1/2\)) for times \(t=0\) to \(t=300\) with “almost period” \(\tau_{H}\approx 3.543\). The insets in panels (a) and (b) show \(\zeta_{1},\eta_{1}\) and \(\zeta_{2},\eta_{2}\) as functions of time, respectively.
In figure 4 we depict the numerical solutions to the equations (31) - (34) obtained as equations of motions from the third order derivative \(Q_{3}\)-Hamiltonian. We determine an almost period \(\tau_{Q}\) for the small intersecting almost closed loops. The larger period now governs the rotation of these loops that due to the repeated offset fill in the phase space regions that appear to be identical to the regions identified for the Hamiltonian \(H\).
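The confinement, together with the involution property, can be checked directly; the sketch below (our illustration; scipy assumed, with the initial data of figures 3 and 4) integrates the \(Q_{3}\)-flow (31)-(34) and monitors the energy \(Q_{2}\) of (28), which is conserved along this flow since \(\{Q_{2},Q_{3}\}=0\).

```python
# A minimal sketch (assumption: scipy available) integrating the reduced
# Q_3-flow (31)-(34) and checking boundedness and the conservation of Q_2.
import numpy as np
from scipy.integrate import solve_ivp

s2, s3, s6, s32 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0), np.sqrt(1.5)

def rhs(t, y):
    z1, z2, e1, e2 = y
    Em, Ep = np.exp(-s2*z2), np.exp(z2/s2)
    ch, sh = np.cosh(s32*z1), np.sinh(s32*z1)
    return [(2*Em - 2*Ep*ch - e1**2 + e2**2)/s6,      # eq. (31)
            s2*Ep*sh + np.sqrt(2/3)*e1*e2,            # eq. (32)
            Ep*(e1*sh - s3*e2*ch),                    # eq. (33)
            2*Em*e1/s3 + Ep*(s3*e1*ch - 3*e2*sh)/3]   # eq. (34)

def Q2(y):                                            # eq. (28)
    z1, z2, e1, e2 = y
    return 0.5*(e1**2 + e2**2) + np.exp(-s2*z2) + 2*np.exp(z2/s2)*np.cosh(s32*z1)

y0 = [0.0, 0.0, s3/(2*s2), 3/(2*s2)]                  # initial data of figure 3
sol = solve_ivp(rhs, (0, 300), y0, rtol=1e-10, atol=1e-12)
print(Q2(np.array(y0)), Q2(sol.y[:, -1]))             # conserved energy
print(np.abs(sol.y[:2]).max())                        # coordinates stay bounded
```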
Thus while the trajectories resulting from the three and two particle \(A_{2}\)-Hamiltonians are all confined in phase space, this behaviour is different for those derived from the higher \(Q_{3}\)-charge, where only the trajectories for the reduced model are confined. The divergent behaviour was already reported in [7], where it was also conjectured that in the centre of mass system convergence might be achieved. Here we have shown explicitly that this conjecture is partially correct, in the sense that the system can be interpreted as being in the centre of mass, but the more accurate statement is to view the system as the reduction from three to two particles along with the change of the dimension of the representation space of the roots. One should say that the two particle picture of the \(A_{2}\)-theory is the more natural one, as for instance also in the closely related affine Toda quantum field theory the number of particles always equals the rank of the semi-simple Lie algebra [16, 17]. The mismatch between rank and particles simply results from the higher dimensional representation space of the simple roots. In [18] a similar reduction procedure was carried out by imposing additional constraints in order to "exorcise" Ostrogradski's ghosts. One may view the centre-of-mass condition as such a constraint, although here we have not employed Lagrange multipliers to implement them.
### Higher derivative Hamiltonians from the \(A_{6}\)-affine Toda lattice
Next we consider a system that possesses more than one higher charge. Specifying the general Lax operator in (2) to \(n=6\) and computing the traces over the products of this operator we calculate the seven independent charges (3)-(9). For the seven particle system with the roots taken in the fundamental representation we obtain the explicit expressions for the charges by replacing \(W_{i}^{2}\to V_{i,i+1}\). For instance, when taking all the roots in the standard representation the Hamiltonian acquires the form
\[H=\frac{1}{2}\sum_{i=1}^{7}p_{i}^{2}+\sum_{i=1}^{6}e^{q_{i}-q_{i+1}}+e^{q_{7}-q_{1}}, \tag{35}\]
with \(\alpha_{7}\) taken as the negative of the highest root. We also convince ourselves that all mutual Poisson brackets vanish.
We proceed now as for the \(A_{2}\)-system by interpreting all of the charges as Hamiltonians and solving their respective Hamilton's equations. For the seven particle system we find periodic solutions for the momenta and coordinates in the phase space of the standard Hamiltonian, as seen in figure 5. However, for all higher charges only the momenta remain periodic whereas the coordinates diverge. In figure 5 we present, as a sample solution for the phase space of the higher charges, the one for the \(Q_{6}\)-charge. In panel (d) we observe the divergence of all coordinates. This characteristic behaviour is shared by the solutions for all the other higher charge Hamiltonians, which we do not present here.
Similarly to the \(A_{2}\)-case, we attempt to eliminate the divergence by reducing the number of particles to the rank, that is from seven to six. For this purpose we solve the analogue of equation (20) with the \(A_{6}\)-Cartan matrix instead. Taking the \(\alpha\)-roots in the standard representation we find the orthogonal matrix
\[A=\left(\begin{array}{cccccc}\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{2}}&\frac{1} {2\sqrt{3}}&-\frac{1}{2\sqrt{5}}&-\frac{1}{\sqrt{42}}&-\frac{1}{\sqrt{30}}&- \frac{1}{\sqrt{7}}\\ -\sqrt{\frac{2}{3}}&0&\frac{1}{2\sqrt{3}}&-\frac{1}{2\sqrt{5}}&-\frac{1}{ \sqrt{42}}&-\frac{1}{\sqrt{30}}&-\frac{1}{\sqrt{7}}\\ \frac{1}{\sqrt{6}}&-\frac{1}{\sqrt{2}}&\frac{1}{2\sqrt{3}}&-\frac{1}{2\sqrt{5} }&-\frac{1}{\sqrt{42}}&-\frac{1}{\sqrt{30}}&-\frac{1}{\sqrt{7}}\\ 0&0&-\frac{\sqrt{3}}{2}&-\frac{1}{2\sqrt{5}}&-\frac{1}{\sqrt{42}}&-\frac{1}{ \sqrt{30}}&-\frac{1}{\sqrt{7}}\\ 0&0&0&\frac{2}{\sqrt{5}}&-\frac{1}{\sqrt{42}}&-\frac{1}{\sqrt{30}}&-\frac{1}{ \sqrt{7}}\\ 0&0&0&0&-\frac{1}{\sqrt{42}}&\sqrt{\frac{5}{6}}&-\frac{1}{\sqrt{7}}\\ 0&0&0&0&\sqrt{\frac{6}{7}}&0&-\frac{1}{\sqrt{7}}\end{array}\right), \tag{36}\]
together with the new six dimensional roots \(\beta_{i}=A^{-1}\alpha_{i}\)
\[\begin{array}{ll}\beta_{1}=\left(\sqrt{\frac{3}{2}},\frac{1}{\sqrt{2}},0,0,0,0,0\right),&\beta_{2}=\left(-\sqrt{\frac{3}{2}},\frac{1}{\sqrt{2}},0,0,0,0,0\right),\\ \beta_{3}=\left(\frac{1}{\sqrt{6}},-\frac{1}{\sqrt{2}},\frac{2}{\sqrt{3}},0,0,0,0\right),&\beta_{4}=\left(0,0,-\frac{\sqrt{3}}{2},-\frac{\sqrt{5}}{2},0,0,0\right),\\ \beta_{5}=\left(0,0,0,\frac{2}{\sqrt{5}},0,-\sqrt{\frac{6}{5}},0\right),&\beta_{6}=\left(0,0,0,0,-\sqrt{\frac{7}{6}},\sqrt{\frac{5}{6}},0\right).\end{array} \tag{37}\]
The corresponding coordinate transformations resulting from this are
\[q = (q_{1},q_{2},q_{3},q_{4},q_{5},q_{6},q_{7})\] \[= \left(\frac{\zeta_{1}}{\sqrt{6}}+\frac{\zeta_{2}}{\sqrt{2}}+\frac {\zeta_{3}}{2\sqrt{3}}-\frac{\zeta_{4}}{2\sqrt{5}}-\frac{\zeta_{5}}{\sqrt{42} }-\frac{\zeta_{6}}{\sqrt{30}},-\sqrt{\frac{2}{3}}\zeta_{1}+\frac{\zeta_{3}}{ 2\sqrt{3}}-\frac{\zeta_{4}}{2\sqrt{5}}-\frac{\zeta_{5}}{\sqrt{42}}-\frac{ \zeta_{6}}{\sqrt{30}},\right.\] \[\left.\frac{\zeta_{1}}{\sqrt{6}}-\frac{\zeta_{2}}{\sqrt{2}}+\frac {\zeta_{3}}{2\sqrt{3}}-\frac{\zeta_{4}}{2\sqrt{5}}-\frac{\zeta_{5}}{\sqrt{42} }-\frac{\zeta_{6}}{\sqrt{30}},-\frac{1}{2}\sqrt{3}\zeta_{3}-\frac{\zeta_{4}}{ 2\sqrt{5}}-\frac{\zeta_{5}}{\sqrt{42}}-\frac{\zeta_{6}}{\sqrt{30}},\right.\] \[\left.\frac{2\zeta_{4}}{\sqrt{5}}-\frac{\zeta_{5}}{\sqrt{42}}- \frac{\zeta_{6}}{\sqrt{30}},\sqrt{\frac{5}{6}}\zeta_{6}-\frac{\zeta_{5}}{\sqrt{4 2}},\sqrt{\frac{6}{7}}\zeta_{5}\right),\]
and
\[\zeta = (\zeta_{1},\zeta_{2},\zeta_{3},\zeta_{4},\zeta_{5},\zeta_{6},0)\] \[= \left(\frac{q_{1}-2q_{2}+q_{3}}{\sqrt{6}},\frac{q_{1}-q_{3}}{\sqrt {2}},\frac{q_{1}+q_{2}+q_{3}-3q_{4}}{2\sqrt{3}},-\frac{q_{1}+q_{2}+q_{3}+q_{4} -4q_{5}}{2\sqrt{5}},\right.\] \[\left.-\frac{q_{1}+q_{2}+q_{3}+q_{4}+q_{5}+q_{6}-6q_{7}}{\sqrt{42 }},-\frac{q_{1}+q_{2}+q_{3}+q_{4}+q_{5}-5q_{6}}{\sqrt{30}},-\frac{\sum_{i=1}^{7 }q_{i}}{\sqrt{7}}\right).\]
Since the last entry for \(\zeta\) above is zero, we note that once again the new coordinates transform the old ones to the centre-of-mass frame. The momenta are transformed in the same way, with \(p_{i}\rightarrow\eta_{i}\). The Hamiltonian now acquires the form
\[H = \frac{1}{2}\sum_{i=1}^{6}\eta_{i}^{2}+e^{\frac{\sqrt{2}\zeta_{2}-\sqrt{6}\zeta_{1}}{2}}+e^{\frac{\sqrt{6}\zeta_{1}+\sqrt{2}\zeta_{2}}{2}}+e^{\frac{\zeta_{1}}{\sqrt{6}}-\frac{\zeta_{2}}{\sqrt{2}}+\frac{2\zeta_{3}}{\sqrt{3}}}+e^{-\frac{1}{2}\sqrt{3}\zeta_{3}-\frac{\sqrt{5}\zeta_{4}}{2}}\] \[+e^{\sqrt{\frac{5}{6}}\zeta_{6}-\sqrt{\frac{7}{6}}\zeta_{5}}+e^{\frac{2\zeta_{4}-\sqrt{6}\zeta_{6}}{\sqrt{5}}}+e^{\frac{1}{30}\left(-5\sqrt{6}\zeta_{1}-15\sqrt{2}\zeta_{2}-5\sqrt{3}\zeta_{3}+3\sqrt{5}\zeta_{4}+5\sqrt{42}\zeta_{5}+\sqrt{30}\zeta_{6}\right)}.\]
In this reduced space all trajectories become benign as we observe in figure 6. We recognise once more that each of the solutions is made up of superposition of various quasi/almost periodic functions.
Thus all five higher charges of the \(A_{6}\)-affine Toda lattice theory when interpreted as Hamiltonians for a six particle system possess benign solutions of oscillatory type in their classical phase spaces.
Figure 5: \(A_{6}\)-affine Toda lattice phase spaces as functions of time \(t\) of the Hamiltonian, panels (a), (b), and the \(Q_{6}\)-charge Hamiltonian, panels (c), (d), with seven particles. The initial conditions are taken in both cases as \(q_{i}=0\), \(i=1,\ldots,7\) and \(p_{1}=-p_{2}=p_{3}=-p_{4}=p_{5}=-2p_{6}=2p_{7}=-1/2\).
## 3 Higher derivative Hamiltonians from the \(B_{3}\) affine Toda lattice
Many physical systems based on non-simply laced algebras display quite different behaviour from those based on simply laced ones. To find out whether this also holds for higher order derivative theories we will also investigate some sample representative theories based on non-simply laced algebras. We first recall how to obtain the latter.
### Reduction of the root spaces and charges
It is well known that non-simply laced Lie algebras can be obtained from a folding procedure of the associated Dynkin diagrams for a simply laced Lie algebra along a non-trivial automorphism [19, 20, 21, 22]. Here we use a reduction from the \(A_{6}\) root space \(\Delta_{A_{6}}\) to the \(B_{3}\) root space \(\hat{\Delta}_{B_{3}}\) with a subsequent reduction to the \(G_{2}\) root space \(\tilde{\Delta}_{G_{2}}\) previously constructed in [22]. Denoting the corresponding simple roots as \(\alpha_{i}\in\Delta_{A_{6}}\), \(i=1,\ldots,6\), \(\hat{\alpha}_{i}\in\hat{\Delta}_{B_{3}}\), \(i=1,2,3\) and \(\tilde{\alpha}_{i}\in\tilde{\Delta}_{G_{2}}\), \(i=1,2\), we define the following reduction maps and their inverses
\[\omega : \Delta_{A_{6}}\to\hat{\Delta}_{B_{3}},\qquad\alpha_{i}\mapsto \omega(\alpha_{i})=\left\{\begin{array}{ll}\hat{\alpha}_{i}&\mbox{for $i=1,2,3$}\\ \hat{\alpha}_{7-i}&\mbox{for $i=4,5,6$}\end{array}\right., \tag{11}\] \[\omega^{-1} : \hat{\Delta}_{B_{3}}\to\Delta_{A_{6}},\qquad\hat{\alpha}_{i} \mapsto\omega^{-1}(\hat{\alpha}_{i})=\alpha_{i}+\alpha_{7-i}\quad\mbox{for $i=1,2,3$},\] (12) \[\hat{\omega} : \hat{\Delta}_{B_{3}}\to\tilde{\Delta}_{G_{2}},\qquad\hat{\alpha}_ {i}\mapsto\hat{\omega}(\hat{\alpha}_{i})=\left\{\begin{array}{ll}\tilde{ \alpha}_{1}&\mbox{for $i=1,3$}\\ \tilde{\alpha}_{2}&\mbox{for $i=2$}\end{array}\right., \tag{13}\]
Figure 6: \(A_{6}\)-affine Toda lattice phase spaces as functions of time \(t\) of the Hamiltonian, panels (a), (b), and the \(Q_{6}\)-charge Hamiltonian, panels (c), (d), with six particles. The initial conditions are taken in both cases as \(\zeta_{i}=0\), \(i=1,\ldots,6\) and \(\eta_{1}=\eta_{3}=\eta_{5}=-3/\sqrt{6}\), \(\eta_{2}=\eta_{4}=\eta_{6}=3/2\sqrt{2}\).
\[\hat{\omega}^{-1}:\,\tilde{\Delta}_{G_{2}}\rightarrow\hat{\Delta}_{B_{3}},\qquad\tilde{\alpha}_{i}\mapsto\hat{\omega}^{-1}(\tilde{\alpha}_{i})=\left\{\begin{array}{ll}\hat{\alpha}_{1}+2\hat{\alpha}_{3}&\text{for $i=1$}\\ 3\hat{\alpha}_{2}&\text{for $i=2$}\end{array}\right.. \tag{3.4}\]
One may verify that the roots involved reproduce the respective Cartan matrices. The associated charges are then reduced by the appropriate actions of the coordinates and momenta according to
\[Q_{n}^{A_{6}}(q,p)\rightarrow\hat{Q}_{n}^{B_{3}}(\hat{q},\hat{p})=Q_{n}^{A_{6}}[\omega^{-1}(\hat{q}),\omega^{-1}(\hat{p})]\rightarrow\tilde{Q}_{n}^{G_{2}}(\tilde{q},\tilde{p})=\hat{Q}_{n}^{B_{3}}[\hat{\omega}^{-1}(\tilde{q}),\hat{\omega}^{-1}(\tilde{p})]. \tag{3.5}\]
We will employ the root systems from above, but will construct the \(G_{2}\)-charges in a different manner. Let us now see in detail how the consecutive steps are carried out.
### Higher derivative Hamiltonians from \(B_{3}\) affine Toda lattice theory
In order to define the reduced charges according to equation (3.5) we expand the coordinates of the \(B_{3}\)-system as \(\hat{q}=\hat{q}_{1}\hat{\alpha}_{1}+(\hat{q}_{1}+\hat{q}_{2})\hat{\alpha}_{2 }+(\hat{q}_{1}+\hat{q}_{2}+\hat{q}_{3})\hat{\alpha}_{3}\) and compute \(\omega^{-1}(\hat{q})\) using the defining relation for this map in (3.2). We expand the momenta in a similar fashion. Representing the \(A_{6}\)-roots in the standard seven dimensional Euclidean space as specified in section 2.2, we obtain in this manner the reduction of the coordinates and momenta
\[q \rightarrow \omega^{-1}(\hat{q})=(\hat{q}_{1},\hat{q}_{2},\hat{q}_{3},0,- \hat{q}_{3},-\hat{q}_{2},-\hat{q}_{1}), \tag{3.6}\] \[p \rightarrow \omega^{-1}(\hat{p})=(\hat{p}_{1},\hat{p}_{2},\hat{p}_{3},0,- \hat{p}_{3},-\hat{p}_{2},-\hat{p}_{1}), \tag{3.7}\]
respectively. We notice that when employing the new phase space variables we obtain another solution of the second equation in the Lax pair equations (2.2), with \(\hat{p}_{i}=\dot{\hat{q}}_{i}\) for \(i=1,2,3\). It is easily seen from (2.3)-(2.9) that with the replacements (3.6) and (3.7) the charges of odd order vanish
\[Q_{1}\rightarrow\hat{Q}_{1}=0,\qquad Q_{3}\rightarrow\hat{Q}_{3}=0,\qquad Q_{ 5}\rightarrow\hat{Q}_{5}=0, \tag{3.8}\]
and the remaining \(B_{3}\)-charges acquire the forms
\[Q_{2} \rightarrow \hat{Q}_{2}=\hat{H}=\sum_{i=1}^{3}\hat{p}_{i}^{2}+2e^{\hat{q}_{1}-\hat{q}_{2}}+2e^{\hat{q}_{2}-\hat{q}_{3}}+2e^{\hat{q}_{3}}+e^{-2\hat{q}_{1}} \tag{3.9}\] \[=\sum_{i=1}^{3}\hat{p}_{i}^{2}+\sum_{i=1}^{3}2e^{\hat{\alpha}_{i}\cdot\hat{q}}+e^{-(\hat{\gamma}+\hat{\alpha}_{1})\cdot\hat{q}} \tag{3.10}\] \[Q_{4} \rightarrow \hat{Q}_{4}=\frac{\hat{p}_{1}^{4}}{2}+\frac{\hat{p}_{2}^{4}}{2}+\frac{\hat{p}_{3}^{4}}{2}+\hat{p}_{1}^{2}e^{-2\hat{q}_{1}}+2\hat{p}_{1}^{2}e^{\hat{q}_{1}-\hat{q}_{2}}+2\hat{p}_{2}\hat{p}_{1}e^{\hat{q}_{1}-\hat{q}_{2}}+2\hat{p}_{2}^{2}e^{\hat{q}_{1}-\hat{q}_{2}} \tag{3.11}\] \[+2\hat{p}_{3}^{2}e^{\hat{q}_{2}-\hat{q}_{3}}+2\hat{p}_{3}^{2}e^{\hat{q}_{3}}+2\hat{p}_{2}\hat{p}_{3}e^{\hat{q}_{2}-\hat{q}_{3}}+\frac{1}{2}e^{-4\hat{q}_{1}}+e^{2\hat{q}_{1}-2\hat{q}_{2}}+2e^{-\hat{q}_{1}-\hat{q}_{2}}+2e^{\hat{q}_{2}}\] \[+e^{2\hat{q}_{2}-2\hat{q}_{3}}+2\hat{p}_{2}^{2}e^{\hat{q}_{2}-\hat{q}_{3}}+2e^{\hat{q}_{1}-\hat{q}_{3}}+2e^{2\hat{q}_{3}}\] \[Q_{6} \rightarrow \hat{Q}_{6}=\frac{\hat{p}_{1}^{6}}{3}+\frac{\hat{p}_{2}^{6}}{3}+\frac{\hat{p}_{3}^{6}}{3}+\frac{1}{3}e^{-6\hat{q}_{1}}+2e^{\hat{q}_{1}}+2e^{-3\hat{q}_{1}-\hat{q}_{2}}+\frac{2}{3}e^{3(\hat{q}_{1}-\hat{q}_{2})}+\frac{2}{3}e^{3(\hat{q}_{2}-\hat{q}_{3})}\] \[+\hat{p}_{1}^{4}e^{-2\hat{q}_{1}}+\hat{p}_{1}^{2}\left(e^{-4\hat{q}_{1}}+4e^{-\hat{q}_{1}-\hat{q}_{2}}+3e^{2(\hat{q}_{1}-\hat{q}_{2})}\right)+\hat{p}_{2}\hat{p}_{1}\left(2e^{-\hat{q}_{1}-\hat{q}_{2}}+4e^{2(\hat{q}_{1}-\hat{q}_{2})}\right)\] \[+2\hat{p}_{3}^{4}e^{\hat{q}_{3}}+\hat{p}_{2}^{2}\left(2e^{-\hat{q}_{1}-\hat{q}_{2}}+3e^{2(\hat{q}_{1}-\hat{q}_{2})}+2e^{\hat{q}_{2}}+3e^{2(\hat{q}_{2}-\hat{q}_{3})}\right)+\hat{p}_{2}\hat{p}_{3}\left(4e^{\hat{q}_{2}}+4e^{2(\hat{q}_{2}-\hat{q}_{3})}\right)\] \[+2\left(\hat{p}_{1}^{4}+\hat{p}_{2}\hat{p}_{1}^{3}+\hat{p}_{2}^{2}\hat{p}_{1}^{2}+\hat{p}_{2}^{3}\hat{p}_{1}+\hat{p}_{2}^{4}\right)e^{\hat{q}_{1}-\hat{q}_{2}}+2\left(\hat{p}_{2}^{4}+\hat{p}_{3}\hat{p}_{2}^{3}+\hat{p}_{3}^{2}\hat{p}_{2}^{2}+\hat{p}_{3}^{3}\hat{p}_{2}+\hat{p}_{3}^{4}\right)e^{\hat{q}_{2}-\hat{q}_{3}}\] \[+2\left(\hat{p}_{1}^{2}+\left(2\hat{p}_{2}+\hat{p}_{3}\right)\hat{p}_{1}+3\hat{p}_{2}^{2}+\hat{p}_{3}^{2}+2\hat{p}_{2}\hat{p}_{3}\right)e^{\hat{q}_{1}-\hat{q}_{3}}+\hat{p}_{3}^{2}\left(6e^{\hat{q}_{2}}+3e^{2(\hat{q}_{2}-\hat{q}_{3})}+4e^{2\hat{q}_{3}}\right)\]
\[+3e^{-2\hat{q}_{2}}+2e^{\hat{q}_{1}+\hat{q}_{2}-2\hat{q}_{3}}+2e^{-\hat{q}_{1}-\hat{ q}_{3}}+2e^{2\hat{q}_{1}-\hat{q}_{2}-\hat{q}_{3}}+2e^{2\hat{q}_{2}-\hat{q}_{3}}+ \frac{8}{3}e^{3\hat{q}_{3}}+4e^{\hat{q}_{2}+\hat{q}_{3}}\]
Here \(\hat{\gamma}\) is the highest root \(\hat{\gamma}=\hat{\alpha}_{1}+2\hat{\alpha}_{2}+2\hat{\alpha}_{3}\) in \(\hat{\Delta}_{B_{3}}\). We notice that unlike for the \(A_{n}\)-case the number of particles already matches the rank of \(B_{3}\) in the standard representation \(\hat{\alpha}_{1}=(1,-1,0)\), \(\hat{\alpha}_{2}=(0,1,-1)\) and \(\hat{\alpha}_{3}=(0,0,1)\).
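The vanishing of the odd charges in (3.8) is easily confirmed numerically; the sketch below (our illustration; numpy assumed) implements the substitutions (3.6)-(3.7) in \(Q_{1}\) and \(Q_{3}\), written with the replacement \(W_{i}^{2}\to V_{i,i+1}\), at a randomly chosen point of the reduced phase space.

```python
# A minimal sketch (assumption: numpy available): under the folding reduction
# (3.6)-(3.7) of the A_6-theory the odd charges Q_1 and Q_3 vanish, cf. (3.8).
import numpy as np

def V(q, i):                        # V_{i,i+1} = exp(q_i - q_{i+1}), indices mod 7
    return np.exp(q[i % 7] - q[(i + 1) % 7])

def Q1(q, p):
    return p.sum()

def Q3(q, p):                       # cf. the generic expression for Q_3 in section 2
    return sum(p[i]**3/3 + V(q, i)*(p[i] + p[(i + 1) % 7]) for i in range(7))

rng = np.random.default_rng(1)
qh, ph = rng.normal(size=3), rng.normal(size=3)
q = np.concatenate([qh, [0.0], -qh[::-1]])   # reduction (3.6)
p = np.concatenate([ph, [0.0], -ph[::-1]])   # reduction (3.7)
print(Q1(q, p), Q3(q, p))                    # both vanish up to rounding errors
```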
Figure 7: Affine \(B_{3}\)-Toda lattice phase space \((\hat{q}_{1},\hat{p}_{1})\) as functions of \(t\) for the standard Hamiltonian in panel (a), for \(Q_{4}\) taken as higher derivative Hamiltonian in panel (b) and for \(Q_{6}\) taken as higher derivative Hamiltonian in panel (c). The corresponding functions \(x_{i}(t)\) and \(p_{i}(t)\) are displayed in the respective panels \(a_{i},b_{i},c_{i}\) for \(i=1,\ldots,6\). For the initial conditions we always chose \(\hat{q}_{1}(0)=\hat{q}_{2}(0)=\hat{q}_{3}(0)=0\) and \(\hat{p}_{1}(0)=-0.1\), \(\hat{p}_{2}(0)=-0.2\), \(\hat{p}_{3}(0)=0.3\) in panels (a), \(\hat{p}_{1}(0)=0.5\), \(\hat{p}_{2}(0)=\hat{p}_{3}(0)=-0.25\) in panels (b) and \(\hat{p}_{1}(0)=1\), \(\hat{p}_{2}(0)=\hat{p}_{3}(0)=-0.5\) in panels (c). The quasi-periods are: panel (a): \(\tau_{H}\approx 7.9824\), panel (b): \(\tau_{Q_{4}}\approx 2.7181\), panel (c): \(\tau_{Q_{6}}\approx 0.4719\).
Again we interpret all charges as Hamiltonians and compute their corresponding phase spaces. As depicted in figure 7, all trajectories are benign and are confined in phase space.
As in this case the dimension of the standard representation already equals the rank of the algebra, there was no need for a reduction or the imposition of any constraints in order to obtain benign trajectories for the higher charge Hamiltonians. Nonetheless, for the sake of interest we consider now the reverse scenario and construct a theory in which the roots are represented in a higher dimensional space. Drawing on the \(A_{2}\)-example one might expect malevolent ghost trajectories in this case, but as we will demonstrate this is not the case.
Thus we solve once more equation (20) for the orthogonal matrix \(A\) and the four-dimensional roots \(\hat{\beta}_{i}\) reproducing the \(B_{3}\)-Cartan matrix
\[K=\left(\begin{array}{ccc}2&-1&0\\ -1&2&-2\\ 0&-1&2\\ \end{array}\right). \tag{31}\]
We find the four dimensional representation for the roots
\[\hat{\beta}_{1}=(1,0,1,0)\ \ \hat{\beta}_{2}=\left(-\frac{1}{2},\frac{\sqrt{3}} {2},-\frac{1}{2},\frac{\sqrt{3}}{2}\right)\ \ \hat{\beta}_{3}=\left(0,-\frac{2+\sqrt{2}}{2\sqrt{3}},0,-\frac{2-\sqrt{2}}{2 \sqrt{3}}\right), \tag{32}\]
together with the orthogonal matrix
\[\hat{A}=\left(\begin{array}{cccc}\frac{1}{2}&\frac{1-\sqrt{2}}{2\sqrt{3}}& \frac{1}{2}&\frac{1+\sqrt{2}}{2\sqrt{3}}\\ -\frac{1}{2}&\frac{1-\sqrt{2}}{2\sqrt{3}}&-\frac{1}{2}&\frac{1+\sqrt{2}}{2 \sqrt{3}}\\ 0&-\frac{2+\sqrt{2}}{2\sqrt{3}}&0&\frac{1-\sqrt{2}}{\sqrt{6}}\\ -\frac{1}{\sqrt{2}}&0&\frac{1}{\sqrt{2}}&0\\ \end{array}\right). \tag{33}\]
This in turn leads to the coordinate transformation
\[\hat{q} = (\hat{q}_{1},\hat{q}_{2},\hat{q}_{3},0)=\hat{A}(\hat{\rho}_{1},\hat{\rho}_{2},\hat{\rho}_{3},\hat{\rho}_{4})\] \[= \left(\frac{1}{2}\hat{\rho}_{1}+\frac{1-\sqrt{2}}{2\sqrt{3}}\hat{\rho}_{2}+\frac{1}{2}\hat{\rho}_{3}+\frac{1+\sqrt{2}}{2\sqrt{3}}\hat{\rho}_{4},-\frac{1}{2}\hat{\rho}_{1}+\frac{1-\sqrt{2}}{2\sqrt{3}}\hat{\rho}_{2}-\frac{1}{2}\hat{\rho}_{3}+\frac{1+\sqrt{2}}{2\sqrt{3}}\hat{\rho}_{4},\right.\] \[\left.-\frac{\sqrt{2}+1}{\sqrt{6}}\hat{\rho}_{2}+\frac{1-\sqrt{2}}{\sqrt{6}}\hat{\rho}_{4},\frac{1}{\sqrt{2}}(\hat{\rho}_{3}-\hat{\rho}_{1})\right).\]
Thus instead of the centre-of-mass constraint \(\sum_{i=1}^{4}\rho_{i}=0\) we have now the constraint \(\hat{\rho}_{1}=\hat{\rho}_{3}\) as we can read off from the last component in the four dimensional system. Indeed, when computing the new coordinates we find precisely this dependence in the first and third coordinate
\[\hat{\rho} = (\hat{\rho}_{1},\hat{\rho}_{2},\hat{\rho}_{3},\hat{\rho}_{4})=\hat{A}^{-1}(\hat{q}_{1},\hat{q}_{2},\hat{q}_{3},0)\] \[= \frac{1}{2}\left(\hat{q}_{1}-\hat{q}_{2},\frac{1-\sqrt{2}}{\sqrt{3}}(\hat{q}_{1}+\hat{q}_{2}+\hat{q}_{3})-\frac{3}{\sqrt{3}}\hat{q}_{3},\hat{q}_{1}-\hat{q}_{2},\frac{\sqrt{2}+1}{\sqrt{3}}(\hat{q}_{1}+\hat{q}_{2}+\hat{q}_{3})-\frac{3}{\sqrt{3}}\hat{q}_{3}\right).\]
When setting any other component in \(\hat{q}\) to zero we will obtain more complicated dependencies.
Figure 8: Affine \(B_{3}\)-Toda lattice phase space \((\hat{\rho}_{1},\hat{\xi}_{1})\) as functions of \(t\) for the standard Hamiltonian, panel (a), for \(Q_{4}\) taken as higher derivative Hamiltonian, panel (b), and for \(Q_{6}\) taken as higher derivative Hamiltonian, panel (c). The initial conditions are taken in all cases as \(\hat{\rho}(0)=(0,0,0,0),\hat{\xi}(0)=\frac{1}{10}(1,2,1,-4)\). The quasi-periods are: \(\tau_{H}\approx 28.80,\tau_{Q_{4}}\approx 14.99,\tau_{Q_{6}}\approx 4.103\).
Transforming now also the \(B_{3}\) charges into the new coordinates, we proceed as previously and compute the classical trajectories numerically. Our results are shown in figure 8. The main observation is that all solutions found are benign. Thus unlike for the \(A_{n}\) cases we do not encounter divergences in the case where the dimension of the root representation space does not match the rank of the algebra.
## 4 Higher derivative Hamiltonians from the \(G_{2}\)-affine Toda lattice
In order to compare and identify universal features we investigate now also the \(G_{2}\)-affine Toda lattice theory that is similar to the \(A_{2}\)-theory, in the sense that it has a natural three and two particle representation, with the former being the standard one. Its Hamiltonian in the standard form reads
\[\tilde{H}=\frac{1}{2}\tilde{p}^{2}+3e^{\alpha_{1}\cdot\tilde{q}}+2e^{\alpha_{ 2}\cdot\tilde{q}}+e^{\alpha_{0}\cdot\tilde{q}}, \tag{38}\]
where \(\alpha_{1}\) and \(\alpha_{2}\) are the two simple roots of \(G_{2}\) and \(\alpha_{0}=-3\alpha_{1}-2\alpha_{2}\) is the negative of the highest \(G_{2}\)-root. In order to construct the higher charges we may use the reduction from \(B_{3}\) or directly construct a higher order expression from a suitable Ansatz whose Poisson bracket vanishes with \(\tilde{H}\). Here we do not use the folding procedure, as for it to apply in this case the Lax pair has to be slightly modified, but instead we use the latter approach. Taking initially the standard representation for the simple roots \(\alpha_{1}=(1,-1,0)\) and \(\alpha_{2}=(-2,1,1)\), [23], we find the non-trivial independent charges
\[\tilde{Q}_{1} = \tilde{p}_{1}+\tilde{p}_{2}+\tilde{p}_{3}, \tag{39}\] \[\tilde{Q}_{6} = \sum_{i,j=1}^{3}\frac{1}{6}\tilde{p}_{i}^{6}+\frac{3}{14}(\tilde{p}_{i}^{4}\tilde{p}_{i+1}^{2}+\tilde{p}_{i}^{2}\tilde{p}_{i+1}^{4})+\frac{10}{21}\tilde{p}_{i}^{3}\tilde{p}_{i+1}^{3}+\frac{6}{7}\tilde{p}_{i}\tilde{p}_{i+1}\tilde{p}_{i+2}+c_{j}\frac{n_{j}^{3}}{7}e^{3\tilde{\alpha}_{j}\cdot\tilde{q}}\] \[+\frac{n_{j}n_{j+1}}{7}e^{(\tilde{\alpha}_{j}+\tilde{\alpha}_{j+1})\cdot\tilde{q}}\left(c_{j}^{(1)}n_{j}e^{\tilde{\alpha}_{j}\cdot\tilde{q}}+c_{j}^{(2)}n_{j+1}e^{\tilde{\alpha}_{j+1}\cdot\tilde{q}}+c_{ij}^{(3)}\tilde{p}_{i}\tilde{p}_{j+1}+c_{ij}^{(4)}\tilde{p}_{i}^{2}\right)\] \[+\frac{n_{j}}{7}e^{\tilde{\alpha}_{j}\cdot\tilde{q}}\left(c_{ij}^{(5)}\tilde{p}_{i}^{4}+c_{ij}^{(6)}\tilde{p}_{i}^{2}\tilde{p}_{i+1}^{2}+c_{ij}^{(7)}\tilde{p}_{i}^{3}\tilde{p}_{i+1}+c_{ij}^{(8)}\tilde{p}_{i}\tilde{p}_{i+1}^{3}+c_{ij}^{(9)}\tilde{p}_{i}^{2}\tilde{p}_{i+1}\tilde{p}_{i+2}\right)\] \[+\frac{n_{j}^{2}}{7}e^{2\tilde{\alpha}_{j}\cdot\tilde{q}}\left(c_{ij}^{(10)}\tilde{p}_{i}^{2}+c_{ij}^{(11)}\tilde{p}_{i}\tilde{p}_{i+1}\right),\]
with abbreviations \(c=(2,6,6)\), \(c^{(1)}=(6,18,-18)\), \(c^{(2)}=(18,18,42)\),
\[c^{(3)}=\begin{pmatrix}22&-&-20\\ 46&-2&40\\ 10&34&40\end{pmatrix},\quad c^{(4)}=\begin{pmatrix}26&26&32\\ 8&44&32\\ 14&26&2\end{pmatrix},\;\;c^{(5)}=\begin{pmatrix}7&7&7\\ 7&7&7\\ 3&7&7\end{pmatrix},\;\;c^{(6)}=\begin{pmatrix}0&2&20\\ 6&20&2\\ 6&0&0\end{pmatrix},\] \[c^{(7)}=\begin{pmatrix}4&2&-4\\ 0&-4&8\\ 10&0&0\end{pmatrix},\;\;\;c^{(8)}=\begin{pmatrix}4&8&-4\\ 10&-4&2\\ 0&0&0\end{pmatrix},\;\;\;c^{(9)}=\begin{pmatrix}0&2&-16\\ 0&-16&-16\\ -30&-16&2\end{pmatrix},\] \[c^{(10)}=\begin{pmatrix}12&13&4\\ 12&4&4\\ 21&4&13\end{pmatrix},\;\;\;c^{(11)}=\begin{pmatrix}18&8&26\\ 0&26&8\\ 0&8&8\end{pmatrix}.\]
These charges are all in involution with the Hamiltonian \(\tilde{H}\) and with each other. We note
that there is no non-trivial independent charge \(\tilde{Q}_{4}\), as the only quantity that one can construct at that order is proportional to \(\tilde{H}^{2}\).
We solve the corresponding equations of motion for \(\tilde{H}\)
\[\begin{array}{l}\dot{\tilde{q}}_{1}=\tilde{p}_{1},\quad\dot{\tilde{q}}_{2}=\tilde{p}_{2},\quad\dot{\tilde{q}}_{3}=\tilde{p}_{3},\quad\dot{\tilde{p}}_{1}=4e^{-2\tilde{q}_{1}+\tilde{q}_{2}+\tilde{q}_{3}}-3e^{\tilde{q}_{1}-\tilde{q}_{2}}-e^{\tilde{q}_{1}+\tilde{q}_{2}-2\tilde{q}_{3}},\\ \dot{\tilde{p}}_{2}=3e^{\tilde{q}_{1}-\tilde{q}_{2}}-e^{\tilde{q}_{1}+\tilde{q}_{2}-2\tilde{q}_{3}}-2e^{-2\tilde{q}_{1}+\tilde{q}_{2}+\tilde{q}_{3}},\quad\dot{\tilde{p}}_{3}=2e^{\tilde{q}_{1}+\tilde{q}_{2}-2\tilde{q}_{3}}-2e^{-2\tilde{q}_{1}+\tilde{q}_{2}+\tilde{q}_{3}}.\end{array} \tag{43}\]
and \(\tilde{Q}_{6}\), which we will not report here, numerically, and depict our results in figure 9.
Once again we identify almost periodic motions from the trajectories in both cases with periods numerically computed as \(\tau_{H}\approx 7.04819143\) and \(\tau_{Q6}\approx 0.10294605\) for the phase spaces of \(\tilde{H}\) and \(\tilde{Q}_{6}\), respectively. In our concrete solutions for the equations of motion for \(\tilde{H}\) we find \(|\zeta_{1}(0)-\zeta_{1}(\tau_{H})|\approx 2.3\times 10^{-9}\), \(|\zeta_{2}(0)-\zeta_{2}(\tau_{H})|=|\zeta_{3}(0)-\zeta_{3}(\tau_{H})|\approx 0.0229121\), \(|\eta_{1}(0)-\eta_{1}(\tau_{H})|\approx 0.0122752\), \(|\eta_{2}(0)-\eta_{2}(\tau_{H})|\approx 0.009117525\) and \(|\eta_{3}(0)-\eta_{3}(\tau_{H})|\approx 0.0031\). As previously, in our depiction we distinguish between the first almost period and some further periods that illustrate how the phase space is gradually filled inward and outwardly. Comparing the \(\tilde{H}\) and \(\tilde{Q}_{6}\) phase spaces we notice that for large time the same confined region in phase space will be filled out.
Furthermore we notice that, unlike in the \(A_{2}\)-case, even when the particle number does not match the rank of the algebra the motion is of a benign nature.
Thus in principle there is no need for a dimensional reduction to the centre-of-mass frame from the point of view of obtaining finite trajectories, but for completeness we also analyse that case. For this purpose we need to solve once more equation (20) for the orthogonal matrix \(A\) and the two-dimensional roots \(\beta_{i}\), but now involving the \(G_{2}\)-Cartan matrix
\[K=\left(\begin{array}{cc}2&-1\\ -3&2\end{array}\right). \tag{44}\]
In this case we find the solutions
\[A=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{6}}& \frac{1}{\sqrt{3}}\\ -\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}\\ 0&\sqrt{\frac{2}{3}}&\frac{1}{\sqrt{3}}\end{array}\right),\quad\beta_{1}= \left(\sqrt{2},0,0\right),\quad\beta_{2}=\left(-\frac{3}{\sqrt{2}},\sqrt{ \frac{3}{2}},0\right), \tag{4.6}\]
for the orthogonal matrix and the three dimensional roots.
Then, according to (2.22) and (2.23) the coordinates and momenta transform as
\[\tilde{q} = \left(\frac{\tilde{\zeta}_{1}}{\sqrt{2}}-\frac{\tilde{\zeta}_{2}} {\sqrt{6}},-\frac{\tilde{\zeta}_{1}}{\sqrt{2}}-\frac{\tilde{\zeta}_{2}}{\sqrt {6}},\sqrt{\frac{2}{3}}\tilde{\zeta}_{2}\right)=(\tilde{q}_{1},\tilde{q}_{2}, \tilde{q}_{3}), \tag{4.7}\] \[\tilde{p} = \left(\frac{\tilde{\eta}_{1}}{\sqrt{2}}-\frac{\tilde{\eta}_{2}} {\sqrt{6}},-\frac{\tilde{\eta}_{1}}{\sqrt{2}}-\frac{\tilde{\eta}_{2}}{\sqrt{6 }},\sqrt{\frac{2}{3}}\tilde{\eta}_{2}\right)=(\tilde{p}_{1},\tilde{p}_{2}, \tilde{p}_{3}), \tag{4.8}\]
and in reverse as
\[\tilde{\zeta} = \left(\frac{\tilde{q}_{1}-\tilde{q}_{2}}{\sqrt{2}},-\frac{\tilde {q}_{1}+\tilde{q}_{2}-2\tilde{q}_{3}}{\sqrt{6}},\frac{\tilde{q}_{1}+\tilde{q} _{2}+\tilde{q}_{3}}{\sqrt{3}}\right)=(\tilde{\zeta}_{1},\tilde{\zeta}_{2},0), \tag{4.9}\] \[\tilde{\eta} = \left(\frac{\tilde{p}_{1}-\tilde{p}_{2}}{\sqrt{2}},-\frac{\tilde {p}_{1}+\tilde{p}_{2}-2\tilde{p}_{3}}{\sqrt{6}},\frac{\tilde{p}_{1}+\tilde{p} _{2}+\tilde{p}_{3}}{\sqrt{3}}\right)=(\tilde{\eta}_{1},\tilde{\eta}_{2},0). \tag{4.10}\]
Transforming the charges according to (4.7) and (4.8), the equations of motion for the Hamiltonian \(\tilde{H}(\tilde{\zeta},\tilde{\eta})\) become
\[\begin{split}\dot{\tilde{\zeta}}_{1}&=\tilde{\eta}_{1},\quad\dot{\tilde{\zeta}}_{2}=\tilde{\eta}_{2},\\ \dot{\tilde{\eta}}_{1}&=3\sqrt{2}e^{\sqrt{\frac{3}{2} }\tilde{\zeta}_{2}-\frac{3}{\sqrt{2}}\tilde{\zeta}_{1}}-3\sqrt{2}e^{\sqrt{2} \tilde{\zeta}_{1}},\quad\dot{\tilde{\eta}}_{2}=\sqrt{6}e^{-\sqrt{6}\tilde{ \zeta}_{2}}-\sqrt{6}e^{\sqrt{\frac{3}{2}}\tilde{\zeta}_{2}-\frac{3}{\sqrt{2}} \tilde{\zeta}_{1}},\end{split} \tag{13}\]
and similarly for the \(\tilde{Q}_{6}\)-charge Hamiltonian that we do not report. We solve these equations numerically with the results depicted in figure 10. Once more we find almost periodic solutions with the same period as in the three dimensional case for all charges.
Remarkably, for some special initial conditions we can also identify fully periodic solutions. The numerical solutions for the three dimensional case with the special initial conditions \(\tilde{p}_{2}(0)=\tilde{p}_{3}(0)\) are depicted in figure 11 panel (a). We observe that the periodic solutions for \(i=2\) and \(i=3\) become identical. Moreover, the trajectories for the \(\tilde{H}\) and \(\tilde{Q}_{6}\) phase spaces coincide. However, as seen in the insets, the periods differ by orders of magnitude with \(\tau_{H}\approx 1.4210841\) and \(\tau_{Q_{6}}\approx 0.0068228\).
Similar features are observed for the dimensionally reduced case depicted in panel (b). Since \(\tilde{\eta}_{1}=(\tilde{p}_{1}-\tilde{p}_{2})/\sqrt{2}\) and \(\tilde{\eta}_{2}=-(\tilde{p}_{1}+\tilde{p}_{2}-2\tilde{p}_{3})/\sqrt{6}\), the condition \(\tilde{p}_{2}(0)=\tilde{p}_{3}(0)\) translates into \(\tilde{\eta}_{2}(0)=-\tilde{\eta}_{1}(0)/\sqrt{3}\).
## 5 Non-integrable perturbations
### Sensitivity of the initial conditions
There are various possibilities to investigate the stability of the above benign ghost solutions. The most delicate way to perturb them is to just vary the initial conditions. So far we have always chosen them to be compatible with the centre-of-mass frame conditions \(\sum_{i}p_{i}=0\) and \(\sum_{i}q_{i}=0\) or other constraints that result from reducing the dimensionality of the representation space for the roots. It turns out that in the representations for which the dimensions do not match up with the rank of the algebra the trajectories are rather sensitive towards these impositions. In figure 12 panels (a) to (c) we see that even a very
small violation of the condition \(\sum_{i=1}^{3}p_{i}=0\) leads to the divergence of the trajectories in the \(x_{i}\)-directions in the phase spaces even for the Hamiltonians \(H\) and \(\tilde{H}\) for \(A_{2}\) and \(G_{2}\)-theories as well as for the \(\tilde{Q}_{6}\)-charge of \(G_{2}\). In panels (d) to (f) we observe that the trajectories for all independent \(B_{3}\)-charges remain benign when \(\sum_{i}p_{i}\neq 0\). The violation of \(\sum_{i}q_{i}=0\) does not produce this effect.
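For the standard \(A_{2}\)-Hamiltonian the mechanism is transparent: since \(\sum_{i}\dot{p}_{i}=0\) for the flow (16), a violation \(\sum_{i}p_{i}=\delta\neq 0\) is preserved in time and makes the centre of mass, and with it all coordinates, drift linearly. A minimal sketch (our illustration; scipy assumed):

```python
# A minimal sketch (assumption: scipy available): breaking sum_i p_i = 0 in the
# three-particle A_2 flow (16) produces a linear centre-of-mass drift ~ delta*t/3.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    q1, q2, q3, p1, p2, p3 = y
    V12, V23, V31 = np.exp(q1 - q2), np.exp(q2 - q3), np.exp(q3 - q1)
    return [p1, p2, p3, V31 - V12, V12 - V23, V23 - V31]

for delta in [0.0, 0.05]:                    # delta = sum_i p_i(0)
    y0 = [0, 0, 0, 1.0, -0.5, -0.5 + delta]
    sol = solve_ivp(rhs, (0, 300), y0, rtol=1e-9, atol=1e-11)
    com = sol.y[:3].sum(axis=0)/3            # centre-of-mass coordinate
    print(delta, com[-1])                    # 0 for delta = 0, ~ 5 for delta = 0.05
```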
In contrast, the trajectories in the two dimensional phase spaces are rather robust against very large perturbations with \(Q_{1}(0)\neq 0\), as seen in figure 13 panels (a) and (b), where we present the \(G_{2}\)-case. In this case the divergence would occur in the third component, which has already been set to zero in the construction. The trajectories may be compared to the unperturbed case presented in figure 10.

Figure 12: Three dimensional phase space solutions for the coordinates as functions of \(t\) with initial conditions \(\sum_{i=1}^{3}p_{i}\neq 0\). Panels (a), (b), (c): Malevolent \(A_{2}\)-solutions for the Hamiltonian \(H\), \(G_{2}\)-solutions for the Hamiltonian \(\tilde{H}\) and the charge \(\tilde{Q}_{6}\), respectively, all with initial conditions \(\tilde{q}_{1}(0)=\tilde{q}_{2}(0)=\tilde{q}_{3}(0)=0\), \(\tilde{p}_{1}(0)=-1\), \(\tilde{p}_{2}(0)=-2\), \(\tilde{p}_{3}(0)=3+0.05\). Panels (d), (e), (f): Benign \(B_{3}\)-solutions for the Hamiltonian \(\hat{H}\) and the charges \(\hat{Q}_{4}\), \(\hat{Q}_{6}\), respectively; all initial conditions are taken as \(\hat{q}_{1}(0)=\hat{q}_{2}(0)=\hat{q}_{3}(0)=0\), \(\hat{p}_{1}(0)=0.2\), \(\hat{p}_{2}(0)=-1/2\), \(\hat{p}_{3}(0)=1/2\).
### Breaking of the integrability
There are of course many more options to perturb the higher derivative charge Hamiltonians, by adding small terms to them or even by deforming them with additional terms of the same or larger magnitude. Here we only present one example to illustrate the general feature and to establish the robustness of some of the higher charge Hamiltonian trajectories. We leave a more systematic presentation to future investigations [24]. We consider a Hamiltonian in the form of a soft deformation of the \(Q_{3}\)-charge in the \(A_{2}\)-theory in the two-dimensional representation (28), obtained by adding a two-dimensional harmonic oscillator potential
\[Q_{3}^{p}\left(\zeta_{1},\zeta_{2},\eta_{1},\eta_{2}\right)=Q_{3}\left(\zeta_ {1},\zeta_{2},\eta_{1},\eta_{2}\right)+\epsilon\frac{1}{2}\left(\zeta_{1}^{2 }+\zeta_{2}^{2}\right). \tag{109}\]
The solutions of the corresponding equations of motion \(\zeta_{i}(t)\), \(\eta_{i}(t)\) with \(i=1,2\) as functions of time are depicted in figure 14 for several typical values of \(\epsilon\). We still find the previously observed superposition of frequencies, which vary with \(\epsilon\). A transition between rather different types of qualitative behaviour is seen at \(\epsilon_{c}\approx 0.8944\). When approaching this value from below, the maximal values for \(\zeta_{i}(t)\) and \(\eta_{i}(t)\) increase smoothly and the functions become more localised, but once \(\epsilon_{c}\) is passed these values drop significantly and oscillations re-occur. At this point we do not have a proper explanation of this behaviour.
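For reference, a sketch of the deformed flow (our illustration; scipy assumed, with the initial data of figure 14 and illustrative values of \(\epsilon\)): the harmonic term in (109) simply adds \(-\epsilon\zeta_{i}\) to the \(\eta\)-equations of the \(Q_{3}\)-flow (31)-(34).

```python
# A minimal sketch (assumption: scipy available) of the deformed Q_3-flow (109);
# the maximal coordinate excursion is a crude diagnostic across eps_c ~ 0.8944.
import numpy as np
from scipy.integrate import solve_ivp

s2, s3, s6, s32 = np.sqrt(2.0), np.sqrt(3.0), np.sqrt(6.0), np.sqrt(1.5)

def rhs(t, y, eps):
    z1, z2, e1, e2 = y
    Em, Ep = np.exp(-s2*z2), np.exp(z2/s2)
    ch, sh = np.cosh(s32*z1), np.sinh(s32*z1)
    return [(2*Em - 2*Ep*ch - e1**2 + e2**2)/s6,
            s2*Ep*sh + np.sqrt(2/3)*e1*e2,
            Ep*(e1*sh - s3*e2*ch) - eps*z1,               # deformation term
            2*Em*e1/s3 + Ep*(s3*e1*ch - 3*e2*sh)/3 - eps*z2]

y0 = [0.0, 0.0, -3/s6, 3/(2*s2)]                          # as in figure 14
for eps in [0.0, 0.5, 0.89, 0.90, 2.0]:
    sol = solve_ivp(rhs, (0, 200), y0, args=(eps,), rtol=1e-9, atol=1e-11)
    print(eps, np.abs(sol.y[:2]).max())                   # coordinate excursion
```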
Naturally there are many more options to break the integrability of these systems [24].
Figure 14: Two-dimensional phase space components \(\zeta_{i}\) and \(\eta_{i}\) with \(i=1,2\) for the deformed affine \(A_{2}\)-Toda lattice charge Hamiltonian \(Q_{3}^{p}\) in (109) as functions of time \(t\) for different values of \(\epsilon\). The initial conditions are taken in all cases as \(\zeta_{1}=\zeta_{2}=0\), \(\eta_{1}=-3/\sqrt{6}\) and \(\eta_{2}=3/2\sqrt{2}\).
## 6 Conclusions
We investigated many more examples that support the conjecture, originally put forward by Smilga [9], that charges of integrable systems provide very promising candidates for higher derivative theories that possess benign ghost sectors in part or all of their parameter space. For our examples of affine Toda lattice theories associated to different algebras we found a multitude of possible scenarios. We demonstrated that a proper choice of the dimension of the representation space is crucial. In particular, for the theories whose roots are represented in the same dimension as the rank of the underlying algebra, we found benign solutions for the higher charge Hamiltonians. Moreover, these solutions were quite robust with regard to perturbations of the choice of the initial conditions. When deviating from this setup and using higher dimensional representations for the roots, we found that the coordinate solutions diverge as functions of time \(t\) for all the \(A_{n}\) examples investigated. However, for the non-simply laced algebras \(G_{2}\) and \(B_{3}\) the trajectories for the higher charge Hamiltonians stayed benign even in the higher dimensional cases, with the difference that the former were very sensitive to changes of the initial conditions whereas the latter turned out to be stable. So far these features remain at the level of observations and we cannot provide a deeper reason for these types of behaviour. Based on the data generated so far, it is too early to extract more generic features that might be shared by some class of systems, e.g. simply laced versus non-simply laced. We leave these aspects for future investigations.
We have also investigated some more extreme deformations of the integrable systems by a soft harmonic oscillator potential in section 5.2. As noted previously for different types of perturbations [9], many of the benign trajectories found maintain this feature, but we also observed a critical point in the strength \(\epsilon\) of these additional terms, with an extreme sensitivity of the characteristic behaviour of the phase space trajectories. It seems worthwhile to carry out systematic investigations that clarify which types of perturbations, and even deformations, might be permitted in order to maintain the benign nature of the solutions [24].
Our results are summarised in table 1.
\begin{table}
\begin{tabular}{l||l|l|l|l|l|l|l|l|l} \(Q\) (D of rep) \(\backslash\)**g** & \(A_{2}\) & \(G_{2}\) & \(B_{3}\) & \(A_{6}\) & \(A_{2},p_{1}\) & \(G_{2},p_{1}\) & \(B_{3},p_{1}\) & \(A_{6},p_{1}\) & \(A_{2},p_{2}\) \\ \hline \hline \(H\left(r+1\right)\) & b & b & b & b & m & m & b & m & \\ \(H\left(r\right)\) & b & b & b & b & b & b & b & b & \\ \(Q_{3}\left(r+1\right)\) & m & \(\times\) & \(\times\) & m & m & \(\times\) & \(\times\) & m & \\ \(Q_{3}\left(r\right)\) & b & \(\times\) & \(\times\) & b & b & \(\times\) & \(\times\) & b & b \\ \(Q_{4}\left(r+1\right)\) & & \(\times\) & b & m & & \(\times\) & b & m & \\ \(Q_{4}\left(r\right)\) & & \(\times\) & b & b & & \(\times\) & b & b & \\ \(Q_{6}\left(r+1\right)\) & & b & b & m & & m & b & m & \\ \(Q_{6}\left(r\right)\) & & b & b & b & b & b & b & b & \\ \end{tabular}
\end{table}
Table 1: Summary of results. (b \(\equiv\) benign, m \(\equiv\) malevolent, \(\times\) \(\equiv\) charge does not exist, r \(\equiv\) rank of **g**, \(p_{1}\equiv\) perturbation with \(Q_{1}(0)\neq 0\), \(p_{2}\equiv\) perturbation with harmonic oscillator potential)
**Acknowledgments:** BT is supported by a City, University of London Research Fellowship. AF thanks the Instituto de Ciencias Físicas y Matemáticas of the Universidad Austral de Chile, where part of this work was completed, for their kind hospitality, and Francisco Correa for financial support.
|
2308.05669
|
A review of planetary systems around HD 99492, HD 147379 and HD 190007
with HARPS-N
|
The Rocky Planet Search (RPS) program is dedicated to a blind radial velocity
(RV) search of planets around bright stars in the Northern hemisphere, using
the high-resolution echelle spectrograph HARPS-N installed on the Telescopio
Nazionale Galileo (TNG).
The goal of this work is to revise and update the properties of three
planetary systems by analysing the HARPS-N data with state-of-the-art stellar
activity mitigation tools. The stars considered are HD 99492 (83Leo B), HD
147379 (Gl617 A) and HD 190007.
We employ a systematic process of data modelling, selected from the
comparison of different approaches. We use YARARA to remove instrumental
systematics from the RV, and then use SPLEAF to further mitigate the stellar
noise with a multidimensional correlated noise model. We also search for
transit features in the Transiting Exoplanets Survey Satellite (TESS) data of
these stars.
We report on the discovery of a new planet around HD 99492, namely HD 99492
c, with an orbital period of 95.2 days and a minimum mass of msin i = 17.9
M_Earth, and refine the parameters of HD 99492 b. We also update and refine the
Keplerian solutions for the planets around HD 147379 and HD 190007, but do not
detect additional planetary signals. We discard the transiting geometry for the
planets, but stress that TESS did not exhaustively cover all the orbital
phases.
The addition of the HARPS-N data, and the use of advanced data analysis
tools, has allowed us to present a more precise view of these three planetary
systems. It demonstrates once again the importance of long observational
efforts such as the RPS program. Added to the RV exoplanet sample, these
planets populate two apparently distinct populations revealed by a bimodality
in the planets minimum mass distribution. The separation is located between 30
and 50 M_Earth.
|
M. Stalport, M. Cretignier, S. Udry, A. Anna John, T. G. Wilson, J. -B. Delisle, A. S. Bonomo, L. A. Buchhave, D. Charbonneau, S. Dalal, M. Damasso, L. Di Fabrizio, X. Dumusque, A. Fiorenzano, A. Harutyunyan, R. D. Haywood, D. W. Latham, M. López-Morales, V. Lorenzi, C. Lovis, L. Malavolta, E. Molinari, A. Mortier, M. Pedani, F. Pepe, M. Pinamonti, E. Poretti, K. Rice, A. Sozzetti
|
2023-08-10T16:14:44Z
|
http://arxiv.org/abs/2308.05669v1
|
# A review of planetary systems around HD 99492, HD 147379 and HD 190007 with HARPS-N
###### Abstract
Context:The Rocky Planet Search (RPS) program is dedicated to a blind radial velocity (RV) search of planets around bright stars in the Northern hemisphere, using the high-resolution echelle spectrograph HARPS-N installed on the Telescopio Nazionale Galileo (TNG).
Aims:The goal of this work is to revise and update the properties of three planetary systems by analysing the HARPS-N data with state-of-the-art stellar activity mitigation tools. The stars considered are HD 99492 (83Leo B), HD 147379 (Gl617 A) and HD 190007.
Methods:We employ a systematic process of data modelling, selected from the comparison of different approaches. We use YARARA to remove instrumental systematics from the RV, and then use SPLEAF to further mitigate the stellar noise with a multidimensional correlated noise model. We also search for transit features in the Transiting Exoplanets Survey Satellite (TESS) data of these stars.
Results:We report on the discovery of a new planet around HD 99492, namely HD 99492 c, with an orbital period of 95.2 days and a minimum mass of \(m\,\sin i=17.9\,M_{\oplus}\), and refine the parameters of HD 99492 b. We also update and refine the Keplerian solutions for the planets around HD 147379 and HD 190007, but do not detect additional planetary signals. We discard the transiting geometry for the planets, but stress that TESS did not exhaustively cover all the orbital phases.
Conclusions:The addition of the HARPS-N data, and the use of advanced data analysis tools, has allowed us to present a more precise view of these three planetary systems. It demonstrates once again the importance of long observational efforts such as the RPS program. Added to the RV exoplanet sample, these planets populate two apparently distinct populations revealed by a bimodality in the planets' minimum mass distribution. The separation is located between 30 and 50 \(M_{\oplus}\).
## 1 Introduction
Since the early 2000s, the HARPS spectrograph (Pepe et al. 2000; Mayor et al. 2003), installed on the ESO-3.6m telescope at La Silla observatory, Chile, has seen numerous breakthroughs in the detection and characterisation of new worlds. Initially with a Guaranteed Time Observation (GTO) programme, and then an ESO Large Programme, the instrument dedicated its time to observe a large sample of stars. This blind planet search was performed on a well-chosen sample of main
sequence and quiet stars. Among the principal results of this HARPS survey was the discovery of a new category of exoplanets, the sub-Neptune and Super-Earth planets on close-in orbits (Mayor & Udry, 2008; Lovis et al., 2009). A significant number of detections later suggested that these planets are among the most abundant in the exoplanet population (e.g. Mayor et al., 2011; Lovis et al., 2011; Lo Curto et al., 2013; Bonfils et al., 2013; Astudillo-Defru et al., 2017; Delisle et al., 2018; Unger et al., 2021). Their over-abundance was strengthened by the Kepler space telescope, which scrutinised a small area of the Northern sky for 3.5 years in search of transiting planets (Latham et al., 2011; Borucki et al., 2011; Fabrycky et al., 2014).
The successful HARPS story motivated the development of a 'HARPS twin' in the Northern hemisphere, HARPS-N (Cosentino et al., 2012, 2014). This high resolution spectrograph is mounted on the 3.6m Telescopio Nazionale Galileo (TNG), located at the Roque de Los Muchachos observatory on the La Palma Island, Spain. The instrument is embedded in a vacuum-controlled tank, ensuring a high level of stability in temperature and air pressure. It covers the wavelength range 383 - 693 nm, and reaches a spectral resolution of R = 115 000. The instrument was built with two primary science goals. 1) To ensure a synergy with the Kepler (Borucki et al., 2010), then K2 (Howell et al., 2014), TESS (Ricker et al., 2015) and CHEOPS (Benz et al., 2021) missions. This is achieved via a precise RV follow-up of promising transiting candidates. Ultimately, such a strategy provides the community with bulk density measurements of planets, which are essential to precisely characterise the planet population and constrain formation and evolution processes (e.g. Rajpaul et al., 2017; Malavolta et al., 2018; Bonomo et al., 2019; Cloutier et al., 2020; Mortier et al., 2020; Lacedelli et al., 2021). 2) To undertake a blind RV search for low-mass exoplanets, similar to the historical program of HARPS but in the Northern sky. To achieve this, we started the Rocky Planet Search (RPS) program with HARPS-N. It consists of a sample of 58 bright stars widely spread in right ascension, many of which present low levels of activity and are hence amenable to precise planet characterisation (see Motalebi et al., 2015, for a presentation of the RPS program). Unlike the HARPS sample, however, we highlight that some targets with moderate activity are also part of this sample, to serve as test cases for the development of tools aimed at mitigating stellar activity (e.g. Cretignier et al., 2020; Collier Cameron et al., 2021; de Beurs et al., 2022) in parallel with the constant RV monitoring of the Sun (Collier Cameron et al., 2019; Dumusque et al., 2021).
In order to fulfil these scientific objectives and discover low mass exoplanets, stellar activity has to be treated carefully. The RV amplitude induced by small planets on host main-sequence quiet stars is comparable to the contaminating effect of stellar activity. In order to mitigate the latter, we first adapt the observational strategy of the RPS sample. Following the prescriptions of Dumusque et al. (2011), we systematically took two exposures of fifteen minutes each per night, with at least two hours between each exposure. This allows us to mitigate noise emanating from stellar oscillations and granulation on different scales (Dumusque et al., 2011; Chaplin et al., 2019). Coupling this observational strategy with intense monitoring led to the detection of four planets around HD 219134 (Motalebi et al., 2015). It is one of the few systems where a planet was first discovered via the RV technique, and subsequently found to transit its host star (HD 219134 b, with the use of the Spitzer space telescope). Two years later, a second planet was also observed to transit (Gillon et al., 2017).
In addition to the above, significant efforts have been dedicated to develop tools for stellar activity mitigation and RV time-series analysis (e.g. Delisle et al., 2020, 2020; Collier Cameron et al., 2021; Cretignier et al., 2021; Hara et al., 2022; de Beurs et al., 2022). Simultaneously, the RPS program recently benefited from new Data Reduction Software (DRS) inspired by the ESPRESSO data reduction pipeline, and further augmented by the removal of some specific HARPS-N systematics via a study of the solar data (Dumusque et al., 2021). The convergence of all these efforts motivated us to revisit the RPS sample, in search of new exoplanet candidates. A first outcome of this work is the validation of HD 79211 b (DiTomasso et al., 2023), an exoplanet on a 24.4-day orbit and with a minimum mass of 10.6 \(M_{\oplus}\). More recently, the RV analysis of HD 166620 and HD 144579 demonstrated the potential for HARPS-N to reach sub-metre per second detectability in blind searches (Anna John et al., 2023).
In this paper, we present the results of the analysis of three systems, namely HD 99492, HD 147379, and HD 190007, each of which was already known to host a planet candidate. We aim to present a homogeneous procedure to analyse the data, using the new techniques introduced above. The present paper is organised as follows. Sect. 2 reports the observations available for each star, and focuses on the estimation of the stellar parameters. Sect. 3 describes the various tools that are part of our strategy to analyse the data. Then, Sects. 4, 5, and 6 present our analyses of the data and their modelling with planetary Keplerian orbits. Finally, we conclude and discuss these results in Sect. 7.
## 2 Observations and stellar parameters
### Observations
#### 2.1.1 HD 99492
HD 99492, also denoted as 83 Leo B, was observed with HARPS-N between January 15, 2014 and June 17, 2022 using the observational strategy mentioned above (two 15-minute exposures per night, separated by at least two hours). A total of 202 nightly binned spectra were recorded. For each binned spectrum, we extracted the RV information from cross-correlation of the stellar spectra with a K2 mask. We also derived different stellar activity indicators such as the bisector span, the full width at half maximum (FWHM) of the cross-correlation function (CCF), the S-index and H\({}_{\alpha}\). 178 Keck/HIRES (Vogt et al., 1994) publicly available RV measurements were also taken over a time-span of 6365 days, between Jan 13, 1997 and June 19, 2014. In the top panel of Fig. 1 we present the RV time-series of the combined HIRES and HARPS-N datasets, including an adjusted offset.
HD 99492 has 71 publicly available Hipparcos photometric measurements, and 368 observations from the Automatic Photoelectric Telescope (APT - Henry, 1999; Kane et al., 2009) at Fairborn observatory, Arizona (USA). This ensemble of photometric data did not allow Kane et al. (2016) to constrain the stellar rotation period, and will not be used in our analyses. Additionally, TESS has observed the star in sectors 45 and 46 (i.e. in Nov and Dec, 2021) with a 2-min cadence.
Data from the third _Gaia_ Early Data Release (EDR3, Gaia Collaboration, 2020) constrain the parallax of HD 99492 to 55.062\(\pm\)0.030 mas, leading to an updated distance estimate of 18.16\(\pm\)0.01 pc from the Sun. The star forms a gravitational binary with HD 99491, also named 83 Leo A. With a reported angular separation between HD 99492 and HD 99491 of
28.3 arcsec, their mutual distance projected on the sky was estimated at 513.9 AU. We note furthermore the existence of 6 CORAVEL RV measurements of HD 99492 taken between the years 1983 and 1993, and 13 RV of HD 99491 gathered between 1980 and 1993, with a mean RV error on the individual measurements of \(\sim\)300 m s\({}^{-1}\). The precision of those data is not sufficient to detect a slope in the RV due to the gravitational influence between the two stars (Halbwachs et al. 2018).
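For reference, these conversions can be checked in a couple of lines; the sketch below uses astropy and the values quoted above:

```python
# Check of the parallax-to-distance and angular-to-projected-separation
# conversions quoted above, using astropy's parallax equivalency.
from astropy import units as u

d = (55.062 * u.mas).to(u.pc, equivalencies=u.parallax())
print(d)                                   # ~18.16 pc
sep = (28.3 * u.arcsec).to(u.rad).value * d.to(u.au)
print(sep)                                 # ~513.9 AU
```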
#### 2.1.2 HD 147379
HD 147379, also named Gl 617 A, was observed with HARPS-N between February 18, 2013 and August 13, 2020. A total of 165 nightly binned spectra were recorded. The same information as for HD 99492 was extracted from the spectra, this time employing an M0 mask. The star also benefits from publicly available observations performed with three other spectrographs. HIRES gathered 30 RV measurements over about 13 years (from May 15, 2000 to May 21, 2013); the CARMENES instrument (Quirrenbach et al. 2018) obtained 114 measurements over \(\sim\) 600 nights (from Jan 10, 2016 to Sep 2, 2017); the SOPHIE+ spectrograph1(Perruchot et al. 2011; Bouchy et al. 2013), finally, brings an additional 163 high precision RV measurements obtained over nearly six years (from Sep 22, 2011 to June 17, 2017). The RV time-series of the combined instruments is presented in the middle panel of Fig. 1. For the sake of clarity, we present a time-series centred on the CARMENES, SOPHIE and HARPS-N data. The full RV time-series is presented in Appendix - Fig. 1.
Footnote 1: SOPHIE+ was started in 2011, and consists of an upgraded version of SOPHIE (Perruchot et al. 2008).
HD 147379 was also extensively observed in 28 TESS sectors with a 2-min cadence (sectors 14-21, 23-26, 40-41, and 47-60). The total observing window covers 1279 days, from Jul 18, 2019 to Jan 18, 2023, and contains 494 593 flux measurements. Also, the star benefits from 103 Hipparcos photometric measurements, obtained over a time-span of 2.5 years. Finally, it was observed by KELT (Pepper et al. 2007) over slightly less than two years, for a total of 3304 measurements.
The _Gaia_ EDR3 estimates the parallax of HD 147379 to 92.877\(\pm\)0.015 mas, which converts to a distance of 10.767\(\pm\)0.002 pc from the Sun. The star has a nearby companion with a similar proper motion, namely Gl 617 B. This gravitationally-bound binary presents an angular separation of 64.4 arcsec, which corresponds to an on-sky projected distance of 693.4 AU.
#### 2.1.3 HD 190007
HD 190007 was observed with HARPS-N between July 21, 2015 and May 19, 2019, for a total of 37 nightly binned spectra. We derived the RV from cross-correlation of the stellar spectra with a K2 mask. These data complement the set of publicly available RV measurements from the HIRES and APF/Levy (Vogt et al. 2014) spectrographs. The former gathered 33 nightly binned RV over more than 16 years (from Jun 19, 1998 to Sep 10, 2014), while the latter obtained 89 nightly binned high resolution spectra over more than six years (from Jul 9, 2013 to Oct 2, 2019). The RV time-series of the combined instruments is presented in Fig. 1 - bottom panel.
HD 190007 was observed with the APT at Fairborn observatory for over twenty years, gathering a total of 1092 flux measurements. Furthermore, the star was followed up by TESS in sector 54 (in Jul, 2022), with a 2-min cadence.
### Stellar parameters and activity
Table 1 reports on the main stellar parameters gathered from the literature for the three stars. We stress that the uncertainties on the effective temperature \(T_{\rm eff}\) and the stellar mass \(M_{\star}\) are intriguingly small. As Tayar et al. (2022) showed, important factors such as the systematic uncertainties on the fundamental stellar properties, and the scatter in the results from different stellar evolution models, are often ignored. Due to these factors, they estimate a minimum uncertainty of 2.0\(\pm\)0.5% on \(T_{\rm eff}\), to be added in quadrature to the reported uncertainties. Regarding the stellar mass estimation, the authors provide maximal fractional offsets between different model grids in the space of luminosity and effective temperature (cf. their Fig. 5). For the three stars of this work, the model variance adds uncertainties of between 5% and 10% on \(M_{\star}\). This additional error budget transposes directly to the mass estimates
Figure 1: RV time-series of the three stars analysed in this study. _Top_ – HD 99492 (HARPS-N and HIRES data), _Middle_ – HD 147379 (HARPS-N, SOPHIE, CARMENES and HIRES data), _Bottom_ – HD 190007 (HARPS-N, HIRES and APF data).
of exoplanets (a similar uncertainty increase is also applicable to the radius, but this is not relevant for this work). Finally, we also note that the distances reported in Table 1 were obtained from the parallax measurements reported in the _Gaia_ EDR3 catalogue.
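As an illustration of this inflation procedure (with made-up numbers, not the values of Table 1), the systematic floor is added in quadrature and the stellar-mass variance propagates to the planetary minimum mass through \(m\sin i\propto M_{\star}^{2/3}\):

```python
import numpy as np

teff, sig_teff_reported = 4929.0, 40.0     # K; reported error is made up here
sig_teff = np.hypot(sig_teff_reported, 0.02 * teff)   # add ~2% floor in quadrature

frac_mstar = 0.07                          # 5-10% model-grid variance on M_*
frac_msini = (2.0 / 3.0) * frac_mstar      # since m sin i ~ M_*^(2/3)
print(f"sigma(Teff) = {sig_teff:.0f} K; extra m sin i error ~ {frac_msini:.1%}")
```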
#### 2.2.1 HD 99492
HD 99492 is a K-type main-sequence bright star (V = 7.6). Its stellar companion 83 Leo A is a late G-type star (V = 6.5). HD 99492 shows a low activity level, notably expressed by its log(\(R^{\prime}_{HK}\))= -4.93 (Marcy et al., 2005).
Kane et al. (2016) reported the detection of a magnetic cycle on HD 99492, with a periodicity between 3000 and 5000 days2. In the time-span of the HARPS-N observations, which is 2270 days, we see a quadratic-like trend in the data. A similar trend is observed in the time-series of our various spectroscopic activity indicators, and we therefore attribute this trend to stellar activity (cf. Appendix - Fig. A.2).
Footnote 2: A planet was originally proposed as the origin of this long-term periodicity, namely HD 99492 c (Meschiari et al., 2011). The existence of this planetary companion was rejected by Kane et al. (2016).
From the estimation log(\(R^{\prime}_{HK}\))= -4.93, Marcy et al. (2005) estimate the stellar rotation period to be \(\sim\)45 days. They used the empirical relations from Wright et al. (2004) for this estimate, but stressed that HD 99492 has a \(B-V\) index that lies outside of the verified calibration domain. Kane et al. (2016) analysed both the APT and Hipparcos photometry, with no conclusive result. From our 202 nightly binned HARPS-N spectra, we detect significant periodic signals at around 40 - 45 days in the periodograms of different stellar activity indicators, such as the FWHM and S-index (cf. Appendix - Fig. A.2). They strengthen the previous analyses, and indicate a stellar rotation period between 40 and 45 days.
Finally, we considered the TESS photometric measurements in search of periodic signals. However, we found that the 2-min cadence Simple Aperture Photometry (SAP) fluxes of the two available sectors (45 and 46), which are derived by the TESS Science Processing Operations Center (SPOC, Jenkins et al., 2016), are strongly contaminated by the Moon. Therefore, to treat the systematics we extracted our custom light-curve from the calibrated 2-min cadence target pixel files using lightkurve3. For each sector, the extraction of the light-curve was performed via aperture photometry employing the TESS SPOC pipeline mask. The latter is shifted from the photometric centre of the source, so as to limit the contamination from the nearby companion HD 99491, which lies 28\({}^{\prime\prime}\) away from HD 99492 and is hence blended with the latter on the TESS detector. During the light-curve extraction, we corrected the fluxes for the background systematics, and notably we partly accounted for the contamination of HD 99491. This was done by building a matrix of the out-of-aperture pixels (also called a design matrix), and performing a principal component analysis to retrieve the five main trends that we then removed from the pixels inside the aperture. We also retrieved the cotrending basis vectors (CBV) computed by the Pre-search Data Conditioning (PDC, Smith et al., 2012; Stumpe et al., 2014) unit of the TESS SPOC pipeline. These are vectors that contain the most common systematic trends found for each TESS CCD, and which are produced for every sector. They are divided into three categories: the spike CBV contain the short impulsive systematics, the single-scale CBV contain all systematics in a single set of CBV, and the multi-scale CBV are spread into three different band-passes. We found that the combination of single-scale and spike CBV is best suited to correct our light-curve for systematics while preserving stellar signals with time-scales of a few tens of days. We detrended the light-curve with these CBV, jointly with the background subtraction mentioned above. We then removed outlier fluxes standing beyond five standard deviations (5\(\sigma\)) of the smoothed light-curve, normalised the flux data in each sector, and merged the two sectors. Finally, we undertook a periodic signal search in the resulting light-curve, via a generalised Lomb-Scargle (GLS) periodogram (Zechmeister & Kurster, 2009). We found a significant broad periodic signal peaking at \(\sim\)21 days, which is consistent with half of the expected stellar rotation period. We present those results in Fig. 2.
Footnote 3: Lightkurve is an open-source Python package for Kepler, K2 and TESS data analysis (Lightkurve Collaboration et al., 2018).
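The extraction just described can be condensed into a short lightkurve script. The sketch below (assuming the lightkurve v2.x API) reproduces the main steps - aperture photometry with the SPOC pipeline mask, detrending against the five leading principal components of the out-of-aperture pixels, 5\(\sigma\) clipping and a GLS periodogram - while omitting the CBV step for brevity:

```python
import astropy.units as u
import lightkurve as lk

lcs = []
for sector in (45, 46):
    tpf = lk.search_targetpixelfile("HD 99492", sector=sector,
                                    exptime=120).download()
    # SAP photometry through the SPOC pipeline aperture
    lc = tpf.to_lightcurve(aperture_mask=tpf.pipeline_mask)
    # Regress against the 5 leading principal components of the
    # out-of-aperture pixels (background and contamination trends)
    dm = (lk.DesignMatrix(tpf.flux[:, ~tpf.pipeline_mask], name="bkg")
            .pca(5).append_constant())
    lc = lk.RegressionCorrector(lc).correct(dm)
    lcs.append(lc.remove_outliers(sigma=5).normalize())

full = lk.LightCurveCollection(lcs).stitch()
pg = full.bin(time_bin_size=1 * u.day).to_periodogram(
    minimum_period=2 * u.day, maximum_period=40 * u.day)
print(pg.period_at_max_power)    # expect ~21 d, half the rotation period
```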
#### 2.2.2 HD 147379
HD 147379, or equivalently Gl 617 A, is an M-type main-sequence star with high proper motion (V = 8.9 mag). It is also part of a gravitationally bound binary with Gl 617 B, which is another M-type main-sequence star, fainter than the former (V = 10.6 mag). HD 147379 presents a reasonably low activity index too, with log(\(R^{\prime}_{HK}\))= -4.75.
A correlation is visible in the spectroscopic data of HD 147379 between the RV and FWHM or S-index. We observe a long-period variation in those time-series, which is attributed to the stellar magnetic activity. The HARPS-N data spans 2734 days, and like HD 99492, does not cover a full magnetic cycle. Only a quadratic trend is observed (cf. Appendix - Fig. A.3).
Based on an empirical relation between X-ray activity and stellar rotation, Reiners et al. (2018) estimate a stellar rotation period of \(P_{rot}\) = 31 \(\pm\) 20 days. Furthermore, with the use of
Figure 2: HD 99492: Periodic signal search in TESS sectors 45 and 46. _Top:_ TESS light-curve extracted using lightkurve, and corrected for systematics (2 sectors). _Bottom:_ GLS periodogram computed between 2 and 40 days of the 1-day binned light-curve.
the empirical relation of Astudillo-Defru et al. (2017a) between \(\log(R^{{}^{\prime}}_{HK})\)and \(P_{rot}\) for M dwarfs, Hobson et al. (2018) evaluate the rotation period of HD 147379 to be \(28.8\pm 6.1\) days. These authors also analysed the Hipparcos photometry - 103 measurements over 2.5 years - but without conclusive results except a hint of signal at \(\sim 21\) days. Pepper (2018) performed a photometric analysis of HD 147379 with KELT data, and found \(P_{rot}=22\) days. All the previous studies are therefore consistent with each other. We analysed the stellar activity indicators time-series of HARPS-N to search for any hint of stellar rotation. The most significant periodic signals were seen in the S-index, which reveal a cycle of 21.5 days. This strongly supports the results of previous analyses on this star.
We analysed the extensive TESS 2-min cadence photometric observations (28 sectors) in search of any photometric periodic variation. To proceed, we extracted the light-curve of each of the 28 sectors following the same procedure as described in Sect. 2.2.1. Notably, we note again that the SPOC pipeline mask conveniently avoids a nearby source on the detector, and we opt for this mask to perform aperture photometry. We performed a periodic signal search in our custom light-curve corrected for systematics, by computing a GLS periodogram. Both the TESS time-series and the periodogram are presented in Fig. 3. We found two significant signals at 21 and 10.4 days, and we associate them with the stellar rotation period and its half, respectively. This analysis is consistent with the period found in the spectroscopic observations and in Pepper (2018). We therefore conclude that the rotation period is highly likely to be around 21 days.
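For reference, a generalised (floating-mean) periodogram of this kind can be computed directly with astropy; in the sketch below, the arrays `time` and `flux` stand for the already extracted and detrended light curve and are assumed given:

```python
import numpy as np
from astropy.timeseries import LombScargle

# `time` (days) and `flux` are the detrended light curve, assumed given
ls = LombScargle(time, flux, fit_mean=True)   # floating mean = generalised LS
freq, power = ls.autopower(minimum_frequency=1 / 40.0,
                           maximum_frequency=1 / 2.0)
best_period = 1.0 / freq[np.argmax(power)]
print(best_period, ls.false_alarm_probability(power.max()))
```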
#### 2.2.3 HD 190007
HD 190007 is a bright K-type main-sequence star (V = 7.46). The star belongs to the family of BY Draconis variables (Kazarovets et al., 2003). It is indeed moderately active (\(\log(R^{{}^{\prime}}_{HK})\)= -4.65) and shows a clear photometric quasi-periodic variation. It is slightly metal-rich, with an age that remains uncertain. However, Burt et al. (2021) suggest that the star is at least 1 Gyr old, based on data from the Kepler mission.
The radial velocities combined from the different instruments, which span more than twenty years, present a linear trend of \(\sim\)1.4 m s\({}^{-1}\) yr\({}^{-1}\). This is in line with the measurement of a statistically significant (S/N\(>\)3) proper motion anomaly between Hipparcos and _Gaia_ DR3 mean epoch (Kervella et al., 2022). Both the RV and the Hipparcos-_Gaia_ absolute astrometry hint at the existence of a long-period companion, and future observations will help shed light onto this hypothesis.
Both from the spectroscopic stellar activity indicators and the analysis of the available photometry, Burt et al. (2021) report a rotation period of \(P_{rot}\sim\)29 days. This is in agreement with the previous estimation of Olspert et al. (2018) using Gaussian Process (GP) modelling, which found \(P_{rot}=27.68\) days. We observe a similar periodicity in the spectroscopic indicators when we add our HARPS-N measurements to the existing HIRES and APF data (cf. Appendix - Fig. A.4).
We also analysed the TESS photometric data of this star (sector 54) in search for potential periodicity. We were limited by the small time-span of the observations (\(\sim\)26 days). A simple inspection of the normalised SAP light curve, consisting of 13 238 flux measurements, reveals a long-term variation covering nearly a full cycle over the sector. We present the normalised SAP light curve in Fig. 4. This is in agreement with previous analyses, suggesting a stellar rotation period around 30 days. Yet, the time span of the TESS observations is too short to firmly confirm it.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Parameters [Units] & HD 99492 & HD 147379 & HD 190007 \\ \hline Spectral type & K2V & M0.0V & K4V \\ \(V\) [mag] & 7.58 & 8.9 & 7.46 \\ \(B-V\) [mag] & 1.02 & 1.11 & 1.12 \\ Distance [pc] & 18.161 & 10.767 & 12.715 \\ \(T_{\rm eff}\) [K] & 4929 & 4090 & 4610 \\ \(\log(R^{\prime}_{HK})\) & \(-4.93\) & \(-4.75\) & \(-4.65\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Main stellar parameters of the three stars analysed in this work, gathered from the literature (cf. Sect. 2.2).
## 3 Data analysis tools and general strategy
In this work, we employ a systematic approach to analyse the observations, making use of a number of versatile tools. We aim to demonstrate the ability of these new tools to correct for stellar activity effects and to provide unbiased planetary parameters. Initially, we explore the spectroscopic data - RV and stellar activity indicator time-series - with the Data & Analysis Centre for Exoplanets (DACE). This web-platform, hosted at the University of Geneva, is dedicated to extra-solar planet data visualisation, exchange and analysis4. Among many other functionalities, it shows the various spectroscopic time-series, but also the correlation plots between the RV and indicators. Additionally, it offers the possibility to perform interactive fits of the time-series, optionally with Keplerians. Therefore, we systematically use DACE to get a first valuable view of the data, and identify potential stellar activity signatures. As a second step, we initiate in-depth data analysis. We first correct the HARPS-N data for instrumental systematics and some stellar activity features thanks to the YARARA software (Cretignier et al., 2021). Then, the datasets are investigated in light of the correlated noise model SPLEAF (Delisle et al., 2020). Below, we briefly summarise these two tools. Finally, we explore the model parameters via a Markov chain Monte Carlo (MCMC) algorithm. For the latter, we employ samsam, a scaled adaptive Metropolis algorithm (Delisle, 2022).
Footnote 4: DACE can be accessed via [https://dace.unige.ch](https://dace.unige.ch).
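For illustration, the sketch below implements a generic scaled adaptive Metropolis sampler in the spirit of samsam; it is not the samsam API itself, and `log_prob` stands for any user-supplied log-posterior:

```python
import numpy as np

def adaptive_metropolis(log_prob, x0, n_steps=100_000, adapt_every=100):
    """Generic scaled adaptive Metropolis sampler (illustrative only)."""
    rng = np.random.default_rng(0)
    d = len(x0)
    cov = 1e-4 * np.eye(d)                        # initial proposal covariance
    x = np.asarray(x0, dtype=float)
    lp = log_prob(x)
    chain = np.empty((n_steps, d))
    for i in range(n_steps):
        prop = rng.multivariate_normal(x, cov)
        lp_prop = log_prob(prop)
        if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis acceptance
            x, lp = prop, lp_prop
        chain[i] = x
        if i > 1000 and i % adapt_every == 0:     # rescale the proposal
            cov = 2.38**2 / d * np.cov(chain[:i].T) + 1e-12 * np.eye(d)
    return chain
```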
### YARARA
YARARA (Cretignier et al., 2021) is a post-processing method that aims to improve the spectra extracted by the classical DRS (Dumusque et al., 2021) by removing the extra signatures introduced by the instrument and the Earth's atmosphere, leading ultimately to an improved RV precision. The method involves correcting a residual spectral time-series with respect to a master spectrum (built to be free from the systematics), where the corrections are made of multi-linear regressions in the wavelength domain. The spectra are corrected for: cosmics, tellurics, ghosts, fringing, instrumental defocus, activity and residual outliers. The cleaned RV are extracted with a CCF using a tailored line selection based on the master spectrum itself (Cretignier et al., 2020) in order to increase the extraction of the Doppler information content, whereas the merged-order 1d spectra are continuum normalised using RASSINE (Cretignier et al., 2020). A few adjustments were implemented in the code in order to increase the orthogonality between the stellar activity and instrumental signatures. The main reason for this update was to be able to reintroduce the stellar activity component at a later stage (see Sect. 3.2 for a justification). In the old version of the code, both effects were simultaneously fitted using the S-index, the CCF FWHM and the CCF contrast. Those two moments contain redundant stellar activity information. The new version of the pipeline uses an improved version of the S-index using Ca II H&K lines better corrected for ghosts, whereas the fitted CCF moments are now orthogonal to that component thanks to a Gram-Schmidt orthogonalisation algorithm (see Appendix C).
YARARA itself is not able to fit the planetary signals and the systematics simultaneously in the wavelength domain. However, since most systematics are fixed in the terrestrial rest-frame, the code assumes that the planetary signals are destroyed (or strongly mixed) in the wavelength domain once such a change of rest-frame is performed. In Cretignier et al. (2021), the authors showed that injected planetary signals were not absorbed by more than 20% (and only for a planet close to a 1-year harmonic). In order to further avoid such absorption, large RV amplitude signals can be pre-fitted by shifting the spectra according to a determined Keplerian solution. In this work, that preliminary Keplerian solution will be obtained via an RV fit with DACE that matches the orbital elements of the known planets. This Keplerian solution will then be re-injected into the RV obtained on the residual spectra, before an improved Keplerian solution is derived. This process can be applied iteratively until convergence is reached, which usually occurs after a single iteration.
Once spectra are corrected for systematics, new proxies orthogonal to a pure Doppler shift can be extracted in the time-domain (Cretignier et al., 2022). Under the assumption that the deformations of the line profiles are mainly driven by the line profile itself, the residual spectra \(\delta f(\lambda)=f(\lambda)-f_{0}(\lambda)\) can rather be expressed as \(\delta f(\partial f_{0}/\partial\lambda,f_{0})\), with \(f_{0}\) the master spectrum of the star - an object that the authors called a 'shell'. An advantage of that space is that a pure Doppler shift possesses a unique signature that can be fitted out analytically to first order. Other distortions, orthogonal to the pure Doppler shift, can then be extracted with a principal component analysis, and their associated time-coefficients are used in a multi-linear model to correct the RV time-series. A cross-validation algorithm is used to determine the number of components to select, by randomly rejecting 10% of the observations. Such a strategy was shown to successfully remove the rotational signal of moderately active stars and/or potential remaining instrumental systematics left uncorrected by YARARA.
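A schematic version of this correction, assuming a residual-spectra matrix and an RV time-series are already in hand, could look as follows (this is a sketch of the general PCA-plus-multilinear-regression idea, not the actual YARARA code):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def shell_correct(residuals, rv, max_components=10):
    """residuals: (n_obs, n_wavelengths) array of delta f after the
    analytic Doppler component has been removed; rv: (n_obs,) series."""
    best_k, best_score = 1, -np.inf
    for k in range(1, max_components + 1):
        coeffs = PCA(n_components=k).fit_transform(residuals)
        # 10-fold CV ~ repeatedly withholding 10% of the observations
        score = cross_val_score(LinearRegression(), coeffs, rv, cv=10).mean()
        if score > best_score:
            best_k, best_score = k, score
    coeffs = PCA(n_components=best_k).fit_transform(residuals)
    model = LinearRegression().fit(coeffs, rv)
    return rv - model.predict(coeffs)      # cleaned RV time-series
```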
This additional correction process (Cretignier et al., 2022) will be subsequently designated as YARARA-Shell, as opposed to YARARA-V1 which consists of YARARA (Cretignier et al., 2021) further augmented with the few adjustments described above to increase the orthogonality between the stellar activity and instrumental signatures.
### SPLEAF
Correlated noise, or Gaussian Process (GP) models have been the object of increasing research among the exoplanet community over the past ten years. They account for physical processes that might be poorly understood or poorly constrained, such as stellar activity (Haywood et al., 2014). Indeed, instead of a deterministic model, the GP regression technique aims to parametrise the covariance between the measurements. There exist different types of GP, which differ from each other by their kernels, that is by the functional form of the covariance matrices. SPLEAF (Delisle et al., 2020) refers to a class of semi-separable covariance matrices, which builds on the benefits of the celerite kernel (Ambikasaran et al., 2015; Foreman-Mackey et al., 2017). SPLEAF can take into account the calibration noise, and due to the semi-separable form of the covariance matrix, it provides low computation costs. Once the GP regression is properly performed, it should account for anything that is not due to the deterministic model nor the measurement noise. Yet in practice, there might be risks of over-fitting, and the planetary signals may be (partly) absorbed by the GP. To limit the risk, Rajpaul et al. (2015) proposed to train the GP simultaneously on the RV and on activity indicator time-series, by modelling the activity-induced RV signals as linear combinations of the GP (\(G(t)\)) and its derivative \(\dot{G}(t)\) (Aigrain et al., 2012)
\[\Delta RV\,=\,V_{c}G(t)\,+\,V_{r}\,\dot{G}(t) \tag{1}\]
\[\Delta\alpha\,=\,L_{c}G(t) \tag{2}\]
\[\Delta\beta\,=\,B_{c}G(t)\,+\,B_{r}\,\dot{G}(t) \tag{3}\]
for \(V_{c}\), \(V_{r}\), \(L_{c}\), \(B_{c}\), \(B_{r}\) free parameters, and \(\alpha\) and \(\beta\) referring to some activity indicators. Eq. 1 was first proposed by Aigrain et al. (2012) to account for the effect of stellar spots on the RV variations, where \(G(t)\) captures the suppression of the convective blueshift in the spots, and \(\dot{G}(t)\) provides the effect of stellar rotation. Rajpaul et al. (2015) extended this framework to activity indicators (Eq. 2 and 3), where the use of one equation or another depends on the nature of the indicator. For instance, the S-index is a proxy for the proportion of the visible stellar disk covered by active regions, and is therefore accounted for by Eq. (2). On the other hand, the bisector span indicator, which informs about the asymmetry of the spectral lines, also depends on the radial velocity of the stellar surface at the location of the spots, and is hence described by Eq. (3). The planetary signal being only in the RV time-series, it is less likely to be absorbed by the GP if the latter is simultaneously trained on one or several activity indicators. As a drawback though, the computation cost is significantly larger, with a covariance matrix of size \(2n\times 2n\), where \(n\) is the number of measurements, in the case where only one activity indicator time-series is used in parallel with the RV.
Delisle et al. (2022) generalised the SPLEAF model to account for several different time-series (potentially sampled at different times) as modelled by Eq. (1, 2, 3), while ensuring a computation cost that scales linearly with the total number of measurements. In this work we use this generalised SPLEAF correlated noise model, which is publicly available5. The kernel we employ for the quasi-periodic part is an approximation of the squared-exponential periodic (SEP) kernel, where the covariance has the following expression:
Footnote 5: [https://gitlab.unige.ch/jean-Baptiste.Delisle/spleaf](https://gitlab.unige.ch/jean-Baptiste.Delisle/spleaf)
\[k(\Delta t)\,=\,\sigma^{2}\,\exp\left(-\frac{\Delta t^{2}}{2\rho^{2}}-\frac{\sin^{2}\left(\frac{\pi\Delta t}{P}\right)}{2\eta^{2}}\right). \tag{4}\]
It consists of a sinusoid on top of a decreasing exponential, with \(\Delta t\) being the time interval between two measurements. Therefore, this kernel correlates one measurement with the others according to this functional form. The hyperparameters \(\sigma\), \(\rho\), \(P\) and \(\eta\) describe respectively the amplitude of the correlated noise, the rate of exponential decay of the correlation, the period of variation of the sinusoid, and the length-scale of the periodic component, that is its ability to capture rapidly varying features (smaller values pointing towards sharper variations). These hyperparameters are constrained from the various time-series. SPLEAF develops the periodic component of the SEP kernel in series, and keeps the two strongest harmonics. In practice, we also add a white noise term to each time-series, which consists of an additional term on the diagonal of the covariance matrix.
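To make these ingredients concrete, the toy sketch below builds the covariance of Eq. (4), draws one realisation of \(G(t)\), and assembles the activity components of Eqs. (1)-(3); all hyperparameter and amplitude values are illustrative only:

```python
import numpy as np

t = np.linspace(0.0, 200.0, 400)
sig, rho, P, eta = 1.0, 150.0, 43.0, 0.5      # illustrative hyperparameters

dt = t[:, None] - t[None, :]
K = sig**2 * np.exp(-dt**2 / (2 * rho**2)
                    - np.sin(np.pi * dt / P)**2 / (2 * eta**2))   # Eq. (4)
rng = np.random.default_rng(1)
G = rng.multivariate_normal(np.zeros(t.size), K + 1e-10 * np.eye(t.size))
Gdot = np.gradient(G, t)                      # numerical dG/dt

Vc, Vr, Lc, Bc, Br = 2.0, 1.0, 0.5, 0.3, 0.4  # illustrative amplitudes
dRV = Vc * G + Vr * Gdot                      # Eq. (1): activity RV signal
dS = Lc * G                                   # Eq. (2): e.g. S-index
dBIS = Bc * G + Br * Gdot                     # Eq. (3): e.g. bisector span
```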
In this work, we systematically apply the generalised SPLEAF model - denoted as the SPLEAF model below - on the RV and S-index time-series. As was described above, we find in the S-index time-series of the three stars traces of the stellar magnetic cycles and stellar rotation. Therefore, the hyperparameters of the correlated noise model, which are estimated from the analysis of these different datasets, are expected to converge to values informing us about the stellar activity. We mostly set uninformative priors on the model parameters, that is priors following a uniform law with wide boundaries. In Table 2, we synthesise the priors we use in most of our analyses. We explicitly note in the text when certain priors are further constrained.
We express some caution about using the YARARA dataset with a correlated noise model. Indeed, we noticed a convergence issue of the fit with any dataset that would already be (partially) corrected for the stellar activity features. The reason is that while the activity indicator time-series are still left intact, the RV time-series are corrected for the trends observed in the former. Therefore, the correlated noise model trains successfully on the activity indicators, but struggles to transpose the results to the RV time-series. In conclusion, the correlated noise model does not provide satisfactory results with YARARA-V1 and YARARA-Shell. Nevertheless, YARARA also corrects the various time-series for the instrumental systematics. The use of those datasets is preferred, so as to avoid observing spurious periodic signals. Therefore, in the following analyses making use of SPLEAF, we use a modified version of YARARA-V1 where the stellar activity has been re-injected into the RV time-series, but not the instrumental systematics (see Appendix C for an explanation of the process to separate the various components). In other words, we use an alternative version of YARARA where only the instrumental systematics have been corrected for. That allows for the convergence of the fit with the correlated noise model.
## 4 HD 99492 data analysis
HD 99492 is known to be orbited by a planet. Marcy et al. (2005) first announced the detection with 35 HIRES RV measurements, and reported an orbital period of 17 days and a minimum mass of \(\sim\)25 \(M_{\oplus}\). The solution was refined by Meschiari et al. (2011) with an additional 58 HIRES velocities, and the authors proposed another planet candidate at an orbital period of \(\sim\)5000 days. A few years later and with 130 HIRES RV, the latter signal was shown to be related to stellar magnetic activity (Kane et al. 2016).
We added our 202 nightly binned HARPS-N spectra processed with YARARA-V1, and performed an exploratory analysis on DACE. First, we focused on stellar activity signatures. In this dataset, the S-index time-series displays a significant long-term trend that peaks at a periodicity of \(\sim\)3000 days in a GLS periodogram. The latter signal is associated with the magnetic cycle described in Kane et al. (2016). Fig. 5 presents this time-series (top plot) and the corresponding periodogram (middle plot). At short periods, we observe a strong signal at 42.9 days, which appears once we fit the long-period variation with a Keplerian. This is illustrated in the bottom plot of Fig. 5. Furthermore, we also detect its 1-year aliases at 47.3 and 38.4 days. The S-index indicator reveals a notable correlation with the RV, with a correlation coefficient of \(r=0.50\). We detected a similar 42.9 days periodicity in other activity indicator time-series such as the CCF contrast and CCF FWHM. We also systematically identified the 1-year aliases of this signal. Therefore, all these observations constitute strong evidence that the 42.9 days signal is associated with the stellar rotation period. This estimate is consistent with previous works (cf. Sect. 2.2).
The HIRES publicly available data also contain the S-index activity indicator. In this dataset, we observe a clear quasi-periodic variation covering several magnetic cycles of the star, whose correlation with the RV is weaker than in the HARPS-N dataset. After fitting the most significant periodic signal in the S-index with a Keplerian (which converges to \(\sim\)3000 days), we do not find any significant residual periodicity at \(\sim\)43 days. In fact, there is no remaining significant signal below 800 days. We searched for the presence of the stellar rotation in the other activity indicator available (i.e. the H\({}_{\alpha}\) index). Again, we did not find any significant signal. As a result, we note that the rotation of the star does not present a measurable signature in the HIRES data. A potential cause of this non-detection is the larger scatter observed in the S-index time-series of HIRES, combined with a sparser sampling. Finally, combining HARPS-N and HIRES datasets, we set tighter constraints on the period of the stellar magnetic cycle. With now an extended baseline of 9286 days, we constrain the main period of the S-index time-series to around 3300 days.
Taking the above information into account, we then undertook a search for planets. We performed the latter using different models and datasets, so as to compare several approaches. In the first, we used the HARPS-N data as derived from the DRS together with the HIRES dataset. In a second approach, we analysed the YARARA-V1 dataset of HARPS-N together with HIRES, and included a correlated noise model. As a third approach, we repeated this procedure on the YARARA-V1 data only. Finally, in a fourth approach we analysed the YARARA-Shell data without a correlated noise model. We describe our planet search analyses below.
### Approach 1: DACE - HARPS-N DRS + HIRES
As a first preliminary analysis, we looked for periodic signals in the nightly binned RV time-series of the combined HARPS-N + HIRES datasets, using the DRS version of the HARPS-N
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Parameter** & **Units** & **Prior Distribution** & **Description** \\ \hline Offsets and noise & & & \\ \hline Epoch & BJD & Fixed at 2 455 500 & Reference epoch \\ \(\gamma_{inst}\) & m s\({}^{-1}\) & U [RV\({}_{\rm min}\), RV\({}_{\rm max}\)] & Constant velocity offset \\ \(\sigma_{RV}\) & m s\({}^{-1}\) & logU [0.001, 100] & Additional white noise (Jitter) \\ \(P_{GP}\) & days & U [1, obs timespan] & Period of correlated noise \\ \(\rho_{GP}\) & days & U [1, obs timespan] & Decay timescale \\ \(\eta_{GP}\) & & U [0, 100] & Smoothing parameter \\ \(\sigma_{GP}\) & m s\({}^{-1}\) & U [0, 100] & Amplitude of correlated noise \\ Keplerians & & & \\ \(p\) & days & U [1.5, 5000] & Orbital period \\ \(K\) & m s\({}^{-1}\) & logU [0.1, 10\({}^{5}\)] & RV semi-amplitude \\ \(e\) & & U [0, 0.8] & Orbital eccentricity \\ \(\omega\) & radians & U [0, 2\(\pi\)] & Argument of periastron \\ \(\lambda_{0}\) & radians & U [0, 2\(\pi\)] & Mean longitude at epoch \\ \hline \hline \end{tabular} U denotes a Uniform prior and logU denotes a logarithmic Uniform prior.
\end{table}
Table 2: List of priors used for each parameter, unless stated otherwise in the text. RV\({}_{\rm min}\) and RV\({}_{\rm max}\) are the minimum and maximum measured RV, respectively, for each star.
Figure 5: HD 99492: YARARA-V1 S-index dataset. _Top_ – S-index time-series. _Middle_ – GLS periodogram of that time-series (\(k_{0}\) stands for no Keplerian in the model). _Bottom_ – GLS periodogram of the residual S-index, after subtracting a Keplerian model with \(P\sim 3000\) days (\(k_{1}\)). The red bands locate the periodicities seen in this activity indicator.
data. While we noticed inconsistencies in the H\({}_{\alpha}\) time-series of the HIRES data, the S-index is consistent throughout the entire time-span. We selected this activity indicator to detrend the RV via the inclusion of a scaling parameter. Additionally, we included a quadratic drift to remove what remains of long-term variations. After adding these components in the model, we searched for periodic signals in the periodogram. The signal of the planet at 17d is very strong, with a FAP of 7.1\(\times\)10\({}^{-77}\). We included a Keplerian at that period into our model, and computed the periodogram of the residuals from the fit. This periodogram shows a very significant second signal. It has a period of 95.5 days and a FAP of 2.1\(\times\)10\({}^{-21}\), and is not associated with any of the activity indicators. This favours the planet hypothesis, and we fitted that signal with another Keplerian. A third signal is revealed in the new periodogram of the residuals, with a period of 13.9 days and a FAP of 0.23%. Not only is its FAP above our defined detection threshold of 0.1%, but this period is also about a third of the expected stellar rotation period. Hence, it has to be interpreted carefully.
We explored the two Keplerians model via a MCMC algorithm, which allows us to constrain the planet parameters. The parameters of HD 99492 b are in accordance with previous publications, with results converging to \(P_{b}\)=17.0492\(\pm\)0.0007 days and \(K_{b}\)=7.30\(\pm\)0.23 m s\({}^{-1}\). Concerning the new planet candidate, we found a moderate eccentricity \(e_{c}\)=0.237\(\pm\)0.080, while the orbital period and the RV semi-amplitude are estimated to \(P_{c}\)=95.373\(\pm\)0.050 days and \(K_{c}\)=3.08\(\pm\)0.27 m s\({}^{-1}\).
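For reference, a single Keplerian RV component of the kind fitted here can be sketched as below; the argument of periastron and time of periastron are illustrative placeholders, the other values being the posterior medians quoted above for HD 99492 c:

```python
import numpy as np

def keplerian_rv(t, P, K, e, omega, t_peri):
    """RV of one Keplerian orbit: K [m/s], P and t in the same time units."""
    M = 2 * np.pi * (t - t_peri) / P            # mean anomaly
    E = M.copy()                                # eccentric anomaly (Newton)
    for _ in range(50):
        E -= (E - e * np.sin(E) - M) / (1 - e * np.cos(E))
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))   # true anomaly
    return K * (np.cos(nu + omega) + e * np.cos(omega))

t = np.linspace(0.0, 200.0, 1000)
rv_c = keplerian_rv(t, P=95.373, K=3.08, e=0.237, omega=1.0, t_peri=0.0)
```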
### Approach 2: SPLEAF - HARPS-N YARARA-V1 + HIRES
We repeated the process highlighted in Sect. 4.1 with HIRES and the YARARA-V1 data reduction of HARPS-N. Again, the signals at 17 and 95.5 days are unambiguous. Fig. 6 illustrates our successive fitting process. The top plot presents the periodogram of the RV time-series (the highest peak has a FAP of 1.1\(\times\)10\({}^{-80}\)), while the middle plot shows the periodogram of the residuals after fitting the RV with a Keplerian at 17 days. The highest peak in this middle plot, located at P=95.4 days, has a FAP of 3.2\(\times\)10\({}^{-22}\). The bottom plot displays the periodogram of the residuals after performing a two Keplerian fit, including one at 17 days and one at 95.4 days. Now, the remaining signal at 13.9 days is no longer detected in the latter plot, but we observe another signal at 14.5 days with a FAP of 0.13%, which is still above our detection threshold of 0.1%. Both the low significance and the different period detected for this third signal, in addition to its periodicity of about a third of the expected stellar rotation period, cast doubt on its planetary origin.
In order to shed light on the origin of this third signal, we scrutinised the HIRES and HARPS-N datasets in a correlated noise model via SPLEAF (cf. Sect. 3.2), using both the RV and S-index time-series. At this stage, we used the alternative YARARA-V1 dataset where the stellar activity was injected back into the RV. We did not set any constraining prior on the period of the correlated noise, nor on any other parameter (cf. Table 2). However, we initialised the correlation period to 43 days, and the decay time-scale to 500 days. The large value for the latter is aimed at mitigating the magnetic cycle, which takes place on a long time-scale. In order to facilitate the fit convergence, we fitted the correlated noise model parameters following a step-by-step strategy, successively adding one parameter at a time. Then, we progressively incorporated Keplerians until no more significant signal was observed. In addition to these parameters, we also included an offset for each instrument and a white noise term, both for each time-series - namely the RV and S-index time-series. Finally, we added an RV linear drift to the model, which provided the best results in terms of signal significance and fit convergence compared to a quadratic drift or no drift. As a result of our fit, we found a first significant periodic signal at 17.05 days, with a negligible FAP of 4.5\(\times\)10\({}^{-52}\). After adding a Keplerian in the model and proceeding with a new fit, we detected another significant signal with a period of 95.3 days and FAP=2.5\(\times\)10\({}^{-18}\). Therefore, we incorporated a second Keplerian in our model, and performed a new fit. In the periodogram of the residuals, there remains significant power around 3500 days which we attribute to a remnant of the stellar magnetic cycle, while the decay time-scale of the correlated noise converged to 194 days. Concerning the correlation period, it peaked at 44.8 days, which is compatible with the stellar rotation period. Then, we explored the parameter space of the two-Keplerian model with an MCMC algorithm. We performed 1M iterations, and obtained an effective sample of 4288 independent solutions. The period of the correlated noise is estimated to be 44.9\(\pm\)0.5 days. The outermost planet, at an orbital period of 95.4 days, has an estimated RV semi-amplitude of 2.7\(\pm\)0.2 m/s, which corresponds to a planet minimum mass of 17.4 \(M_{\oplus}\). Its eccentricity is moderate, with \(e_{c}\)=0.112\(\pm\)0.086.
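As a consistency check, the quoted minimum mass follows from the fitted semi-amplitude through \(m\sin i\simeq K\,(P/2\pi G)^{1/3}M_{\star}^{2/3}\sqrt{1-e^{2}}\); the stellar mass used below (\(\sim\)0.87 \(M_{\odot}\)) is an assumed, approximate value for this K2V star, not a number from Table 1:

```python
import numpy as np
from astropy import constants as c
from astropy import units as u

K, P, e = 2.7 * u.m / u.s, 95.4 * u.day, 0.112
Mstar = 0.87 * u.M_sun                     # assumed approximate stellar mass
msini = (K * (P / (2 * np.pi * c.G))**(1 / 3)
         * Mstar**(2 / 3) * np.sqrt(1 - e**2)).to(u.M_earth)
print(msini)                               # ~17.5 Earth masses
```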
### Approach 3: SPLEAF - HARPS-N YARARA-V1
In a third approach, we again employed SPLEAF for a correlated noise model but analysed only the HARPS-N data - YARARA-V1 with activity injected back into the time-series. We undertook the same process as described above for the second approach. Once again, we found two significant signals at 17 and 95.5 days. Their FAP are 3.0\(\times\)10\({}^{-35}\) and 2.5\(\times\)10\({}^{-20}\)
Figure 6: HD 99492: Periodograms of the combined YARARA-V1 and HIRES RV time-series. The periodograms are computed from a fit of the time-series with a detrend + drift model. _Top_ – No Keplerian. _Middle_ – Periodogram of the residual time-series, after removing a Keplerian at \(\sim\)17 days. _Bottom_ – Periodogram of the residuals after removing Keplerians at \(\sim\)17 and 95 days.
respectively. A third residual signal with a period of 14.7 days just reaches our detection threshold, and has a FAP of 0.05%. It has a small semi-amplitude of 1.0 m/s. This period is different from the previously found signals at 13.9 and 14.5 days, but is also compatible with a third of the stellar rotation period. We note that the period of the correlated noise is estimated at 43.1 days. We remind the reader that SPLEAF approximates the periodic component of the SEP kernel by a development to the second order. To further investigate the stellar-rotation origin of the 14.7d signal, we developed the periodic component to a higher order, so as to account for the third harmonic of the stellar rotation period. We applied this refined kernel to the same dataset. After the removal of two Keplerians at 17 and 95.5 days, the residual signal at 14.7d is still significant with a FAP of 0.07%. Developing the kernel up to one additional order so as to include the fourth harmonic yielded similar results. The residual 14.7d signal has a FAP of 0.08% with this refined development. However, in all cases, the period of the correlated noise unambiguously converges towards the stellar rotation period. Therefore, developing the periodic component of the kernel to a higher order should absorb the harmonics of the stellar rotation. Consequently, the survival of the 14.7d signal irrespective of the kernel development suggests that this signal is not a direct outcome of the third harmonic of stellar rotation. Instead, its origin is likely different.
We undertook an MCMC exploration of the model with three Keplerians. As the RV semi-amplitude of the third signal is small, and in order to facilitate the MCMC convergence, we fixed the eccentricity and argument of periastron of the third Keplerian to 0. We performed 1M iterations and obtained a total of 4856 independent solutions. From this exploration, we found RV semi-amplitudes of \(K_{b}\)=6.99\(\pm\)0.17, \(K_{c}\)=2.83\(\pm\)0.20 and \(K_{d}\)=0.94\(\pm\)0.17 m s\({}^{-1}\). The orbital eccentricity of planet c is now smaller: \(e_{c}\)=0.052, with a 68.27% confidence interval of [0.019, 0.102].
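The text does not name the sampler used; purely as an illustration of such an exploration, a stripped-down circular one-planet RV fit with the affine-invariant sampler emcee could look as follows, with the counting of independent solutions done via the autocorrelation time. The model omits eccentricity and correlated noise, unlike the full analysis, and the data are synthetic.

```python
# Hedged sketch of an MCMC exploration of a one-planet RV model with emcee
# (not necessarily the sampler used in this work).
import numpy as np
import emcee

def log_prob(theta, t, rv, rv_err):
    P, K, phi, gamma = theta
    if not (1.0 < P < 1000.0 and 0.0 < K < 50.0):
        return -np.inf  # crude uniform priors
    model = gamma + K * np.sin(2.0 * np.pi * t / P + phi)
    return -0.5 * np.sum(((rv - model) / rv_err) ** 2)

# Synthetic stand-in for the RV time-series analysed in the text.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 900.0, 120))
rv_err = np.full(t.size, 1.5)
rv = 2.8 * np.sin(2.0 * np.pi * t / 95.3) + rng.normal(0.0, 1.5, t.size)

ndim, nwalkers = 4, 32
p0 = np.array([95.3, 2.8, 0.0, 0.0]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(t, rv, rv_err))
sampler.run_mcmc(p0, 5000, progress=False)

# Thin by the autocorrelation time to count independent solutions,
# analogous to the "independent solutions" quoted in the text.
tau = sampler.get_autocorr_time(tol=0)
flat = sampler.get_chain(discard=1000, thin=max(1, int(max(tau))), flat=True)
print(flat.shape[0], "independent samples")
```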
We also explored the parameter space of a model with only two Keplerians, given the doubt cast on the origin of the 14.7 d signal. We performed a new MCMC exploration of this model, again with 1M iterations, and this time obtained a sample of 5440 independent solutions. The correlated noise parameters display similar values at the end of the MCMC exploration, compared to the model with three Keplerians. The same is observed for the RV semi-amplitudes of planets b and c, and the orbital eccentricity of planet c is once again constrained to be small. For this two-Keplerian model, we estimate a Bayesian information criterion (BIC) of -327.1, while the BIC of the three-Keplerian model amounts to -349.5; the BIC therefore indicates a slight preference for the model with three Keplerians. As such, we further tested the planetary nature of the 14.7 days signal by investigating its resistance to the removal of RV measurements. If the signal were due to a planet, its power would decrease monotonically with the number of removed observations, whereas a signal of stellar origin would react irregularly to the suppression of RV measurements. We performed different tests, following various patterns of data removal (random, lowest S/N, highest S/N, data in quadrature of the 14.7 days signal) and removing up to 10% of the time-series. We found that while the signals at 17 and 95 days successfully passed our tests, the significance of the 14.7 days signal did not decrease monotonically with the number of removed observations; instead, its power in the periodogram evolved irregularly. From this analysis, we conclude that the current observations do not support a third planetary signal. On the other hand, the resilience of the 14.7 days signal to the degree of development of SPLEAF indicates that it does not emanate from the third harmonic of the stellar rotation. Instead, it could result from a combination of the stellar rotation (or its second harmonic) with the spectral window of the observations.
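The data-removal test can be summarised in a few lines. The sketch below is a schematic rendition with illustrative removal strategies (the function and strategy names are ours), not the exact procedure of this work:

```python
# Schematic version of the signal-resilience test: remove growing fractions
# of the RV measurements under different strategies and track the periodogram
# power at the candidate period. A genuine planetary signal is expected to
# weaken monotonically with the number of removed points.
import numpy as np
from astropy.timeseries import LombScargle

def resilience_curve(t, rv, rv_err, period, removal_order, fractions):
    powers = []
    for f in fractions:
        keep = np.sort(removal_order[int(f * len(t)):])  # drop the first f-fraction
        ls = LombScargle(t[keep], rv[keep], rv_err[keep])
        powers.append(ls.power(np.array([1.0 / period]))[0])
    return np.array(powers)

# Two of the removal strategies mentioned in the text (names are ours).
random_order = np.random.permutation        # e.g. random_order(len(t))
def lowest_snr_order(rv_err):
    return np.argsort(-rv_err)              # largest uncertainties removed first

fractions = np.linspace(0.0, 0.10, 11)      # remove up to 10% of the series
# curve = resilience_curve(t, rv, rv_err, 14.7, random_order(len(t)), fractions)
# then check whether `curve` decreases monotonically.
```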
### Approach 4: HARPS-N YARARA-Shell
Finally, we investigated the YARARA-Shell dataset which, compared to YARARA-V1, further corrects for stellar activity features. We designed a model that includes a white noise term but no correlated noise. After an iterative search, we found two significant signals at the periods already mentioned above. The first signal, at 17.05 days, has a FAP of 5.1\(\times\)10\({}^{-62}\), while the second signal, at 95.3 days, has a FAP of 3.5\(\times\)10\({}^{-28}\). In the residuals of this two-planet model, we did not find any significant period; the 14.7 days signal now has a FAP of 9.7%. This observation further comforts us in the non-detection of a third Keplerian signal with the current dataset. The analysis of YARARA-Shell, following the analysis of YARARA-V1 with SPLEAF, hereby demonstrates the power of using complementary analysis techniques to shed light on the nature of periodic signals.
We explored the two-planet model via the MCMC algorithm introduced above. We performed 1M iterations, and obtained a set of 4095 independent solutions that constitute our posterior distribution. While the RV semi-amplitude of the outer planet \(K_{c}\) is very similar to its estimate obtained using approach 3, the orbital eccentricity of that planet converges towards a larger value of about 0.1.
In Fig. 7, we synthesise the results from the four approaches described above. It shows the posterior distributions obtained with the different approaches, projected onto the orbital parameters of planet c. In the case of the third approach, we further distinguish between the two- and three-planet fits. We conclude that the inclusion of the HIRES RV measurements increases the estimated eccentricity of planet c. This is probably due to a less efficient mitigation of stellar activity effects in the HIRES measurements; indeed, as already mentioned, we did not find a clear periodicity of \(\sim\)43 days in the S-index of HIRES. We note that the analysis without the HIRES data provides the best estimates of the planet parameters. Furthermore, using the YARARA-V1 data alone, the estimates we obtain for the parameters of planet c are very similar in the models with two or three Keplerians. The third periodic signal at 14.7 days is absent from the YARARA-Shell data. This latter dataset, however, leads to a larger estimate of the eccentricity \(e_{c}\). As pointed out in Hara et al. (2019), complex noise patterns - if not modelled - can boost the eccentricity estimates. This additional eccentricity induced by stellar activity likely also drives the shift observed in the distribution of \(\omega_{c}\) from YARARA-Shell. Indeed, small orbital eccentricities do not constrain well the argument of periastron, as illustrated by the orange distributions, so any additional eccentricity carries a significant influence on the distribution of \(\omega\). In conclusion, we favour the approach leading to the smallest eccentricities, and opt for the approach using the YARARA-V1 dataset only, together with a correlated noise analysis - namely approach 3 with a two-planet model. These results are synthesised in Table 3. They suggest that planet c has a minimum mass of 17.9\(\pm\)1.3 \(M_{\oplus}\), equivalent to the mass of Neptune. So far, few planets in that mass range have been found on large (\(>\)50 days) orbital periods. We further discuss this result in Sect. 7.
We present in Fig. 8 the orbits in Cartesian space, and the RV measurements phase-folded on the periods of the two planets.
### Independent confirmation using tweaks
The HARPS-N CCF were independently analysed for planetary signals using tweaks. This pipeline was specifically designed to attain a sub-m/s detection threshold at long orbital periods, by combining wavelength-domain and time-domain stellar activity mitigation (Anna John et al., 2022, 2023). We first conducted a blind search of the radial velocities, using a model with up to five unidentified Keplerian signals. For this, we used the kima nested-sampling package (Faria et al., 2018). Time-domain activity-decorrelation vectors were produced using scalpels (Collier Cameron et al., 2021), which performs a principal-component analysis on the autocorrelation function of the CCF. These basis vectors were then used for the spectral line-shape decorrelation (Collier Cameron et al., 2021) in kima, as Anna John et al. (2022) reported that de-trending the RV for line-shape variations using the scalpels basis vectors yields a model that is significantly better than one that does not account for these stellar activity signatures.
The joint posteriors showed a clear detection of two Keplerian signals at orbital periods of 17.05 and 95.2 days. We conducted a False Inclusion Probability (FIP) analysis (Hara et al., 2022) in frequency space, with the bin size set to the Nyquist frequency resolution over the entire data duration, to search for multiple planet signals simultaneously. While the 17.05 day signal was found with a FIP of 0.13, the 95.2 day signal was detected even more strongly, with a FIP of 0.04; in other words, 96% of the mutually independent kima posterior solutions favoured a planet detection at an orbital period of 95.2 days. We present these results in the Appendix - Fig. A.5. The RV semi-amplitudes obtained from this analysis were combined with the stellar mass of 0.85 \(M_{\odot}\), leading to minimum mass determinations of \(27.13\pm 1.18\) \(M_{\oplus}\) and 19.85 \(\pm\) 2.19 \(M_{\oplus}\), respectively. We also validated these detections by confirming the coherency of the signals across different data subsets. Unlike signals arising from sampling patterns, stellar activity or aliases, both the 17.05 and 95 d signals were
Figure 8: HD 99492 orbital solutions for planets b and c derived from the HARPS-N YARARA-V1 dataset. _Left_ – Cartesian representation of the planetary system for a sample of 205 MCMC solutions. The star is in red, the orbit of HD 99492 b in blue and the orbit of HD 99492 c in orange. Additionally, the green area marks the beginning of the conservative habitable zone, which continues outwards. This was calculated from the prescription of Kopparapu et al. (2013). _Right_ – RV measurements phase-folded on the best fit Keplerian solutions, together with the model curves.
Figure 7: HD 99492 c posterior distributions for our different models and datasets. The grey, blue, orange and green distributions refer to our first, second, third, and fourth approaches to derive the posteriors, respectively. In the case of the third approach, we further show the distributions resulting from two different models. The plain line distribution was obtained from a model with two Keplerians, while the dashed line corresponds to a model with three Keplerians.
strongly detected in all the subsets, at a significance \(\geq\)9\(\sigma\). Finally, as opposed to these two planetary signals, the signal at 14 d is not consistent in time: it is only detected in the second half of the data. This further motivates us to discard this signal as a potential additional planet.
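For clarity, the FIP quoted above can be understood as one minus the fraction of trans-dimensional posterior samples containing a planet in a given bin (Hara et al., 2022). Schematically, and assuming a hypothetical list `samples` holding the per-sample period arrays from a sampler such as kima:

```python
# Illustrative FIP computation from trans-dimensional posterior samples, in
# the spirit of Hara et al. (2022); variable names are ours.
import numpy as np

def fip(samples, period_lo, period_hi):
    in_bin = [np.any((p > period_lo) & (p < period_hi)) for p in samples]
    tip = np.mean(in_bin)      # "true inclusion probability"
    return 1.0 - tip

# A FIP of 0.04 for the bin around 95.2 d means that 96% of the posterior
# solutions include a planet at that period.
```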
### Transit search
We analysed the TESS photometry in search of transit signals. To treat the instrumental systematics, we extracted our custom light curve from the 2-min cadence pixel files with lightkurve, following the process explained in Sect. 2.2.1. In preparation for the transit search, we corrected our light curve using a combination of multi-scale and spike CBV. Indeed, we found that this model performs best at minimising the Combined Differential Photometric Precision (CDPP) metric, a measure of the remaining scatter in the light curve expressed in parts-per-million (ppm). Our corrected light curve presents a CDPP of 123 ppm.
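A minimal sketch of such an extraction and CBV correction with lightkurve is given below; the CBVCorrector options shown are indicative only and may differ between lightkurve versions, and the multi-scale + spike choice simply mirrors the text:

```python
# Hedged sketch of the light-curve extraction and CBV correction with
# lightkurve; options are illustrative, not this paper's exact settings.
import lightkurve as lk
from lightkurve.correctors import CBVCorrector

tpf = lk.search_targetpixelfile("HD 99492", sector=45, exptime=120).download()
lc = tpf.to_lightcurve(aperture_mask=tpf.pipeline_mask)

cbvc = CBVCorrector(lc)
corrected = cbvc.correct(
    cbv_type=["MultiScale.1", "MultiScale.2", "MultiScale.3", "Spike"],
    cbv_indices=[[1, 2], [1, 2], [1, 2], "ALL"],
)
```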
As a second step, we removed the remaining stellar systematics from our corrected light curve. We applied a detrending via spline fitting using keplersplinev2\({}^{6}\), leaving free the parameter describing the time-scale of variation of the spline and only imposing a lower boundary of 0.5 day. The fit converged to a time-scale of 0.74 day, and the modelled trend was removed from our light curve. The resulting detrended light curve is presented in the Appendix, Fig. B.1 (top panel).
Footnote 6: [https://github.com/avanderburg/keplersplinev2](https://github.com/avanderburg/keplersplinev2)
Footnote 7: [https://transitleastsquares.readthedocs.io/en/latest](https://transitleastsquares.readthedocs.io/en/latest)
We undertook a multi-transit search in this detrended light curve, carried out with the transitleastsquares software\({}^{7}\). The search was performed via the computation of a box least squares (BLS) periodogram, with orbital periods comprised between 2 and 20 days. We did not find any significant transit signal in this period range. From the estimated minimum mass of planet b, the mass-radius relationship for rocky planets\({}^{8}\) from Otegi et al. (2020), and the stellar radius reported on exofop\({}^{9}\), we estimated a transit depth of \(\sim\)1000 ppm. We do not observe any hint of a transit signal at the period and phase of planet b estimated from our RV analysis. Concerning planet c, the expected transit time occurred at the beginning of sector 45. Under the hypothesis of a rocky composition, the planet would induce a transit depth of 850 ppm on its parent star, which we do not see in the data. For both planets, the hypothetical transit depths are large enough compared to the light-curve noise that they would be detected. Therefore, we rule out the transiting configuration of HD 99492 b. Concerning the outer planet, the TESS observations do not cover all the orbital phases, and given the uncertainty on the transit time, it is possible that a potential transit was missed; this is however unlikely, given the low transit probability of such a long-period planet (\(\sim\)1%). Details of the transit searches can be found in Appendix B and Fig. B.1.
Footnote 8: While the minimum mass of planet b points towards a gas-rich composition, here we explored the unfavourable case of a rocky composition, so as to estimate the transit detectability in an unfavourable scenario. This comment also applies to the transit searches presented in Sect. 5.2 and 6.2.
Footnote 9: [https://exofop.ipac.caltech.edu/tess/](https://exofop.ipac.caltech.edu/tess/)
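Returning to the multi-transit search above, a minimal sketch with the transitleastsquares package follows; the flat, noise-only light curve is a synthetic stand-in for the detrended TESS photometry:

```python
# Minimal sketch of the multi-transit search with transitleastsquares (TLS).
import numpy as np
from transitleastsquares import transitleastsquares

rng = np.random.default_rng(0)
time = np.linspace(0.0, 55.0, 20000)              # days
flux = 1.0 + rng.normal(0.0, 1.2e-4, time.size)   # ~120 ppm scatter, no transit

model = transitleastsquares(time, flux)
results = model.power(period_min=2.0, period_max=20.0)
print(results.period, results.SDE)  # best trial period and detection strength
```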
We also analysed the TESS sectors 45 and 46 in search of potential single-transit features. To proceed, we fitted our detrended light curve with two distinct models: a no-planet and a one-planet model. We compared the posterior probabilities of the models using the true inclusion probability (TIP) detection criterion (Hara et al., 2022), which provides us with a rigorous transit detection threshold. A TIP of 1 would favour the one-planet model with a probability of 100% (TIP=1-FIP). This framework was first designed for RV data, and was later applied to transit analyses (Hoyer et al., 2022; Ehrenreich et al., 2023; Wilson et al., 2021). To aid convergence, we performed the TIP analyses on one-day slices of the TESS sectors. We ran two sets of analyses employing different priors, first to search for any transit signal and subsequently to probe the transiting nature of HD 99492 b: wide planetary priors in the first case, priors constrained on the HD 99492 b parameters as reported in this study in the second. We did not find any preference for the one-planet model, as the TIP never exceeds 0.5 across the entire light curve (cf. Appendix - Fig. B.2). Therefore, we conclude on the absence of transit signals in this dataset.
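Schematically, writing the marginal likelihoods of the no-planet and one-planet models as \(Z_{0}\) and \(Z_{1}\) and the prior odds as \(p_{1}/p_{0}\), the TIP of a slice reduces to the posterior probability of the one-planet model, TIP\(=p_{1}Z_{1}/(p_{0}Z_{0}+p_{1}Z_{1})\). The helper below illustrates that relation; it is not the implementation of Hara et al. (2022).

```python
# Schematic TIP model comparison for a light-curve slice.
import numpy as np

def tip(log_Z0, log_Z1, log_prior_odds=0.0):
    log_odds = (log_Z1 - log_Z0) + log_prior_odds
    return 1.0 / (1.0 + np.exp(-log_odds))

print(tip(-1000.0, -1001.5))  # < 0.5: the no-planet model is favoured
```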
## 5 HD 147379 data analysis
HD 147379 was first found to harbour a planet by the CARMENES team (Reiners et al., 2018). With the help of 114 CARMENES RV measurements and an additional 30 HIRES RV, they detected a planet with a minimum mass of \(\sim\)25 \(M_{\oplus}\) on an 85.5-day orbit. This detection was independently confirmed by the SOPHIE team, with a dataset of 163 SOPHIE RV measurements (Hobson et al., 2018). A notable fact about HD 147379 b is that it lies in the conservative habitable zone of its host star, as defined by Kopparapu et al. (2013).
### RV analysis
We analysed this system using exclusively the 165 nightly binned HARPS-N RV measurements. As with HD 99492, we first looked for activity signatures in the modified YARARA-V1 dataset - that is, YARARA-V1 where the stellar activity was injected back into the RV. Besides the long-term trend due to the stellar magnetic cycle, the most significant signal is detected in the S-index indicator and peaks at 21.6 days. Furthermore, we observe a weak correlation between the RV and the S-index measurements, with a correlation coefficient of R=0.39 (cf. Appendix - Fig. 3).
In a second step, we analysed the RV in search of planetary signals. We employed a correlated noise model with SPLEAF, using both the RV and S-index time-series. We also included in the model an offset, a white noise term and a linear drift term\({}^{10}\). Wide uniform priors were used, and we initialised the correlated noise with a period of 22 days and a large correlation decay time-scale of 500 days. After performing a first fit - again through the successive addition of the different correlated noise parameters to the model - we found a significant signal at 86.5 days in the periodogram of the residuals (Fig. 9). We included the latter in our model as a Keplerian, and fitted the RV time-series again with the new model. In the periodogram of the residuals, we do not find additional significant signals, and the correlated noise period is fitted to 21.8 days, which is the expected stellar rotation period. The most prominent signal in the residuals has a period of 7.1 days, about a third of the estimated stellar rotation period, with FAP=3.8%. This signal is pushed down to
a FAP of 23% when we use the augmented version of SPLEAF, which further develops the periodic component of the kernel to account for the third harmonic. This result further supports a stellar rotation origin for the 7.1 d signal. Furthermore, we do not find any signal around 500 days, which was reported by Hobson et al. (2018) using the SOPHIE data alone. Therefore, we stop our planet search and consider a model composed of one Keplerian only. The exploration of this model was performed with an MCMC, using 500k iterations and leading to a final sample of 3298 independent solutions. The results are presented in Table 3. The stellar rotation period that we estimate (\(P_{GP}\)=21.9\(\pm\)0.4 days) is consistent with previous investigations, and our correlated noise model converges to a correlation decay time-scale of 28.3\(\pm\)4.4 days. The minimum mass of HD 147379 b is found to be smaller than both the estimate of Reiners et al. (2018), 24.7\({}^{+1.8}_{-2.4}\) \(M_{\oplus}\), and that of Hobson et al. (2018), 28.6\(\pm\)1.5 \(M_{\oplus}\): our result suggests 21.6\(\pm\)1.1 \(M_{\oplus}\), or about 1.26 times the mass of Neptune, and was derived from a careful mitigation of stellar rotation features. The bottom plot of Fig. 9 presents the RV measurements folded on the orbital period and phase of HD 147379 b.
As a comparison, we searched for planetary signals with the HARPS-N YARARA-Shell post-process. After deriving the cleaned RV, we computed a periodogram which unambiguously revealed the 86.5 d planet (FAP=3.3\(\times\)10\({}^{-33}\)). After fitting the latter with a Keplerian model and computing the periodogram of the residual RV, we did not find any signal reaching our detection threshold of FAP=0.1%. The strongest peak stands at a period of 12.3 d, with FAP=0.5%. Because of this too-large FAP and the weakness of the signal in the V1-SPLEAF analysis (FAP>10%), we did not retain this signal as a planetary candidate. Further observations are needed to shed light on it.
Finally, we also analysed the full dataset composed of the HARPS-N (DRS), CARMENES, SOPHIE and HIRES RV. This ensemble consists of 458 RV measurements\({}^{11}\). However, we note that the CARMENES and SOPHIE public data do not contain spectroscopic activity indicators, but only the RV measurements. Therefore, we could not carry out a reliable modelling of the stellar activity. We only applied a linear drift fit, in order to account for an observable trend. We computed the GLS periodogram of the residuals of this combined dataset, and found the 86.5 d planet as the most significant peak. After its subtraction from the time-series and the computation of the updated periodogram, we found several periodic signals with a FAP below 0.1%, at 10.6, 12.3 and 21.4 days. While the signals at 10.6 and 21.4 days are attributed to stellar rotation (they correspond to 0.5 and 1 times the stellar rotation period, respectively), the origin of the signal at 12.3 days is less clear. Nevertheless, we note that fitting it with a Keplerian yields a large eccentricity of 0.25. With the HARPS-N data alone, this remaining signal is not only less significant, but also leads to a large orbital eccentricity of 0.62 when fitted with a Keplerian. As a result, we urge strong caution about the planetary nature of this 12.3 d signal, and presently interpret it as an artefact of stellar activity. The Keplerian orbital parameters of HD 147379 b derived from the combined dataset suggest a minimum mass of \(m\sin i\)=25.2 \(M_{\oplus}\) (K=5.25 m s\({}^{-1}\)). In Fig. 11, we present the RV folded on this Keplerian solution. Again, we stress that these results do not include a careful modelling of stellar activity. Therefore, our analysis of the HARPS-N YARARA-V1 data alone together with SPLEAF is favoured, and its results define our final solution for the orbital parameters of HD 147379 b.
Footnote 11: Nine measurements from the SOPHIE spectrograph were neglected following Hobson et al. (2018), because of an uncertainty larger than 35 m/s on the RV measurement.
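The offsets-plus-drift correction applied to the combined dataset amounts to a weighted linear least-squares fit; a sketch with hypothetical variable names:

```python
# Sketch of the offsets + linear drift correction on a combined RV dataset;
# `inst` holds an instrument label for each measurement (names are ours).
import numpy as np

def fit_offsets_and_drift(t, rv, rv_err, inst):
    instruments = np.unique(inst)
    # Design matrix: one offset column per instrument plus one drift column.
    columns = [(inst == name).astype(float) for name in instruments]
    columns.append(t - t.mean())
    A = np.column_stack(columns)
    w = 1.0 / rv_err
    coef, *_ = np.linalg.lstsq(A * w[:, None], rv * w, rcond=None)
    return coef, rv - A @ coef  # best-fit parameters and residual RV
```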
### Transit search
We scrutinised the full TESS light curve in search of any transiting signal. To proceed, we extracted the light curve of each of the 28 sectors following the same procedure as described in Sect. 2.2.1. After downloading the CBV of those sectors, we found that the combination of multi-scale and spike CBV provides the best corrections in terms of minimising the CDPP metric, with an average CDPP of \(\sim\)90 ppm over the sectors. We further detrended the resulting light curve from any remaining stellar systematics with keplersplinev2. Our detrending fit converged to a spline variation time-scale of 0.9 d.
We then undertook several successive transit searches, for orbital periods between 2 and 20 days, between 20 and 70 days, and between 70 and 100 days. In none of the BLS periodograms did we find significant peaks. Concerning HD 147379 b, using the mass-radius relationship of the rocky population from Otegi et al. (2020) - considering the unfavourable case of a high bulk density - and the stellar radius provided on exofop, we estimated the transit depth at \(\sim\)1200 ppm. The planet would hence be detected given the large time-span of the TESS observations, and we conclude that it does not transit its parent star. We refer to the Appendix - Fig. B.3 for more details.
Figure 9: HD 147379 planet search. _Top and Middle panels_ – GLS periodograms of the HARPS-N YARARA-V1 RV time-series, in a model with correlated noise and with no Keplerian in the model (\(k_{0}\)) and one Keplerian (\(k_{1}\)) at a period of 86.5 days. _Bottom panel_ – RV measurements phase-folded on the 86.5 days signal.
## 6 HD 190007 data analysis
A planet candidate was recently suggested to orbit HD 190007 (Burt et al., 2021). The authors used the combined RV dataset of APF and HIRES in order to determine the planetary characteristics. They found a body that orbits its star with a period of 11.7 days and a minimum mass of 16.46 \(M_{\oplus}\). The stellar rotation is visible in the data, and they modelled the latter using a Keplerian with a period consistent with the photometric variability (\(\sim\) 29 days).
### RV analysis
We reviewed this solution by adding our set of 37 nightly binned HARPS-N measurements and analysing the combined dataset. In total, we used 159 RV measurements. Given the small number of HARPS-N RV measurements, we could not reliably process them with YARARA. Indeed, the algorithms behind the post-processing require several dozen observations spanning different barycentric Earth RV (BERV) in order not to over-fit the data. Also, to work properly, having several observations per BERV bin element is mandatory, a condition not satisfied in this case. Instead, we used the data derived from the new DRS (Dumusque et al., 2021). After correcting for the different instrument offsets and accounting for drifts, the periodogram of the RV time-series clearly shows the presence of the 11.7 days planet. Additionally, a significant signal is visible with a period of \(\sim\)28 days, consistent with the stellar rotation period. This periodogram is presented in the top left plot of Fig. 10. At the bottom left, we show the periodogram of the residual RV, after subtraction of a Keplerian model with a period of 11.7 days. Significant power is still observed at the rotation period and at half of it. Burt et al. (2021) fitted the 28 days signal with another Keplerian.
Our three datasets - HARPS-N, HIRES and APF - contain the S-index indicator. We used this information to train a correlated noise model, applied simultaneously to the RV time-series. Again, we employed SPLEAF to model the correlated noise. Given the tight constraints on the stellar rotation period, we set a uniform prior on the correlated noise period between 25 and 30 days, and set priors on the other parameters of the correlated noise model according to Table 2. We initialised the model with a correlated noise period of 28 days and a decay time-scale of 200 days. Besides the correlated noise parameters, we also included in the model the instrumental offsets and a white noise term. We fitted the data with all the parameters of the model, and present the resulting periodogram of the RV residuals in the top right plot of Fig. 10. The planetary signal at 11.7 days is more significant, with a FAP of 3\(\times\)10\({}^{-10}\). We further note that, as expected, the signal at \(\sim\)28 days disappeared. As a second step, we included a Keplerian with a period of 11.7 days in the model and performed a new fit. The bottom right plot of Fig. 10 shows the periodogram of the new residuals: unambiguously, no significant signal remains. Therefore, the correlated noise modelling provides a solution cleaned from the stellar rotation effects. It is able to model the main stellar rotation period and its harmonic better than the single Keplerian previously used by Burt et al. (2021). The minimum mass of the planet and the orbital eccentricity are estimated at 15.5\(\pm\)1.3 \(M_{\oplus}\) and 0.14\(\pm\)0.08, respectively. This is in agreement with Burt et al. (2021), who estimated a minimum mass of 16.5\(\pm\)1.7 \(M_{\oplus}\) and an eccentricity of 0.14\(\pm\)0.07. We present the updated planetary parameters in Table 3, and the phase-folded RV time-series in Fig. 11.
### Transit search
We analysed the photometry of TESS sector 54 in search of transit signals. Again, we retrieved the 2-min cadence target pixel files and downloaded the CBV of sector 54. We applied aperture photometry and corrected the light curve for instrumental systematics via a joint fit of multi-scale and spike CBV, together with a background subtraction at the pixel level (as explained in Sect. 2.2.1). Special care with the detrending was needed due to the rapidly varying variability of the star: when left free, the parameter describing the time-scale of variation of the spline converged to the lower limit that we imposed (i.e. 0.5 d). We undertook several BLS transit searches between orbital periods of 0.5 and 13 days, exploring various detrend time-scales in the interval [0.5, 1.5] days with the parameter bkspace of the function keplersplinev2. Irrespective of the detrend time-scale, we did not find any significant signal in the BLS periodogram.
We computed the expected transit times, depth and duration of planet b. To proceed, we used the results of our RV analysis, the stellar radius reported on exofop, and the mass-radius relationship for rocky planets taken from Otegi et al. (2020) - again, we tested the transit detectability in the unfavourable case of a high bulk density, noting however that HD 190007 b is more likely gas-rich. We estimated the transit depth at 920 ppm, which is significant compared to the scatter observed in our detrended light curve. However, we did not find hints of transits matching the expected ephemerides of planet b. We refer to the Appendix - Fig. B.4 for more details.
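As an illustration of this back-of-the-envelope estimate, the snippet below combines the quoted minimum mass with the rocky mass-radius relation of Otegi et al. (2020) (coefficients quoted from memory) and an indicative stellar radius; these inputs are ours, not the paper's exact values:

```python
# Back-of-the-envelope transit depth for the unfavourable rocky case,
# assuming R = 1.03 * M**0.29 (Otegi et al. 2020, from memory) and an
# illustrative stellar radius.
msini = 15.5                          # HD 190007 b minimum mass [M_Earth]
r_planet = 1.03 * msini**0.29         # [R_Earth]
r_star = 0.70                         # [R_Sun], illustrative value
R_EARTH_OVER_R_SUN = 0.009168
depth_ppm = (r_planet * R_EARTH_OVER_R_SUN / r_star) ** 2 * 1e6
print(f"{depth_ppm:.0f} ppm")         # ~890 ppm, near the quoted 920 ppm
```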
In a second approach, we scrutinised TESS sector 54 in search of potential single-transit features. To proceed, we applied the same single-transit search process as the one employed for HD 99492 (cf. Sect. 4.6), based on the TIP framework. Again, we ran two distinct analyses: one with wide priors, and one with priors constrained on the ephemerides of HD 190007 b as derived from our RV modelling. In both cases, the absence of planetary transits was unambiguously favoured. Therefore, we rule out the presence of transits in the data.
## 7 Discussion and conclusions
In this work, we presented the HARPS-N spectroscopic data acquired on three stars, HD 99492, HD 147379 and HD 190007, all of them already known to harbour planets. We reviewed these planetary systems with the help of advanced data analysis tools. YARARA-V1 provided us with HARPS-N datasets cleaned from instrumental effects. Additionally, we used the generalised SPLEAF model to carefully and efficiently mitigate stellar activity. The combination of these two tools was shown to provide the best results with respect to other strategies, as discussed for HD 99492 in Sect. 4. We updated the orbit of the known planet HD 99492 b (\(P\)=17.05 days), and unambiguously detected a second planetary companion, HD 99492 c, which we confirmed independently using tweaks. This second planet orbits its parent star in 95.2 days, and has an estimated minimum mass of 17.9\(\pm\)1.3 \(M_{\oplus}\). The analysis of the 165 nightly binned HARPS-N measurements of HD 147379 did not lead to a new planet detection, and we did not find any transit signal in the extensive set of 28 TESS sectors. However, we updated the parameters of HD 147379 b, and notably found a minimum mass smaller than in previous studies (Reiners et al., 2018; Hobson et al., 2018) - our new estimate, peaking at 21.6\(\pm\)1.1 \(M_{\oplus}\), is 2.6\(\sigma\) away from the nearest
value provided by Reiners et al. (2018). Finally, we reviewed the system HD 190007 with the addition of our 37 nightly binned HARPS-N measurements to the publicly available HIRES and APF data. We performed a correlated noise analysis to account for the strong stellar rotation signal. We updated the solution of the known planet at \(P=11.7\) days, and obtained a minimum mass of \(15.5\pm 1.3~{}M_{\oplus}\). The results from our review of the three planetary systems are presented in Table 3. We also carried out a systematic transit search in the available TESS sectors, and did not find transit signals.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline Parameter & Units & HD 99492 & HD 147379 & HD 190007 \\ \hline Offsets (\(\gamma\)) and jitter (\(\sigma\)) & & & & \\ \(\gamma_{\mathrm{HARPS-N}}\) & m s\({}^{-1}\) & \(-2.18\pm 0.69\) & \(-113.2\pm 22.8\) & \(0.94\pm 0.97\) \\ \(\gamma_{\mathrm{APF}}\) & m s\({}^{-1}\) & / & / & \(-1.44\pm 0.44\) \\ \(\gamma_{\mathrm{HIRES}}\) & m s\({}^{-1}\) & / & / & \(-1.37^{+1.37}_{-1.40}\) \\ \(\sigma_{RV}\) & m s\({}^{-1}\) & \(1.56\pm 0.10\) & \(0.98^{+0.13}_{-0.12}\) & \(2.57^{+0.30}_{-0.27}\) \\ Correlated noise & & & & \\ \(P_{GP}\) & days & \(43.65^{+0.47}_{-0.56}\) & \(21.94^{+0.42}_{-0.44}\) & \(30.71^{+0.29}_{-0.72}\) \\ \(\rho_{GP}\) & days & \(152.7^{+22.5}_{-20.7}\) & \(28.31^{+4.43}_{-4.12}\) & \(39.0^{+7.1}_{-5.7}\) \\ \(\eta_{GP}\) & & \(0.94^{+0.16}_{-0.13}\) & \(0.61^{+0.10}_{-0.09}\) & \(0.97^{+0.17}_{-0.15}\) \\ \hline \end{tabular}
\begin{tabular}{l l c c c c} \hline Parameter & Units & HD 99492 b & HD 99492 c & HD 147379 b & HD 190007 b \\ \hline \(P\) & days & \(17.0503\pm 0.0016\) & \(95.233^{+0.098}_{-0.096}\) & \(86.58\pm 0.14\) & 11.724128(99) \\ \(K\) & m s\({}^{-1}\) & \(7.05\pm 0.18\) & \(2.79^{+0.22}_{-0.21}\) & \(4.49\pm 0.22\) & \(4.91\pm 0.45\) \\ \(e\) & & \(0.034^{+0.025}_{-0.021}\) & \(0.063^{+0.080}_{-0.040}\) & \(0.063^{+0.047}_{-0.038}\) & \(0.136^{+0.095}_{-0.080}\) \\ \(\omega\) & deg & \(154.3^{+4.68}_{-48.0}\) & \(137.5^{+10.77}_{-62.5}\) & \(130.1^{+58.3}_{-48.1}\) & \(2.3^{+50.5}_{-46.3}\) \\ \(\lambda_{0}\) & deg & \(276.4\pm 5.2\) & \(351.2\pm 10.3\) & \(204.6^{+20.1}_{-19.7}\) & \(78.1^{+8.2}_{-7.9}\) \\ \(m\sin i\) & \(M_{\oplus}\) & \(25.5\pm 0.6\) & \(17.9\pm 1.3\) & \(21.6\pm 1.1\) & \(15.5^{+1.2}_{-1.3}\) \\ \hline \end{tabular}
\end{table}
Table 3: Final results of the fits on HD 99492, HD 147379, and HD 190007. We report the median values from the posterior distributions of the MCMC explorations. The uncertainties on the parameters are 68.27% confidence intervals.
Figure 10: HD 190007: Periodograms of the full RV time-series. _Top left_ – No Keplerian (\(k_{0}\)), and no correlated noise (CN) in the model. _Bottom left_ – One Keplerian (\(k_{1}\)) in the model, at a period of 11.7 days. _Top right_ – Inclusion of a CN model, no Keplerian. _Bottom right_ – Inclusion of both CN and one Keplerian models.
HD 99492 c and HD 147379 b have orbital periods and minimum masses that place them in an underpopulated region of the parameter space. Fig. 12a presents the distribution of known exoplanets with measured minimum masses \(m\sin i\) and large orbital periods (\(P>50\) days). This population is composed of 962 planets, according to the Exoplanet Archive\({}^{12}\) as of March 22, 2023. We note that this distribution also includes long-period transiting planets whose masses are constrained with RV measurements. The positions of HD 99492 c and HD 147379 b in this plot are represented by vertical grey and green lines, respectively. For comparison, we indicate the positions of Neptune, Saturn and Jupiter in this histogram. This distribution unambiguously reveals a larger number of long-period giant exoplanets. Naturally, observational biases play an important role in this picture, since Neptune-mass exoplanets on long orbital periods are technically more challenging to detect. However, a closer inspection of this histogram may reveal a potential lack of planets between 30 and 50 Earth masses, giving rise to two distinct populations that observational biases can hardly explain.
Footnote 12: [https://exoplanetarchive.ipac.caltech.edu/](https://exoplanetarchive.ipac.caltech.edu/)
To estimate the statistical significance of this bimodality, and hence the existence of two planet populations, we tested the similarity between this distribution and a single-mode Gaussian distribution. To proceed, we defined the large-mass sub-sample by setting a mass threshold at 50 \(M_{\oplus}\), and fitted a Gaussian law to this sub-sample. We then generated a random sample following this law, with the same size as the full distribution, and computed the p-value between this Gaussian and the blue histogram. We repeated this procedure 10 000 times, and derived a distribution of p-values. The maximum p-value obtained is 3\(\times\)10\({}^{-4}\), which indicates that all 10 000 tests support the hypothesis that the planet distribution and the fitted Gaussian are distinct. As a second statistical test, we performed a Z-test between the sample of planets with \(m\sin i\)\(<50\) \(M_{\oplus}\) (red shaded area) and the sample with \(m\sin i\)\(>50\) \(M_{\oplus}\) (grey shaded area) to quantify their mutual difference. This test compares the means and standard deviations of two samples, and indicates whether these samples significantly differ from each other. We fitted both the large-mass and small-mass sub-samples with a Gaussian, and applied the Z-test on these adjusted laws. The test returned a value of 2.94, which points towards a significant difference between the two populations. We repeated the Z-test with a new separation threshold of 30 \(M_{\oplus}\) between the two populations, and obtained a value of 3.06, again indicating a significant difference. Therefore, our analyses support the bimodality of the planet distribution.
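For concreteness, the two tests can be sketched as follows; the KS statistic is used here as a stand-in for the unspecified p-value computation, so this is an illustrative rendition rather than the exact procedure used above:

```python
# Sketch of the bootstrap p-value test and the Gaussian Z-test described
# in the text; both are illustrative, not the paper's implementation.
import numpy as np
from scipy import stats

def z_test(sample_a, sample_b):
    mu_a, sd_a = np.mean(sample_a), np.std(sample_a)
    mu_b, sd_b = np.mean(sample_b), np.std(sample_b)
    return abs(mu_a - mu_b) / np.sqrt(sd_a**2 / len(sample_a)
                                      + sd_b**2 / len(sample_b))

def bootstrap_pvalues(masses, threshold=50.0, n_trials=10_000, seed=0):
    rng = np.random.default_rng(seed)
    high = masses[masses > threshold]
    mu, sd = np.mean(high), np.std(high)
    pvals = np.empty(n_trials)
    for i in range(n_trials):
        synthetic = rng.normal(mu, sd, size=masses.size)
        pvals[i] = stats.ks_2samp(masses, synthetic).pvalue
    return pvals  # uniformly tiny p-values argue against a single Gaussian
```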
We bring additional support to the bimodality in the bottom panel of Fig. 12a. It presents, among the considered population, the percentage of planets found in multi-planet systems. The two populations also seem to differ in system architecture. Despite the large uncertainties in the small-mass population, caused by the small number of detections, we observe that these planets are most often found in multi-planet systems. For higher-mass planets, this ratio is significantly smaller. These results support previous RV-detected planet statistics. Notably, Winn & Fabrycky (2015) pinpoint a bimodality in the distribution of minimum masses, without distinction of orbital period - see their Fig. 4. In that same figure, the authors illustrate, at the population level, the increase in eccentricity scatter for larger minimum masses. Besides, among the exoplanet population, lower-multiplicity systems also present a larger eccentricity scatter (Wright et al., 2009; Limbach & Turner, 2015). Through the link of orbital eccentricity, we hence deduce that large-mass planets tend to be found in systems of smaller multiplicity, according to the RV surveys. The results that we present in the lower panel of Fig. 12a corroborate these observations, since larger-mass planets seem to be more often found in single-planet systems. Again, while this hints at a trend, we stress that a treatment of detection biases is needed to confirm it.
We compared these results with the short-period planet population. Fig. 12b shows the population of exoplanets with measured minimum masses, this time focusing on short-period planets (\(P<50\) days). The double-peaked distribution is even more prominent than for the long-period population, and a similar trend with the system multiplicity is observed. We undertook a new Z-test for a threshold mass of 50 \(M_{\oplus}\), and obtained a value of 2.78, which confirms the likely bimodality of the distribution. We note that at short orbital periods, low-mass planets appear more common than giant planets; the opposite is observed for long-period planets. This observation is in line with classical theories of planet formation, in which gas giants form in cooler regions of the protoplanetary disk, as opposed to smaller-mass planets. However, we stress that detection biases likely alter this picture. Furthermore, we highlighted with the red histogram the short-period planet companions to the low-mass long-period planets located in the red zone of Fig. 12a; most of these planets have small masses too. Conversely, we highlighted with the grey histogram the short-period planet companions to the large-mass long-period planets inside the grey zone of Fig. 12a, which does not indicate a strong preference for large or small mass companions. Additionally, the red histogram constitutes most of the planet companions to low-mass long-period planets with a measured minimum mass, with 80 planets out of 94 companions in total, that is, 85%. The remaining 15% are long-period planets. As a result, most of the companions to the small-mass long-period planets have small masses and short periods.
Based on all these observations, and under the hypothesis that this system follows the observed trend, we would therefore expect HD 147379 to host at least a second planetary companion interior to HD 147379 b.
Figure 11: HD 190007: RV measurements phase-folded on the Keplerian solution of HD 190007 b.
We stress, however, that the statistics rely on small numbers and that observational biases could alter this picture. Continuing observational efforts using high-resolution spectrographs will help to augment these statistics and refine the architectures of planetary systems. Finally, we note that a similar bimodality is observed in the true mass distribution of exoplanets.
We put these results in the light of the Kepler and TESS transit surveys in Fig. 13. The histogram presents the population of confirmed Kepler and TESS planets as a function of radius, in the range [3.5, 15] \(R_{\oplus}\), so as to encompass the Neptune-size regime and above. In these data, there might also be a tentative hint of two distinct populations, separated around 7 \(R_{\oplus}\), which corresponds to a mass of 38 \(M_{\oplus}\) using the mass-radius relationship for volatile-rich planets (Otegi et al. 2020). The location of this boundary hence overlaps with the ones observed in Figs. 12a and b. Nevertheless, the separation here is too shallow to draw further conclusions. Moreover, there is no clear trend in the proportion of planets in multi-planet systems as a function of planet radius in the considered radius range, as illustrated in the bottom panel. Different factors could explain this discrepancy with the RV surveys, such as a non-homogeneous scaling law for the planets' bulk densities, or the fact that transit and RV surveys probe different types of systems. This is briefly discussed in Leleu et al. (2023), in the context of the mass dichotomy between the RV and transit-timing variations (TTV) techniques.
This work has once again proven the importance of long-term RV surveys with stable high-precision spectrographs for the detection and characterisation of warm planets, which constitute a still under-explored family of exoplanets. Additionally, the use of advanced tools for the removal of instrumental systematics and for the modelling of correlated noise sheds light on the noise patterns, and pushes back the barriers of small-mass planet detections.
###### Acknowledgements.
This work is based on observations made with the Italian _Telescopio Nazionale Galileo_ (TNG) operated by the _Fundación Galileo Galilei_ (FGG) of the _Istituto Nazionale di Astrofisica_ (INAF) at the _Observatorio del Roque de los Muchachos_ (La Palma, Canary Islands, Spain). The HARPS-N project was funded by the Prodex Program of the Swiss Space Office (SSO), the Harvard University Origin of Life Initiative (HUOLI), the Scottish Universities Physics Alliance (SUPA), the University of Geneva, the Smithsonian Astrophysical Observatory (SAO), the Italian National Astrophysical Institute (INAF), the University of St Andrews, Queen's University Belfast, and the University of Edinburgh. This work has been carried out within the framework of the National Centre of Competence in Research PlanetS supported by the Swiss National Science Foundation under grants 51NF40_182901 and 51NF40_205606. The authors acknowledge the financial support of the SNSF. This work has
Figure 12: Exoplanet population with a minimum mass measurement msini. Panel a: Sub-population with large orbital periods (P>50 days). This sample contains 428 exoplanets. _Top_ – Histogram of that population, with respect to the planet minimum mass. The vertical grey and green lines identify the position of HD 99492 c and HD 147379 b, respectively. _Bottom_ – Proportion of planets that are part of multi-planet systems, among the considered sub-population. Panel b: Sub-population with small orbital periods (P<50 days). _Top_ – Distribution of that sub-population according to msini (in orange). The red histogram indicates companions to the low-mass long-period planets (red zone in Panel a). The grey histogram focuses on companions to the large-mass long-period planets (grey zone in Panel a). _Bottom_ – Proportion of planets part of multi-planet systems.
Figure 13: Exoplanet population from the Kepler and TESS surveys, distributed in planet radius between 3.5 and 15 \(R_{\oplus}\). This population is composed of 632 planets as of March 22, 2023. _Top_ – Histogram of the considered populations. _Bottom_ – Proportion of planets part of multi-planet systems.
made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. M.C. acknowledges the SNSF support under grant 5900PT.211204. R.D.H. is funded by the UK Science and Technology Facilities Council (STFC)'s Ernest Rutherford Fellowship (grant number ST/V004735/1). FPE and CLO would like to acknowledge the Swiss National Science Foundation (SNSF) for supporting research with HARPS-N through the SNSF grants nr. 140649, 152721, 166227 and 184618. The HARPS-N Instrument Project was partially funded through the Swiss ESA-PRODEX Programme. TGW acknowledges support from STFC consolidated grant number ST/V000861/1 and UKSA grant number ST/R0032031/1. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement SCORe No 851555). A.S. acknowledges financial support from the agreement ASI-INAF n.2018-16-HH.0. Finally, the authors thank the referee for their insightful comments and suggestions on the paper.
|
2308.02106
|
Radiatively-cooled quantum microwave amplifiers
|
Superconducting microwave amplifiers are essential for sensitive signal
readout in superconducting quantum processors. Typically based on Josephson
Junctions, these amplifiers require operation at milli-Kelvin temperatures to
achieve quantum-limited performance. Here we demonstrate a quantum microwave
amplifier that employs radiative cooling to operate at elevated temperatures.
This kinetic-inductance-based parametric amplifier, patterned from a single
layer of high-$T_\mathrm{c}$ NbN thin film,
maintains a high gain and meanwhile enables low added noise of 1.3 quanta when
operated at 1.5 Kelvin. Remarkably, this represents only a 0.2 quanta increase
compared to the performance at a base temperature of 0.1 Kelvin. By uplifting
the parametric amplifiers from the mixing chamber without compromising readout
efficiency, this work represents an important step for realizing scalable
microwave quantum technologies.
|
Mingrui Xu, Yufeng Wu, Wei Dai, Hong X. Tang
|
2023-08-04T02:01:40Z
|
http://arxiv.org/abs/2308.02106v1
|
# Radiatively-cooled quantum microwave amplifiers
###### Abstract
Superconducting microwave amplifiers are essential for sensitive signal readout in superconducting quantum processors. Typically based on Josephson Junctions, these amplifiers require operation at milli-Kelvin temperatures to achieve quantum-limited performance. Here we demonstrate a quantum microwave amplifier that employs radiative cooling to operate at elevated temperatures. This kinetic-inductance-based parametric amplifier, patterned from a single layer of high-\(T_{\mathrm{c}}\) NbN thin film, maintains a high gain and meanwhile enables low added noise of 1.3 quanta when operated at 1.5 Kelvin. Remarkably, this represents only a 0.2 quanta increase compared to the performance at a base temperature of 0.1 Kelvin. By uplifting the parametric amplifiers from the mixing chamber without compromising readout efficiency, this work represents an important step for realizing scalable microwave quantum technologies.
Superconducting parametric amplifiers are critical components for high-sensitivity readout of superconducting quantum processors [1]. To achieve fast, high-fidelity qubit readout [2; 3], the first-stage amplifier must operate with near quantum-limited noise performance [4]. Traditionally, Josephson Junction-based amplifiers operated at milli-Kelvin temperatures have been used for this purpose [5; 6; 7; 8]. However, with the increasing number of superconducting qubits [9; 10], the demand for an increasing number of readout lines poses significant challenges in terms of space occupation and power consumption at the mixing chamber (MXC) stage of dilution fridges [11; 12; 13; 14]. Despite the fact that the amplifier itself may have a small footprint, the magnetic shield, isolators, and circulators associated with it still occupy considerable space. Similarly, while the power required to drive the parametric amplifier may be insignificant, the attenuators necessary for reducing background noise contribute to significant power dissipation. These factors collectively pose challenges to the scalability of microwave quantum technologies.
An ideal superconducting parametric amplifier should maintain quantum-limited noise regardless of its material temperature. However, current state-of-the-art parametric amplifiers are typically limited to operation at milli-Kelvin temperatures for several reasons. First, the superconducting transition temperature sets a hard ceiling on the operating temperature of the devices. Second, the intrinsic loss within the amplifier introduces excessive fluctuations to the signal by thermalizing the mode temperature to the material temperature of the device. The latter is further exacerbated by the thermal quasiparticle population as \(k_{B}T\) becomes comparable to the superconducting bandgap \(\hbar\Delta\). Therefore, the use of Josephson-junction-based amplifiers is restricted to temperatures well below 1 K, the critical temperature of the aluminum from which they are fabricated. Hence, developing microwave parametric amplifiers from materials with a high superconducting transition temperature and a high intrinsic quality factor within the desired temperature range would be preferable.
Kinetic inductance nonlinearity based on single layer NbN, NbTiN, and granular Al materials has drawn increasing attention in recent years [15; 16; 17]. By utilizing a nanowire design, amplifiers predicated on kinetic inductance have demonstrated high gain with quantum-limited noise [18; 19; 20]. Traveling-wave amplifiers have also been demonstrated with excellent performance compared with their Josephson junction counterparts [21; 22; 23; 24; 25]. Notably, these materials possess high critical temperatures, typically around 10 K for NbN and NbTiN [25; 26; 27]. This characteristic facilitates the operation of the kinetic-inductance traveling-wave parametric amplifier (KI-TWPA) at 4 K [25]. As a result, amplifiers that incorporate these materials can sustain superconductivity even at higher temperatures.
In this paper, we introduce the nanobridge kinetic-inductance parametric amplifier (NKPA) [19] that leverages the radiative-cooling concept [28; 29; 30] to achieve ultralow-added-noise microwave signal amplification. We are the first to demonstrate that, with radiative cooling in effect, the NKPA achieves near quantum-limited amplification performance with added noise of \(n_{\mathrm{add}}=1.33\pm 0.04\) at an operating temperature of 1.5 K. These findings not only solidify our understanding of added noise versus physical temperature of parametric amplifiers, but also introduce a new operating regime for superconducting parametric amplifiers. This advancement paves the way for enhanced scalability in superconducting quantum computing and many sensing applications.
In the context of parametric amplification, radiative cooling can reduce the added noise of a hot amplifier device (\(T_{\mathrm{dev}}\)) by using a cooling channel connected to a cold thermal bath (\(T_{\mathrm{src}}\)), as illustrated in Fig. 1(a). The thermal occupancy of the cavity mode \(n_{\mathrm{mode}}\) is related to the thermal occupancies of the physical bath \(n_{\mathrm{i}}\) and of the cold source \(n_{\mathrm{e}}\), each corresponding to \(T_{\mathrm{dev}}\) and \(T_{\mathrm{src}}\) through the Bose-Einstein distribution. The mode occupancy can be expressed as [28]
\[n_{\mathrm{mode}}=\frac{\kappa_{\mathrm{e}}}{\kappa}n_{\mathrm{e}}+\frac{\kappa_{ \mathrm{i}}}{\kappa}n_{\mathrm{i}} \tag{1}\]
where \(\kappa_{\mathrm{e}}\) and \(\kappa_{\mathrm{i}}\) are the external coupling rate and the internal loss rate, respectively, \(\kappa=\kappa_{\mathrm{e}}+\kappa_{\mathrm{i}}\) is the total loss rate, and \(n_{\mathrm{i}}\) and \(n_{\mathrm{e}}\) follow the Bose-Einstein distribution \(n_{\mathrm{s}}(T)=1/\left(\exp(\hbar\omega_{s}/k_{B}T)-1\right)\) at \(T_{\mathrm{dev}}\) and \(T_{\mathrm{src}}\), respectively. In order to minimize the thermalization of the mode with its physical environment, our strategy is to maximize the ratio of the external coupling rate (to the cable connected to the cold source) over the internal loss rate (to the warmer physical bath). This radiative cooling scheme [28; 29; 30] has been utilized to prepare a superconducting resonator near its quantum ground state in the presence of environmental thermal excitations.
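As a numerical illustration of Eq. (1), using this device's parameters (resonance at 7.151 GHz, \(\kappa_{\mathrm{e}}/\kappa\approx 0.98\) as inferred later in the text), the mode occupancy with a 0.13 K cold source and a 1.5 K device stage evaluates to roughly 0.15 quanta:

```python
# Numerical illustration of Eq. (1): mode occupancy of a radiatively-cooled
# resonator, using the device parameters quoted in this paper.
import numpy as np

H_OVER_KB = 4.799e-11  # h/k_B in K/Hz

def n_bose(f_hz, temperature_k):
    return 1.0 / np.expm1(H_OVER_KB * f_hz / temperature_k)

f = 7.151e9
ke_over_k = 0.98
n_mode = ke_over_k * n_bose(f, 0.13) + (1.0 - ke_over_k) * n_bose(f, 1.5)
print(f"n_mode = {n_mode:.2f}")  # ~0.15: dominated by the 0.13 K cold source
```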
Our selected amplifier, the NKPA, is patterned from a high-\(T_{\mathrm{c}}\) NbN film on a silicon substrate. It has previously been demonstrated to be resilient to magnetic fields of up to 0.5 Tesla [19]. The device features a nanobridge with a cross-section of \(80\,\mathrm{nm}\times 4\,\mathrm{nm}\), which provides a single-photon Kerr nonlinearity of approximately \(10\,\mathrm{kHz}\). The resonance frequency of the device is \(7.151\,\mathrm{GHz}\). At the base temperature of \(0.13\,\mathrm{K}\), the external coupling rate is found to be around \(2\pi\times 65\,\mathrm{MHz}\) from a fit of the reflection spectrum [31]. However, because of the intentionally designed strong overcoupling, we could not extract an accurate internal loss rate from the reflection spectrum in the presence of background ripples in the frequency spectrum, as shown in Fig. 2(a). Through the radiative cooling performance reported later, we infer that the internal loss rate is below \(2\pi\times 3\) MHz.
This particular amplifier is mounted on a variable temperature stage (VTS2, see Fig. 1) consisting of a heater and a calibrated thermometer. VTS2 is connected to the mixing chamber of a dilution refrigerator through a weak thermal link made of a stainless steel post, and allows the device operating temperature to vary from 130 mK to above 3 K while the rest of the mixing chamber remains at 50 mK. Another variable temperature stage (VTS1), which serves as a reference thermal noise source [19], is implemented with a 30 dB attenuator mounted on it. The diagram of the setup is shown in Fig. 1(b). After the NKPA, we use a High Electron Mobility Transistor
Figure 1: (a) Principle of radiatively-cooled amplification. To achieve radiative cooling for the amplifier installed in a hot environment with thermal occupancy \(n_{\mathrm{i}}\), the amplifier is connected to a cooling bath through an external coupling channel with thermal occupancy \(n_{\mathrm{e}}\). The decay rates of the amplifier resonance through the intrinsic and external coupling channels are denoted \(\kappa_{\mathrm{i}}\) and \(\kappa_{\mathrm{e}}\), respectively. (b) Schematic of the experimental setup. The cooling bath at temperature \(T_{\mathrm{src}}\) is implemented as a 30 dB attenuator (near-perfect 50 \(\Omega\) load) anchored on a variable temperature stage (VTS1). The amplifier is mounted on a second variable temperature stage (VTS2) that defines the device material temperature \(T_{\mathrm{dev}}\).
Figure 2: Device characterization and amplifier performance with varied device operating temperature. (a) The magnitude and (b) the phase response of the NKPA in the linear regime as the device material temperature increases from 0.1 K to 2.7 K. (c) Signal-to-noise ratio improvement with the NKPA turned off (black line) and on (colored lines) in the phase-preserving mode, with device operating temperatures from 0.1 K to 2.1 K. The gain for the probe tone, detuned from the center amplification frequency by 100 kHz, is maintained at 27 dB for the different temperatures. The power spectrum, measured with a resolution bandwidth of 1.8 kHz, is artificially offset to maintain a consistent power level for the output probe tone. The inset illustrates the gain profile while the NKPA was functioning as an amplifier.
(HEMT) amplifier as a second-stage amplifier, with several stages of circulators in between. The output signal is then monitored by a vector network analyzer (VNA) and a spectrum analyzer (SA).
By probing the linear reflection spectrum of the NKPA while varying the device operating temperature, we confirmed that the device maintains a strongly overcoupled condition from milli-Kelvin temperatures up to 2.7 Kelvin, owing to the high critical temperature of NbN. No significant frequency shift is observed until 2 K, as shown in Fig. 2(a) and (b). We use a two-tone drive scheme to operate the amplifier, with each drive tone detuned by over 105 MHz from the NKPA's center frequency. Phase-preserving amplification up to 45 dB is recorded at temperatures up to 1.5 K, as discussed in detail in the Appendix. Upon turning on the amplifier in the phase-preserving amplification mode, we observed a 13.47 dB improvement in signal-to-noise ratio using a weak coherent signal as a reference, as shown in Fig. 2(c). As we increased the device's material temperature to 1.5 K, the noise floor increased by only 0.74 dB, indicating that the amplifier performance was only marginally affected. This excellent noise performance is maintained up to 2.1 K, at which point the noise floor increases more dramatically, by 3.55 dB.
### Added Noise Analysis
To calibrate the amplifier added noise, we sweep the temperature of VTS1 to generate a reference thermal noise fed to the NKPA [28; 32]. As illustrated in Fig. 3(a), the results show that the total added noise of the NKPA (referred to the input) is \(1.12\pm 0.03\) quanta when operated at 130 mK, which is 0.62 quanta above (about twice) the quantum limit. The excess noise likely originates from unaccounted nonlinear processes in the NKPA. Remarkably, the added noise of the NKPA does not significantly increase while the operating temperature of the device increases to 1.5 K. Even at an operating temperature of 1 K, the added noise remains below 1.2 quanta.
To further investigate the influence of device operating temperature on the NKPA added noise, we attempt to understand the excessive added noise of NKPA as a function of the NKPA operating temperatures. Note that throughout this experiment, the temperature of the VTS1 remains at 130 mK, so the observed increase in added noise should be attributed to the resonator's thermalization to its material temperature through phonon dissipation. In the high-gain limit, the excessive noise due to increased device material temperature could be expressed as:
\[\Delta n_{\mathrm{add}}=2\frac{\kappa_{\mathrm{i}}}{\kappa_{\mathrm{e}}}n_{\mathrm{dev}}=2\left(\frac{\kappa}{\kappa_{\mathrm{e}}}-1\right)n_{\mathrm{dev}} \tag{2}\]
where \(\kappa=\kappa_{\mathrm{e}}+\kappa_{\mathrm{i}}\) represents the resonator linewidth and \(n_{\mathrm{dev}}\) denotes the thermal photon occupation at the device material temperature controlled by VTS2. The results and predicted curves based on various external coupling ratios \(\kappa_{\mathrm{e}}/\kappa\) are shown in Fig. 3(b). We find that the data fit the prediction with \(\kappa_{\mathrm{e}}/\kappa=0.98\) relatively well, indicating a very overcoupled resonance and a decent radiative cooling effect. We can further infer an internal Q of approximately 2000 for the resonance. In this case, we suspect the internal loss of the NKPA is not limited by quasiparticles [33; 34].
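To illustrate how Eq. (2) is evaluated, the sketch below (ours; it assumes a signal frequency near the 7.1509 GHz resonance quoted in Fig. 7(c)) computes the predicted excess added noise versus device temperature for a few external coupling ratios, mirroring the model curves in Fig. 3(b).

```python
import numpy as np

HBAR = 1.054571817e-34  # J*s
KB = 1.380649e-23       # J/K
F0 = 7.1509e9           # Hz; resonance frequency, assumed from Fig. 7(c)

def n_bose(T):
    """Bose-Einstein thermal occupancy at frequency F0 and temperature T (K)."""
    return 1.0 / np.expm1(HBAR * 2 * np.pi * F0 / (KB * np.asarray(T, dtype=float)))

def excess_added_noise(T_dev, ke_over_k):
    """Eq. (2): excess added noise (quanta) from thermalization of the device."""
    ki_over_ke = (1.0 - ke_over_k) / ke_over_k  # kappa_i / kappa_e
    return 2.0 * ki_over_ke * n_bose(T_dev)

for ratio in (0.95, 0.98, 0.99):
    print(f"kappa_e/kappa = {ratio}: "
          f"Delta n_add(1.5 K) = {excess_added_noise(1.5, ratio):.3f} quanta")
# For kappa_e/kappa = 0.98 this gives ~0.16 quanta at 1.5 K, consistent
# with the nearly flat added noise observed up to that temperature.
```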
When we further increase the device operating temperature beyond 2 K, the added noise increases drastically, in a fashion that is not explained by our linear model, which uses the internal and external dissipation rates extracted from the reflection spectra of the NKPA in the linear regime. Coincidentally, the increase of noise is accompanied by a pronounced shift in resonance frequency as the temperature increases, similar to the result reported
Figure 3: Added noise results from noise thermometry calibration as a function of the NKPA’s operating temperature \(T_{\mathrm{dev}}\). (a) The added noise in quanta referred to the output of the VTS, where the quantum-limited added noise of an ideal parametric amplifier is at 0.5. (b) The excessive added noise when NKPA is operated at elevated temperatures compared with the base temperature (0.13 K). The solid lines represent the expected excessive added noise level with different external coupling ratios \(\kappa_{\mathrm{e}}/\kappa\).
by Grünhaupt et al. [35] for a thin-film granular aluminum resonator. Hence, we believe that spurious effects due to thermal quasiparticles come into play as temperature increases, which results in a drastic added-noise increase for the NKPA at around 2 K.
### Prospective application
The radiatively-cooled parametric amplifier demonstrated in this study could potentially address some of the scalability challenges faced by superconducting quantum computers, particularly the spatial and cooling power constraints of dilution refrigerators. To better understand the prospects of radiatively-cooled microwave amplifiers, in the following we estimate the performance of a readout line incorporating a radiatively cooled NKPA installed at the 1-K or 4-K plate. We acknowledge that for the particular NKPA device discussed in this manuscript, the radiative cooling model only applies when the operating temperature is below 2 K. However, a NbTiN-based KI-TWPA was recently reported to exhibit satisfactory performance operating at 4 K [25], suggesting that the issue of excessive added noise above 2 K for the NKPA device discussed above could be addressed by improving the material choice and device engineering.
A proposed configuration utilizing a radiatively cooled NKPA for quantum signal readout is depicted in Fig. 4(a). The primary benefit of shifting the NKPA and the necessary circulators to the 1-K plate is that it liberates precious refrigeration resources within the mixing chamber. To feed drive power to the NKPA device, we propose to use a directional coupler thermalized at the 1-K plate, as opposed to the mixing chamber as for most junction-based amplifiers. Since the drive power does not need to be routed through the mixing chamber, its active heat load on the mixing chamber is estimated to be 0.1 \(\mu\)W per device. For practical applications such as qubit readout, an isolator is typically required after the qubit to prevent any backaction from the first-stage amplifier. Due to the insertion loss of each component between the signal source and the amplifier, the signal-to-noise ratio of the entire output line will inevitably be degraded. Here, we denote the power transmission coefficients for the readout pulse at the mixing chamber and the 1-K plate as \(\alpha_{1}\) and \(\alpha_{2}\). Thus, we can express the anticipated added noise of the system as follows:
\[n_{\mathrm{sys}}= \frac{1}{\alpha_{1}\alpha_{2}}\left[\frac{\kappa_{\mathrm{i}}}{ \kappa_{\mathrm{e}}}\left(2n_{\mathrm{s}}(T_{\mathrm{dev}})+\frac{1}{2} \right)+n_{\mathrm{exc}}+1\right]\] \[+ \frac{\kappa}{\kappa_{e}}\left[2\frac{1-\alpha_{1}}{\alpha_{1}} \left(n_{\mathrm{s}}(0.01K)+1\right)+2\frac{1-\alpha_{2}}{\alpha_{1}\alpha_{2} }\left(n_{\mathrm{s}}(1K)+1\right)\right], \tag{3}\]
where \(n_{\mathrm{exc}}\) denotes the excessive noise of the amplifier. From this expression, we can evaluate the system noise impacted by the attenuation of each component. We assume a loss of 0.5 dB (\(\alpha_{1}\approx 0.9\)) for the microwave output line in the mixing chamber, and a loss of 1 dB (\(\alpha_{2}\approx 0.8\)) at the 1-K plate [18, 36, 37]. To give a realistic estimation, we retain the excessive noise value from the noise calibration, \(n_{\mathrm{exc}}=0.62\). The simulated system noise referred to the output of the signal source (e.g. a qubit readout pulse) is illustrated in Fig. 4(b) for device operating temperatures of 1 K and 4 K. Our findings suggest that for the same level of radiative cooling effectiveness demonstrated in this work, i.e. \(\kappa_{\mathrm{e}}/\kappa=0.98\) (or \(\kappa_{\mathrm{i}}/\kappa=0.02\)),
Figure 4: The prospect of qubit readout using a radiatively cooled NKPA. (a) Experimental scheme of qubit readout using a parametric amplifier at the 1-K/4-K plate: the qubit is placed on the mixing chamber (MXC), followed by a circulator to shield against reflected pump power. The NKPA can be installed on either the 1-K or the bottom of the 4-K plate. A directional coupler is used to apply the drive power to the NKPA. (b) Theoretical estimation of readout line noise \(n_{\mathrm{sys}}\) referred to the output of the qubit as a function of the internal coupling ratio \(\kappa_{\mathrm{i}}/\kappa\) of the radiatively cooled NKPA. The power transmission coefficients are assumed as \(\alpha_{1}=0.9\) and \(\alpha_{2}=0.8\). The gray dashed line indicates the quantum limit of the system noise (\(n_{\mathrm{sys}}=1\)) with a phase-preserving parametric amplifier.
the total output line noise is at 4.6 quanta when the device is at the 1-K plate. This value marginally increases to 5.1 quanta at the 4-K plate. This performance is competitive with that of the state-of-the-art Josephson Traveling-Wave Parametric Amplifiers (JTWPA) operating at the mixing chamber [38, 39].
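These two headline numbers can be reproduced directly from Eq. (3). The sketch below (ours; the 7.15 GHz signal frequency is an assumption, and the loss and coupling parameters are the values quoted in the text) evaluates the expression numerically.

```python
import numpy as np

HBAR, KB = 1.054571817e-34, 1.380649e-23
F0 = 7.15e9  # Hz; assumed signal frequency near the NKPA resonance

def n_bose(T):
    """Bose-Einstein occupancy at frequency F0 and temperature T (K)."""
    return 1.0 / np.expm1(HBAR * 2 * np.pi * F0 / (KB * T))

def n_sys(T_dev, alpha1=0.9, alpha2=0.8, ke_over_k=0.98, n_exc=0.62):
    """Eq. (3): readout-line noise referred to the signal source, in quanta."""
    ki_over_ke = (1 - ke_over_k) / ke_over_k
    k_over_ke = 1 / ke_over_k
    # Amplifier noise, effectively magnified by the upstream attenuation.
    amp = (ki_over_ke * (2 * n_bose(T_dev) + 0.5) + n_exc + 1) / (alpha1 * alpha2)
    # Environmental noise coupled in at the MXC (10 mK) and 1-K stages.
    env = k_over_ke * (2 * (1 - alpha1) / alpha1 * (n_bose(0.01) + 1)
                       + 2 * (1 - alpha2) / (alpha1 * alpha2) * (n_bose(1.0) + 1))
    return amp + env

print(f"NKPA at the 1-K plate: n_sys = {n_sys(1.0):.1f} quanta")  # ~4.6
print(f"NKPA at the 4-K plate: n_sys = {n_sys(4.0):.1f} quanta")  # ~5.1
```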
To maintain overcoupling at high temperatures, e.g. 4 K, we believe it would be helpful to implement NKPA made from superconducting films with higher \(T_{\mathrm{c}}\) to mitigate the TLS and quasiparticle loss [31]. Materials such as NbTiN [25, 40] or MBE-grown NbN [41] can be explored.
In conclusion, we demonstrate a radiatively-cooled superconducting parametric amplifier that achieves noise performance close to the quantum limit even at operating temperatures above 1 K. This advance is made possible by employing the kinetic-inductance nanobridge amplifier technology with high-\(T_{\mathrm{c}}\) NbN films. Our results not only provide valuable insights into the impact of device material temperature on the excessive noise observed in parametric amplifiers but also hold tremendous potential for enabling rapid single-shot qubit readout for large-scale quantum computers by reducing heat load and space requirements within the mixing chamber. Ultimately, these findings contribute to enhancing the scalability of superconducting quantum computing devices.
###### Acknowledgements.
The authors would like to thank Professor Michel Devoret, Dr. Gangqiang Liu, Dr. Alessandro Miano, for useful discussions. We thank Dr. Yong Sun, Dr. Lauren McCabe, Mr. Kelly Woods, Dr. Michael Rooks, and Dr Sihao Wang for assistance in device fabrication. We acknowledge funding support from the Office of Naval Research on the development of nitride-based superconductors (under Grant No. N00014-20-1-2126). The part of the research that involves cryogenic instrumentation is supported by the DOE Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA), Contract No. DE-SC0012704.
## Appendix
### Added-noise calibration
To calibrate the added noise of the NKPA, we show the noise source and the detection chain in Fig. 5 (a). The intracavity photon occupancy can be represented as \(n_{\mathrm{mode}}=(\kappa_{\mathrm{i}}n_{\mathrm{i}}+\kappa_{\mathrm{e}}n_{ \mathrm{e}})/\kappa\), incorporating both environmental noise and externally coupled noise. Here, \(n_{\mathrm{i}}\) and \(n_{\mathrm{e}}\) are dictated by the Bose-Einstein distribution \(n_{\mathrm{s}}(T)=1/\left(\exp(\hbar\omega_{\mathrm{s}}/k_{\mathrm{B}}T)-1\right)\), with corresponding temperatures being \(T_{\mathrm{dev}}\) and \(T_{\mathrm{src}}\). In the phase-preserving amplification mode, the output field is related to the input signal field \(a_{\mathrm{S,mode}}\) and input idler field \(a_{\mathrm{I,mode}}\) as
\[a_{\mathrm{out,S}}=\sqrt{G_{\mathrm{N}}}a_{\mathrm{S,mode}}+\sqrt{G_{\mathrm{ N}}-1}a_{\mathrm{I,mode}}^{\dagger}, \tag{4}\]
where \(G_{\mathrm{N}}\) is the NKPA gain. As we operate the NKPA in the high-gain limit, i.e., \(G_{\mathrm{N}}\gg 1\), the added noise of the subsequent amplifiers such as HEMT and Room-temperature amplifier is negligible. With the assumption that the idler mode occupancy is the same as the signal mode, the output power spectral density is expressed as
\[\begin{split}\frac{S_{\mathrm{out}}}{BW}&=\hbar \omega G(2n_{\mathrm{mode}}+1),\\ &=\hbar\omega G\frac{\kappa_{\mathrm{e}}}{\kappa}(2n_{\mathrm{e} }(T_{\mathrm{src}})+2\frac{\kappa_{\mathrm{i}}}{\kappa_{\mathrm{e}}}n_{ \mathrm{i}}(T_{\mathrm{dev}})+\frac{\kappa}{\kappa_{\mathrm{e}}}),\end{split} \tag{5}\]
where \(S_{\mathrm{out}}\) is the power spectrum, \(BW\) represent resolution bandwidth of the power spectrum, \(G=G_{\mathrm{R}}G_{\mathrm{H}}G_{\mathrm{N}}\) is the total system gain, and \(G_{\mathrm{R}}\) and \(G_{\mathrm{H}}\) correspond to the gain of the room-temperature (RT) amplifier and HEMT. We fix the device physical temperature \(T_{\mathrm{dev}}=130\) mK, while varying the temperature of the VTS1 \(T_{\mathrm{src}}\), during which we take measurements of the power spectrum.
Figure 5: (a) Noise thermometry calibration setup diagram. The reference thermal noise sourced from the VTS is preamplified by the NKPA, which operates at a 27 dB gain, and then further amplified by a high-electron-mobility transistor (HEMT). The noise power spectrum is measured by a spectrum analyzer (SA). The cyan line represents superconducting cables. (b) Output noise power spectrum in quanta as a function of the thermal source temperature \(T_{\mathrm{src}}\).
By fitting the normalized power spectrum in quanta (\(S_{\rm out}\kappa/(BW\,\hbar\omega G\kappa_{\rm e})\)) as a function of the VTS1 temperature \(T_{\rm src}\), we are able to determine the added noise referred to the amplifier input:
\[n_{\rm add}=2\frac{\kappa_{\rm i}}{\kappa_{\rm e}}n_{\rm i}(T_{\rm dev})+\frac{ \kappa}{\kappa_{\rm e}}-\frac{1}{2}+n_{\rm exc}, \tag{6}\]
where we subtract the 0.5 quantum limit of noise quanta accompanying the input signal, and \(n_{\rm exc}\) is the excess added noise not captured by the radiatively-cooled amplifier model. The above expression can be rewritten as
\[n_{\rm add}=\frac{\kappa_{\rm i}}{\kappa_{\rm e}}(2n_{\rm i}(T_{\rm dev})+1)+0.5+n_{\rm exc} \tag{7}\]
The base temperature (130 mK) calibration is shown in Fig. 5 (b). Eq. (7) suggests that with the same intrinsic and external coupling condition, the incremental added noise due to the increased device physical temperature can be expressed as
\[\Delta n_{\rm add}=2\frac{\kappa_{\rm i}}{\kappa_{\rm e}}\left[n_{\rm i}(T_{ \rm dev,high})-n_{\rm i}(T_{\rm dev,low})\right]. \tag{8}\]
This equation shows that the elevated added noise depends linearly on the difference in thermal occupancy and on the internal-to-external coupling ratio. This contribution remains small for a very overcoupled device.
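For concreteness, a minimal sketch of the fitting step behind this calibration is given below (ours, with synthetic data standing in for the measured spectra). The two fit parameters absorb the total gain and all \(T_{\rm src}\)-independent noise terms, so by Eq. (6) the added noise is the fitted offset minus the half-quantum accompanying the input signal.

```python
import numpy as np
from scipy.optimize import curve_fit

HBAR, KB = 1.054571817e-34, 1.380649e-23
F0 = 7.15e9  # Hz; assumed signal frequency

def n_bose(T):
    return 1.0 / np.expm1(HBAR * 2 * np.pi * F0 / (KB * np.asarray(T, dtype=float)))

def model(T_src, scale, offset):
    """Output quanta vs. source temperature (cf. Eq. (5)): 'scale' absorbs the
    total gain; 'offset' collects all T_src-independent terms incl. n_exc."""
    return scale * (2 * n_bose(T_src) + offset)

# Synthetic stand-in for a measured VTS1 sweep (real spectra would go here).
T = np.linspace(0.15, 1.0, 20)
rng = np.random.default_rng(0)
y = model(T, 1.0, 1.62) * (1 + 0.01 * rng.standard_normal(T.size))

(scale, offset), _ = curve_fit(model, T, y, p0=(1.0, 1.0))
print(f"n_add = offset - 0.5 = {offset - 0.5:.2f} quanta")  # ~1.12 here
```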
### Device performance at higher temperatures
In the main text, we demonstrate that the amplifier's added noise remains quantum-limited up to a physical temperature of 1.5 K. Impressively, the gain performance shows no noticeable change either, as shown in Fig. 6, where we plot the device gain as a function of detuning under various pump powers. Comparing the gain performance at 150 mK and 1.5 K, it is apparent from the figure that the gain profiles closely overlap under the same pump power, with an impressive gain above 45 dB achieved at both operating temperatures.
The device performance, however, becomes less consistent for temperatures above 2 K, as evident from the power spectrum result shown in Fig. 2 of the main text, where a dramatic increase in the noise floor is observed. This degradation of the NKPA performance becomes even more evident in the added noise results, measured at operating temperatures up to 2.1 K and shown in Fig. 7(a). The added noise rises steeply from 1.3 quanta at 1.5 K to more than 2.8 quanta at 2.1 K.
Figure 6: Amplifier gain characteristics at 150 mK (solid line) and 1.5 K (dashed line). Phase-preserving amplification as a function of the signal frequency detuning from the center frequency of the two pumps. More than 45 dB gain is observed with -68.90 dBm on-chip pump power at both temperatures.
Figure 7: Device characteristics as a function of device physical temperature. (a) Added noise in quanta up to 2.12 K. (b) External Q and loaded Q extracted from the reflection spectrum fitting. (c) Phase-locked loop (PLL) tracking of frequency shift as a function of device physical temperature \(T_{\rm dev}\) at 7.1509 GHz. The legend indicates the temperature sweeping direction.
When the device temperature is increased above 2 K, we also observe a deviation of the quality factors (Fig. 7(b)) and a resonance frequency shift (Fig. 7(c)). Similar results have been reported in previous literature [35, 42]. Both the frequency shift and the change of the internal Q could be attributed to the temperature-dependent dielectric constant due to TLSs in surface oxidation [42] or to the effect of thermal quasiparticles [35]. The increase in the external Q is likely a result of the diminished capacitive coupling arising from elevated temperatures.
In summary, the concurrent shift in added noise, quality factor, and frequency around 2 K indicates the emergence of spurious effects at elevated temperatures, due to either TLSs or thermal quasiparticles. Higher-\(T_{\mathrm{c}}\) superconductor materials such as NbTiN [40, 25] or MBE-grown NbN [41] can be explored to mitigate these effects, and potentially enable near-quantum-limited amplifiers that work at even higher temperatures.
### System noise analysis for prospective applications
In prospective applications where a radiatively cooled NKPA is used for quantum signal readout, as illustrated in Fig. 4, the additional insertion loss between the signal source (e.g. the qubit) and the first-stage amplifier could compromise the readout performance. Here we quantify the total system noise referred to the signal source at the mixing chamber stage. The schematic diagram of the detection chain is shown in Fig. 8(a). The input field \(a_{\mathrm{in}}\) from the MXC is transmitted to the NKPA installed at the 1-K or 4-K plate, where the signal is amplified. The signal then proceeds to a HEMT situated at the 4-K plate before reaching a room-temperature amplifier. We presume that the NKPA's substantial gain makes the added noise of the subsequent amplifiers negligible; we therefore neglect the added noise of the output line after the NKPA.
We use \(\alpha_{1}\) and \(\alpha_{2}\) to represent the power transmission coefficients in the signal line at the MXC and at the 1-K or 4-K stage, respectively. In a practical minimal setup, \(\alpha_{1}\) should include the loss of 4 connectors and an isolator, while \(\alpha_{2}\) should include the loss of 6 connectors, a directional coupler, and a circulator. The attenuation thus couples the corresponding thermal fields \(h_{1}\) and \(h_{2}\) into the output. The input field to the NKPA is then
\[a_{\mathrm{e}}=\sqrt{\alpha_{2}}\left(\sqrt{\alpha_{1}}a_{\mathrm{in}}+\sqrt{ 1-\alpha_{1}}h_{1}^{\dagger}\right)+\sqrt{1-\alpha_{2}}h_{2}^{\dagger}. \tag{9}\]
where \(a_{\mathrm{in}}\) is the input signal at the MXC. The equilibrium noise photon occupations are \(\langle h_{1}^{\dagger}h_{1}\rangle=n_{\mathrm{s}}(0.01\mathrm{K})\) and \(\langle h_{2}^{\dagger}h_{2}\rangle=n_{\mathrm{s}}(1\mathrm{K})\). The NKPA device can be mounted on either the 1-K plate or the backside of the 4-K plate, as required by the prospective application. The intra-cavity photon number of the NKPA can therefore be expressed as
\[n_{\mathrm{mode}}=\frac{\kappa_{\mathrm{e}}}{\kappa}\langle a_{\mathrm{e}}^{ \dagger}a_{\mathrm{e}}\rangle+\frac{\kappa_{\mathrm{i}}}{\kappa}n_{\mathrm{s} }(T_{\mathrm{dev}}) \tag{10}\]
Plugging Eq. (10) and Eq. (9) into Eq. (5) for both the signal and idler modes, we can derive the system noise as
\[n_{\mathrm{sys}}= \frac{1}{\alpha_{1}\alpha_{2}}\left[\frac{\kappa_{\mathrm{i}}}{ \kappa_{\mathrm{e}}}\left(2n_{\mathrm{s}}(T_{\mathrm{dev}})+\frac{1}{2} \right)+n_{\mathrm{exc}}+1\right]\] \[+\frac{\kappa}{\kappa_{\mathrm{e}}}\left[2\frac{1-\alpha_{1}}{ \alpha_{1}}\left(n_{\mathrm{s}}(0.01K)+1\right)+2\frac{1-\alpha_{2}}{\alpha_{ 1}\alpha_{2}}\left(n_{\mathrm{s}}(1K)+1\right)\right] \tag{11}\]
The terms inside the first square bracket represent the amplifier noise, with a prefactor of \(1/\alpha_{1}\alpha_{2}\) meaning that it is effectively amplified due to the attenuated signal. The two terms inside the second square bracket are the environmental noise coupled in from the MXC and the 1-K plate due to loss. With \(\alpha_{1}\to 1\) and \(\alpha_{2}\to 1\), Eq. (11) reduces to Eq. (7) plus 0.5, accounting for the vacuum noise accompanying the signal.
In Fig. 8(b), we illustrate the system noise as a function of the attenuations at the MXC and the 1-K plate (\(1-\alpha_{1}\) and \(1-\alpha_{2}\)), while maintaining a fixed coupling ratio \(\kappa_{\mathrm{e}}/\kappa=0.98\). The dense contour lines along the y-axis imply a higher sensitivity of the system noise to the
Figure 8: (a) The output line model for perspective application. The model includes 3 stages of amplifiers (NKPA, HEMT, and room-temperature (RT) amplifier), and imperfect transmission coefficients in the mixing chamber and 1-K plate before the amplifiers, denoted as \(\alpha_{1}\) and \(\alpha_{2}\), respectively. (b) Color contour plot of the system added noise as a function of attenuations at mixing chamber and 1-K stage. The contour lines show the system added noise in quanta.
loss at the 1-K plate. The system noise remains under 6 quanta if the losses at the MXC and the 1-K stage are each below 1 dB (\(\alpha_{1},\alpha_{2}\gtrsim 0.79\)). If both attenuations reach 1.5 dB, the system noise exceeds 8 quanta.
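As a quick numerical check of these two statements, one can reuse the `n_sys()` sketch from the main-text discussion of Eq. (3) (same assumed parameters), setting equal losses at both stages.

```python
# Reuses n_sys() and constants from the earlier Eq. (3) sketch.
for loss_db in (1.0, 1.5):
    a = 10 ** (-loss_db / 10)  # power transmission for the given loss in dB
    print(f"{loss_db} dB per stage: n_sys = {n_sys(1.0, alpha1=a, alpha2=a):.1f} quanta")
# 1.0 dB each -> ~5.6 quanta (under 6); 1.5 dB each -> ~8.4 quanta (over 8).
```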
|
2303.00463
|
Mixed local and nonlocal semilinear elliptic equation with strongly
singular and critical Choquard nonlinearity
|
In this article, we study an elliptic problem of mixed order with both local
and nonlocal aspects involving singular nonlinearity in combination with
critical Hartree-type nonlinearity. Using variational methods together with the
critical point theory of nonsmooth analysis and the geometry of the energy
functional, we show the existence and multiplicity of positive solutions with
respect to the parameter $\lambda$.
|
G. C. Anthal, J. Giacomoni, K. Sreenadh
|
2023-03-01T12:41:51Z
|
http://arxiv.org/abs/2303.00463v3
|
Mixed local and nonlocal semilinear elliptic equation with strongly singular and critical Choquard nonlinearity
###### Abstract
In this article, we study an elliptic problem of mixed order with both local and nonlocal aspects involving singular nonlinearity in combination with critical Hartree type nonlinearity (See (\(P_{\lambda}\)) below). Using variational methods together with the critical point theory of nonsmooth analysis and the geometry of the energy functional, we show the existence and multiplicity of positive solutions with respect to the parameter \(\lambda\).
**Key words:** Local-nonlocal operators, Singular nonlinearity, Choquard equation, Nonsmooth analysis, Existence results, Regularity.
_2020 Mathematics Subject Classification:_ 35A01, 35A15, 35B33, 35D30, 35J15, 35J75, 35M10, 49J52.
## 1 Introduction
This article investigates the existence and multiplicity of weak solutions of the following problem:
\[(P_{\lambda})\begin{cases}{\mathcal{M}}u=u^{-\gamma}+\lambda \left(\int\limits_{\Omega}\frac{|u|^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dy\right)|u |^{2^{*}_{\mu}-2}u,&\text{ $u>0$ in $\Omega$},\\ u=0&\text{ in $\mathbb{R}^{n}\setminus\Omega$},\end{cases}\]
for \(\gamma>0\), \(n\geq 3\), \(s\in(0,1)\), \(2^{*}_{\mu}=\frac{2n-\mu}{n-2}\) and \(\Omega\) is a bounded domain in \(\mathbb{R}^{n}\) with smooth boundary. The operator \({\mathcal{M}}\) in \((P_{\lambda})\) is given by
\[{\mathcal{M}}=-\Delta+(-\Delta)^{s}\text{ for some }s\in(0,1), \tag{1.1}\]
i.e., \(\mathcal{M}\) is obtained by the superposition of the classical Laplacian \((-\Delta)\) and the fractional Laplacian \((-\Delta)^{s}\), which for a fixed parameter \(s\in(0,1)\), is defined by
\[(-\Delta)^{s}u=C(n,s)P.V.\int\limits_{\mathbb{R}^{n}}\frac{u(x)-u(y)}{|x-y|^{n+2 s}}dy.\]
The term "\(P.V.\)" stands for the Cauchy principal value and \(C(n,s)\) is a normalizing constant, whose explicit expression is given by
\[C(n,s)=\left(\int\limits_{\mathbb{R}^{n}}\frac{1-\cos(z_{1})}{|z|^{n+2s}}dz \right)^{-1}.\]
The study of mixed operators of the form \(\mathcal{M}\) in (1.1) is motivated by a wide range of applications. Indeed, these operators arise naturally in the applied sciences to study the role of the impact caused by a local and a nonlocal change in a physical phenomenon. They model diffusion patterns with different time scales (loosely speaking, the higher order operator leading the diffusion at small time scales and the lower order operator becoming predominant at large times) and arise, for instance, in bi-modal power law distribution processes, see [34]. Further applications arise in the theory of optimal searching, biomathematics and animal foraging, see [15, 16] and the references therein.
Due to these applications and mathematical interest, the study of elliptic problems involving mixed type operators having both local and nonlocal features is attracting a lot of attention. Current research has specifically focused on several problems in the existence and regularity theory. Using probability theory, Foondun [17] and Chen et al. [9] studied regularity results for the equation
\[\mathcal{M}u=0.\]
More recently, using a purely analytic approach, Biagi, Dipierro, Valdinoci and Vecchi, in their series of papers [6, 7, 8], carried out a broad investigation of problems involving mixed operators, proving a number of results concerning regularity and qualitative behaviour of solutions, maximum principles and related variational principles. The question of Hölder regularity was investigated by De Filippis and Mingione in [12] for a large class of mixed local and nonlocal operators. Under suitable assumptions, the authors proved almost-Lipschitz local continuity and local Hölder regularity of the gradient (see Theorems 3, 6 and 7 there, respectively).
There is a large literature available on problems with Choquard nonlinearity, due to their vast applications in physical modeling; see for instance the works of Pekar [35] and Lieb [30]. For detailed studies on the existence and regularity of weak solutions for these types of problems we refer, in the local setting, to [31] and the references therein. In the nonlocal case, Choquard type equations have been investigated more recently and arise, for instance, in the study of the mean field limit of weakly interacting molecules, in quantum mechanical theory and in the dynamics of relativistic boson stars (see [11] and references therein). In [11] a Schrödinger type problem with Hartree type nonlinearity involving the fractional Laplacian is studied, and existence, nonexistence and properties of solutions are proved.
For Brezis-Nirenberg type results with Choquard nonlinearity, we refer to [18] in the local setting, to [32] for the fractional diffusion case and to [3] for the mixed operator case.
Problems involving singular nonlinearity have a very long history. One of the seminal breakthroughs in the study of singular nonlinearities was the work of Crandall, Rabinowitz and Tartar [10]. In this work, the authors proved the existence of a solution of an elliptic PDE with singular nonlinearity by using the classical method of sub-supersolutions on the nonsingular approximated problem and passing to the limit. Since then, a large body of work has been devoted to elliptic equations involving singular nonlinearities; see for instance [21, 28] and references therein.
The work of Haitao [27] was one of the pioneering contributions to the study of the local elliptic singular problem with critical perturbation. Precisely, the author considered the following problem:
\[-\Delta u=\lambda u^{-\gamma}+u^{p},\ u>0\ \mbox{in}\ \Omega,\ u=0\ \mbox{on}\ \partial\Omega\]
where \(\Omega\subset\mathbb{R}^{n}\ (n\geq 3)\) is a smooth bounded domain and \(\gamma\in(0,1)\), \(1<p\leq\frac{n+2}{n-2}\). Using monotone iterations and the mountain pass lemma, the author explored existence and multiplicity results for the maximal range of parameter \(\lambda\). We also refer to [1, 13] for higher singular cases, i.e. with \(\gamma\in(1,3)\). Finally, the case of any \(\gamma>0\) was considered by Hirano, Saccon and Shioji in [29]. Here the authors studied the existence of \(L^{1}_{\rm loc}\) solutions \(u\) satisfying \((u-\epsilon)^{+}\in H^{1}_{0}(\Omega)\) for all \(\epsilon>0\). The proof used variational methods and nonsmooth analysis arguments.
Turning to the nonlocal elliptic problems involving singular nonlinearity, Barrios et al. [4] considered the problem
\[(-\Delta)^{s}u=\lambda\frac{g(x)}{u^{\gamma}}+Ku^{r},\ u>0\ \mbox{in}\ \Omega,\ u=0\ \mbox{in}\ \mathbb{R}^{n}\setminus\Omega\]
where \(n>2s\), \(K\geq 0\), \(0<s<1\), \(\gamma>0\), \(\lambda>0\), \(1\leq r<2^{*}_{s}-1\), \(2^{*}_{s}=\frac{2n}{n-2s}\), and \(g\in L^{p}(\Omega)\), \(p\geq 1\), is a nonnegative function. In the spirit of [10], the authors first considered the perturbed problem where the singular term \(\frac{1}{u^{\gamma}}\) is replaced by \(\frac{1}{(u+1/k)^{\gamma}}\) and showed the existence of a solution \(u_{k}\) to the perturbed problem. The existence of weak solutions was then shown by obtaining uniform estimates on \(\{u_{k}\}_{k\in\mathbb{N}}\). Furthermore, the authors also discussed multiplicity results when \(K>0\) for small \(\lambda>0\). The critical exponent problem with singular nonlinearity was handled in [33] for \(\gamma\in(0,1)\). Later, in the spirit of [29], Giacomoni, Mukherjee and Sreenadh [24], using nonsmooth analysis, proved a multiplicity result for the critical exponent problem with singular nonlinearity for any \(\gamma>0\). More recently, Giacomoni, Goel and Sreenadh [22] studied the following nonlocal singular problem with critical Choquard nonlinearity
\[(-\Delta)^{s}u=u^{-\gamma}+\lambda\left(\int\limits_{\Omega}\frac{|u|^{2^{*}_{ \mu,s}}(y)}{|x-y|^{\mu}}dy\right)|u|^{2^{*}_{\mu,s}-2}u,\ u>0\ \mbox{in}\ \Omega,u=0\ \mbox{in}\ \mathbb{R}^{n}\setminus\Omega,\]
for all \(\gamma>0\), \(n>2s\), \(2^{*}_{\mu,s}=\frac{2n-\mu}{n-2s}\) and \(\Omega\) is a bounded domain in \(\mathbb{R}^{n}\) with smooth boundary. Also using the critical point theory of nonsmooth analysis and the geometry of the energy functional, the authors established the global multiplicity of weak solutions.
Singular problems in the mixed local and nonlocal setting are much less understood. Recently, Arora and Radulescu [2] studied the following singular problem involving mixed local and nonlocal operators
\[\mathcal{M}u=\frac{g(x)}{u^{\gamma}},\ u>0\ \text{in}\ \Omega,\ u=0\ \text{on}\ \mathbb{R}^{n}\setminus\Omega,\]
where \(\Omega\subset\mathbb{R}^{n}\), \(n\geq 2\), \(\gamma\geq 0\), and \(g:\Omega\rightarrow\mathbb{R}^{+}\) belongs to \(L^{r}(\Omega)\) for some \(1\leq r\leq\infty\). The case where \(g\) behaves as a power of the distance function \(\delta\) near the boundary, i.e. \(g(x)\sim\delta^{-\zeta}(x)\) for some \(\zeta\geq 0\) and \(x\) lying near the boundary \(\partial\Omega\), is also investigated. The authors proved the existence, Sobolev regularity and boundary behaviour of weak solutions in the light of the interplay between the summability of the datum \(g\) and the power exponent \(\gamma\) in the singular nonlinearity. We also refer to [19] for the proof of the existence of multiple solutions in the case of perturbed subcritical singular nonlinearities. The case of quasilinear mixed operator singular problems is treated in [20].
Motivated by the above discussion, in the present paper we consider the doubly nonlocal problem (\(P_{\lambda}\)). To the best of our knowledge, there is no previous contribution dealing with a critical Choquard nonlinearity combined with a singular term for mixed local and nonlocal operators. In the spirit of [29], using critical point theory from nonsmooth analysis together with convexity properties of the singular part of the associated energy, we prove the existence, multiplicity (under some restrictions on \(s\) and \(n\)) and regularity of solutions to (\(P_{\lambda}\)) for all \(\gamma>0\).
**Organization of the paper**: In Section 2 we define the function spaces, give some preliminaries of nonsmooth analysis and state the main results of the present work. In Section 3, we prove the regularity of positive weak solutions of (\(P_{\lambda}\)) by bootstrap arguments together with the comparison principle proved in Lemma 3.4. Finally, in Section 4 we show the existence of solutions (by the sub- and supersolution technique). Identifying the critical energy level below which the Palais-Smale condition holds, we prove the multiplicity of positive solutions to (\(P_{\lambda}\)) and complete the proof of Theorem 2.11.
**Notations**: Throughout the paper, we set
* \(\delta(x):=\text{dist}(x,\partial\Omega)\) and \(d_{\Omega}=\text{diam}(\Omega)\);
* for any number \(p\in(1,\infty)\), we denote by \(p^{\prime}=\frac{p}{p-1}\) as the conjugate exponent of \(p\);
* for any two functions \(g,\ h\), we write \(g\prec h\) or \(g\succ h\) if there exists a constant \(C>0\) such that \(g\leq Ch\) or \(g\geq Ch\). We write \(g\sim h\) if \(g\prec h\) and \(g\succ h\);
* \(u^{p}=|u|^{p-1}u\) and \(\|u\|_{HL}^{2\cdot 2^{*}_{\mu}}=\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{|u|^{2^{*}_{\mu}}(x)|u|^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dxdy\).
## 2 Preliminaries and main results
In this section we give the functional settings and collect the notations and preliminary results required in the rest of the paper. We then give the statement of the main results obtained in this work.
Let \(s\in(0,1)\). For a measurable function \(u:\mathbb{R}^{n}\rightarrow\mathbb{R}\), we define
\[[u]_{s}=\left(\frac{C(n,s)}{2}\int\limits_{\mathbb{R}^{n}}\int\limits_{ \mathbb{R}^{n}}\frac{|u(x)-u(y)|^{2}}{|x-y|^{n+2s}}dxdy\right)^{\frac{1}{2}},\]
the so-called Gagliardo seminorm of \(u\) of order \(s\).
We define the space \(X_{0}\) as the completion of \(C_{c}^{\infty}(\Omega)\) with respect to the global norm
\[\|u\|:=\left(\ll u\gg^{2}+[u]_{s}^{2}\right)^{\frac{1}{2}},\ u\in C_{c}^{\infty }(\Omega),\]
where we define \(\ll u\gg=\left(\int\limits_{\mathbb{R}^{n}}|\nabla u|^{2}\right)^{ \frac{1}{2}}.\) The norm \(\|\cdot\|\) is induced by the scalar product
\[\langle u,v\rangle:=\int\limits_{\mathbb{R}^{n}}\nabla u\cdot\nabla vdx+\frac {C(n,s)}{2}\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{(u(x )-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}dxdy,\ u,v\in X_{0}.\]
where \((\cdot)\) denotes the usual scalar product in the Euclidean space \(\mathbb{R}^{n}\). Clearly, \(X_{0}\) is a Hilbert space.
**Remark 2.1**: _Note that in the definition of \(\|\cdot\|\) the \(L^{2}\)-norm of \(\nabla u\) is considered on the whole of \(\mathbb{R}^{n}\) in spite of \(u\in C_{c}^{\infty}(\Omega)\) (identically vanishes outside \(\Omega\)). This is to point out that the elements in \(X_{0}\) are functions defined on the entire space and not only on \(\Omega\). The benefit of having this global functional setting is that these functions can be globally approximated on \(\mathbb{R}^{n}\) with respect to the norm \(\|\cdot\|\) by smooth functions with support in \(\Omega\). We see that this global definition of \(\|\cdot\|\) implies that the functions in \(X_{0}\) naturally satisfy the nonlocal Dirichlet type condition prescribed in problem \((P_{\lambda})\), that is,_
\[u\equiv 0\ \text{a.e. in}\ \mathbb{R}^{n}\setminus\Omega\ \text{for every}\ u\in X_{0}. \tag{2.1}\]
_In order to verify (2.1), we know (see [14, Proposition 2.2]) \(H^{1}(\mathbb{R}^{n})\) is continuously embedded into \(H^{s}(\mathbb{R}^{n})\) (with \(s\in(0,1)\)) \(i.e.\) there exists a constant \(k=k(s)>0\) such that, for every \(u\in C_{c}^{\infty}(\Omega)\) one has_
\[[u]_{s}^{2}\leq k(s)\|u\|_{H^{1}(\mathbb{R}^{n})}^{2}=k(s)(\|u\|_{L^{2}( \mathbb{R}^{n})}^{2}+\ll u\gg^{2}). \tag{2.2}\]
_This, together with the classical Poincaré inequality, implies that \(\|\cdot\|\) and the full \(H^{1}\)-norm in \(\mathbb{R}^{n}\) are actually equivalent on the space \(C_{c}^{\infty}(\Omega)\), and hence_
\[X_{0}=\overline{C_{c}^{\infty}(\Omega)}^{\|\cdot\|_{H^{1}(\mathbb{R}^{n})}}= \{u\in H^{1}(\mathbb{R}^{n}):u|_{\Omega}\in H^{1}_{0}(\Omega)\ \text{and}\ u\equiv 0\ \text{a.e. in}\ \mathbb{R}^{n}\setminus\Omega\}.\]
Now we recall the Hardy-Littlewood-Sobolev inequality, which is foundational in the study of Choquard problems of the type \((P_{\lambda})\).
**Proposition 2.2**: _(Hardy-Littlewood-Sobolev inequality) Let \(r,q>1\) and \(0<\mu<n\) with \(1/r+1/q+\mu/n=2\), and let \(g\in L^{r}(\mathbb{R}^{n}),h\in L^{q}(\mathbb{R}^{n})\). Then there exists a sharp constant \(C(r,q,n,\mu)\), independent of \(g\) and \(h\), such that_
\[\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{g(x)h(y)}{|x-y|^ {\mu}}dxdy\leq C(r,q,n,\mu)|g|_{r}|h|_{q}. \tag{2.3}\]
In particular, letting \(g=h=|u|^{p}\), by the Hardy-Littlewood-Sobolev inequality we see that
\[\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{|u(x)|^{p}|u(y)|^{p}}{|x-y|^{\mu}}dxdy\]
is well defined if \(|u|^{p}\in L^{\nu}(\mathbb{R}^{n})\) with \(\nu=\frac{2n}{2n-\mu}>1\). Thus, from Sobolev embedding theorems, we must have
\[\frac{2n-\mu}{n}\leq p\leq\frac{2n-\mu}{n-2}.\]
From this, for \(u\in H^{1}(\mathbb{R}^{n})\) we have
\[\left(\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{|u(x)|^{2 ^{*}_{\mu}}|u(y)|^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy\right)^{\frac{1}{2^{*}_{\mu} }}\leq C(n,\mu)^{\frac{1}{2^{*}_{\mu}}}|u|^{2}_{2^{*}}.\]
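For completeness, the exponent bookkeeping behind the last display is the following elementary computation: taking \(g=h=|u|^{2^{*}_{\mu}}\) in (2.3) requires \(|u|^{2^{*}_{\mu}}\in L^{\nu}(\mathbb{R}^{n})\) with \(\nu=\frac{2n}{2n-\mu}\), which is exactly membership of \(u\) in \(L^{2^{*}}(\mathbb{R}^{n})\), since
\[2^{*}_{\mu}\cdot\nu=\frac{2n-\mu}{n-2}\cdot\frac{2n}{2n-\mu}=\frac{2n}{n-2}=2^{*},\]
and \(H^{1}(\mathbb{R}^{n})\hookrightarrow L^{2^{*}}(\mathbb{R}^{n})\) by the Sobolev embedding.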
We fix \(S_{H,L}\) to denote the best constant associated to the Hardy-Littlewood-Sobolev inequality, i.e.,
\[S_{H,L}=\inf\limits_{u\in C_{0}^{\infty}(\mathbb{R}^{n})\setminus\{0\}}\frac{ \left\|\nabla u\right\|^{2}_{L^{2}(\mathbb{R}^{n})}}{\left\|u\right\|^{2}_{HL}}. \tag{2.4}\]
The best constant \(S_{M}\) of the mixed Sobolev embedding is defined by
\[S_{M}=\inf\limits_{u\in X_{0}\setminus\{0\}}\frac{\left\|u\right\|^{2}}{|u|^{ 2^{*}}_{2^{*}}}.\]
We also define
\[S_{H,L,M}=\inf\limits_{u\in X_{0}\setminus\{0\}}\frac{\left\|u\right\|^{2}}{ \left\|u\right\|^{2}_{HL}}.\]
Now we have from [5, Theorem 1.1] and [3, Theorem 1.2], that \(S_{M}=S\) and \(S_{H,L,M}=S_{H,L}\), where \(S\) is the best constant in the classical Sobolev embedding. Now the following lemma plays a crucial role in the sequel:
**Lemma 2.3**: _[_18_]_ _The constant \(S_{H,L}\) is achieved if and only if_
\[u=C\left(\frac{b}{b^{2}+|x-a|^{2}}\right)^{\frac{n-2}{2}},\]
_where \(C>0\) is a fixed constant, \(a\in\mathbb{R}^{n}\) and \(b\in(0,\infty)\) are parameters. Moreover,_
\[S=C(n,\mu)^{\frac{n-2}{2n-\mu}}S_{H,L}.\]
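As a consistency check (our remark), the exponent in Lemma 2.3 can be read off from the estimates above: since \(\|u\|_{HL}^{2}\leq C(n,\mu)^{\frac{1}{2^{*}_{\mu}}}|u|_{2^{*}}^{2}\), one gets \(S_{H,L}\geq C(n,\mu)^{-\frac{1}{2^{*}_{\mu}}}S\), with equality because both quotients are minimized by the same family of bubbles displayed in Lemma 2.3, and
\[\frac{1}{2^{*}_{\mu}}=\frac{n-2}{2n-\mu},\]
which is precisely the exponent appearing in the lemma.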
Now we give the notion of weak solution to the problem \((P_{\lambda})\).
**Definition 2.4**: _We say that a function \(u\in L^{1}_{loc}(\Omega)\) is a weak solution of \((P_{\lambda})\) if the following hold:_
1. \(\inf_{x\in K}u(x)>0\) _for every compact subset_ \(K\subset\Omega\)_._
2. _For any_ \(\psi\in C_{c}^{\infty}(\Omega)\)_,_ \[\langle u,\psi\rangle=\int\limits_{\Omega}u^{-\gamma}\psi dx+\lambda\int \limits_{\Omega}\int\limits_{\Omega}\frac{u^{2^{*}_{\mu}}(y)u^{2^{*}_{\mu}-1}( x)\psi(x)}{|x-y|^{\mu}}dxdy.\]
3. \((u-\epsilon)^{+}\in X_{0}\) _for every_ \(\epsilon>0\)_._
**Lemma 2.5**: _Let \(u\) be a weak solution to \((P_{\lambda})\). Then for all compactly supported \(0\leq\varphi\in X_{0}\cap L^{\infty}(\Omega)\), we have_
\[\langle u,\varphi\rangle=\int\limits_{\Omega}u^{-\gamma}\varphi dx+\lambda \int\limits_{\Omega}\int\limits_{\Omega}\frac{u^{2^{*}_{\mu}}(y)u^{2^{*}_{\mu} -1}(x)\varphi(x)}{|x-y|^{\mu}}dxdy.\]
**Proof.** The proof follows along the same lines as that of [23, Lemma 2.9].
In order to prove the existence results for \((P_{\lambda})\), we translate the problem by the solution to the purely singular problem:
\[(P_{0})\left\{\mathcal{M}u=u^{-\gamma},\ u>0\ \text{in}\ \Omega,\ u=0\ \text{in}\ \mathbb{R}^{n}\setminus\Omega.\right.\]
From [2] we know the following results:
**Theorem 2.6**: _We have the following_
* _Let_ \(\gamma\in(0,1]\)_. Then there exists a positive minimal solution_ \(\hat{u}\in H^{1}_{0}(\Omega)\cap L^{\infty}(\Omega)\) _of_ \((P_{0})\) _such that for every_ \(\psi\in H^{1}_{0}(\Omega)\) _we have_ \[\langle\hat{u},\psi\rangle=\int\limits_{\Omega}\hat{u}^{-\gamma}\psi dx.\]
* _Let_ \(\gamma>1\)_. Then there exists a positive minimal solution of_ \((P_{0})\) _in the following sense:_ 1. \(\hat{u}\in H^{1}_{\text{loc}}(\Omega)\cap L^{\infty}(\Omega)\) _and_ \(\hat{u}\) _satisfies item_ \((iii)\) _of Definition_ 2.4_._ 2. \(\inf_{x\in K}\hat{u}(x)>0\) _for every compact subset_ \(K\subset\Omega\)_._ 3. _for every_ \(\psi\in H^{1}_{0}(\Omega)\) _in case of_ \(\gamma<3\) _and additionally with_ \(\text{supp}(\psi)\Subset\Omega\) _in case of_ \(\gamma>3\)_,_ \[\langle\hat{u},\psi\rangle=\int\limits_{\Omega}\hat{u}^{-\gamma}\psi dx.\]
_Furthermore for any \(\gamma>0\), we have_
\[\hat{u}\in\mathcal{G}(\Omega)\ \text{where}\ \mathcal{G}(\Omega)=\begin{cases}u:u \sim\delta&\text{if}\ \gamma<1,\\ u:u\sim\delta\ln^{\frac{1}{2}}\left(\frac{d_{\Omega}}{\delta}\right)&\text{if} \ \gamma=1,\\ u:u\sim\delta^{\frac{2}{\gamma+1}}&\text{if}\ \gamma>1,\end{cases}\]
_and_
\[\hat{u}^{\frac{\vartheta+1}{2}}\in H^{1}_{0}(\Omega)\ \text{if and only if}\ \vartheta>\begin{cases}0&\text{if}\ \gamma\leq 1,\\ \frac{\gamma-1}{2}&\text{if}\ \gamma>1.\end{cases}\]
Now we consider the following translated problem:
\[(\hat{P}_{\lambda})\begin{cases}\mathcal{M}u+\hat{u}^{-\gamma}-(u+\hat{u})^{- \gamma}=\lambda\left(\int\limits_{\Omega}\frac{(u+\hat{u})^{2^{*}_{\mu}}}{|x- y|^{\mu}}dy\right)(u+\hat{u})^{2^{*}_{\mu}-1},\ u>0\ \text{in}\ \Omega,\\ u=0\ \text{in}\ \mathbb{R}^{n}\setminus\Omega.\end{cases}\]
We see that \(u\in X_{0}\) solves \((\hat{P}_{\lambda})\) if and only if \(u+\hat{u}\) solves \((P_{\lambda})\). Define the function \(f:\Omega\times\mathbb{R}\rightarrow\mathbb{R}\cup\{-\infty\}\) by
\[f(x,\tau)=\begin{cases}(\hat{u}(x))^{-\gamma}-(\tau+\hat{u}(x))^{-\gamma}& \text{if}\ \tau+\hat{u}(x)>0,\\ -\infty&\text{otherwise.}\end{cases}\]
Also we define \(F(x,\tau)=\int\limits_{0}^{\tau}f(x,r)dr\). Note that \(f\) is nonnegative and nondecreasing in \(\tau\). Next we define the notion of subsolution and supersolution for problem \((\hat{P}_{\lambda})\).
**Definition 2.7**: _A function \(u\in X_{0}\) is a subsolution (resp. a supersolution) of \((\hat{P}_{\lambda})\) if the following hold:_
* \(f(\cdot,u)\in L^{1}_{\text{loc}}(\Omega)\)_;_
* _For any_ \(\psi\in C^{\infty}_{c}(\Omega)\)_,_ \(\psi\geq 0\)__ \[\langle u,\psi\rangle+\int\limits_{\Omega}f(x,u)\psi dx-\lambda\int\limits_{ \Omega}\int\limits_{\Omega}\frac{(u+\hat{u})^{2^{*}_{\mu}}(y)(u+\hat{u})^{2^{*} _{\mu}-1}(x)\psi(x)}{|x-y|^{\mu}}dxdy\leq 0\ (\text{resp.}\geq 0).\]
**Definition 2.8**: _A function \(u\in X_{0}\) is a weak solution to \((\hat{P}_{\lambda})\) if it is both sub- and supersolution and \(u\geq 0\) in \(\Omega\)._
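For later use, we record the closed form of \(F\) (a direct integration of \(f\), stated here for the reader's convenience): for \(\tau+\hat{u}(x)>0\),
\[F(x,\tau)=\begin{cases}\hat{u}(x)^{-\gamma}\tau-\dfrac{(\tau+\hat{u}(x))^{1-\gamma}-\hat{u}(x)^{1-\gamma}}{1-\gamma}&\text{if}\ \gamma\neq 1,\\[2mm]\hat{u}(x)^{-1}\tau-\log\left(\dfrac{\tau+\hat{u}(x)}{\hat{u}(x)}\right)&\text{if}\ \gamma=1.\end{cases}\]
In particular \(F(x,\cdot)\) is nonnegative and convex, since \(f(x,\cdot)\) is nondecreasing and vanishes at \(0\).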
**Lemma 2.9**: _For each \(v\in X_{0}\), \(v\geq 0\), there exists a sequence \(\{v_{k}\}_{k\in\mathbb{N}}\subset X_{0}\) such that \(v_{k}\to v\) strongly in \(X_{0}\), where \(0\leq v_{1}\leq v_{2}\leq\cdots\) and \(v_{k}\) has compact support in \(\Omega\) for each \(k\)._
**Proof.** The proof is similar to the proof of [24, Lemma 3.1] and hence omitted. \(\Box\)
**Lemma 2.10**: _Let \(u\in X_{0}\) be a weak solution to \((\hat{P}_{\lambda})\). Then for any \(\psi\in X_{0}\), we have_
\[\langle u,\psi\rangle+\int\limits_{\Omega}f(x,u)\psi dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u+\hat{u})^{2^{*}_{\mu}}(y)(u+\hat{u})^{2^{*}_{\mu}-1}(x)\psi(x)}{|x-y|^{\mu}}dxdy=0. \tag{2.5}\]
**Proof.** Let \(0\leq\psi\in X_{0}\). Then by Lemma 2.9, there exists a sequence \(\{\psi_{k}\}_{k\in\mathbb{N}}\subset X_{0}\) such that \(\psi_{k}\) is increasing, each \(\psi_{k}\) has compact support in \(\Omega\) and \(\psi_{k}\to\psi\) strongly in \(X_{0}\). For each fixed \(k\), we can find a sequence \(\{\varphi^{k}_{l}\}_{l\in\mathbb{N}}\subset C^{\infty}_{c}(\Omega)\) such that \(\varphi^{k}_{l}\geq 0\), \(\bigcup_{l}\text{supp}\varphi^{k}_{l}\) is contained in a compact subset of \(\Omega\), \(\{|\varphi^{k}_{l}|_{\infty}\}\) is bounded and \(\varphi^{k}_{l}\to\psi_{k}\) strongly as \(l\to\infty\). Since \(u\) is a weak solution of \((\hat{P}_{\lambda})\), we get
\[\langle u,\varphi^{k}_{l}\rangle+\int\limits_{\Omega}f(x,u)\varphi^{k}_{l}dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u+\hat{u})^{2^{*}_{\mu}}(y)(u+\hat{u})^{2^{*}_{\mu}-1}(x)\varphi^{k}_{l}(x)}{|x-y|^{\mu}}dxdy=0.\]
Now by the strong convergence of \(\varphi^{k}_{l}\to\psi_{k}\) in \(X_{0}\) as \(l\to\infty\), we deduce that
\[\langle u,\psi_{k}\rangle+\int\limits_{\Omega}f(x,u)\psi_{k}dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u+\hat{u})^{2^{*}_{\mu}}(y)(u+\hat{u})^{2^{*}_{\mu}-1}(x)\psi_{k}(x)}{|x-y|^{\mu}}dxdy=0.\]
Now using the monotone convergence theorem, Lebesgue Dominated Convergence theorem and the strong convergence of \(\psi_{k}\) in \(X_{0}\), we obtain \(f(x,u)\psi\in L^{1}(\Omega)\) and we have (2.5) for any \(0\leq\psi\in X_{0}\). Now the result for general \(\psi\in X_{0}\) holds due to the fact that \(\psi=\psi^{+}-\psi^{-}\) and both \(\psi^{+}\) and \(\psi^{-}\) are nonnegative members of \(X_{0}\). \(\Box\)
We now state our main result:
**Theorem 2.11**: _Let \(\mu\leq\min\{4,n\}\). Then there exists \(\Lambda>0\) such that the following hold:_
_Existence:_ (\(P_{\lambda}\)) _admits at least one positive solution in_ \(L^{\infty}(\Omega)\cap\mathcal{G}(\Omega)\) _for every_ \(\lambda\in(0,\Lambda]\) _and no solution for_ \(\lambda>\Lambda\)_._
_Multiplicity: Assuming \(n+2s<6\), there exist at least two distinct solutions in \(L^{\infty}(\Omega)\cap\mathcal{G}(\Omega)\) for \(\lambda\in(0,\Lambda)\)._
### Notion of Nonsmooth analysis
To obtain the existence of nontrivial solutions to problem (\(P_{\lambda}\)), we use some nonsmooth analysis tools. In this subsection we collect some basic definitions, observations and recall a version of the linking theorem adapted to nonsmooth functionals. We begin with the following definition:
**Definition 2.12**: _Let \(V\) be a Hilbert space and \(I:V\to(-\infty,\infty]\) be a proper (i.e., \(I\not\equiv\infty)\) lower semicontinuous functional._
* _Let_ \(D(I)=\{u\in V:I(u)<\infty\}\) _be the domain of_ \(I\)_. For every_ \(u\in D(I)\)_, we define the Fréchet subdifferential of_ \(I\) _at_ \(u\) _as the set_ \[\partial^{-}I(u)=\left\{z\in V:\liminf_{v\to u}\frac{I(v)-I(u)-\langle z,v-u\rangle}{\|v-u\|_{V}}\geq 0\right\}.\]
* _For each_ \(u\in V\)_, we define_ \[|||\partial^{-}I(u)|||=\begin{cases}\min\{\|z\|_{V}:z\in\partial^{-}I(u)\}& \text{ if }\partial^{-}I(u)\neq\emptyset,\\ \infty&\text{ if }\partial^{-}I(u)=\emptyset.\end{cases}\]
We know that \(\partial^{-}I(u)\) is a closed convex set which may be empty. If \(u\in D(I)\) is a local minimizer for \(I\), then it can be seen that \(0\in\partial^{-}I(u)\).
**Remark 2.13**: _We remark that if \(I_{0}:V\to(-\infty,\infty]\) is a proper, lower semicontinuous, convex functional, \(I_{1}:V\to\mathbb{R}\) is a \(C^{1}\) functional and \(I=I_{1}+I_{0}\), then \(\partial^{-}I(u)=\nabla I_{1}(u)+\partial I_{0}(u)\) for every \(u\in D(I)=D(I_{0})\), where \(\partial I_{0}\) denotes the usual subdifferential of the convex functional \(I_{0}\). Thus, \(u\) is said to be a critical point of \(I\) if \(u\in D(I_{0})\) and for every \(v\in V\), we have \(\langle\nabla I_{1}(u),v-u\rangle+I_{0}(v)-I_{0}(u)\geq 0\)._
**Definition 2.14**: _For a proper, lower semicontinuous functional \(I:V\to(-\infty,\infty]\), we say that \(I\) satisfies Cerami's variant of the Palais-Smale condition at level \(d\) (in short, \(I\) satisfies \((CPS)_{d}\)) if any sequence \(\{w_{k}\}_{k\in\mathbb{N}}\subset D(I)\) such that \(I(w_{k})\to d\) and \((1+\|w_{k}\|_{V})|||\partial^{-}I(w_{k})|||\to 0\) has a strongly convergent subsequence in \(V\)._
Analogous to the mountain pass theorem, we have the following linking theorem for nonsmooth functionals.
**Theorem 2.15**: _[_36_]_ _Let \(V\) be a Hilbert space. Assume \(I=I_{0}+I_{1}\), where \(I_{0}:V\to(-\infty,\infty]\) is a proper, lower semicontinuous, convex functional and \(I_{1}:V\to\mathbb{R}\) is a \(C^{1}\) functional. Let
\(B^{n},\ S^{n-1}\) denote the closed unit ball and its boundary in \(\mathbb{R}^{n}\) respectively. Let \(\varphi:S^{n-1}\to D(I)\) be a continuous function such that_
\[\Sigma=\{\psi\in C(B^{n},D(I)):\psi|_{S^{n-1}}=\varphi\}\neq\emptyset.\]
_Let \(A\) be a relatively closed subset of \(D(I)\) such that_
\[A\cap\varphi(S^{n-1})=\emptyset,\ A\cap\psi(B^{n})\neq\emptyset\ \mbox{for all}\ \psi\in\Sigma\ \mbox{and}\ \inf I(A)\geq\sup I(\varphi(S^{n-1})).\]
_Define \(d=\inf_{\psi\in\Sigma}\sup_{x\in B^{n}}I(\psi(x))\). Assume that \(d\) is finite and that \(I\) satisfies \((CPS)_{d}\). Then there exists \(u\in D(I)\) such that \(I(u)=d\) and \(0\in\partial^{-}I(u)\). Furthermore, if \(\inf I(A)=d\), then there exists \(u\in A\cap D(I)\) such that \(I(u)=d\) and \(0\in\partial^{-}I(u)\)._
## 3 Regularity of weak solutions
In this section, we prove some regularity results for positive weak solutions to \((P_{\lambda})\). For this, we first investigate the regularity of positive weak solutions to \((\hat{P}_{\lambda})\). We start with the boundedness property, obtained by Moser type iterations:
**Lemma 3.1**: _Any nonnegative solution to \((\hat{P}_{\lambda})\) belongs to \(L^{\infty}(\mathbb{R}^{n})\)._
**Proof.** Let \(u\) be a nonnegative solution to \((\hat{P}_{\lambda})\). We define \(u_{\tau}=\min\{u,\tau\}\) for \(\tau>0\). Take \(\psi=u(u_{\tau})^{q-2}\), \(q\geq 3\), as a test function for problem \((\hat{P}_{\lambda})\). Now
\[\nabla(uu_{\tau}^{\frac{q}{2}-1})=u_{\tau}^{\frac{q}{2}-1}\nabla u+\left( \frac{q}{2}-1\right)u_{\tau}^{\frac{q}{2}-2}u\nabla u_{\tau}.\]
This implies
\[\left|\nabla(u(u_{\tau})^{\frac{q}{2}-1})\right|^{2}= \sum_{i=1}^{n}\left(u_{\tau}^{\frac{q}{2}-1}\frac{\partial u}{ \partial x_{i}}+\left(\frac{q}{2}-1\right)u_{\tau}^{\frac{q}{2}-2}u\frac{ \partial u_{\tau}}{\partial x_{i}}\right)^{2}\leq 2\left(u_{\tau}^{q-2}| \nabla u|^{2}+\frac{q^{2}}{4}u_{\tau}^{q-4}u^{2}|\nabla u_{\tau}|^{2}\right)\] \[\leq \frac{q^{2}}{2}\left(u_{\tau}^{q-2}|\nabla u|^{2}+u_{\tau}^{q-4} u^{2}|\nabla u_{\tau}|^{2}\right).\]
Thus,
\[\int\limits_{\Omega}\left|\nabla(u(u_{\tau})^{\frac{q}{2}-1}) \right|^{2}\leq\frac{q^{2}}{2}\left(\int\limits_{\Omega}u_{\tau}^{q-2}|\nabla u |^{2}+\int\limits_{\{u<\tau\}}u^{q-2}|\nabla u_{\tau}|^{2}\right). \tag{3.1}\]
Also we have
\[\int\limits_{\Omega}\nabla u\cdot\nabla(u(u_{\tau})^{q-2})= \int\limits_{\Omega}u_{\tau}^{q-2}|\nabla u|^{2}+(q-2)\int \limits_{\Omega}u_{\tau}^{q-3}u\nabla u\cdot\nabla u_{\tau}\] \[\geq \int\limits_{\Omega}u_{\tau}^{q-2}|\nabla u|^{2}+\int\limits_{\{ u<\tau\}}u^{q-2}|\nabla u|^{2}. \tag{3.2}\]
Combining (3.1) and (3.2), we get
\[\int\limits_{\Omega}\left|\nabla(u(u_{\tau})^{\frac{q}{2}-1}) \right|^{2}\leq Cq^{2}\int\limits_{\Omega}\nabla u\nabla\psi. \tag{3.3}\]
Now from [22, Lemma 3.5], we have the following inequality:
\[\frac{4(q-1)}{q^{2}}\left(a|a_{k}|^{\frac{q}{2}-1}-b|b_{k}|^{\frac{q}{2}-1}\right)^{2}\leq(a-b)(a|a_{k}|^{q-2}-b|b_{k}|^{q-2}). \tag{3.4}\]
where \(a,\ b\in\mathbb{R}\), \(q\geq 2\), \(a_{k}=\min\{a,k\}\) and \(b_{k}=\min\{b,k\}\). Using (3.4) with \(a=u(x)\) and \(b=u(y)\), we obtain
\[[u(u_{\tau})^{\frac{q}{2}-1}]_{s}^{2}\leq\frac{Cq^{2}}{q-1}\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{(u(x)-u(y))(\psi(x)-\psi(y))}{|x-y|^{n+2s}}dxdy. \tag{3.5}\]
Using (3.3), (3.5) and the Sobolev inequality, we get
\[|u(u_{\tau})^{\frac{q}{2}-1}|_{2^{*}}^{2}\leq C\left(\ll u(u_{\tau})^{\frac{q}{2}-1}\gg^{2}+[u(u_{\tau})^{ \frac{q}{2}-1}]_{s}^{2}\right)\leq Cq^{2}\langle u,\psi\rangle\] \[= Cq^{2}\left(-\int\limits_{\Omega}f(x,u)u(u_{\tau})^{q-2}dx+\int \limits_{\Omega}\int\limits_{\Omega}\frac{(u+\hat{u})^{2^{*}_{\mu}}(u+\hat{u} )^{2^{*}_{\mu}-1}u(u_{\tau})^{q-2}}{|x-y|^{\mu}}dxdy\right).\]
The rest of the proof follows as in the proof of [22, Lemma 4.1]. \(\Box\)
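As a quick sanity check of (3.4) (our remark, not part of the original proof): for \(q=2\) the prefactor \(4(q-1)/q^{2}\) equals \(1\) and both sides reduce to \((a-b)^{2}\), while in the untruncated regime \(a=a_{k}\geq 0\), \(b=b_{k}\geq 0\) the inequality becomes the classical algebraic estimate
\[\frac{4(q-1)}{q^{2}}\left(a^{\frac{q}{2}}-b^{\frac{q}{2}}\right)^{2}\leq(a-b)\left(a^{q-1}-b^{q-1}\right),\qquad q\geq 2.\]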
**Lemma 3.2**: _Let \(q>0\) and let \(v\in L^{(q+1)/q}(\Omega)\) be a positive function and \(u\in X_{0}\cap L^{q+1}(\Omega)\) a positive weak solution to_
\[\mathcal{M}u+f(x,u)=v\ \text{in}\ \Omega,\ u=0\ \text{in}\ \mathbb{R}^{n}\setminus\Omega. \tag{3.6}\]
_Then \((u+\hat{u}-\epsilon_{1})^{+}\in X_{0}\) for every \(\epsilon_{1}>0\)._
**Proof.** Let \(\epsilon_{1},\ \epsilon_{2}>0\) and set \(\varphi=\min\{u,\epsilon_{1}-(\hat{u}-\epsilon_{2})^{+}\}\in X_{0}\). Note that \(u-\varphi=(u+(\hat{u}-\epsilon_{2})^{+}-\epsilon_{1})^{+}\in X_{0}\). Since
\[0\leq v(u-\varphi)\leq vu+v\hat{u}\in L^{1}(\Omega),\]
using the arguments in the proof of Lemma 2.10, we can show that \(f(\cdot,u)(u-\varphi)\in L^{1}(\Omega)\) and
\[\langle u,u-\varphi\rangle+\int\limits_{\Omega}f(x,u)(u-\varphi)dx-\int \limits_{\Omega}v(u-\varphi)=0.\]
Now using the following inequality for the fractional Laplacian:
\[(-\Delta)^{s}g(u)\leq g^{\prime}(u)(-\Delta)^{s}u,\]
where \(g\) is a convex, piecewise \(C^{1}\) function with bounded derivative, we have
\[\langle(\hat{u}-\epsilon_{2})^{+},\psi\rangle\leq\langle\hat{u},\psi\rangle= \int\limits_{\Omega}\hat{u}^{-\gamma}\psi dx,\ \text{for every}\ 0\leq\psi\in C_{c}^{\infty}(\Omega).\]
So, arguing as in the proof of Lemma 2.10, we can show that
\[\langle(\hat{u}-\epsilon_{2})^{+},u-\varphi\rangle\leq\int\limits_{\Omega}\hat{u}^{-\gamma}(u-\varphi)dx.\]
We note that \(u+\hat{u}\geq\epsilon_{1}\) when \(u\neq\varphi\), \((u+\hat{u})^{-\gamma}(u-\varphi)\in L^{1}(\Omega)\) and \(\hat{u}(u-\varphi)\in L^{1}(\Omega)\). Therefore, we have
\[\|(u+(\hat{u}-\epsilon_{2})^{+}-\epsilon_{1})^{+}\|^{2}\leq \langle u+(\hat{u}-\epsilon_{2})^{+}-\epsilon_{1},u-\varphi\rangle\] \[\leq \int\limits_{\Omega}\hat{u}^{-\gamma}(u-\varphi)dx-\int\limits_{\Omega}f(x,u)(u-\varphi)dx+\int\limits_{\Omega}v(u-\varphi)dx\] \[= \int\limits_{\Omega}(u+\hat{u})^{-\gamma}(u-\varphi)dx+\int\limits_{\Omega}v(u-\varphi)dx\] \[\leq \epsilon_{1}^{-\gamma}\int\limits_{\Omega}(u-\varphi)dx+\int\limits_{\Omega}v(u-\varphi)dx.\]
Thus for any \(\epsilon_{1}>0\), we have that \((u+(\hat{u}-\epsilon_{2})^{+}-\epsilon_{1})^{+}\) is bounded in \(X_{0}\) as \(\epsilon_{2}\to 0^{+}\). Hence, we conclude that \((u+\hat{u}-\epsilon_{1})^{+}\in X_{0}\) for every \(\epsilon_{1}>0\). \(\Box\)
**Corollary 3.3**: _Let \(v\in L^{2^{*}}(\Omega)\) be a positive function and set \(g(x,v)=\left(\int\limits_{\Omega}\frac{v^{2^{*}_{\mu}}(y)}{|x-y|^{\mu}}dy\right)v^{2^{*}_{\mu}-1}\). Let \(u\in X_{0}\) be a positive weak solution to_
\[\mathcal{M}u+f(x,u)=g(x,v)\text{ in }\Omega,\ u=0\text{ in }\mathbb{R}^{n} \setminus\Omega. \tag{3.7}\]
_Then \((u+\hat{u}-\epsilon)^{+}\in X_{0}\) for every \(\epsilon>0\)._
Now we establish a crucial comparison principle. It states as follows:
**Lemma 3.4**: _Let \(\mathcal{H}\in X_{0}^{*}\) (the dual of \(X_{0}\)) and let \(v,\ w\in H^{1}_{\text{loc}}(\Omega)\) be such that \(v,\ w>0\) a.e. in \(\Omega\), \(v,\ w\geq 0\) in \(\mathbb{R}^{n}\), \(v^{-\gamma},\ w^{-\gamma}\in L^{1}_{\text{loc}}(\Omega)\), \((v-\epsilon)^{+}\in X_{0}\) for all \(\epsilon>0\), \(z\in L^{1}(\Omega)\) and_
\[\langle v,\psi\rangle\leq\int\limits_{\Omega}v^{-\gamma}\psi dx+(\mathcal{H}, \psi),\ \langle w,\psi\rangle\geq\int\limits_{\Omega}w^{-\gamma}\psi dx+(\mathcal{H},\psi) \tag{3.8}\]
_for all compactly supported functions \(0\leq\psi\in X_{0}\cap L^{\infty}(\Omega)\). Then \(v\leq w\) a.e. in \(\Omega\)._
**Proof.** Let us denote \(\Psi_{k}:\mathbb{R}\rightarrow\mathbb{R}\) as the primitive of the function
\[\tau\mapsto\begin{cases}\max\{-\tau^{-\gamma},-k\}&\text{ if }\tau>0,\\ -k&\text{ if }\tau\leq 0\end{cases}\]
such that \(\Psi_{k}(1)=0\). Next we define a proper lower semicontinuous, strictly convex functional \(\widetilde{G}_{0,k}:L^{2}(\Omega)\rightarrow\mathbb{R}\) as
\[\widetilde{G}_{0,k}(u)=\begin{cases}\frac{1}{2}\|u\|^{2}+\int\limits_{\Omega} \Psi_{k}(u)dx&\text{ if }u\in X_{0},\\ \infty&\text{ if }u\in L^{2}(\Omega)\setminus X_{0}.\end{cases}\]
Since primitives are defined only up to an additive constant, to fix a convenient normalization we consider \(G_{0,k}:L^{2}(\Omega)\rightarrow\mathbb{R}\) defined by
\[G_{0,k}(u)=\widetilde{G}_{0,k}(u)-\min\widetilde{G}_{0,k}=\widetilde{G}_{0,k} (u)-\widetilde{G}_{0,k}(u_{0,k}),\]
where \(u_{0,k}\in X_{0}\) is the minimum of \(\widetilde{G}_{0,k}\). In general, for \(\mathcal{H}\in X_{0}^{*}\) we set
\[\widetilde{G}_{\mathcal{H},k}(u)=\begin{cases}G_{0,k}(u)-(\mathcal{H},u-u_{0,k})& \text{ if }u\in X_{0}\\ \infty&\text{ if }u\in L^{2}(\Omega)\setminus X_{0}.\end{cases}\]
Let \(\epsilon>0\), let \(k>\epsilon^{-\gamma}\), and let \(z\) be the minimizer of the functional \(\widetilde{G}_{\mathcal{H},k}\) on the convex set \(K=\{\psi\in X_{0}:0\leq\psi\leq w\text{ a.e. in }\Omega\}\). Then for all \(\psi\in K\) we get
\[\langle z,\psi-z\rangle\geq-\int\limits_{\Omega}\Psi_{k}^{\prime}(z)(\psi-z)dx+(\mathcal{H},\psi-z). \tag{3.9}\]
Let \(0\leq\psi\in C_{c}^{\infty}(\Omega)\), \(t>0\). Define \(\psi_{t}:=\min\{z+t\psi,w\}\). Noting that \(w\in H^{1}_{\text{loc}}(\Omega)\), \(z\in X_{0}\), \(\psi\in C_{c}^{\infty}(\Omega)\), we have \(\psi_{t}\in X_{0}\). Next we claim that \(\psi_{t}\) is uniformly bounded in \(X_{0}\) for all \(t<1\). Using the continuous embedding of \(H^{1}_{0}(\Omega)\) into \(H^{s}_{0}(\mathbb{R}^{n})\), it is sufficient to show that \(\ll\psi_{t}\gg\) is uniformly bounded in \(t\). We have
\[\int\limits_{\Omega}|\nabla\psi_{t}|^{2} =\int\limits_{\{z+t\psi\leq w\}}|\nabla z+t\nabla\psi|^{2}+\int\limits_{\{z+t\psi>w\}}|\nabla w|^{2}\] \[\leq\int\limits_{\Omega}|\nabla z|^{2}+t^{2}\int\limits_{\Omega}|\nabla\psi|^{2}+2t\int\limits_{\Omega}\nabla z\cdot\nabla\psi+\int\limits_{\text{supp}\psi}|\nabla w|^{2}\] \[\leq\ll z\gg^{2}+\ll\psi\gg^{2}+2\ll z\gg\ll\psi\gg+\int\limits_{\text{supp}\psi}|\nabla w|^{2}<\infty.\]
This proves the claim. Considering the subsequence (still denoted by \(\psi_{t}\)) such that \(\psi_{t}\rightharpoonup z\) weakly in \(X_{0}\) and taking \(\psi=\psi_{t}\) in (3.9), we obtain
\[\langle z,\psi_{t}-z\rangle\geq-\int\limits_{\Omega}\Psi_{k}^{\prime}(z)(\psi_{t}-z)dx+(\mathcal{H},\psi_{t}-z). \tag{3.10}\]
Since \(w\) is a supersolution and \(w^{-\gamma}\geq-\Psi_{k}^{\prime}(w)\), we infer that \(w\) satisfies
\[\langle w,\psi\rangle\geq-\int\limits_{\Omega}\Psi_{k}^{\prime}(w)\psi dx+( \mathcal{H},\psi). \tag{3.11}\]
Using the facts that \(\psi_{t}\leq w\), \(\psi_{t}-z-t\psi\leq 0\) and \(\psi_{t}-z-t\psi\neq 0\) only if \(\psi_{t}=w\), we observe that
\[\begin{split}&\int\limits_{\Omega}\nabla\psi_{t}\nabla(\psi_{t}-z-t\psi)+\frac{C(n,s)}{2}\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{(\psi_{t}(x)-\psi_{t}(y))\left((\psi_{t}-z-t\psi)(x)-(\psi_{t}-z-t\psi)(y)\right)}{|x-y|^{n+2s}}dxdy\\ &\leq\int\limits_{\Omega}\nabla z\nabla(\psi_{t}-z-t\psi)+\frac{C(n,s)}{2}\int\limits_{\mathbb{R}^{n}}\int\limits_{\mathbb{R}^{n}}\frac{(z(x)-z(y))\left((\psi_{t}-z-t\psi)(x)-(\psi_{t}-z-t\psi)(y)\right)}{|x-y|^{n+2s}}dxdy\\ &=\langle z,\psi_{t}-z-t\psi\rangle.\end{split}\tag{3.12}\]
Similarly, \(\int\limits_{\Omega}(\Psi^{\prime}_{k}(\psi_{t})-\Psi^{\prime}_{k}(w))(\psi_{t}-z-t\psi)\leq 0\) and moreover \(-\Psi^{\prime}_{k}(w)\leq w^{-\gamma}\). Taking into account (3.8), (3.10), (3.11), (3.12) and the above observations, we infer that
\[\|\psi_{t}-z\|^{2} -\int\limits_{\Omega}(-\Psi^{\prime}_{k}(\psi_{t})+\Psi^{\prime}_ {k}(z))(\psi_{t}-z)dx\] \[= \langle\psi_{t},\psi_{t}-z\rangle+\int\limits_{\Omega}\Psi^{ \prime}_{k}(\psi_{t})(\psi_{t}-z)dx-\langle z,\psi_{t}-z\rangle-\int\limits_{ \Omega}\Psi^{\prime}_{k}(z)(\psi_{t}-z)dx\] \[\leq \langle\psi_{t},\psi_{t}-z\rangle+\int\limits_{\Omega}\Psi^{ \prime}_{k}(\psi_{t})(\psi_{t}-z)dx-(\mathcal{H},\psi_{t}-z)\] \[= \langle\psi_{t},\psi_{t}-z-t\psi\rangle+\int\limits_{\Omega}\Psi ^{\prime}_{k}(\psi_{t})(\psi_{t}-z-t\psi)dx-(\mathcal{H},\psi_{t}-z-t\psi)\] \[+t\left(\langle\psi_{t},\psi\rangle+\int\limits_{\Omega}\Psi^{ \prime}_{k}(\psi_{t})\psi-(\mathcal{H},\psi)\right)\] \[\leq \langle w,\psi_{t}-z-t\psi\rangle+\int\limits_{\Omega}\Psi^{ \prime}_{k}(w)(\psi_{t}-z-t\psi)dx-(\mathcal{H},\psi_{t}-z-t\psi)\] \[+t\left(\langle\psi_{t},\psi\rangle+\int\limits_{\Omega}\Psi^{ \prime}_{k}(\psi_{t})\psi-(\mathcal{H},\psi)\right)\leq t\left(\langle\psi_{t},\psi\rangle+\int\limits_{\Omega}\Psi^{\prime}_{k}(\psi_{t})\psi-(\mathcal{H}, \psi)\right).\]
This yields
\[\langle\psi_{t},\psi\rangle+\int\limits_{\Omega}\Psi^{\prime}_{k} (\psi_{t})\psi-(\mathcal{H},\psi)\geq \frac{1}{t}\left(\|\psi_{t}-z\|^{2}-\int\limits_{\Omega}|\Psi^{ \prime}_{k}(\psi_{t})-\Psi^{\prime}_{k}(z)|(\psi_{t}-z)dx\right)\] \[\geq -\int\limits_{\Omega}|\Psi^{\prime}_{k}(\psi_{t})-\Psi^{\prime}_ {k}(z)|\psi dx.\]
Now using the weak convergence of \(\psi_{t}\), monotone convergence theorem and dominated convergence theorem, we have
\[\langle z,\psi\rangle\geq-\int\limits_{\Omega}\Psi^{\prime}_{k}(z)\psi dx+( \mathcal{H},\psi). \tag{3.13}\]
Since \(C^{\infty}_{c}(\Omega)\) is dense in \(X_{0}\), we infer that (3.13) is true for all \(\psi\in X_{0}\) with \(\psi\geq 0\) a.e. in \(\Omega\). In particular, since \(z\geq 0\) we have \((v-z-\epsilon)^{+}\in X_{0}\). Testing (3.13) with \((v-z-\epsilon)^{+}\), we get
\[\langle z,(v-z-\epsilon)^{+}\rangle\geq-\int\limits_{\Omega}\Psi^{\prime}_{k} (z)(v-z-\epsilon)^{+}dx+(\mathcal{H},(v-z-\epsilon)^{+}). \tag{3.14}\]
Let us now consider \(\Theta\in X_{0}\) such that \(0\leq\Theta\leq v\) a.e. in \(\Omega\). Let \(\{\Theta_{m}\}\) be a sequence in \(C^{\infty}_{c}(\Omega)\) converging to \(\Theta\in X_{0}\) and set \(\tilde{\Theta}_{m}=\min\{\Theta^{+}_{m},\Theta\}\). Testing (3.8) with \(\tilde{\Theta}_{m}\), we get
\[\langle v,\tilde{\Theta}_{m}\rangle\leq\int\limits_{\Omega}v^{-\gamma}\tilde{ \Theta}_{m}dx+(\mathcal{H},\tilde{\Theta}_{m}).\]
If \(v^{-\gamma}\Theta\in L^{1}(\Omega)\), then passing to the limit as \(m\to\infty\), we get
\[\langle v,\Theta\rangle\leq\int\limits_{\Omega}v^{-\gamma}\Theta dx+(\mathcal{H}, \Theta).\]
If \(v^{-\gamma}\Theta\not\in L^{1}(\Omega)\), then the above inequality holds trivially, since its right-hand side is infinite. In particular, we have
\[\langle v,(v-z-\epsilon)^{+}\rangle\leq\int\limits_{\Omega}v^{-\gamma}(v-z- \epsilon)^{+}dx+(\mathcal{H},(v-z-\epsilon)^{+}). \tag{3.15}\]
Using (3.14), (3.15) together with the fact that \(k\geq\epsilon^{-\gamma}\), we get
\[\langle(v-z-\epsilon)^{+},(v-z-\epsilon)^{+}\rangle\leq \langle v-z,(v-z-\epsilon)^{+}\rangle\leq\int\limits_{\Omega}(v^{- \gamma}+\Psi^{\prime}_{k}(z))(v-z-\epsilon)^{+}dx\] \[= \int\limits_{\Omega}(-\Psi^{\prime}_{k}(v)+\Psi^{\prime}_{k}(z)) (v-z-\epsilon)^{+}dx\leq 0.\]
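The sign of the last integrand is a pointwise check, and this is precisely where the choice \(k\geq\epsilon^{-\gamma}\) enters: on the set \(\{v>z+\epsilon\}\) we have \(v>\epsilon\) and \(v>z\), hence

\[v^{-\gamma}<\epsilon^{-\gamma}\leq k\ \Longrightarrow\ \Psi_{k}^{\prime}(v)=-v^{-\gamma},\qquad v>z\ \Longrightarrow\ \Psi_{k}^{\prime}(v)\geq\Psi_{k}^{\prime}(z),\]

so that \((v^{-\gamma}+\Psi_{k}^{\prime}(z))(v-z-\epsilon)^{+}=(-\Psi_{k}^{\prime}(v)+\Psi_{k}^{\prime}(z))(v-z-\epsilon)^{+}\leq 0\) pointwise.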
Thus \(v\leq z+\epsilon\leq w+\epsilon.\) Since \(\epsilon\) was arbitrarily chosen, the proof follows. \(\Box\)
**Lemma 3.5**: _Let \(\lambda>0\) and let \(v\in H^{1}_{\text{loc}}(\Omega)\cap L^{2^{*}}(\Omega)\) be a weak solution to \((P_{\lambda})\) as defined in Definition 2.4. Then \(v-\hat{u}\) is a positive weak solution to \((\hat{P}_{\lambda})\) belonging to \(L^{\infty}(\Omega)\)._
**Proof.** Consider problem (3.7) with \(v\) given. Then \(0\) is a strict subsolution to (3.7). Define the functional \(J:X_{0}\to(-\infty,\infty]\) by
\[J(u)=\begin{cases}\frac{1}{2}\|u\|^{2}+\int\limits_{\Omega}F(x,u)dx-\frac{ \lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{v^{2^{*}_ {\mu}}v^{2^{*}_{\mu}-1}u}{|x-y|^{\mu}}dxdy&\text{ if }F(\cdot,u)\in L^{1}( \Omega),\\ \infty&\text{ otherwise.}\end{cases}\]
Define \(K^{\prime}=\{u\in X_{0}:u\geq 0\}\), a closed convex set and define
\[J_{K^{\prime}}(u)=\begin{cases}J(u)&\text{ if }u\in K^{\prime}\text{ and }F(\cdot,u)\in L^{1}(\Omega),\\ \infty&\text{ otherwise.}\end{cases}\]
We can easily show that there exists \(u\in K^{\prime}\) such that \(J_{K^{\prime}}(u)=\inf J_{K^{\prime}}(K^{\prime})\). This implies that \(0\in\partial^{-}J_{K^{\prime}}(u)\). Now from Proposition 4.2, we obtain that \(u\) is a nonnegative solution to (3.7). Using Corollary 3.3, Lemma 2.5 and arguing as in Lemma 2.10, we obtain that \((u+\hat{u}-\epsilon)^{+}\in X_{0}\) for every \(\epsilon>0\) and
\[\langle u+\hat{u},\psi\rangle-\int\limits_{\Omega}(u+\hat{u})^{- \gamma}\psi dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{v^{2^{*}_ {\mu}}v^{2^{*}_{\mu}-1}\psi}{|x-y|^{\mu}}dxdy=0\] \[\langle v,\psi\rangle-\int\limits_{\Omega}v^{-\gamma}\psi dx- \lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{v^{2^{*}_{\mu}}v^{2^{*} _{\mu}-1}\psi}{|x-y|^{\mu}}dxdy=0\]
for \(0\leq\psi\in X_{0}\cap L^{\infty}(\Omega)\) with compact support in \(\Omega\). Now using Lemma 3.4, we get \(v=u+\hat{u}\), which implies that \(u=v-\hat{u}\) is a positive weak solution of \((\hat{P}_{\lambda})\). Finally, by Lemma 3.1, we have \(u\in L^{\infty}(\mathbb{R}^{n})\). \(\Box\)
**Lemma 3.6**: _Let \(\mu\leq\min\{4,n\}\). Let \(u\) be any weak solution of \((P_{\lambda})\). Then \(u\in L^{\infty}(\Omega)\cap\mathcal{G}(\Omega)\)._
**Proof.** Let \(u\) be any weak solution of problem \((P_{\lambda})\). Using Lemma 3.5, we have that \(u-\hat{u}\in X_{0}\) is a solution of \((\hat{P}_{\lambda})\). Again using Lemma 3.1, we have \(u-\hat{u}\in L^{\infty}(\Omega)\). Therefore, \(u=(u-\hat{u})+\hat{u}\in L^{\infty}(\Omega)\). Now let \(\tilde{u}\) be a solution to the following problem:
\[\mathcal{M}\tilde{u}=\tilde{u}^{-\gamma}+\lambda d,\ \tilde{u}>0\ \text{in}\ \Omega,\ \tilde{u}=0\ \text{in}\ \mathbb{R}^{n}\setminus\Omega\]
where \(d=D^{*}|u|_{\infty}^{22^{*}_{\mu}-1}\) with \(D^{*}=\left|\int\limits_{\Omega}\dfrac{dy}{|x-y|^{\mu}}\right|_{\infty}\). Using Lemma 3.4, we observe that \(\hat{u}\leq u\leq\tilde{u}\) a.e. in \(\Omega\). Finally, by using the regularity of \(\hat{u}\) and \(\tilde{u}\), we conclude that \(u\in\mathcal{G}(\Omega)\). \(\Box\)
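The role of the constant \(d\) above is only to dominate the Choquard term; a crude pointwise estimate (using \(|u|_{\infty}<\infty\), established in the first part of the proof) gives, for a.e. \(x\in\Omega\),

\[\lambda\left(\int\limits_{\Omega}\frac{(u(y))^{2^{*}_{\mu}}}{|x-y|^{\mu}}dy\right)(u(x))^{2^{*}_{\mu}-1}\leq\lambda\,|u|_{\infty}^{22^{*}_{\mu}-1}\left|\int\limits_{\Omega}\frac{dy}{|x-y|^{\mu}}\right|_{\infty}=\lambda d,\]

so that \(u\) is a subsolution of the problem satisfied by \(\tilde{u}\), and Lemma 3.4 applies with \(\mathcal{H}=\lambda d\).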
## 4 Existence and multiplicity of positive solutions for \((P_{\lambda})\)
### First Solution
In this subsection, we prove the existence of a weak solution which turns out to be a local minimizer of an appropriate functional. We start by giving the variational framework for problem \((\hat{P}_{\lambda})\) in the space \(X_{0}\). We define the functional \(\Phi:X_{0}\to(-\infty,\infty]\) associated with \((\hat{P}_{\lambda})\) by
\[\Phi(u)=\begin{cases}\dfrac{1}{2}\|u\|^{2}+\int\limits_{\Omega}F(x,u)dx-\dfrac {\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\dfrac{|u+\hat{ u}|^{2^{*}_{\mu}}|u+\hat{u}|^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy&\text{if }F(\cdot,u)\in L^{1}( \Omega),\\ \infty&\text{otherwise}.\end{cases}\]
Next for any closed convex subset \(K\subset X_{0}\), we define the functional \(\Phi_{K}:X_{0}\to(-\infty,\infty]\) by
\[\Phi_{K}(u)=\begin{cases}\Phi(u)&\text{if }u\in K\text{ and }F(\cdot,u)\in L^{1}( \Omega),\\ \infty&\text{otherwise}.\end{cases}\]
We note that \(u\in D(\Phi_{K})\) iff \(u\in K\) and \(F(\cdot,u)\in L^{1}(\Omega)\). Our next lemma characterizes the set \(\partial^{-}\Phi_{K}(u)\).
**Lemma 4.1**: _Let \(K\subset X_{0}\) be a convex set and let \(\nu\in X_{0}\). Let also \(u\in K\) with \(F(\cdot,u)\in L^{1}(\Omega)\). Then the following assertions are equivalent:_
* \(\nu\in\partial^{-}\Phi_{K}(u)\)_._
* _For every_ \(\varphi\in K\) _with_ \(F(\cdot,\varphi)\in L^{1}(\Omega)\)_, we have_ \(f(\cdot,u)(\varphi-u)\in L^{1}(\Omega)\) _and_ \[\langle\nu,\varphi-u\rangle\leq\langle u,\varphi-u\rangle+\int\limits_{\Omega }f(x,u)(\varphi-u)dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\dfrac{ (u+\hat{u})^{2^{*}_{\mu}}(u+\hat{u})^{2^{*}_{\mu}-1}(\varphi-u)}{|x-y|^{\mu}}dxdy.\]
**Proof.** The proof is similar to the proof of [23, Lemma 5.1] and hence omitted.
Now for any functions \(v,\ w:\Omega\to[-\infty,\infty]\), we define the following convex sets:
\[K_{v}=\{u\in X_{0}:v\leq u\ \mbox{a.e.}\},\ K^{w}=\{u\in X_{0}:u\leq w\ \mbox{a.e.}\}\mbox{ and }K_{v}^{w}=\{u\in X_{0}:v\leq u\leq w\ \mbox{a.e.}\}.\]
We state the following proposition which can be thought of as Perron's method for non-smooth functionals.
**Proposition 4.2**: _Assume that one of the following conditions holds:_
* \(v_{1}\) _is a subsolution to_ \((\hat{P}_{\lambda})\)_,_ \(F(x,\varphi(x))\in L^{1}_{loc}(\Omega)\) _for all_ \(\varphi\in K_{v_{1}}\)_,_ \(u\in D(\Phi_{K_{v_{1}}})\) _and_ \(0\in\partial^{-}\Phi_{K_{v_{1}}}(u)\)_._
* \(v_{2}\) _is a supersolution of_ \((\hat{P}_{\lambda})\)_,_ \(F(x,\varphi(x))\in L^{1}_{loc}(\Omega)\) _for all_ \(\varphi\in K^{v_{2}}\)_,_ \(u\in D(\Phi_{K^{v_{2}}})\) _and_ \(0\in\partial^{-}\Phi_{K^{v_{2}}}(u)\)_._
* \(v_{1}\) _and_ \(v_{2}\) _are subsolution and supersolution of_ \((\hat{P}_{\lambda})\)_,_ \(v_{1}\leq v_{2}\)_,_ \(F(x,v_{1}(x)),\ F(x,v_{2}(x))\in L^{1}_{loc}(\Omega)\)_,_ \(u\in D(\Phi_{K_{v_{1}}^{v_{2}}})\) _and_ \(0\in\partial^{-}\Phi_{K_{v_{1}}^{v_{2}}}(u)\)_._
_Then \(u\) is a weak solution of \((\hat{P}_{\lambda})\)._
**Proof.** Following the proof of [24, Proposition 4.2], we have the required result. \(\Box\)
Let \(\xi\) be the function which satisfies \({\cal M}\xi=\frac{1}{2}\). From [8, Theorem 2.7], \(\xi\in C^{1,\beta}(\bar{\Omega})\) for some \(\beta\in(0,1)\). For \(f\) and \(F\), we have the following properties.
**Lemma 4.3**:
* _Let_ \(u\in L^{1}_{loc}(\Omega)\) _such that ess_ \(\inf_{K}u>0\) _for any compact set_ \(K\subset\Omega\)_. Then_ \(f(x,u(x)),\ F(x,u(x))\in L^{1}_{loc}(\Omega)\)_._
* _For all_ \(x\in\Omega\)_, the following holds:_
* \(F(x,st)\leq s^{2}F(x,t)\) _for each_ \(s\geq 1\) _and_ \(t\geq 0\)_._
* \(F(x,s)-F(x,t)-(f(x,s)+f(x,t))(s-t)/2\geq 0\) _for each_ \(s,\ t\) _with_ \(s\geq t>-\xi(x)\)_._
* \(F(x,s)-f(x,s)s/2\geq 0\) _for each_ \(s\geq 0\)_._
**Proof.** For a proof we refer to [29, Lemma 4].
**Lemma 4.4**: _The following hold:_
* \(0\) _is a strict subsolution to_ \((\hat{P}_{\lambda})\) _for all_ \(\lambda>0\)_._
* \(\xi\) _is a strict supersolution to_ \((\hat{P}_{\lambda})\) _for all sufficiently small_ \(\lambda>0\)_._
* _Any positive weak solution_ \(z\) _to_ \((\hat{P}_{\lambda_{2}})\) _is a strict supersolution to_ \((\hat{P}_{\lambda_{1}})\) _for_ \(0<\lambda_{1}<\lambda_{2}\)_._
**Proof.**
* Let \(\psi\in X_{0}\setminus\{0\}\), \(\psi\geq 0\). Since \(f(x,0)=0\), we get \[\langle 0,\psi\rangle+\int\limits_{\Omega}f(x,0)\psi-\lambda\int\limits_{\Omega} \int\limits_{\Omega}\frac{(0+\hat{u})^{2^{*}_{\mu}}(0+\hat{u})^{2^{*}_{\mu}-1} \psi}{|x-y|^{\mu}}dxdy<0.\]
2. Choose \(\lambda\) small enough such that \(\lambda\left(\int\limits_{\Omega}\frac{(\xi+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dy\right)(\xi+\hat{u})^{2^{*}_{\mu}-1}<1\) in \(\Omega\). From Lemma 4.3, \(f(x,\xi)\), \(F(x,\xi)\in L^{1}_{\rm loc}(\Omega)\). Then, for all \(0\leq\psi\in X_{0}\setminus\{0\}\), we deduce that \[\langle\xi,\psi\rangle+\int\limits_{\Omega}f(x,\xi)\psi dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(\xi+\hat{u})^{2^{*}_{\mu}}(\xi+\hat{u})^{2^{*}_{\mu}-1}\psi}{|x-y|^{\mu}}dxdy\] \[\geq\int\limits_{\Omega}\left(1-\lambda\left(\int\limits_{\Omega}\frac{(\xi+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dy\right)(\xi+\hat{u})^{2^{*}_{\mu}-1}\right)\psi dx>0.\]
3. Let \(0<\lambda_{1}<\lambda_{2}\) and let \(z\) be a positive weak solution to \((\hat{P}_{\lambda_{2}})\). Then for all \(0\leq\psi\in X_{0}\setminus\{0\}\), we have \[\langle z,\psi\rangle+\int\limits_{\Omega}f(x,z)\psi-\lambda_{1}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(z+\hat{u})^{2^{*}_{\mu}}(z+\hat{u})^{2^{*}_{\mu}-1}\psi}{|x-y|^{\mu}}dxdy\] \[=(\lambda_{2}-\lambda_{1})\int\limits_{\Omega}\int\limits_{\Omega}\frac{(z+\hat{u})^{2^{*}_{\mu}}(z+\hat{u})^{2^{*}_{\mu}-1}\psi}{|x-y|^{\mu}}dxdy>0.\]
This completes the proof. \(\Box\)
Let \(\Lambda:=\sup\{\lambda>0:(\hat{P}_{\lambda})\) admits a solution\(\}\).
**Remark 4.5**: _If \(\Lambda>0\), by Lemma 4.4, we deduce that for any \(\lambda\in(0,\Lambda)\), \((\hat{P}_{\lambda})\) has a subsolution (the trivial function \(0\)) and a positive strict supersolution (say \(z\))._
**Theorem 4.6**: _Let \(v_{1},\ v_{2}:\Omega\rightarrow[-\infty,+\infty]\) with \(v_{1}\leq v_{2}\) be such that \(v_{2}\) is a strict supersolution to \((\hat{P}_{\lambda})\), and let \(u\in D(\Phi_{K^{v_{2}}_{v_{1}}})\) be a minimizer for \(\Phi_{K^{v_{2}}_{v_{1}}}\). Then \(u\) is a local minimizer of \(\Phi_{K_{v_{1}}}\)._
**Proof.** For each \(w\in K_{v_{1}}\) and \(0\leq\varphi\in X_{0}\), we define \(\eta(w)=\min\{w,v_{2}\}=w-(w-v_{2})^{+}\) and
\[\mathcal{J}(\varphi)=\langle v_{2},\varphi\rangle+\int\limits_{\Omega}f(x,v_{2})\varphi dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{2}+\hat{u})^{2^{*}_{\mu}}(v_{2}+\hat{u})^{2^{*}_{\mu}-1}\varphi}{|x-y|^{\mu}}dxdy.\]
We first claim that
\[\langle\eta(w),w-\eta(w)\rangle\geq\langle v_{2},w-\eta(w)\rangle\ \mbox{and}\]
\[\int\limits_{\Omega}\int\limits_{\Omega}\frac{\left((\eta(w)+\hat{u})^{2^{*} _{\mu}}(\eta(w)+\hat{u})^{2^{*}_{\mu}-1}-(v_{2}+\hat{u})^{2^{*}_{\mu}}(v_{2}+ \hat{u})^{2^{*}_{\mu}-1}\right)(w-\eta(w))}{|x-y|^{\mu}}dxdy\leq 0.\]
Let \(\Omega^{\prime}=\mbox{supp}((w-v_{2})^{+})\). Then on \(\Omega^{\prime}\), \(\eta(w)=v_{2}\) and using the fact that \(\eta(w)\leq v_{2}\) on \(\Omega\), we easily deduce that
\[\langle\eta(w),w-\eta(w)\rangle\geq\langle v_{2},w-\eta(w)\rangle.\]
The second inequality also holds, using again the fact that \(\eta(w)\leq v_{2}\) on \(\Omega\). This proves the claim. Using the fact that \(u\) is a minimizer for \(\Phi_{K_{v_{1}}^{v_{2}}}\), \(\eta(w)\in D(\Phi_{K_{v_{1}}^{v_{2}}})\), [29, Lemma 2] and the convexity of \(F(x,\cdot)\), we have
\[\begin{split}\Phi_{K_{v_{1}}}(w)-\Phi_{K_{v_{1}}}(u)\geq&\ \Phi_{K_{v_{1}}}(w)-\Phi_{K_{v_{1}}}(\eta(w))\\ =&\ \frac{\|w-\eta(w)\|^{2}}{2}+\langle\eta(w),w-\eta(w)\rangle+\int\limits_{\Omega}(F(x,w)-F(x,\eta(w)))\\ &+\frac{\lambda}{22_{\mu}^{*}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{\left((\eta(w)+\hat{u})^{2_{\mu}^{*}}(\eta(w)+\hat{u})^{2_{\mu}^{*}-1}-(v_{2}+\hat{u})^{2_{\mu}^{*}}(v_{2}+\hat{u})^{2_{\mu}^{*}-1}\right)(w-\eta(w))}{|x-y|^{\mu}}dxdy\\ \geq&\ \frac{\|w-\eta(w)\|^{2}}{2}+\langle\eta(w),w-\eta(w)\rangle+\int\limits_{\Omega}f(x,\eta(w))(w-\eta(w))\\ &+\frac{\lambda}{22_{\mu}^{*}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{\left((\eta(w)+\hat{u})^{2_{\mu}^{*}}(\eta(w)+\hat{u})^{2_{\mu}^{*}-1}-(v_{2}+\hat{u})^{2_{\mu}^{*}}(v_{2}+\hat{u})^{2_{\mu}^{*}-1}\right)(w-\eta(w))}{|x-y|^{\mu}}dxdy\\ \geq&\ \frac{\|w-\eta(w)\|^{2}}{2}+\langle v_{2},w-\eta(w)\rangle+\int\limits_{\Omega}f(x,v_{2})(w-\eta(w))\\ &+\frac{\lambda}{22_{\mu}^{*}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{\left((\eta(w)+\hat{u})^{2_{\mu}^{*}}(\eta(w)+\hat{u})^{2_{\mu}^{*}-1}-(v_{2}+\hat{u})^{2_{\mu}^{*}}(v_{2}+\hat{u})^{2_{\mu}^{*}-1}\right)(w-\eta(w))}{|x-y|^{\mu}}dxdy\\ \geq&\ \frac{\|w-\eta(w)\|^{2}}{2}+\mathcal{J}(w-\eta(w))-\frac{\lambda}{22_{\mu}^{*}}\mathcal{G},\end{split}\tag{4.1}\]
where
\[\mathcal{G}= \int\limits_{\Omega}\int\limits_{\Omega}\frac{(w+\hat{u})^{2_{ \mu}^{*}}(w+\hat{u})^{2_{\mu}^{*}}}{|x-y|^{\mu}}dxdy-\int\limits_{\Omega}\int \limits_{\Omega}\frac{(\eta(w)+\hat{u})^{2_{\mu}^{*}}(\eta(w)+\hat{u})^{2_{ \mu}^{*}}}{|x-y|^{\mu}}dxdy\] \[\quad-22_{\mu}^{*}\int\limits_{\Omega}\int\limits_{\Omega}\frac{ (\eta(w)+\hat{u})^{2_{\mu}^{*}}(\eta(w)+\hat{u})^{2_{\mu}^{*}-1}(w-\eta(w))}{| x-y|^{\mu}}dxdy.\]
Next we estimate \(\mathcal{G}\) from above. For this, we first note that
\[\mathcal{G}= 2_{\mu}^{*}\int\limits_{\Omega}\int\limits_{\eta(w)}^{w}\left( \int\limits_{\Omega}\frac{(w+\hat{u})^{2_{\mu}^{*}}+(\eta(w)+\hat{u})^{2_{\mu} ^{*}}}{|x-y|^{\mu}}dy\right)\left((t+\hat{u})^{2_{\mu}^{*}-1}-(\eta(w)+\hat{u} )^{2_{\mu}^{*}-1}\right)dtdx\] \[\quad+2_{\mu}^{*}\int\limits_{\Omega}\int\limits_{\eta(w)}^{w} \left(\int\limits_{\Omega}\frac{(w+\hat{u})^{2_{\mu}^{*}}-(\eta(w)+\hat{u})^{ 2_{\mu}^{*}}}{|x-y|^{\mu}}dy\right)(\eta(w)+\hat{u})^{2_{\mu}^{*}-1}dtdx. \tag{4.2}\]
Using the mean value theorem, there exists \(\theta\in[0,1]\) such that
\[\frac{(u+\hat{u})^{2_{\mu}^{*}-1}-(w+\hat{u})^{2_{\mu}^{*}-1}}{u-w}= (2_{\mu}^{*}-1)(u+\hat{u}+\theta(w-u))^{2_{\mu}^{*}-2}=(2_{\mu}^{*}-1)(\hat{u}+(1-\theta)u+\theta w)^{2_{\mu}^{*}-2}\] \[\leq (2_{\mu}^{*}-1)2^{2_{\mu}^{*}-3}\left(\hat{u}^{2_{\mu}^{*}-2}+((1-\theta)u+\theta w)^{2_{\mu}^{*}-2}\right)\] \[\leq (2_{\mu}^{*}-1)2^{2_{\mu}^{*}-3}\left(\hat{u}^{2_{\mu}^{*}-2}+\max\{u,w\}^{2_{\mu}^{*}-2}\right).\]
For each \(x\in\Omega\) and \(w\in D(\Phi_{K_{v_{1}}})\) define the functions
\[m_{w}^{1}(x)=(2_{\mu}^{*}-1)2^{2_{\mu}^{*}-3}\left(\hat{u}^{2_{\mu} ^{*}-2}+\max\{|v_{2}|,|w|\}^{2_{\mu}^{*}-2}\right)\chi_{\{w>v_{2}\}},\] \[m_{w}^{2}(x)=2_{\mu}^{*}2^{2_{\mu}^{*}-2}\left(\hat{u}^{2_{\mu} ^{*}-1}+\max\{|v_{2}|,|w|\}^{2_{\mu}^{*}-1}\right)\chi_{\{w>v_{2}\}}.\]
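For the reader's convenience, we recall the form of the Hardy–Littlewood–Sobolev inequality used in the next two estimates (the constant \(C(n,\mu,p,r)\) is not tracked explicitly):

\[\left|\ \int\limits_{\Omega}\int\limits_{\Omega}\frac{f(x)g(y)}{|x-y|^{\mu}}dxdy\right|\leq C(n,\mu,p,r)\,|f|_{p}\,|g|_{r},\qquad p,\ r>1,\ \ \frac{1}{p}+\frac{\mu}{n}+\frac{1}{r}=2.\]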
Now, employing the Hardy–Littlewood–Sobolev inequality, we have
\[\int\limits_{\Omega}\int\limits_{\eta(w)}^{w} \left(\int\limits_{\Omega}\frac{(w+\hat{u})^{2_{\mu}^{*}}+(\eta( w)+\hat{u})^{2_{\mu}^{*}}}{|x-y|^{\mu}}dy\right)\left((t+\hat{u})^{2_{\mu}^{*}-1 }-(\eta(w)+\hat{u})^{2_{\mu}^{*}-1}\right)dtdx\] \[\leq\frac{1}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac{ \left((w+\hat{u})^{2_{\mu}^{*}}+(\eta(w)+\hat{u})^{2_{\mu}^{*}}\right)m_{w}^{ 1}(x)(w-\eta(w))^{2}}{|x-y|^{\mu}}dydx\] \[\leq c_{1}\left(|w+\hat{u}|_{2^{*}}^{2_{\mu}^{*}}+|\eta(w)+\hat{u }|_{2^{*}}^{2_{\mu}^{*}}\right)|m_{w}^{1}(x)(w-\eta(w))^{2}|_{\frac{2^{*}}{2_ {\mu}^{*}}} \tag{4.3}\]
for some appropriate positive constant \(c_{1}\). Similarly, with the help of the Hardy–Littlewood–Sobolev inequality, Hölder's inequality and the definition of \(S\), we have
\[\int\limits_{\Omega}\int\limits_{\eta(w)}^{w} \left(\int\limits_{\Omega}\frac{(w+\hat{u})^{2_{\mu}^{*}}-(\eta( w)+\hat{u})^{2_{\mu}^{*}}}{|x-y|^{\mu}}dy\right)(\eta(w)+\hat{u})^{2_{\mu}^{*}-1 }dtdx\] \[\leq c_{2}S^{\frac{-1}{2}}|m_{w}^{2}(x)(w-\eta(w))|_{\frac{2^{*}}{ 2_{\mu}^{*}}}|\eta(w)+\hat{u}|_{2^{*}}^{2_{\mu}^{*}-1}\|w-\eta(w)\| \tag{4.4}\]
for some appropriate positive constant \(c_{2}\). Substituting (4.3) and (4.4) in (4.2), we obtain
\[\mathcal{G}\leq c_{1}\left(|w+\hat{u}|_{2^{*}}^{2_{\mu}^{*}}+|\eta(w)+\hat{u}|_{2^{ *}}^{2_{\mu}^{*}}\right)|m_{w}^{1}(x)(w-\eta(w))^{2}|_{\frac{2^{*}}{2_{\mu}^{ *}}}\] \[+c_{2}S^{\frac{-1}{2}}|m_{w}^{2}(x)(w-\eta(w))|_{\frac{2^{*}}{2_{ \mu}^{*}}}|\eta(w)+\hat{u}|_{2^{*}}^{2_{\mu}^{*}-1}\|w-\eta(w)\|. \tag{4.5}\]
Suppose on the contrary that the result does not hold. Then there exists a sequence \(\{w_{k}\}\subset X_{0}\) such that \(w_{k}\in K_{v_{1}}\) and
\[\|w_{k}-u\|<\frac{1}{2^{k}},\ \Phi_{K_{v_{1}}}(w_{k})<\Phi_{K_{v_{1}}}(u)\ \text{for all}\ k.\]
Next we set \(j:=u+\sum_{k=1}^{\infty}|w_{k}-u|\). Then, clearly, \(w_{k}\) satisfies \(|w_{k}|\leq j\) a.e. for all \(k\). Now for each \(w\in D(\Phi_{K_{v_{1}}})\), set
\[\hat{m}_{w}^{1}(x)=(2_{\mu}^{*}-1)2^{2_{\mu}^{*}-3}\left(\hat{u}^{2_{\mu}^{*}-2}+\max\{|v_{2}|,|j|\}^{2_{\mu}^{*}-2}\right)\chi_{\{w>v_{2}\}},\]
\[\hat{m}_{w}^{2}(x)=2_{\mu}^{*}2^{2_{\mu}^{*}-2}\left(\hat{u}^{2_{\mu}^{*}-1}+ \max\{|v_{2}|,|j|\}^{2_{\mu}^{*}-1}\right)\chi_{\{w>v_{2}\}}.\]
Using (4.1) and (4.5), we obtain
\[0> \Phi_{K_{v_{1}}}(w_{k})-\Phi_{K_{v_{1}}}(u)\] \[\geq \Phi_{K_{v_{1}}}(w_{k})-\Phi_{K_{v_{1}}}(\eta(w_{k}))\] \[\geq \frac{\|w_{k}-\eta(w_{k})\|^{2}}{2}-\lambda\left(c_{1}\left(|w_{ k}+\hat{u}|_{2^{*}}^{2_{\mu}^{*}}+|\eta(w_{k})+\hat{u}|_{2^{*}}^{2_{\mu}^{*}} \right)|\hat{m}_{w_{k}}^{1}(x)(w_{k}-\eta(w_{k}))^{2}|_{\frac{2^{*}}{2_{\mu}^{ *}}}\right.\] \[\left.+c_{2}S^{\frac{-1}{2}}|\hat{m}_{w_{k}}^{2}(x)(w_{k}-\eta(w_{ k}))|_{\frac{2^{*}}{2_{\mu}^{*}}}|\eta(w_{k})+\hat{u}|_{2^{*}}^{2_{\mu}^{*}-1}\|w_{k}- \eta(w_{k})\|\right)+\mathcal{J}(w_{k}-\eta(w_{k}))\]
Estimating the terms involving \(\hat{m}^{1}_{w_{k}}\) and \(\hat{m}^{2}_{w_{k}}\) by a single constant \(C^{*}>0\) (independent of \(k\), since \(|w_{k}|\leq j\)), this yields

\[0> \frac{\|(w_{k}-v_{2})^{+}\|^{2}}{2}+\mathcal{J}((w_{k}-v_{2})^{+})-\frac{C^{*}}{2}\left(|(w_{k}-v_{2})^{+}|_{\frac{22^{*}}{2^{*}_{\mu}}}^{2}+|(w_{k}-v_{2})^{+}|_{\frac{22^{*}}{2^{*}_{\mu}}}\|(w_{k}-v_{2})^{+}\|\right). \tag{4.7}\]
Let \(\kappa=\inf\{\mathcal{J}(\varphi):\varphi\in\mathcal{A}\}\), where \(\mathcal{A}=\{\varphi\in X_{0}:\varphi\geq 0,\ |\varphi|_{\frac{22^{*}}{2^{*}_{\mu}}}=1,\ \|\varphi\|\leq 2C^{*}\}\). Clearly, \(\mathcal{A}\) is a weakly sequentially closed subset of \(X_{0}\). Also, using Fatou's Lemma and the fact that the Riesz potential defines a bounded linear functional, one can easily prove that \(\mathcal{J}\) is weakly lower semicontinuous on \(\mathcal{A}\). So, if \(\{w_{k}\}\subset\mathcal{A}\) is a minimizing sequence for \(\kappa\) such that \(w_{k}\rightharpoonup w\) as \(k\to\infty\), then
\[\mathcal{J}(w)\leq\liminf\mathcal{J}(w_{k}).\]
Since \(v_{2}\) is a strict supersolution of \((\hat{P}_{\lambda})\), \(\mathcal{J}(w)>0\) for all \(w\in\mathcal{A}\). This implies \(\kappa>0\). Now notice that, using the definition of \(\kappa\), (4.7) can be rewritten as follows:
\[0> \kappa+\frac{1}{4}\left(\left(\|(w_{k}-v_{2})^{+}\|-C^{*}|(w_{k}-v_{2})^{+}|_{\frac{22^{*}}{2^{*}_{\mu}}}\right)^{2}-((C^{*})^{2}+2C^{*})|(w_{k}-v_{2})^{+}|_{\frac{22^{*}}{2^{*}_{\mu}}}^{2}\right)\] \[> \kappa-\frac{1}{4}((C^{*})^{2}+2C^{*})|(w_{k}-v_{2})^{+}|_{\frac{22^{*}}{2^{*}_{\mu}}}^{2}. \tag{4.8}\]
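The passage from (4.7) to (4.8) is elementary algebra: writing \(X=\|(w_{k}-v_{2})^{+}\|\) and \(Y=|(w_{k}-v_{2})^{+}|_{\frac{22^{*}}{2^{*}_{\mu}}}\), one checks

\[\frac{X^{2}}{2}-\frac{C^{*}}{2}\left(Y^{2}+XY\right)\geq\frac{1}{4}\left(\left(X-C^{*}Y\right)^{2}-\left((C^{*})^{2}+2C^{*}\right)Y^{2}\right),\]

since the difference of the two sides equals \(X^{2}/4\geq 0\).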
Since \(w_{k}\to u\) in \(X_{0}\) and \(u\leq v_{2}\), it follows that \(|(w_{k}-v_{2})^{+}|_{\frac{22^{*}}{2^{*}_{\mu}}}\to 0\) as \(k\to\infty\). So from (4.8), we get a contradiction to the fact that \(\kappa>0\). This completes the proof. \(\Box\)
**Lemma 4.7**: _We have \(\Lambda>0\)._
**Proof.** We will use the sub- and supersolution method to prove the required result. From Lemma 4.4, we get that \(0\) and \(\xi\) are a subsolution and a supersolution, respectively, to \((\hat{P}_{\lambda})\) for \(\lambda\) small enough. We define the closed convex subset of \(X_{0}\) as \(K=\{\varphi\in X_{0}:0\leq\varphi\leq\xi\}\). Using the definition of \(K\), we can easily prove that
\[\Phi_{K}(u)\geq\frac{\|u\|^{2}}{2}-c_{1}-c_{2}\]
for appropriate positive constants \(c_{1}\) and \(c_{2}\). This implies that \(\Phi_{K}\) is coercive on \(K\). Next, we claim that \(\Phi_{K}\) is weakly lower semicontinuous on \(K\). Indeed, let \(\{\varphi_{k}\}_{k\in\mathbb{N}}\subset K\) be such that \(\varphi_{k}\rightharpoonup\varphi\) weakly in \(X_{0}\) as \(k\to\infty\). For each \(k\), we have
\[\int\limits_{\Omega}F(x,\varphi_{k})dx\leq\int\limits_{\Omega}F(x,\varphi)<+\infty,\]
\[\int\limits_{\Omega}\int\limits_{\Omega}\frac{(\varphi_{k}+\hat{u})^{2^{*}_{ \mu}}(\varphi_{k}+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy\leq\int\limits_{ \Omega}\int\limits_{\Omega}\frac{(\xi+\hat{u})^{2^{*}_{\mu}}(\xi+\hat{u})^{2^ {*}_{\mu}}}{|x-y|^{\mu}}dxdy<+\infty.\]
Thus, by the dominated convergence theorem and the weak lower semicontinuity of the norm, we deduce that \(\Phi_{K}\) is weakly lower semicontinuous on \(K\). Thus, there exists \(u\in X_{0}\) such that
\[\inf\limits_{\varphi\in K}\Phi_{K}(\varphi)=\Phi_{K}(u).\]
Since \(0\in\partial^{-}\Phi_{K}(u)\), Proposition 4.2 implies that \(u\) is a weak solution to \((\hat{P}_{\lambda})\). Hence \(\Lambda>0\). \(\Box\)
**Theorem 4.8**: _Let \(\lambda\in(0,\Lambda)\). Then there exists a positive weak solution \(u_{\lambda}\) to \((\hat{P}_{\lambda})\) belonging to \(X_{0}\) such that \(\Phi(u_{\lambda})<0\) and \(u_{\lambda}\) is a local minimizer of \(\Phi_{K_{0}}\)._
**Proof.** Let \(\lambda\in(0,\Lambda)\) and \(\lambda_{1}\in(\lambda,\Lambda)\). Then by Lemma 4.4, \(0\) and \(u_{\lambda_{1}}\) are a strict subsolution and a strict supersolution of \((\hat{P}_{\lambda})\), respectively. The existence of \(u_{\lambda_{1}}\) is clear by the definition of \(\Lambda\). Now consider the convex set \(K=\{u\in X_{0}:0\leq u\leq u_{\lambda_{1}}\}\). Then, following the analysis carried out in Lemma 4.7, we obtain \(u_{\lambda}\in X_{0}\) such that \(\inf_{\varphi\in K}\Phi_{K}(\varphi)=\Phi_{K}(u_{\lambda})\). Since \(0\in K\) and \(\Phi_{K}(0)<0\), we conclude that \(\Phi_{K}(u_{\lambda})<0\). Taking \(v_{1}=0\) and \(v_{2}=u_{\lambda_{1}}\) in Theorem 4.6, we conclude that \(u_{\lambda}\) is a local minimizer of \(\Phi_{K_{0}}\). \(\Box\)
**Lemma 4.9**: \(\Lambda<\infty\)_._
**Proof.** Suppose on the contrary that \(\Lambda=+\infty\). This means that there exist sequences \(\{\lambda_{k}\}_{k\in\mathbb{N}}\) and \(\{u_{\lambda_{k}}\}_{k\in\mathbb{N}}\) such that \(\lambda_{k}\rightarrow+\infty\) as \(k\rightarrow\infty\) and \(u_{\lambda_{k}}\) is the corresponding solution to \((\hat{P}_{\lambda_{k}})\). Then by Theorem 4.8, \(\Phi(u_{\lambda_{k}})<0\) and \(u_{\lambda_{k}}\) is a local minimizer of \(\Phi_{K_{0}}\). Thus we have
\[\frac{1}{2}\|u_{\lambda_{k}}\|^{2}+\int\limits_{\Omega}F(x,u_{\lambda_{k}})dx-\frac{\lambda_{k}}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy<0, \tag{4.9}\]
and
\[\|u_{\lambda_{k}}\|^{2}+\int\limits_{\Omega}f(x,u_{\lambda_{k}})u_{\lambda_{k }}dx-\lambda_{k}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u_{\lambda_{k}} +\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}-1}u_{\lambda_{k }}}{|x-y|^{\mu}}dxdy =0. \tag{4.10}\]
From (4.9) and (4.10), we get
\[0<\frac{\lambda_{k}}{2}\int\limits_{\Omega}\int\limits_{\Omega}\left(\frac{\frac{1}{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}-(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}-1}u_{\lambda_{k}}}{|x-y|^{\mu}}\right)dxdy-\int\limits_{\Omega}\left(F(x,u_{\lambda_{k}})-\frac{1}{2}f(x,u_{\lambda_{k}})u_{\lambda_{k}}\right)dx.\]
By Lemma 4.3, we have \(F(x,u_{\lambda_{k}})-f(x,u_{\lambda_{k}})u_{\lambda_{k}}/2\geq 0\) which implies
\[\frac{1}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u_{\lambda_{k}}+ \hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}-1}u_{\lambda_{k} }}{|x-y|^{\mu}}dxdy<\frac{1}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{ \Omega}\frac{(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^ {2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy. \tag{4.11}\]
Employing the fact that \(\hat{u}\in L^{\infty}(\Omega)\), we conclude that
\[\lim_{t\to\infty}\frac{\left(\int\limits_{\Omega}\frac{(t+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dy\right)(t+\hat{u})^{2^{*}_{\mu}}}{\left(\int\limits_{\Omega}\frac{(t+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dy\right)(t+\hat{u})^{2^{*}_{\mu}-1}t}=1\quad\text{uniformly in }x\in\Omega.\]
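This is immediate once the common Riesz factor is cancelled: pointwise in \(x\), and uniformly since \(0<\hat{u}\leq|\hat{u}|_{\infty}\),

\[\frac{(t+\hat{u})^{2^{*}_{\mu}}}{(t+\hat{u})^{2^{*}_{\mu}-1}\,t}=\frac{t+\hat{u}}{t}=1+\frac{\hat{u}}{t}\longrightarrow 1\quad\text{as }t\to\infty.\]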
Therefore, it follows that for any \(\epsilon>0\) small enough, there exists \(m_{\epsilon}>0\) such that, for all \(k\)
\[\frac{1}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega} \frac{(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*} _{\mu}}}{|x-y|^{\mu}}dxdy\\ <\frac{1}{2+\epsilon}\int\limits_{\Omega}\int\limits_{\Omega} \frac{(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_ {\mu}-1}u_{\lambda_{k}}}{|x-y|^{\mu}}dxdy+m_{\epsilon}. \tag{4.12}\]
Combining (4.11) and (4.12), we see that
\[\sup_{k}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}-1}u_{\lambda_{k}}}{|x-y|^{\mu}}dxdy<\infty.\]
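Indeed, writing \(A_{k}\) for the double integral above, (4.11) and (4.12) combine into

\[\frac{1}{2}A_{k}<\frac{1}{2+\epsilon}A_{k}+m_{\epsilon}\ \Longrightarrow\ \frac{\epsilon}{2(2+\epsilon)}A_{k}<m_{\epsilon}\ \Longrightarrow\ A_{k}<\frac{2(2+\epsilon)}{\epsilon}m_{\epsilon}\quad\text{for all }k.\]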
Now from (4.10), we have
\[\|u_{\lambda_{k}}\|^{2}<\lambda_{k}\int\limits_{\Omega}\int\limits_{\Omega} \frac{(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_ {\mu}-1}u_{\lambda_{k}}}{|x-y|^{\mu}}dxdy.\]
This means that \(\{\lambda_{k}^{-1/2}u_{\lambda_{k}}\}_{k\in\mathbb{N}}\) is uniformly bounded in \(X_{0}\). Then there exists \(w_{0}\in X_{0}\) such that \(w_{k}:=\lambda_{k}^{-1/2}u_{\lambda_{k}}\rightharpoonup w_{0}\) weakly in \(X_{0}\). Let \(0\leq\psi\in C_{c}^{\infty}(\Omega)\) be a nontrivial function and let \(m>0\) be such that \(\hat{u}>m\) on \(\mathrm{supp}(\psi)\). Again using (4.10), we deduce that
\[\begin{split}\sqrt{\lambda_{k}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{m^{22^{*}_{\mu}-1}\psi}{|x-y|^{\mu}}dxdy\leq&\ \sqrt{\lambda_{k}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}}(u_{\lambda_{k}}+\hat{u})^{2^{*}_{\mu}-1}\psi}{|x-y|^{\mu}}dxdy\\ =&\ \langle w_{k},\psi\rangle+\frac{1}{\sqrt{\lambda_{k}}}\int\limits_{\Omega}f(x,u_{\lambda_{k}})\psi dx\\ \leq&\ \langle w_{k},\psi\rangle+\frac{1}{\sqrt{\lambda_{k}}}\int\limits_{\Omega}\hat{u}^{-\gamma}\psi dx.\end{split}\]
Now, passing to the limit \(k\to\infty\), the left-hand side diverges to \(+\infty\) while the right-hand side converges to \(\langle w_{0},\psi\rangle<\infty\), a contradiction. Hence \(\Lambda<\infty\). \(\Box\)
### Second solution
In this subsection we will prove the existence of a second solution to \((\hat{P}_{\lambda})\). We denote by \(v\) the first solution to \((\hat{P}_{\lambda})\) as obtained in Theorem 4.8.
**Proposition 4.10**: _The functional \(\Phi_{K_{v}}\) satisfies the \((CPS)_{d}\) for each \(d\) satisfying_
\[d<\Phi_{K_{v}}(v)+\frac{1}{2}\left(\frac{n-\mu+2}{2n-\mu}\right) \left(\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}} \right).\]
**Proof.** Let \(d<\Phi_{K_{v}}(v)+\frac{1}{2}\left(\frac{n-\mu+2}{2n-\mu}\right)\left(\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}}\right)\) be fixed, and let \(\{v_{k}\}_{k\in\mathbb{N}}\subset D(\Phi_{K_{v}})\) be any sequence such that
\[\Phi_{K_{v}}(v_{k})\to d\text{ and }(1+\|v_{k}\|)|||\partial^{-}\Phi_{K_{v}}(v_{k })|||\to 0\text{ as }k\to\infty.\]
This implies that there exists \(\alpha_{k}\in\partial^{-}\Phi_{K_{v}}(v_{k})\) such that \(\|\alpha_{k}\|=|||\partial^{-}\Phi_{K_{v}}(v_{k})|||\) for every \(k\). Using Lemma 4.1, for each \(w\in D(\Phi_{K_{v}})\) and for each \(k\), \(f(\cdot,v_{k})(w-v_{k})\in L^{1}(\Omega)\) and
\[\langle\alpha_{k},w-v_{k}\rangle\leq\langle v_{k},w-v_{k}\rangle +\int\limits_{\Omega}f(x,v_{k})(w-v_{k})dx\] \[-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}+ \hat{u})^{2\sigma_{\mu}^{*}}(v_{k}+\hat{u})^{2\sigma_{\mu}^{*}-1}(w-v_{k})}{|x- y|^{\mu}}dxdy. \tag{4.13}\]
Using the fact that \(F(\cdot,v_{k})\in L^{1}(\Omega)\) and Lemma 4.3, we obtain that \(F(\cdot,2v_{k})\in L^{1}(\Omega)\). So \(2v_{k}\in D(\Phi_{K_{v}})\). Taking \(w=2v_{k}\) in (4.13), we get
\[\langle\alpha_{k},v_{k}\rangle\leq\|v_{k}\|^{2}+\int\limits_{\Omega}f(x,v_{k} )v_{k}dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}+\hat{u})^ {2\sigma_{\mu}^{*}}(v_{k}+\hat{u})^{2\sigma_{\mu}^{*}-1}v_{k}}{|x-y|^{\mu}}dxdy.\]
Now using Lemma 4.3 and (4.12), for \(\epsilon>0\) small enough, we have
\[d+1\geq \frac{1}{2}\|v_{k}\|^{2}+\int\limits_{\Omega}F(\cdot,v_{k})dx-\frac{ \lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}+\hat{ u})^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy\] \[\geq \frac{1}{2}\|v_{k}\|^{2}+\int\limits_{\Omega}F(\cdot,v_{k})dx+ \frac{1}{2+\epsilon}\left(\langle\alpha_{k},v_{k}\rangle-\|v_{k}\|^{2}-\int \limits_{\Omega}f(x,v_{k})v_{k}dx\right)-\lambda m_{\epsilon}\] \[\geq \frac{1}{2}\|v_{k}\|^{2}+\int\limits_{\Omega}F(\cdot,v_{k})dx+ \frac{1}{2+\epsilon}\left(\langle\alpha_{k},v_{k}\rangle-\|v_{k}\|^{2}\right)- \lambda m_{\epsilon}.\]
This shows that \(\{v_{k}\}\) is a bounded sequence in \(X_{0}\). Hence, up to a subsequence, there exists \(v_{0}\in X_{0}\) such that \(v_{k}\rightharpoonup v_{0}\) weakly in \(X_{0}\) as \(k\to\infty\). We assume, again up to a subsequence, that
\[\|v_{k}-v_{0}\|^{2}\to a^{2}\text{ and }\int\limits_{\Omega}\int\limits_{ \Omega}\frac{(v_{k}-v_{0})^{2^{*}_{\mu}}(v_{k}-v_{0})^{2^{*}_{\mu}}}{|x-y|^{ \mu}}dxdy\to b^{22^{*}_{\mu}}\text{ \ as }k\to\infty.\]
Using the convexity of the function \(F\), Brezis-Lieb Lemma and (4.13), we deduce that
\[\begin{split}\int\limits_{\Omega}F(x,v_{0})dx\geq&\ \int\limits_{\Omega}F(x,v_{k})dx+\int\limits_{\Omega}f(x,v_{k})(v_{0}-v_{k})dx\\ \geq&\ \int\limits_{\Omega}F(x,v_{k})dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}+\hat{u})^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}-1}(v_{k}-v_{0})}{|x-y|^{\mu}}dxdy-\langle\alpha_{k},v_{k}-v_{0}\rangle+\langle v_{k},v_{k}-v_{0}\rangle\\ =&\ \int\limits_{\Omega}F(x,v_{k})dx-\langle\alpha_{k},v_{k}-v_{0}\rangle+\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}+\hat{u})^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}-1}(v_{0}+\hat{u})}{|x-y|^{\mu}}dxdy\\ &-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{0}+\hat{u})^{2^{*}_{\mu}}(v_{0}+\hat{u})^{2^{*}_{\mu}}+(v_{k}-v_{0})^{2^{*}_{\mu}}(v_{k}-v_{0})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy+\langle v_{k},v_{k}-v_{0}\rangle+o(1).\end{split}\]
Taking into account the weak convergence of \(v_{k}\rightharpoonup v_{0}\) in \(X_{0}\), we obtain as \(k\to\infty\)
\[\int\limits_{\Omega}F(x,v_{0})dx\geq\int\limits_{\Omega}F(x,v_{0})dx+a^{2}-\lambda b^{22^{*}_{\mu}}.\]
This implies
\[\lambda b^{22^{*}_{\mu}}\geq a^{2}. \tag{4.14}\]
Now, since \(v\) is a positive weak solution to \((\hat{P}_{\lambda})\), for each \(k\), we have
\[0=\langle v,v_{k}-v\rangle+\int\limits_{\Omega}f(x,v)(v_{k}-v)dx-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}-1}(v_{k}-v)}{|x-y|^{\mu}}dxdy. \tag{4.15}\]
Noting that \(F(\cdot,v_{k}),\ F(\cdot,2v_{k})\in L^{1}(\Omega)\) and \(v\leq 2v_{k}-v\leq 2v_{k}\), we infer that \(2v_{k}-v\in D(\Phi_{K_{v}})\). Testing (4.13) with \(2v_{k}-v\), we obtain
\[\langle\alpha_{k},v_{k}-v\rangle\leq\langle v_{k},v_{k}-v\rangle +\int\limits_{\Omega}f(x,v_{k})(v_{k}-v)dx\] \[-\lambda\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}+\hat{ u})^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}-1}(v_{k}-v)}{|x-y|^{\mu}}dxdy. \tag{4.16}\]
Taking into account Lemma 4.3, (4.15) and (4.16), we have
\[\Phi_{K_{v}}(v_{k})-\Phi_{K_{v}}(v)= \frac{1}{2}\|v_{k}\|^{2}+\int\limits_{\Omega}F(x,v_{k})dx-\frac{ \lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}+ \hat{u})^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy\] \[-\frac{1}{2}\|v\|^{2}-\int\limits_{\Omega}F(x,v)dx+\frac{\lambda }{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v+\hat{u})^{2^{* }_{\mu}}(v+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy\] \[\geq \int\limits_{\Omega}\left(F(x,v_{k})-F(x,v)-\frac{1}{2}(f(x,v)+f( x,v_{k}))(v_{k}-v)\right)dx\] \[+\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{ \Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}}-(v_{k}+\hat{u })^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy+\frac{1}{2} \langle\alpha_{k},v_{k}-v\rangle\] \[+\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac {\left((v+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}-1}-(v_{k}+\hat{u})^{2 ^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}-1}\right)(v_{k}-v)}{|x-y|^{\mu}}dxdy\] \[\geq \frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{ \Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}}-(v_{k}+\hat{u })^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy+\frac{1}{2} \langle\alpha_{k},v_{k}-v\rangle\] \[+\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac {(v+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}-1}(v_{k}-v)-(v_{k}+\hat{u })^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}-1}(v+\hat{u})}{|x-y|^{\mu}}dxdy\] \[+\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac {(v_{k}+\hat{u})^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy=: \mathcal{P}+\frac{1}{2}\langle\alpha_{k},v_{k}-v\rangle. \tag{4.17}\]
Again using Brezis-Lieb Lemma, we have
\[\begin{split}\mathcal{P}=&\ \lambda\left(\frac{1}{2}-\frac{1}{22^{*}_{\mu}}\right)\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{k}-v_{0})^{2^{*}_{\mu}}(v_{k}-v_{0})^{2^{*}_{\mu}}+(v_{0}+\hat{u})^{2^{*}_{\mu}}(v_{0}+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy\\ &+\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}-1}(v_{k}-v)-(v_{k}+\hat{u})^{2^{*}_{\mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}-1}(v+\hat{u})}{|x-y|^{\mu}}dxdy\\ &+\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy+o(1).\end{split}\tag{4.18}\]
Using the weak convergence of the sequence \(\{v_{k}\}_{k\in\mathbb{N}}\), we have
\[\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}(v+ \hat{u})^{2^{*}_{\mu}-1}(v_{k}-v_{0})}{|x-y|^{\mu}}dxdy\to 0\text{ and } \tag{4.19}\]
\[\int\limits_{\Omega}\int\limits_{\Omega}\frac{\left((v_{k}+\hat{u})^{2^{*}_{ \mu}}(v_{k}+\hat{u})^{2^{*}_{\mu}-1}-(v_{0}+\hat{u})^{2^{*}_{\mu}}(v_{0}+\hat{u })^{2^{*}_{\mu}-1}\right)(v+\hat{u})}{|x-y|^{\mu}}dxdy\to 0. \tag{4.20}\]
Combining (4.17)-(4.20) and passing the limit \(k\rightarrow\infty\), we obtain that
\[d-\Phi_{K_{v}}(v)\geq \frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{ \Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy+ \lambda\left(\frac{1}{2}-\frac{1}{22^{*}_{\mu}}\right)b^{22^{*}_{\mu}}\]
\[+\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac{\left((v+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}-1}+(v_{0}+\hat{u})^{2^{*}_{\mu}}(v_{0}+\hat{u})^{2^{*}_{\mu}-1}\right)(v_{0}-v)}{|x-y|^{\mu}}dxdy\] \[-\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{0}+\hat{u})^{2^{*}_{\mu}}(v_{0}+\hat{u})^{2^{*}_{\mu}}}{|x-y|^{\mu}}dxdy=:\mathcal{P}_{1}(\text{say})+\lambda\left(\frac{1}{2}-\frac{1}{22^{*}_{\mu}}\right)b^{22^{*}_{\mu}}. \tag{4.21}\]
Next we will show that \(\mathcal{P}_{1}\geq 0\). Indeed, we have
\[\begin{split}\mathcal{P}_{1}=&\ \frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}\left((v+\hat{u})^{2^{*}_{\mu}-1}+(v_{0}+\hat{u})^{2^{*}_{\mu}-1}\right)(v_{0}-v)}{|x-y|^{\mu}}dxdy\\ &-\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}(v_{0}+\hat{u})^{2^{*}_{\mu}-1}(v_{0}-v)}{|x-y|^{\mu}}dxdy\\ &+\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{0}+\hat{u})^{2^{*}_{\mu}}\left((v+\hat{u})^{2^{*}_{\mu}-1}+(v_{0}+\hat{u})^{2^{*}_{\mu}-1}\right)(v_{0}-v)}{|x-y|^{\mu}}dxdy\\ &-\frac{\lambda}{2}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{0}+\hat{u})^{2^{*}_{\mu}}(v+\hat{u})^{2^{*}_{\mu}-1}(v_{0}-v)}{|x-y|^{\mu}}dxdy\\ &+\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}\left((v+\hat{u})^{2^{*}_{\mu}}-(v_{0}+\hat{u})^{2^{*}_{\mu}}\right)}{|x-y|^{\mu}}dxdy\\ &+\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{0}+\hat{u})^{2^{*}_{\mu}}\left((v+\hat{u})^{2^{*}_{\mu}}-(v_{0}+\hat{u})^{2^{*}_{\mu}}\right)}{|x-y|^{\mu}}dxdy.\end{split}\tag{4.22}\]
Since, by the convexity of \(t\mapsto(t+\hat{u})^{2^{*}_{\mu}-1}\) (recall that \(\mu\leq\min\{4,n\}\), so \(2^{*}_{\mu}\geq 2\)),
\[(v+\hat{u})^{2^{*}_{\mu}}-(v_{0}+\hat{u})^{2^{*}_{\mu}}=-2^{*}_{\mu}\int \limits_{v}^{v_{0}}(t+\hat{u})^{2^{*}_{\mu}-1}dt\geq-2^{*}_{\mu}\left(\frac{ \left(v+\hat{u}\right)^{2^{*}_{\mu}-1}+(v_{0}+\hat{u})^{2^{*}_{\mu}-1}}{2} \right)(v_{0}-v),\]
we obtain
\[\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{ \Omega}\frac{\left(v+\hat{u}\right)^{2^{*}_{\mu}}\left((v+\hat{u})^{2^{*}_{ \mu}}-(v_{0}+\hat{u})^{2^{*}_{\mu}}\right)}{|x-y|^{\mu}}dxdy\] \[\geq -\frac{\lambda}{4}\int\limits_{\Omega}\int\limits_{\Omega}\frac{ \left(v+\hat{u}\right)^{2^{*}_{\mu}}\left((v+\hat{u})^{2^{*}_{\mu}-1}+(v_{0}+ \hat{u})^{2^{*}_{\mu}-1}\right)(v_{0}-v)}{|x-y|^{\mu}}dxdy. \tag{4.23}\]
Similarly, we have
\[\frac{\lambda}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{ \Omega}\frac{\left(v_{0}+\hat{u}\right)^{2^{*}_{\mu}}\left((v+\hat{u})^{2^{*}_ {\mu}}-(v_{0}+\hat{u})^{2^{*}_{\mu}}\right)}{|x-y|^{\mu}}dxdy\] \[\geq -\frac{\lambda}{4}\int\limits_{\Omega}\int\limits_{\Omega}\frac{ \left(v_{0}+\hat{u}\right)^{2^{*}_{\mu}}\left((v+\hat{u})^{2^{*}_{\mu}-1}+(v_{ 0}+\hat{u})^{2^{*}_{\mu}-1}\right)(v_{0}-v)}{|x-y|^{\mu}}dxdy. \tag{4.24}\]
From (4.22), (4.23) and (4.24), we deduce that
\[\begin{split}\mathcal{P}_{1}\geq&\ \frac{\lambda}{4}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v+\hat{u})^{2^{*}_{\mu}}\left((v+\hat{u})^{2^{*}_{\mu}-1}-(v_{0}+\hat{u})^{2^{*}_{\mu}-1}\right)(v_{0}-v)}{|x-y|^{\mu}}dxdy\\ &+\frac{\lambda}{4}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v_{0}+\hat{u})^{2^{*}_{\mu}}\left((v_{0}+\hat{u})^{2^{*}_{\mu}-1}-(v+\hat{u})^{2^{*}_{\mu}-1}\right)(v_{0}-v)}{|x-y|^{\mu}}dxdy\\ =&\ \frac{\lambda}{4}\int\limits_{\Omega}\int\limits_{\Omega}\frac{\left((v_{0}+\hat{u})^{2^{*}_{\mu}}-(v+\hat{u})^{2^{*}_{\mu}}\right)\left((v_{0}+\hat{u})^{2^{*}_{\mu}-1}-(v+\hat{u})^{2^{*}_{\mu}-1}\right)(v_{0}-v)}{|x-y|^{\mu}}dxdy\geq 0,\end{split}\tag{4.25}\]
since \(v_{0}\geq v\) a.e. in \(\Omega\) (recall that \(v_{k}\in K_{v}\) for every \(k\)), so every factor in the last integrand is nonnegative.
Hence from (4.21) and (4.25), we get
\[d-\Phi_{K_{v}}(v)\geq\lambda\left(\frac{1}{2}-\frac{1}{22^{*}_{\mu}}\right)b^{ 22^{*}_{\mu}}. \tag{4.26}\]
Using the definition of \(S_{H,L}\) and (4.14), we have \(\lambda b^{22^{*}_{\mu}}\geq a^{2}\) and \(a^{2}\geq S_{H,L}b^{2}\), that is,
\[b\geq\left(\frac{S_{H,L}}{\lambda}\right)^{\frac{n-2}{2(n-\mu+2)}}. \tag{4.27}\]
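The exponent in (4.27) is pure bookkeeping with \(2^{*}_{\mu}=\frac{2n-\mu}{n-2}\): combining the two inequalities above,

\[\lambda b^{22^{*}_{\mu}}\geq a^{2}\geq S_{H,L}b^{2}\ \Longrightarrow\ b^{22^{*}_{\mu}-2}\geq\frac{S_{H,L}}{\lambda},\qquad 22^{*}_{\mu}-2=\frac{2(n-\mu+2)}{n-2},\]

which is exactly (4.27).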
Using (4.26) and (4.27), we get
\[d-\Phi_{K_{v}}(v)\geq\lambda\left(\frac{1}{2}-\frac{1}{22^{*}_{\mu}}\right) \left(\frac{S_{H,L}}{\lambda}\right)^{\frac{2n-\mu}{(n-\mu+2)}}=\frac{1}{2} \left(\frac{n-\mu+2}{2n-\mu}\right)\left(\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2 }}}{\lambda^{\frac{n-2}{n-\mu+2}}}\right).\]
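The simplification of the constant in the last display can be checked directly:

\[\lambda\left(\frac{1}{2}-\frac{1}{22^{*}_{\mu}}\right)\left(\frac{S_{H,L}}{\lambda}\right)^{\frac{2n-\mu}{n-\mu+2}}=\frac{n-\mu+2}{2(2n-\mu)}\,S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}\,\lambda^{1-\frac{2n-\mu}{n-\mu+2}},\]

together with \(\frac{1}{2}-\frac{1}{22^{*}_{\mu}}=\frac{2^{*}_{\mu}-1}{22^{*}_{\mu}}=\frac{n-\mu+2}{2(2n-\mu)}\) and \(1-\frac{2n-\mu}{n-\mu+2}=-\frac{n-2}{n-\mu+2}\).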
This contradicts the fact that \(d<\Phi_{K_{v}}(v)+\frac{1}{2}\left(\frac{n-\mu+2}{2n-\mu}\right)\left(\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}}\right)\). Hence \(a=0\), that is, \(v_{k}\to v_{0}\) strongly in \(X_{0}\). \(\Box\)
Now consider the family of minimizers of the best constant \(S_{H,L}\) (see Lemma 2.3) given by
\[V_{\epsilon}(x)=S^{\frac{(n-\mu)(2-n)}{4(n-\mu+2)}}(C(n,\mu))^{\frac{2-n}{2(n -\mu+2)}}\left(\frac{\epsilon}{\epsilon^{2}+|x|^{2}}\right)^{\frac{n-2}{2}},\ 0< \epsilon<1.\]
Let \(\delta>0\) such that \(B_{4\delta}\subset\Omega\). Now define \(\psi\in C_{c}^{\infty}(\Omega)\) such that \(0\leq\psi\leq 1\) in \(\mathbb{R}^{n}\), \(\psi\equiv 1\) in \(B_{\delta}(0)\) and \(\psi\equiv 0\) in \(\mathbb{R}^{n}\setminus B_{2\delta}(0)\). For each \(\epsilon>0\) and \(x\in\mathbb{R}^{n}\), we define \(u_{\epsilon}(x)=\psi(x)V_{\epsilon}(x)\). Then we have the following:
**Proposition 4.11**: _Let \(n\geq 3\) and \(0<\mu<n\). Then the following hold:_
* \(\ll u_{\epsilon}\gg^{2}\leq S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}+O(\epsilon^{n-2}).\)__
* \(\|u_{\epsilon}\|_{HL}^{22^{*}_{\mu}}\leq S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}+O( \epsilon^{n})\)_._
* \(\|u_{\epsilon}\|_{HL}^{22^{*}_{\mu}}\geq S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}-O( \epsilon^{n})\)_._
* \([u_{\epsilon}]_{s}^{2}=O(\epsilon^{\nu_{s,n}})\)_, where_ \(\nu_{s,n}=\min\{n-2,2-2s\}\)_._
**Proof.** For a proof of part \((i)\), we refer to [37, Lemma 1.46]. For \((ii)\) and \((iii)\), see [25, Proposition 2.8]. Lastly, for a proof of part \((iv)\), see [5, p. 22]. \(\Box\)
**Lemma 4.12**: _The following holds:_
* _If_ \(\mu<\min\{4,n\}\) _then for all_ \(\zeta<1\)_,_ \[\|v+tu_{\epsilon}\|_{HL}^{22^{*}_{\mu}}\geq \|v\|_{HL}^{22^{*}_{\mu}}+\|tu_{\epsilon}\|_{HL}^{22^{*}_{\mu}}+ \widetilde{C}t^{22^{*}_{\mu}-1}\int\limits_{\Omega}\int\limits_{\Omega}\frac{( u_{\epsilon}(x))^{2^{*}_{\mu}}(u_{\epsilon}(y))^{2^{*}_{\mu}-1}v(y)}{|x-y|^{\mu}}dxdy\] \[+22^{*}_{\mu}t\int\limits_{\Omega}\int\limits_{\Omega}\frac{(v( x))^{2^{*}_{\mu}}(v(y))^{2^{*}_{\mu}-1}u_{\epsilon}(y)}{|x-y|^{\mu}}dxdy-O( \epsilon^{(\frac{2n-\mu}{4})\zeta}).\]
* _There exists a constant_ \(T_{0}>0\) _such that_ \(\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u_{\epsilon}(x))^{2^{*}_{\mu}} (u_{\epsilon}(y))^{2^{*}_{\mu}-1}v(y)}{|x-y|^{\mu}}dxdy\geq\widetilde{C}T_{0} \epsilon^{\frac{n-2}{2}}\)_._
**Proof.** For a proof, see the proof of [26, Lemma 4.2]. \(\square\)
**Lemma 4.13**: _We have_
\[\sup\{\Phi_{K_{v}}(v+tu_{\epsilon}):t\geq 0\}<\Phi_{K_{v}}(v)+\frac{1}{2}\left(\frac{n-\mu+2}{2n-\mu}\right)\left(\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}}\right)\]
_for any sufficiently small \(\epsilon>0\) and \(n+2s<6\)._
**Proof.** Taking into account the fact that \(v\) is a weak solution of (\(\hat{P}_{\lambda}\)) and employing Lemma 4.12, for all \(\zeta<1\), we have
\[\Phi_{K_{v}}(v+tu_{\epsilon})-\Phi_{K_{v}}(v)\leq \frac{1}{2}\|tu_{\epsilon}\|^{2}-\frac{\lambda}{22^{*}_{\mu}}\|tu_{\epsilon}\|_{HL}^{22^{*}_{\mu}}+\int\limits_{\Omega}\left(F(x,v+tu_{\epsilon})-F(x,v)-f(x,v)tu_{\epsilon}\right)dx\] \[-\frac{\lambda\widetilde{C}t^{22^{*}_{\mu}-1}}{22^{*}_{\mu}}\int\limits_{\Omega}\int\limits_{\Omega}\frac{(u_{\epsilon}(x))^{2^{*}_{\mu}}(u_{\epsilon}(y))^{2^{*}_{\mu}-1}v(y)}{|x-y|^{\mu}}dxdy+O(\epsilon^{(\frac{2n-\mu}{4})\zeta}).\]
Using Proposition 4.11 and Lemma 4.12, we obtain
\[\Phi_{K_{v}}(v+tu_{\epsilon})-\Phi_{K_{v}}(v)\leq \frac{t^{2}}{2}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}+O(\epsilon^{\nu_{s,n}})\right)-\frac{\lambda t^{22^{*}_{\mu}}}{22^{*}_{\mu}}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}-O(\epsilon^{n})\right)+O(\epsilon^{(\frac{2n-\mu}{4})\zeta})\] \[+\int\limits_{\Omega}\left(F(x,v+tu_{\epsilon})-F(x,v)-f(x,v)tu_{\epsilon}\right)dx-\frac{\lambda\widetilde{C}t^{22^{*}_{\mu}-1}}{22^{*}_{\mu}}\widetilde{C}T_{0}\epsilon^{\frac{n-2}{2}}. \tag{4.28}\]
We see that for any fixed \(1<\rho<\min\{2,\frac{2}{n-2}\}\), there exists \(T_{1}>0\) such that \(\int\limits_{\Omega}|u_{\epsilon}|^{\rho}dx\leq T_{1}\epsilon^{\frac{(n-2)\rho}{2}}\). Moreover, there exists \(T_{2}>0\) such that, for all \(x\in\Omega\), \(p>m\) and \(r\geq 0\),
\[F(x,p+r)-F(x,p)-f(x,p)r\leq\int\limits_{p}^{p+r}(p^{-\gamma}-\tau^{-\gamma})d\tau\leq T_{2}r^{\rho}.\]
Using the last inequality and (4.28) with \(\zeta=\frac{2}{2^{*}_{\mu}}\), we obtain
\[\begin{split}\Phi_{K_{v}}(v+tu_{\epsilon})-\Phi_{K_{v}}(v)\leq&\ \frac{t^{2}}{2}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}+O(\epsilon^{\nu_{s,n}})\right)-\frac{\lambda t^{22^{*}_{\mu}}}{22^{*}_{\mu}}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}-O(\epsilon^{n})\right)\\&-\frac{\lambda\widetilde{C}t^{22^{*}_{\mu}-1}}{22^{*}_{\mu}}\widetilde{C}T_{0}\epsilon^{\frac{n-2}{2}}+T_{1}T_{2}t^{\rho}\epsilon^{\frac{(n-2)\rho}{2}}+o\left(\epsilon^{\frac{n-2}{2}}\right)\\ =:&\ g(t).\end{split}\tag{4.29}\]
Clearly, \(g(t)\rightarrow-\infty\) as \(t\rightarrow\infty\), \(g(t)>0\) for small \(t>0\), and there exists \(t_{\epsilon}>0\) such that \(g^{\prime}(t_{\epsilon})=0\). Furthermore, there exist positive constants \(R_{1}\) and \(R_{2}\) such that \(R_{1}\leq t_{\epsilon}\leq R_{2}\). Hence
\[\sup_{t\geq 0}g(t)=g(t_{\epsilon})\leq \frac{t_{\epsilon}^{2}}{2}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}+O(\epsilon^{\nu_{s,n}})\right)-\frac{\lambda t_{\epsilon}^{22^{\ast}_{\mu}}}{22^{\ast}_{\mu}}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}-O(\epsilon^{n})\right)\] \[-\frac{\lambda\widetilde{C}R_{1}^{22^{\ast}_{\mu}-1}}{22^{\ast}_{\mu}}\widetilde{C}T_{0}\epsilon^{\frac{n-2}{2}}+T_{1}T_{2}R_{2}^{\rho}\epsilon^{\frac{(n-2)\rho}{2}}+o\left(\epsilon^{\frac{n-2}{2}}\right)\] \[\leq \sup_{t\geq 0}g_{1}(t)-\frac{\lambda\widetilde{C}R_{1}^{22^{\ast}_{\mu}-1}}{22^{\ast}_{\mu}}\widetilde{C}T_{0}\epsilon^{\frac{n-2}{2}}+T_{1}T_{2}R_{2}^{\rho}\epsilon^{\frac{(n-2)\rho}{2}}+o\left(\epsilon^{\frac{n-2}{2}}\right),\]
where \(g_{1}(t)=\frac{t^{2}}{2}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}+O(\epsilon^{\nu_{s,n}})\right)-\frac{\lambda t^{22^{*}_{\mu}}}{22^{*}_{\mu}}\left(S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}-O(\epsilon^{n})\right)\).
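The claimed computation is the maximization of the leading part of \(g_{1}\) (a direct check, dropping the \(O(\epsilon)\) corrections, which only perturb the value by lower-order terms): \(g_{1}^{\prime}(t)=0\) forces \(\lambda t^{22^{*}_{\mu}-2}=1\), so

\[\sup_{t\geq 0}g_{1}(t)=\left(\frac{1}{2}-\frac{1}{22^{*}_{\mu}}\right)\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{2}{22^{*}_{\mu}-2}}}+O(\epsilon^{\nu_{s,n}})=\frac{1}{2}\left(\frac{n-\mu+2}{2n-\mu}\right)\frac{S_{H,L}^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}}+O(\epsilon^{\nu_{s,n}}).\]

Combining this with the remaining error terms, we obtain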
\[\Phi_{K_{v}}(v+tu_{\epsilon})-\Phi_{K_{v}}(v)\leq \frac{1}{2}\left(\frac{n-\mu+2}{2n-\mu}\right)\left(\frac{S_{H,L }^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}}\right)+O(\epsilon^ {\nu_{s,n}})-C\epsilon^{\frac{n-2}{2}}+o(\epsilon^{\frac{n-2}{2}})\]
for an appropriate constant \(C>0\). Thus, for \(\epsilon\) sufficiently small and owing to the assumption \(n+2s<6\), we obtain
\[\Phi_{K_{v}}(v+tu_{\epsilon})-\Phi_{K_{v}}(v)\leq \frac{1}{2}\left(\frac{n-\mu+2}{2n-\mu}\right)\left(\frac{S_{H,L }^{\frac{2n-\mu}{n-\mu+2}}}{\lambda^{\frac{n-2}{n-\mu+2}}}\right).\]
This completes the proof. \(\square\)
**Proposition 4.14**: _Assume \(n+2s<6\). Then, for any \(\lambda\in(0,\Lambda)\), there exist two distinct positive weak solutions to \((\hat{P}_{\lambda})\)._
**Proof.** From Theorem 4.8, \(v\) is a local minimizer of \(\Phi_{K_{v}}\). This implies that there exists \(\delta>0\) such that \(\Phi_{K_{v}}(w)\geq\Phi_{K_{v}}(v)\) for every \(w\in K_{v}\) with \(\|w-v\|\leq\delta\). Let \(u=u_{\epsilon}\) for \(\epsilon\) as obtained in Lemma 4.13. Since \(\Phi_{K_{v}}(v+tu)\rightarrow-\infty\) as \(t\rightarrow\infty\), we can choose \(t\geq\delta/\|u\|\) such that \(\Phi_{K_{v}}(v+tu)\leq\Phi_{K_{v}}(v)\). Now define
\[\Sigma=\{\Psi\in C([0,1],D(\Phi_{K_{v}})):\Psi(0)=v,\ \Psi(1)=v+tu\},\]
\[A=\{w\in D(\Phi_{K_{v}}):\|w-v\|=\delta\}\ \text{and}\ d=\inf_{\Psi\in\Sigma}\sup_{r\in[0,1]}\Phi_{K_{v}}(\Psi(r)).\]
Combining Proposition 4.10 and Lemma 4.13, \(\Phi_{K_{v}}\) satisfies the \((CPS)_{d}\) condition. Moreover, even if \(d=\Phi_{K_{v}}(v)=\inf\Phi_{K_{v}}(A)\), we have \(v\not\in A\), \(v+tu\not\in A\), \(\inf\Phi_{K_{v}}(A)\geq\Phi_{K_{v}}(v)\geq\Phi_{K_{v}}(v+tu)\), and for every \(\Psi\in\Sigma\), there exists \(r\in[0,1]\) such that \(\|\Psi(r)-v\|=\delta\). Thus, by Theorem 2.15, there exists \(w\in D(\Phi_{K_{v}})\) such that \(w\neq v\), \(\Phi_{K_{v}}(w)=d\) and \(0\in\partial^{-}\Phi_{K_{v}}(w)\). Using Proposition 4.2, we obtain that \(w\) is a positive weak solution to \((\hat{P}_{\lambda})\), distinct from \(v\). \(\Box\)
**End of Proof of Theorem 2.11:** Combining Lemma 3.6, Theorem 4.8 and Proposition 4.14, the proof of Theorem 2.11 is complete. \(\square\)
**Acknowledgement:** The first author thanks the CSIR (India) for financial support in the form of a Senior Research Fellowship, Grant Number 09/086(1406)/2019-EMR-I. The second author was partially funded by IFCAM (Indo-French Centre for Applied Mathematics) IRL CNRS 3494.
|
2310.17027
|
$C^{1,α}$ Regularity For Stationary Mean-Field Games With
Logarithmic Coupling
|
This paper investigates stationary mean-field games (MFGs) on the torus with
Lipschitz non-homogeneous diffusion and logarithmic-like couplings. The primary
objective is to understand the existence of $C^{1,\alpha}$ solutions to address
the research gap between low-regularity results for bounded and measurable
diffusions and the smooth results modeled by the Laplacian.
We use the Hopf--Cole transformation to convert the MFG system into a scalar
elliptic equation. Then, we apply Morrey space methods to establish the
existence and regularity of solutions. The introduction of Morrey space methods
offers a novel approach to address regularity issues in the context of MFGs.
|
Tigran Bakaryan, Giuseppe Di Fazio, Diogo A. Gomes
|
2023-10-25T22:06:21Z
|
http://arxiv.org/abs/2310.17027v1
|
# \(C^{1,\alpha}\) regularity for stationary mean-field games with logarithmic coupling
###### Abstract.
This paper investigates stationary mean-field games (MFGs) on the torus with Lipschitz non-homogeneous diffusion and logarithmic-like couplings. The primary objective is to understand the existence of \(C^{1,\alpha}\) solutions to address the research gap between low-regularity results for bounded and measurable diffusions and the smooth results modeled by the Laplacian.
We use the Hopf-Cole transformation to convert the MFG system into a scalar elliptic equation. Then, we apply Morrey space methods to establish existence and regularity of solutions. The introduction of Morrey space methods offers a novel approach to address regularity issues in the context of MFGs.
Key words and phrases: Mean Field Games; Stationary Solutions; Morrey spaces; Hölder regularity; Hopf–Cole transformation.

2010 Mathematics Subject Classification: 35J47, 35A01.

The authors were supported by King Abdullah University of Science and Technology (KAUST) baseline funds and KAUST OSR-CRG2021-4674.
## 1. Introduction
This paper studies stationary mean-field games (MFGs) on the torus, focusing on non-homogeneous diffusion and logarithmic-like couplings. MFGs offer a framework for analyzing large populations of competing rational agents. These games have two primary components: a Hamilton-Jacobi equation that governs each agent's value function and a Fokker-Planck equation that describes the evolution of agent density.
We consider a stationary MFG with a non-homogeneous diffusion matrix, \(A(x)\), and a logarithmic coupling, \(g(\log m)\), between the equations. More precisely, the problem we are investigating is the following.
**Problem 1**.: _Let \(A(x)=\left(a^{i,j}(x)\right)_{i,j}\) be a non negative definite \(d\times d\) matrix-valued function defined on the \(d\)-dimensional torus \(\mathbb{T}^{d}\). Let \(V\) and \(g\) be given continuous functions in \(\mathbb{T}^{d}\). Find \((u,m)\in(C^{2}(\mathbb{T}^{d}))^{2}\) and \(\bar{H}\in\mathbb{R}\) satisfying_
\[\begin{cases}-\operatorname{div}(ADu^{T})+\frac{1}{2}DuADu^{T}+V(x)=g\left(\log m\right)+\overline{H}\\ -\operatorname{div}(ADm^{T})-\operatorname{div}(mADu^{T})=0\\ m\geqslant 0,\quad\int_{\mathbb{T}^{d}}m\,\mathrm{d}x=1,\end{cases}\qquad x\in\mathbb{T}^{d} \tag{1.1}\]
The existence of weak solutions for this problem in the sense of monotone operators can be proved with minimal assumptions on the matrix, \(A\), see [15, 16]. On the higher end of the regularity spectrum, when \(A\) is the identity matrix and the diffusion corresponds to the Laplacian, smooth solutions for this and related problems were studied in [18], [17], [10], and [27]. Non-homogeneous diffusion can arise in many applications, prompting our investigation into the effects of replacing the Laplacian with a more general elliptic operator. However, the presence of the non-constant matrix \(A(x)\) in the diffusion terms introduces analytical challenges. Some of these challenges were previously addressed in the literature; for example, the hypoelliptic and the first-order cases, where uniform ellipticity fails, require techniques quite different from the ones used here (see the discussion after Assumption 1).
Our approach relies on the Hopf-Cole transformation. Substituting \(m=e^{-u}\) (equivalently, \(u=-\log m\)), subject to the normalization \(\int_{\mathbb{T}^{d}}e^{-u}\mathrm{d}x=1\), converts the system (1.1) into the single quasilinear elliptic equation
\[-\operatorname{div}(ADu^{T})+\frac{1}{2}DuADu^{T}+V(x)-\bar{H}-g(-u)=0. \tag{1.2}\]
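To see why the second equation in (1.1) imposes no additional constraint under this substitution, note that \(Dm=-e^{-u}Du\), so the two divergence terms cancel identically:
\[-\operatorname{div}(ADm^{T})-\operatorname{div}(mADu^{T})=\operatorname{div}\left(e^{-u}ADu^{T}\right)-\operatorname{div}\left(e^{-u}ADu^{T}\right)=0,\]
while the first equation in (1.1), written in terms of \(u\), becomes exactly (1.2).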
Under the assumptions stated in Section 2, this reduction yields the existence and uniqueness of solutions to Problem 1. This is our main result, stated as follows.
**Theorem 1.2**.: Suppose that Assumptions 1-4 hold (see Section 2). Then, there exists a unique triple \((u,m,\bar{H})\in C^{1}(\mathbb{T}^{d})\times C^{1}(\mathbb{T}^{d})\times\mathbb{R}\) solving Problem 1 in the sense of Definition 1.1.
In [3], the authors examined MFGs under a more general set of assumptions, showing the existence of weak solutions in Lebesgue spaces. In contrast, our study establishes a higher degree of regularity, specifically \(C^{1,\alpha}\) regularity. Understanding the solution regularity is crucial for numerical methods and applications.
Our proof strategy hinges on new estimates in Morrey spaces and elliptic regularity results. To the best of the authors' knowledge, this is the first time Morrey space methods have been used in the context of MFGs. Morrey spaces are particularly suitable for studying elliptic equations and are a primary technical tool for obtaining Holder estimates. These methods allow us to examine a different range of regularity issues that previous techniques could not address because they work with elliptic equations with limited regularity.
## 2. Main assumptions
Our primary objective is to establish Theorem 1.2. This will be accomplished by verifying the existence of a solution to (1.2), under appropriate assumptions that allow the use of elliptic regularity theory.
The first two assumptions, Assumptions 1 and 2, provide conditions on the diffusion matrix \(A(x)\) and the coupling function \(g\), respectively. Assumption 1 imposes ellipticity and uniform convexity on the Hamiltonian, ensuring it exhibits a structure amenable to the techniques used. Assumption 2 requires certain growth and monotonicity properties of \(g\), allowing \(L^{\infty}\) bounds to be obtained. These assumptions are common in the MFG literature when analyzing regularity issues. All results in Section 3 rely only on these two assumptions.
The first assumption gives both the ellipticity of the second-order term and the Hamiltonian's uniform convexity, resulting in (1.2) exhibiting uniform ellipticity and convexity in the gradient with a natural growth condition.
**Assumption 1**.: _The matrix \(A(x)=\big{(}a^{i,j}(x)\big{)}_{i,j}\) is uniformly elliptic, i.e., there exists \(\theta_{0},\theta_{1}>0\) such that_
\[\theta_{0}|z|^{2}\leqslant zA(x)z^{T}=\sum_{i,j}^{n}a_{ij}(x)z_{i}z_{j} \leqslant\theta_{1}|z|^{2},\qquad(x,z)\in\mathbb{T}^{d}\times\mathbb{R}^{d}.\]
Assumption 1 is standard, as it allows the use of elliptic regularity techniques. Two important cases where it does not hold are the hypoelliptic case and the first-order case. In both cases, the techniques used to establish the existence of solutions are quite different from the ones used here.
The second assumption imposes conditions on \(g\), enabling the proof of \(L^{\infty}\) bounds (see Proposition 3.4). These assumptions are foundational in Section 3 for establishing the existence of a solution \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\cap C^{0,\alpha}( \mathbb{T}^{d})\) to (1.2).
**Assumption 2**.: _The function \(g:\mathbb{R}\to\mathbb{R}\) is locally Lipschitz continuous and satisfies \(g(u)\operatorname{sign}(u)\geqslant C_{g}|u|-\frac{1}{C_{g}}\) for some constant \(C_{g}>0\)._
Assumption 3 imposes the Lipschitz continuity of the matrix \(A(x)\) and the potential \(V(x)\). This continuity is critical for the estimates in Morrey spaces and the Holder regularity results. No further assumptions are required on \(g\), because the results in Section 3 give that \(u\) is bounded; thus, it suffices that \(g\) is locally Lipschitz, as prescribed in Assumption 2. The Lipschitz regularity of these terms allows us to differentiate the equation to bootstrap regularity. This is achieved in Section 4, where we derive the \(C^{1,\alpha}\) regularity of the solutions.
**Assumption 3**.: _The function \(V\) and the matrix \(A(x)=\big{(}a^{i,j}(x)\big{)}_{i,j}\) are Lipschitz continuous, i.e., \(V,a_{ij}\in Lip(\mathbb{T}^{d})\), \(i,j=1,\ldots,d\)._
The last assumption concerns the monotonicity of the coupling function \(g(\log m)\). This monotonicity ensures uniqueness of the solution through the well-known Lasry-Lions argument.
**Assumption 4**.: _The function \(g(\log(\cdot))\) is strictly monotone; i.e., for all \(s_{1},s_{2}\in\mathbb{R}^{+}\) with \(s_{1}\neq s_{2}\),_
\[(g(\log(s_{1}))-g(\log(s_{2})))(s_{1}-s_{2})>0.\]
Because \(\log\) is strictly increasing, the preceding assumption is equivalent to the strict monotonicity of \(g\).
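For instance, the prototypical coupling \(g(s)=s\) (so that \(g(\log m)=\log m\)) satisfies Assumptions 2 and 4 with \(C_{g}=1\):
\[g(u)\operatorname{sign}(u)=|u|\geqslant|u|-1,\qquad(\log s_{1}-\log s_{2})(s_{1}-s_{2})>0\quad\text{for }s_{1}\neq s_{2},\]
the latter because \(\log\) is strictly increasing.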
## 3. Existence of solutions of elliptic equation with quadratic growth
In this section, we prove the existence of weak solutions to non-linear elliptic equations with natural growth in the gradient. We rewrite (1.2) as
\[-\operatorname{div}(ADu^{T})+\frac{1}{2}DuADu^{T}+V_{\overline{H}}(x)-g(-u)=0, \tag{3.1}\]
where \(V_{\overline{H}}(x)=V(x)-\bar{H}.\) In Section 3.1, we consider non-linear elliptic equations with a bounded non-linearity in the first-order derivatives and prove an existence result. In Section 3.2, using an approximation argument, we get the existence of solutions to (3.1).
### An elliptic equation with bounded non-linear term
We begin by analyzing elliptic equations of the form
\[-\operatorname{div}(ADu^{T})+H(x,Du)+V_{\overline{H}}(x)-g(-u)=0,\qquad x\in \mathbb{T}^{d}, \tag{3.2}\]
where \(H\) is a bounded function and \(g\) has linear growth. We prove the existence of a solution of (3.2), which in the next section is combined with a limiting argument to obtain a solution for (3.1).
Let \(C_{H}\) be a positive constant such that
\[|H(x,p)|\leqslant C_{H},\qquad(x,p)\in\mathbb{T}^{d}\times\mathbb{R}^{d}. \tag{3.3}\]
Similarly, let \(\tilde{C}_{g}\) be a positive constant such that
\[|g(s)|\leqslant\tilde{C}_{g}(1+|s|),\qquad s\in\mathbb{R}. \tag{3.4}\]
First, we prove the existence of solutions to (3.2) assuming, in addition to Assumptions 1 and 2, that both (3.3) and (3.4) hold. Then, in Corollary 3.3, we remove the requirement (3.4). The boundedness of \(H\) is handled in the next section.
**Proposition 3.1**.: _Let \(H\) and \(g\) satisfy (3.3) and (3.4), respectively. Suppose that Assumptions 1 and 2 hold. Then, there exists \(u\in H^{1}(\mathbb{T}^{d})\) solving (3.2) in the sense of distributions._
Proof.: To prove the existence of solutions to (3.2), we use Leray-Lions theory (see [23] or [25, Chapter 5]). Consider the operator \(\mathcal{A}:H^{1}(\mathbb{T}^{d})\to H^{-1}(\mathbb{T}^{d})\) given by
\[(\mathcal{A}(u),v)=\int_{\mathbb{T}^{d}}DvADu^{T}+(H(x,Du)+V_{\overline{H}}(x)-g(-u))v\mathrm{d}x, \tag{3.5}\]
for all \(v\in H^{1}(\mathbb{T}^{d})\). Assumptions 1 and 2, (3.3), and (3.4) imply
\[(\mathcal{A}(v),v)\geqslant c||Dv||^{2}_{L^{2}(\mathbb{T}^{d})}-C\|v\|_{L^{2}( \mathbb{T}^{d})}+C_{g}\|v\|^{2}_{L^{2}(\mathbb{T}^{d})}.\]
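This coercivity bound is obtained by integrating the pointwise estimates
\[DvADv^{T}\geqslant\theta_{0}|Dv|^{2},\qquad-g(-v)v=g(-v)\operatorname{sign}(-v)|v|\geqslant C_{g}v^{2}-\frac{1}{C_{g}}|v|,\qquad|(H(x,Dv)+V_{\overline{H}}(x))v|\leqslant(C_{H}+C_{V})|v|,\]
which follow from Assumption 1, Assumption 2 (applied at \(-v\)), and (3.3) together with the boundedness of \(V_{\overline{H}}\), respectively, combined with \(\|v\|_{L^{1}(\mathbb{T}^{d})}\leqslant C\|v\|_{L^{2}(\mathbb{T}^{d})}\) on the torus.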
Thus, for some suitable constants \(\hat{c}\) and \(C\), we have
\[\lim_{||v||_{H^{1}(\mathbb{T}^{d})}\to\infty}\frac{(\mathcal{A}(v),v)}{||v||_ {H^{1}(\mathbb{T}^{d})}}\geqslant\lim_{||v||_{H^{1}(\mathbb{T}^{d})}\to\infty} \frac{c||Dv||^{2}_{L^{2}(\mathbb{T}^{d})}+\hat{c}||v||^{2}_{L^{2}(\mathbb{T}^{ d})}-C}{||v||_{H^{1}(\mathbb{T}^{d})}}=+\infty. \tag{3.6}\]
Let
\[\mathcal{A}^{0}(x,s,p)=(H(x,p)+V_{\overline{H}}(x)-g(-s)),\quad\mathcal{A}^{1 }(p)=pA.\]
Accordingly, Assumption 1, (3.3), and (3.4) imply
\[|\mathcal{A}^{0}(x,s,p)|+|\mathcal{A}^{1}(p)|\leqslant C(1+|s|+|p|),\quad(x,s,p)\in\mathbb{T}^{d}\times\mathbb{R}\times\mathbb{R}^{d}. \tag{3.7}\]
Using ellipticity of \(A\), we get
\[(\mathcal{A}^{1}(p_{1})-\mathcal{A}^{1}(p_{2}))(p_{1}-p_{2})\geqslant\theta_{ 0}|p_{1}-p_{2}|^{2},\quad p_{1},p_{2}\in\mathbb{R}^{d}. \tag{3.8}\]
Inequalities in (3.6), (3.7) and (3.8) imply that the operator, \(\mathcal{A}\), defined in (3.5) satisfies the conditions of [25, Theorem 5.12.2], which ensures the existence of solutions to (3.2).
We now prove the boundedness of weak solutions using a truncation argument (see, e.g., [28] or [6]).
**Theorem 3.2.** Consider the setting of Problem 1. Let \(H\) and \(g\) satisfy (3.3) and (3.4), respectively. Suppose that Assumptions 1-2 hold. Then, there exists \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) solving (3.2).
Proof.: By Proposition 3.1, it follows that there exists \(u\in H^{1}(\mathbb{T}^{d})\) solving (3.2). We prove \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\).
Let \(k_{0}=\frac{C_{H}+C_{V}+\frac{1}{C_{g}}}{C_{g}}+1\). For all \(v\in H^{1}(\mathbb{T}^{d})\), we have
\[\int_{\mathbb{T}^{d}}DvADu^{T}+(H(x,Du)+V_{\overline{H}}(x)-g(-u))v\mathrm{d}x=0. \tag{3.9}\]
To understand the behaviour of \(u\) when its absolute value is large, for \(k>0\), we introduce
\[G_{k}(s)=\begin{cases}s-k,&s>k\\ \quad 0,&-k\leqslant s\leqslant k\\ s+k,&s<-k\end{cases} \tag{3.10}\]
and
\[A_{k}=\{x\in\mathbb{T}^{d}:|u(x)|>k\}. \tag{3.11}\]
Setting \(w=G_{k}(u)\), we have \(w\in H^{1}(\mathbb{T}^{d})\). Note that
\[\begin{split}w&=G_{k}(u(x))=\chi_{A_{k}}(|u|-k)\operatorname{sign}(u)=(|u|-k)^{+}\operatorname{sign}(u),\\ Dw&=DG_{k}(u)=\chi_{A_{k}}Du,\end{split} \tag{3.12}\]
where \(\chi_{A_{k}}\) is the indicator function of \(A_{k}\).
Next, we take \(w\) as a test function in (3.9) and get
\[\int_{\mathbb{T}^{d}}DG_{k}(u)ADu^{T}-g(-u)G_{k}(u)\mathrm{d}x=\int_{\mathbb{T}^{d}}(-H(x,Du)-V_{\overline{H}}(x))G_{k}(u)\mathrm{d}x. \tag{3.13}\]
Notice that Assumption 2 yields
\[\begin{split}-g(-u)G_{k}(u)&=-g(-u)\operatorname{ sign}(u)\chi_{A_{k}}(|u|-k)\\ &\geqslant C_{g}|u|\chi_{A_{k}}(|u|-k)-\frac{1}{C_{g}}\chi_{A_{k} }(|u|-k)\\ &\geqslant(kC_{g}-\frac{1}{C_{g}})|G_{k}(u)|.\end{split} \tag{3.14}\]
Combining the ellipticity condition in Assumption 1 with (3.12) and (3.14) in (3.13), we obtain
\[\theta_{0}\int_{\mathbb{T}^{d}}\chi_{A_{k}}|Du|^{2}\mathrm{d}x+kC_{g}\int_{\mathbb{T}^{d}}|G_{k}(u)|\mathrm{d}x\leqslant\left(C_{H}+C_{V}+\frac{1}{C_{g}}\right)\int_{\mathbb{T}^{d}}|G_{k}(u)|\mathrm{d}x,\]
for any \(k>0\). Hence,
\[\left(kC_{g}-\left(C_{H}+C_{V}+\frac{1}{C_{g}}\right)\right)\int_{\mathbb{T}^ {d}}|G_{k}(u)|\mathrm{d}x\leqslant 0.\]
Now, taking \(k=k_{0}=\frac{C_{H}+C_{V}+\frac{1}{C_{g}}}{C_{g}}+1\), we deduce that \(G_{k_{0}}(u)=0\). This, together with (3.11) and (3.12), implies \(|u|\leqslant k_{0}\).
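Explicitly, the arithmetic behind this choice of \(k_{0}\) is
\[k_{0}C_{g}-\left(C_{H}+C_{V}+\frac{1}{C_{g}}\right)=C_{g}>0,\]
so the preceding inequality forces \(\int_{\mathbb{T}^{d}}|G_{k_{0}}(u)|\mathrm{d}x=0\), that is, \(G_{k_{0}}(u)=0\) almost everywhere.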
Next, we prove that in Theorem 3.2, we can remove condition (3.4).
**Corollary 3.3**.: _Consider the setting of Problem 1. Let \(H\) be such that (3.3) holds. Suppose that Assumptions 1-2 hold. Then, there exists \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) solving (3.2)._
Proof.: Let \(k_{0}=\left(\frac{C_{H}+C_{V}+\frac{1}{C_{g}}}{C_{g}}+1\right)\) and let \(\bar{g}(s)=g(s)\) for \(|s|<k_{0}\) and extend it linearly with \(\bar{g}^{\prime}(s)=C_{g}\) for \(|s|>k_{0}\). Clearly \(\bar{g}\) satisfies (3.4). Moreover, it satisfies the conditions of Assumption 2. Hence, considering the following equation
\[-\operatorname{div}(ADu^{T})+H(x,Du)+V_{\overline{H}}(x)-\bar{g}(-u)=0,\quad x \in\mathbb{T}^{d}, \tag{3.15}\]
we note that it satisfies the conditions of Theorem 3.2. Therefore, there exists \(\bar{u}\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) solving (3.15). Moreover, by the proof of Theorem 3.2, it follows that \(|\bar{u}|\leqslant k_{0}\). Consequently, \(\bar{g}(-\bar{u})=g(-\bar{u})\), and equations (3.2) and (3.15) coincide at \(\bar{u}\). Thus, \(u=\bar{u}\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) solves (3.2).
### Bounded generalized solution to elliptic equation with quadratic growth
We now return to the Hamiltonian with quadratic growth and combine the results of the previous section with an approximation argument to show the existence of bounded solutions to (3.1) (see, e.g., [2] and [1]).
For any \(\varepsilon>0\), we set
\[H_{\varepsilon}(x,p)=\frac{pAp^{T}}{2+\varepsilon|pAp^{T}|}. \tag{3.16}\]
Notice that
\[|H_{\varepsilon}(x,p)|\leqslant\frac{1}{\varepsilon} \tag{3.17}\]
and, by the upper bound in Assumption 1, we have
\[|H_{\varepsilon}(x,p)|\leqslant\frac{1}{2}pAp^{T}\leqslant\frac{\theta_{1}}{2} |p|^{2}. \tag{3.18}\]
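Both bounds are immediate from (3.16): writing \(q=pAp^{T}\geqslant 0\), we have
\[H_{\varepsilon}(x,p)=\frac{q}{2+\varepsilon q}\leqslant\frac{q}{\varepsilon q}=\frac{1}{\varepsilon}\quad(q>0),\qquad H_{\varepsilon}(x,p)=\frac{q}{2+\varepsilon q}\leqslant\frac{q}{2},\]
and the case \(q=0\) is trivial.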
Consider the approximation to (3.1)
\[-\operatorname{div}(ADu_{\varepsilon}^{T})+H_{\varepsilon}(x,Du_{\varepsilon})+V_{ \overline{H}}(x)-g(-u_{\varepsilon})=0. \tag{3.19}\]
We stress that the bound (3.17) ensures conditions in Corollary 3.3 are satisfied. Then, there exists a solution \(u_{\varepsilon}\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) to (3.19). To obtain the existence of solutions to (3.1), we consider the limit of \(u_{\varepsilon}\) as \(\varepsilon\to 0\). In Theorem 3.5, we show that the limit exists and it solves (3.1). The main obstacle is the lack of uniformity of the bounds in the preceding section with respect to \(\varepsilon\). In particular, the bound in Corollary 3.3 depends on the constant in (3.3), which according to (3.17) is not uniform as \(\varepsilon\to 0\). Thus, in the following proposition, we use (3.18) to establish uniform in \(\varepsilon\) bounds for the solutions to (3.19).
**Proposition 3.4**.: _Consider the setting of Problem 1. Suppose that Assumptions 1-2 hold. Then, there exists a constant \(C_{\infty}\) which does not depend on \(\varepsilon\), such that any solution \(u_{\varepsilon}\) to (3.19) satisfies_
\[||u_{\varepsilon}||_{L^{\infty}(\mathbb{T}^{d})}\leqslant C_{\infty}\,.\]
Proof.: Let \(u_{\varepsilon}\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) be a bounded solution to (3.19).
For fixed \(k,\lambda\in\mathbb{R}^{+}\), let \(G_{k}\) and \(A_{k}\) be as in (3.10) and (3.11), respectively. Let
\[\phi(s)=\begin{cases}e^{\lambda s}-1,&s\geqslant 0\\ -e^{-\lambda s}+1,&s\leqslant 0\end{cases} \tag{3.20}\]
and
\[\psi(x)=\phi(G_{k}(u_{\varepsilon})).\]
Because \(\psi\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\), we can use \(\psi\) as a test function in (3.19). Then,
\[\int_{\mathbb{T}^{d}}D\psi ADu_{\varepsilon}^{T}-g(-u_{\varepsilon})\psi\mathrm{d}x=\int_{\mathbb{T}^{d}}\left(-H_{\varepsilon}(x,Du_{\varepsilon})-V_{\overline{H}}(x)\right)\psi\mathrm{d}x. \tag{3.21}\]
Note that
\[\psi=\phi(|u_{\varepsilon}|-k)\chi_{A_{k}}\operatorname{sign}(u_{ \varepsilon})=\phi((|u_{\varepsilon}|-k)^{+})\operatorname{sign}(u_{ \varepsilon}),\] \[D\psi=\phi^{\prime}((|u_{\varepsilon}|-k)^{+})\chi_{A_{k}}Du_{ \varepsilon}. \tag{3.22}\]
Next, using the previous identities, we estimate the terms in (3.21). Recalling that \(V_{\overline{H}}\) is bounded by the formulation of Problem 1, by (3.22), we have
\[\begin{cases}|\psi|\leqslant|\phi((|u_{\varepsilon}|-k)^{+})|=\phi((|u_{ \varepsilon}|-k)^{+})\\ \\ |V_{\overline{H}}\psi|\leqslant C_{V}\phi((|u_{\varepsilon}|-k)^{+}). \end{cases}\]
Then, by using Assumption 2, we get
\[-g(-u_{\varepsilon})\psi =-g(-u_{\varepsilon})\operatorname{sign}(u_{\varepsilon})\phi( (|u_{\varepsilon}|-k)^{+})\] \[\geqslant C_{g}|u_{\varepsilon}|\phi((|u_{\varepsilon}|-k)^{+})- \frac{1}{C_{g}}\phi((|u_{\varepsilon}|-k)^{+})\] \[\geqslant kC_{g}\phi((|u_{\varepsilon}|-k)^{+})-\frac{1}{C_{g}} \phi((|u_{\varepsilon}|-k)^{+}).\]
Using the preceding inequalities, (3.18), (3.22) and Assumption 1 in (3.21), we deduce
\[\theta_{0}\int_{\mathbb{T}^{d}}|Du_{\varepsilon}|^{2}\phi^{\prime}((|u_{ \varepsilon}|-k)^{+})\chi_{A_{k}}\mathrm{d}x+kC_{g}\int_{\mathbb{T}^{d}}\phi( (|u_{\varepsilon}|-k)^{+})\mathrm{d}x\] \[\quad\leqslant\frac{\theta_{1}}{2}\int_{\mathbb{T}^{d}}|Du_{ \varepsilon}|^{2}\phi((|u_{\varepsilon}|-k)^{+})\mathrm{d}x+\left(C_{V}+\frac{ 1}{C_{g}}\right)\int_{\mathbb{T}^{d}}\phi((|u_{\varepsilon}|-k)^{+})\mathrm{d}x. \tag{3.23}\]
Taking \(\lambda=\frac{\theta_{1}}{2\theta_{0}}\) in (3.20), we have
\[\theta_{0}\phi^{\prime}((|u_{\varepsilon}|-k)^{+})\chi_{A_{k}}-\frac{\theta_{1}} {2}\phi((|u_{\varepsilon}|-k)^{+})\geqslant\frac{\theta_{1}}{2}\chi_{A_{k}},\]
where we used the inequality \(\phi^{\prime}(s)\geqslant\lambda\phi(s)\) for \(s\geqslant 0\). This and (3.23), yield
\[\frac{\theta_{1}}{2}\int_{\mathbb{T}^{d}}|Du_{\varepsilon}|^{2}\chi_{A_{k}} \mathrm{d}x+kC_{g}\int_{\mathbb{T}^{d}}\phi((|u_{\varepsilon}|-k)^{+})\mathrm{ d}x\leqslant\left(C_{V}+\frac{1}{C_{g}}\right)\int_{\mathbb{T}^{d}}\phi((|u_{ \varepsilon}|-k)^{+})\mathrm{d}x.\]
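For clarity, the computation behind the choice \(\lambda=\frac{\theta_{1}}{2\theta_{0}}\) is the following: for \(s\geqslant 0\), \(\phi^{\prime}(s)=\lambda e^{\lambda s}=\lambda\phi(s)+\lambda\), hence
\[\theta_{0}\phi^{\prime}-\frac{\theta_{1}}{2}\phi=\theta_{0}\lambda(\phi+1)-\frac{\theta_{1}}{2}\phi=\frac{\theta_{1}}{2},\]
which gives the preceding inequality on \(A_{k}\) (in fact, with equality).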
Finally, taking \(k=k_{0}=\frac{\left(C_{V}+\frac{1}{C_{g}}\right)}{C_{g}}+1\), we get \(\phi((|u_{\varepsilon}|-k_{0})^{+})=0\); that is, \(|u_{\varepsilon}|\leqslant k_{0}\).
**Theorem 3.5.** Consider the setting of Problem 1. Suppose that Assumptions 1-2 hold. Then, there exists \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) solving (3.1) in the sense of distributions.
Proof.: Let \(u_{\varepsilon}\) solve (3.19). By Proposition 3.4, we know that
\[\|u_{\varepsilon}\|_{L^{\infty}(\mathbb{T}^{d})}\leqslant C_{\infty}, \tag{3.24}\]
for some \(C_{\infty}\) which does not depend on \(\varepsilon\).
To show uniform boundedness of \(\{u_{\varepsilon}\}\) in \(H^{1}(\mathbb{T}^{d})\), we set \(\Phi(s)=e^{s}\) and notice that \(\Phi(u_{\varepsilon})\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\). By taking \(\Phi(u_{\varepsilon})\) as a test function in (3.19), we obtain
\[\int_{\mathbb{T}^{d}}H_{\epsilon}(Du_{\varepsilon})\Phi(u_{\varepsilon})+ \frac{1}{2}Du_{\varepsilon}ADu_{\varepsilon}^{T}\Phi(u_{\varepsilon})\mathrm{ d}x=\int_{\mathbb{T}^{d}}(g(-u_{\varepsilon})-V_{\overline{H}}(x))\Phi(u_{ \varepsilon})\mathrm{d}x. \tag{3.25}\]
From (3.24) it follows that
\[e^{-C_{\infty}}\leqslant\Phi(u_{\varepsilon})\leqslant e^{C_{\infty}}.\]
Using this bound together with Assumption 1 in (3.25), yields
\[\int_{\mathbb{T}^{d}}|Du_{\varepsilon}|^{2}\mathrm{d}x\leqslant C,\]
for some constant independent on \(\varepsilon\). Consequently, taking into account (3.24), there exists \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) such that
\[\begin{split} u_{\varepsilon}\rightharpoonup u&\text {in}\quad H^{1}(\mathbb{T}^{d}),\\ u_{\varepsilon}\overset{*}{\rightharpoonup}u&\text{in} \quad L^{\infty}(\mathbb{T}^{d}).\end{split} \tag{3.26}\]
Note that to complete the proof, it is enough to pass to the limit in the weak form of (3.19) as \(\varepsilon\to 0\). For that, we first prove that
\[Du_{\varepsilon}\to Du,\quad\text{strongly in}\quad L^{2}(\mathbb{T}^{d}). \tag{3.27}\]
Taking
\[\Theta=e^{\mu(u_{\varepsilon}-u)^{2}}(u_{\varepsilon}-u),\]
as a test function in (3.19), we get
\[\begin{split}& 2\mu\int_{\mathbb{T}^{d}}e^{\mu(u_{ \varepsilon}-u)^{2}}(u_{\varepsilon}-u)^{2}(Du_{\varepsilon}-Du)ADu_{ \varepsilon}^{T}\mathrm{d}x+\int_{\mathbb{T}^{d}}e^{\mu(u_{\varepsilon}-u)^{2} }(Du_{\varepsilon}-Du)ADu_{\varepsilon}^{T}\mathrm{d}x\\ &=\int_{\mathbb{T}^{d}}\left(-H_{\varepsilon}(Du_{\varepsilon})-V _{\overline{H}}+g(-u_{\varepsilon})\right)e^{\mu(u_{\varepsilon}-u)^{2}}(u_{ \varepsilon}-u)\mathrm{d}x.\end{split} \tag{3.28}\]
Observe that
\[\int_{\mathbb{T}^{d}}\left(-V_{\overline{H}}+g(-u_{\varepsilon})\right)e^{\mu (u_{\varepsilon}-u)^{2}}(u_{\varepsilon}-u)\mathrm{d}x\to 0, \tag{3.29}\]
as \(\varepsilon\to 0\). Moreover, because \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) from (3.26), we have
\[e^{\mu(u_{\varepsilon}-u)^{2}}(u_{\varepsilon}-u)^{2}Du\to 0,\quad\text{ strongly in}\quad L^{2}(\mathbb{T}^{d}),\] \[e^{\mu(u_{\varepsilon}-u)^{2}}Du\to Du,\quad\text{ strongly in}\quad L^{2}(\mathbb{T}^{d})\]
as \(\varepsilon\to 0\). Consequently, we have
\[\limsup_{\varepsilon\to 0}2\mu\int_{\mathbb{T}^{d}}\Theta(u_{ \varepsilon}-u)(Du_{\varepsilon}-Du)ADu_{\varepsilon}^{T}+e^{\mu(u_{ \varepsilon}-u)^{2}}(Du_{\varepsilon}-Du)ADu_{\varepsilon}^{T}\mathrm{d}x\] \[=\limsup_{\varepsilon\to 0}2\mu\int_{\mathbb{T}^{d}}\Theta(u_{ \varepsilon}-u)Du_{\varepsilon}ADu_{\varepsilon}^{T}+e^{\mu(u_{\varepsilon}- u)^{2}}(Du_{\varepsilon}-Du)A(Du_{\varepsilon}^{T}-Du^{T})\mathrm{d}x. \tag{3.30}\]
Furthermore, we notice that
\[\limsup_{\varepsilon\to 0} \int_{\mathbb{T}^{d}}e^{\mu(u_{\varepsilon}-u)^{2}}Du_{ \varepsilon}ADu_{\varepsilon}^{T}\mathrm{d}x=\limsup_{\varepsilon\to 0} \Bigg{(}\int_{\mathbb{T}^{d}}e^{\mu(u_{\varepsilon}-u)^{2}}(Du_{\varepsilon}- Du)A(Du_{\varepsilon}-Du)^{T}\] \[+2e^{\mu(u_{\varepsilon}-u)^{2}}(Du_{\varepsilon}-Du)ADu^{T}+e^{ \mu(u_{\varepsilon}-u)^{2}}DuADu^{T}\mathrm{d}x\Bigg{)}\] \[=\limsup_{\varepsilon\to 0}\int_{\mathbb{T}^{d}}e^{\mu(u_{ \varepsilon}-u)^{2}}(Du_{\varepsilon}-Du)A(Du_{\varepsilon}-Du)^{T}+e^{\mu(u_ {\varepsilon}-u)^{2}}DuADu^{T}\mathrm{d}x. \tag{3.31}\]
The last equation follows from (3.26). On the other hand, using that \(A\) is elliptic and Young's inequality, \(\frac{1}{a}+a(u_{\varepsilon}-u)^{2}\geqslant 2|u_{\varepsilon}-u|\), we obtain
\[-\int_{\mathbb{T}^{d}}H_{\varepsilon}(Du_{\varepsilon})e^{\mu(u _{\varepsilon}-u)^{2}}(u_{\varepsilon}-u)\mathrm{d}x\leqslant C\int_{\mathbb{ T}^{d}}|Du_{\varepsilon}|^{2}e^{\mu(u_{\varepsilon}-u)^{2}}|u_{ \varepsilon}-u|\mathrm{d}x\] \[\leqslant\frac{C}{\mu}\int_{\mathbb{T}^{d}}e^{\mu(u_{\varepsilon }-u)^{2}}Du_{\varepsilon}ADu_{\varepsilon}^{T}\mathrm{d}x+2\mu\int_{\mathbb{ T}^{d}}e^{\mu(u_{\varepsilon}-u)^{2}}(u_{\varepsilon}-u)^{2}Du_{\varepsilon} ADu_{\varepsilon}^{T}\mathrm{d}x. \tag{3.32}\]
Using equations in (3.29), (3.30), (3.31), and (3.32) in (3.28), we deduce that
\[\limsup_{\varepsilon\to 0}\int_{\mathbb{T}^{d}}e^{\mu(u_{ \varepsilon}-u)^{2}}(Du_{\varepsilon}-Du)A(Du_{\varepsilon}^{T}-Du^{T}) \mathrm{d}x\] \[\leqslant\limsup_{\varepsilon\to 0}\frac{C}{\mu}\int_{\mathbb{T}^{d}}e^{ \mu(u_{\varepsilon}-u)^{2}}(Du_{\varepsilon}-Du)A(Du_{\varepsilon}-Du)^{T} \mathrm{d}x+e^{\mu(u_{\varepsilon}-u)^{2}}DuADu^{T}\mathrm{d}x. \tag{3.33}\]
Because \(e^{\mu(u_{\varepsilon}-u)^{2}}\geqslant 1\), we have
\[\limsup_{\varepsilon\to 0}\int_{\mathbb{T}^{d}}(Du_{ \varepsilon}-Du)A(Du_{\varepsilon}-Du)^{T}\mathrm{d}x\] \[\leqslant\limsup_{\varepsilon\to 0}\int_{\mathbb{T}^{d}}e^{\mu(u_{ \varepsilon}-u)^{2}}(Du_{\varepsilon}-Du)A(Du_{\varepsilon}^{T}-Du^{T}) \mathrm{d}x. \tag{3.34}\]
Notice that by the dominated convergence theorem, we have
\[\int_{\mathbb{T}^{d}}e^{\mu(u_{\varepsilon}-u)^{2}}DuADu^{T}\mathrm{d}x\to\int_{ \mathbb{T}^{d}}DuADu^{T}\mathrm{d}x.\]
Relying on this and using (3.34) in (3.33), for \(\mu\) large enough, we get
\[\limsup_{\varepsilon\to 0}\frac{1}{2}\int_{\mathbb{T}^{d}}(Du_{\varepsilon}-Du)A( Du_{\varepsilon}-Du)^{T}\mathrm{d}x\leqslant\frac{C}{\mu}\int_{\mathbb{T}^{d}} DuADu^{T}\mathrm{d}x.\]
Because \(\mu\) is arbitrary, this implies (3.27).
Now, we are ready to pass to the limit in the weak form of (3.19); that is, to take the limit in the following equation
\[\int_{\mathbb{T}^{d}}DvADu_{\varepsilon}^{T}+\big{(}H_{\varepsilon}(x,Du_{ \varepsilon})+V_{\overline{H}}(x)-g(-u_{\varepsilon})\big{)}\,v\mathrm{d}x=0, \tag{3.35}\]
for all \(v\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\). First, note that for all \(v\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\)
\[\left|\int_{\mathbb{T}^{d}}DvADu_{\varepsilon}^{T}-DvADu^{T} \mathrm{d}x\right| \leqslant\int_{\mathbb{T}^{d}}\left|DvA(Du_{\varepsilon}^{T}-Du^{ T})\right|\mathrm{d}x\] \[\leqslant||Dv|A|||_{L^{2}(\mathbb{T}^{d})}||Du_{\varepsilon}-Du|| _{L^{2}(\mathbb{T}^{d})}\to 0, \tag{3.36}\]
as \(\varepsilon\to 0\). Let \(\delta,M>0\). Because \(Du_{\epsilon}\to Du\) strongly in \(L^{2}\), we have
\[\int_{\mathbb{T}^{d}}|H_{\varepsilon}(x,Du_{\varepsilon})-H_{ \varepsilon}(x,Du)|\,\mathrm{d}x \leqslant\int_{|Du-Du_{\epsilon}|<\delta\wedge|Du|<M}|H_{ \varepsilon}(x,Du_{\varepsilon})-H_{\varepsilon}(x,Du)|\,\mathrm{d}x\] \[+\int_{|Du-Du_{\epsilon}|\geqslant\delta\vee|Du|\geqslant M}|H_{ \varepsilon}(x,Du_{\varepsilon})-H_{\varepsilon}(x,Du)|\,\mathrm{d}x.\]
Because \(Du\) and \(Du_{\epsilon}\) take values on a compact set for \(|Du-Du_{\epsilon}|<\delta\wedge|Du|<M\), we have
\[\int_{|Du-Du_{\epsilon}|<\delta\wedge|Du|<M}|H_{\varepsilon}(x,Du_{\varepsilon })-H_{\varepsilon}(x,Du)|\,\mathrm{d}x\to 0,\]
as \(\varepsilon\to 0\). Moreover,
\[\int_{|Du-Du_{\epsilon}|\geqslant\delta\,\vee\,|Du|\geqslant M}|H_{\varepsilon}(x,Du_{\varepsilon})-H_{\varepsilon}(x,Du)|\,\mathrm{d}x\leqslant C\int_{|Du-Du_{\epsilon}|\geqslant\delta\,\vee\,|Du|\geqslant M}|Du_{\varepsilon}|^{2}+|Du|^{2}\mathrm{d}x.\]
Therefore,
\[\limsup_{\varepsilon\to 0}\int_{\mathbb{T}^{d}}|H_{ \varepsilon}(x,Du_{\varepsilon})-H_{\varepsilon}(x,Du)|\,\mathrm{d}x\] \[\leqslant C\limsup_{\varepsilon\to 0}\int_{|Du-Du_{\epsilon}| \geqslant\delta\vee|Du|\geqslant M}|Du_{\varepsilon}|^{2}+|Du|^{2}\mathrm{d}x. \tag{3.37}\] \[\leqslant C\limsup_{\varepsilon\to 0}\int_{|Du-Du_{\epsilon}| \geqslant\delta\vee|Du|\geqslant M}|Du_{\varepsilon}-Du|^{2}+2|Du|^{2} \mathrm{d}x.\]
Observe that the dominated convergence theorem implies
\[\limsup_{\varepsilon\to 0}\int_{|Du-Du_{\epsilon}|>\delta\vee|Du|>M}|Du|^{2} \mathrm{d}x=\int_{|Du|>M}|Du|^{2}\mathrm{d}x.\]
Furthermore, combining this with the dominated convergence theorem again yields
\[\limsup_{\delta\to 0,M\to\infty}\limsup_{\varepsilon\to 0}\int_{|Du- Du_{\epsilon}|>\delta\vee|Du|>M}|Du|^{2}\mathrm{d}x \tag{3.38}\] \[=\limsup_{M\to\infty}\int_{|Du|>M}|Du|^{2}\mathrm{d}x=0.\]
On the other hand,
\[\int_{|Du-Du_{\epsilon}|>\delta\vee|Du|>M}|Du_{\varepsilon}-Du|^{2}\mathrm{d }x\leqslant\int_{\mathbb{T}^{d}}|Du_{\varepsilon}-Du|^{2}\mathrm{d}x\to 0. \tag{3.39}\]
Consequently, combining (3.38) with (3.39) in (3.37), we conclude that
\[\int_{\mathbb{T}^{d}}|H_{\varepsilon}(x,Du_{\varepsilon})-H_{ \varepsilon}(x,Du)|\,\mathrm{d}x\to 0,\]
as \(\varepsilon\to 0\).
From the definition of \(H_{\varepsilon}\) (see (3.16)), we have
\[\left|H_{\varepsilon}(x,Du)-\frac{1}{2}DuADu^{T}\right|\leqslant\frac{1}{2}DuADu ^{T},\]
which with the dominated convergence theorem imply
\[\int_{\mathbb{T}^{d}}\left|H_{\varepsilon}(x,Du)-\frac{1}{2}DuADu^{T}\right| \mathrm{d}x\to 0,\]
as \(\varepsilon\to 0\). Thus, we have
\[\begin{split}\int_{\mathbb{T}^{d}}\left|H_{\varepsilon}(x,Du_{ \varepsilon})-\frac{1}{2}DuADu^{T}\right|\mathrm{d}x&\leqslant \int_{\mathbb{T}^{d}}\left|H_{\varepsilon}(x,Du_{\varepsilon})-H_{\varepsilon }(x,Du)\right|\mathrm{d}x\\ &+\int_{\mathbb{T}^{d}}\left|\frac{1}{2}DuADu^{T}-H_{\varepsilon }(x,Du)\right|\mathrm{d}x\to 0.\end{split} \tag{3.40}\]
Finally, Rellich-Kondrachov Theorem and (3.27) imply that \(u_{\varepsilon}\to u\) strongly in \(H^{1}(\mathbb{T}^{d})\). Therefore, because \(u_{\varepsilon}\) and \(u\) are bounded in \(L^{\infty}\) and \(g\) is locally Lipschitz, we get
\[\int_{\mathbb{T}^{d}}\left|g(-u_{\varepsilon})-g(-u)\right|\mathrm{d}x \leqslant C\int_{\mathbb{T}^{d}}\left|u_{\varepsilon}-u\right|\mathrm{d}x \to 0. \tag{3.41}\]
Using (3.36), (3.40) and (3.41) in (3.35) and letting \(\varepsilon\to 0\), we conclude that
\[\int_{\mathbb{T}^{d}}DvADu^{T}+\left(\frac{1}{2}DuADu^{T}+V_{\overline{H}}(x)- g(-u)\right)v\mathrm{d}x=0.\qed\]
**Proposition 3.6**.: _Suppose that Assumptions 1-2 hold and let \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) be a solution to (3.1). Then, \(u\in H^{1}(\mathbb{T}^{d})\cap C^{0,\gamma}(\mathbb{T}^{d})\) for some \(\gamma>0\)._
Proof.: Because \(u\) is a weak solution to (3.1) it satisfies
\[\int_{\mathbb{T}^{d}}DuA(x)D\varphi^{T}\mathrm{d}x=-\frac{1}{2}\int_{\mathbb{T }^{d}}DuA(x)Du^{T}\varphi\mathrm{d}x+\int_{\mathbb{T}^{d}}(g(-u)-V_{\overline{ H}})\varphi\mathrm{d}x, \tag{3.42}\]
for all \(\varphi\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\). Taking into account that \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\), we set \(\varphi=e^{-\frac{u}{2}}\psi\) with \(\psi\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) as a test function in (3.42) to get
\[\begin{split}\int_{\mathbb{T}^{d}}\Big{(}-\tfrac{1}{2}e^{-\frac{u }{2}}\psi Du+e^{-\frac{u}{2}}D\psi\Big{)}& A(x)Du^{T}+\Big{(} \frac{1}{2}DuA(x)Du^{T}-g(-u)+V_{\overline{H}}\Big{)}e^{-\frac{u}{2}}\psi \mathrm{d}x\\ &=\int_{\mathbb{T}^{d}}D\psi e^{-\frac{u}{2}}A(x)Du^{T}-\Big{(}g( -u)-V_{\overline{H}}\Big{)}e^{-\frac{u}{2}}\psi\mathrm{d}x=0.\end{split}\]
Because \(u\in L^{\infty}(\mathbb{T}^{d})\), the preceding equation can be written as follows
\[\int_{\mathbb{T}^{d}}D\psi A_{e}(x)Du^{T}-f_{e}\psi\mathrm{d}x=0, \tag{3.43}\]
where \(A_{e}(x)=e^{-\frac{u}{2}}A(x)\in(L^{\infty}(\mathbb{T}^{d}))^{2d}\) and \(f_{e}=\Big{(}g(-u)-V_{\overline{H}}\Big{)}e^{-\frac{u}{2}}\in L^{\infty}( \mathbb{T}^{d})\). Let \(u^{*}\) be the periodic extension of \(u\) to \(\mathbb{R}^{d}\). Then, from (3.43), we have
\[\int_{\mathbb{R}^{d}}D\psi A_{e}(x)(Du^{*})^{T}-f_{e}\psi\mathrm{d}x=0, \tag{3.44}\]
for all \(\psi\in H^{1}_{0}(\mathbb{R}^{d})\). Note that if \(u^{*}\in C^{0,\alpha}(\Omega_{2})\) for some \(\Omega_{2}\subset\mathbb{R}^{d}\) satisfying \(\Omega_{1}=[-1,1]^{d}\subset\Omega_{2}\), then, \(u\in C^{0,\alpha}(\mathbb{T}^{d})\). So, to prove Holder regularity of \(u^{*}\) in a
domain containing \(\Omega_{1}\), we fix smooth domains \(\Omega_{1}^{\prime}\) and \(\Omega_{2}\) such that \(\Omega_{1}\subset\Omega_{1}^{\prime}\subset\Omega_{2}\). Using these sets, we define
\[\eta_{2}(x)=\begin{cases}1&x\in\Omega_{1}\\ \zeta_{2}(x)&x\in\Omega_{2}\setminus\Omega_{1}\\ 0&x\in\mathbb{R}^{d}\setminus\Omega_{2},\end{cases} \tag{3.45}\]
where \(\zeta_{2}\in C_{0}^{2,\alpha}(\mathbb{R}^{d})\) is such that \(\zeta_{2}(x)>\frac{1}{2}\) for all \(x\in\Omega_{1}^{\prime}\), \(\eta_{2}\in C_{0}^{2,\alpha}(\mathbb{R}^{d})\), \(|D\eta_{2}|\leqslant C\). Taking \(\psi(x)=\varphi(x)\eta_{2}(x)\), \(\varphi\in H^{1}(\mathbb{R}^{d})\) as a test function in (3.44), we get
\[\int_{\mathbb{R}^{d}}D\varphi A_{e}(D(\eta_{2}u^{*}))^{T}+\varphi D\eta_{2}A_ {e}(Du^{*})^{T}-D\varphi u^{*}A_{e}(D\eta_{2})^{T}-f_{e}\eta_{2}\varphi\, \mathrm{d}x=0. \tag{3.46}\]
Note that
\[\int_{\mathbb{R}^{d}}\varphi D\eta_{2}A_{e}(Du^{*})^{T}\,\mathrm{d}x=-\int_{ \mathbb{R}^{d}}\operatorname{div}(\varphi D\eta_{2}A_{e})u^{*}\,\mathrm{d}x \tag{3.47}\]
Substituting (3.47) into (3.46), we get
\[\int_{\mathbb{R}^{d}}D\varphi A_{e}D(\eta_{2}u^{*})^{T}-D\varphi(2u^{*}A_{e}(D \eta_{2})^{T})-(f_{e}\eta_{2}+u^{*}\operatorname{div}(D\eta_{2}A_{e}))\varphi \,\mathrm{d}x=0.\]
Denoting \(u_{\eta}=\eta_{2}u^{*}\), we notice that \(u_{\eta}\) solves the following boundary value problem
\[\begin{cases}-\operatorname{div}(A_{e}Du_{\eta}^{T})+\operatorname{div}(f_{b} )-f_{h}=0&\text{in}\quad\Omega_{2}\\ u_{\eta}=0,&\text{on}\quad\partial\Omega_{2}\end{cases} \tag{3.48}\]
where
\[f_{b}=2u^{*}A_{e}(D\eta_{2})^{T},\quad f_{h}=f_{e}\eta_{2}+u^{*} \operatorname{div}(D\eta_{2}A_{e}).\]
Recalling the definitions of \(\eta_{2}\) and \(A_{e}\), and that \(u^{*}\) is bounded on \(\Omega_{2}\), we deduce that \(A_{e}\) is uniformly elliptic and \(A_{e}\in(L^{\infty}(\Omega_{2}))^{2d}\), \(f_{b}\in(L^{\infty}(\Omega_{2}))^{d}\), \(f_{h}\in L^{\infty}(\Omega_{2})\). Therefore, because \(u_{\eta}\in H^{1}(\Omega_{2})\cap L^{\infty}(\Omega_{2})\) solves the elliptic equation in (3.48), the classic regularity results for linear elliptic equations (see, for example, Theorem 1.1 in Chapter 4 of [22]) give \(u_{\eta}\in C^{0,\gamma}(\Omega_{2})\) for some \(\gamma>0\). Recalling that \(\eta_{2}>\frac{1}{2}\) on \(\Omega_{1}^{\prime}\) and \(\eta_{2}=1\) on \(\Omega_{1}\), we deduce that \(u^{*}\in C^{0,\gamma}(\Omega_{1})\); hence \(u\in C^{0,\gamma}(\mathbb{T}^{d})\).
## 4. Regularity Analysis
In this section, we investigate the regularity of linear elliptic boundary value problems and prove Holder continuity of the corresponding solutions. Here, \(\Omega\) is a bounded domain with a smooth boundary in \(\mathbb{R}^{d}\).
We recall the definitions of Morrey and Campanato spaces.
**Definition 4.1** (Morrey spaces).: _Let \(\Omega\subset\mathbb{R}^{d}\) be an open set with smooth boundary and let \(B(x,r)\) be any ball centered at \(x\) with radius \(r\). We say that a locally integrable function \(f\) belongs to the Morrey space \(L^{p,\lambda}(\Omega)\), \(1\leqslant p<\infty\), \(0<\lambda<d\) if_
\[\|f\|_{L^{p,\lambda}(\Omega)}=\sup_{x\in\Omega,r>0}\left(\frac{1}{r^{\lambda}} \int_{B(x,r)\cap\Omega}|f(y)|^{p}dy\right)^{\frac{1}{p}}<\infty.\]
_We say that a locally integrable function \(f\) belongs to the weak Morrey space \(L^{p,\lambda}_{w}(\Omega)\) if there exists \(c\geqslant 0\) such that_
\[\sup_{t>0}\,t^{p}\,\big|\{y\in\Omega:|f(y)|>t\}\cap B(x,r)\big|\leqslant c\,r^{\lambda}\qquad\text{for all }x\in\Omega,\ r>0.\]
_The best constant in the previous inequality will be denoted by \(\|f\|_{L^{p,\lambda}_{w}(\Omega)}\)._
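A standard example, for orientation: if \(0<\beta<\frac{d}{p}\), then \(f(y)=|y|^{-\beta}\) belongs to \(L^{p,d-p\beta}(B_{1})\). Indeed, if \(r\geqslant\frac{|x|}{2}\), then \(B(x,r)\subset B(0,3r)\) and
\[\int_{B(x,r)\cap B_{1}}|y|^{-p\beta}dy\leqslant\int_{B(0,3r)}|y|^{-p\beta}dy=Cr^{d-p\beta},\]
while if \(r<\frac{|x|}{2}\), then \(|y|\geqslant\frac{|x|}{2}\geqslant r\) on \(B(x,r)\), so the integral is at most \(r^{d}\,r^{-p\beta}\) up to a constant.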
**Definition 4.2** (Campanato spaces).: _Let \(\Omega\subset\mathbb{R}^{d}\) be an open set with smooth boundary and let \(B(x,r)\) be any ball centered at \(x\) with radius \(r\). We say that a locally integrable function \(f\) belongs to the Campanato space \(\mathcal{L}^{p,\lambda}(\Omega)\), \(1\leqslant p<\infty,\)\(\lambda>0\) if_
\[\|f\|_{\mathcal{L}^{p,\lambda}(\Omega)}=\sup_{x\in\Omega,r>0}\left(\frac{1}{r^{\lambda}}\int_{B(x,r)\cap\Omega}|f(y)-f_{B(x,r)}|^{p}dy\right)^{\frac{1}{p}}<\infty\]
_where \(f_{B(x,r)}=\frac{1}{|B(x,r)|}\int_{B(x,r)}f(x)\mathrm{d}x\) denotes the integral average of \(f\) on \(B(x,r)\)._
Campanato and Morrey spaces are closely related and valuable tools for analysing partial differential equations.
Here, we prove Holder continuity for solutions to the following problem.
**Problem 2**.: _Let \(0<\alpha<1\) and \(\lambda\in(d-2,d)\). Suppose that \(f_{1}\in C^{0,\alpha}(\Omega)\), \(f_{2},f_{3}\in(L^{2,\lambda}(\Omega))^{d}\) and \(f_{4}\in L^{1,\lambda}(\Omega)\). Find \(v\in H^{1}_{0}(\Omega)\) such that_
\[\begin{cases}-\operatorname{div}(ADv^{T})+f_{1}v+\operatorname{div}(f_{2})+f_ {3}Dv+f_{4}=0&\text{in}\quad\Omega\\ v=0&\text{on}\quad\partial\Omega.\end{cases} \tag{4.1}\]
### Weak Solutions
We start with some results on Morrey spaces crucial to Problem 2.
**Lemma 4.3**.: _Let \(d\geqslant 3\). Suppose that \(u\in H^{1}(\Omega)\) and that \(Du\in(L^{2,\nu}(\Omega))^{d}\) for some \(0\leqslant\nu<d-2\). Then, \(u\in L^{2,2+\nu}(\Omega)\)._
Proof.: Let \(p\geqslant 1\) be such that \(\frac{1}{2}=\frac{1}{p}-\frac{1}{d}\). By Poincare and Holder inequalities
\[\int_{B_{R}}|u-\bar{u}_{R}|^{2}\mathrm{d}x\leqslant CR^{d+2-2d/p}\left(\int_{ B_{R}}|Du|^{p}\mathrm{d}x\right)^{\frac{2}{p}}\leqslant CR^{2}\int_{B_{R}}|Du|^{2} \mathrm{d}x.\]
Because \(Du\in(L^{2,\nu}(\Omega))^{d}\), \(u\) belongs to the Campanato space \(\mathcal{L}^{2,2+\nu}(\Omega)\), where \(2+\nu<d\). Recalling that for \(1\leqslant k<d\) the Campanato space \(\mathcal{L}^{2,k}(\Omega)\) coincides with the Morrey space \(L^{2,k}(\Omega)\) (see [26] or [29] and [30]), we conclude the proof.
The following Lemma is proved in [12] (see also [29] and [30]).
**Lemma 4.4**.: _[_12_, Lemma 4.1]_ _Let \(\lambda\in(d-2,d)\), \(\nu\in[0,d-2)\), suppose that \(f\in L^{2,\lambda}(\Omega)\), \(u\in L^{2,2+\nu}(\Omega)\), and \(Du\in L^{2,\nu}(\Omega)\). Then, \(fu\in L^{2,2+\nu+\lambda-d}(\Omega)\) and_
\[||fu||_{L^{2,2+\nu+\lambda-d}}\leqslant C||f||_{L^{2,\lambda}}(||Du||_{L^{2, \nu}}+||u||_{L^{2,2+\nu}}).\]
Next, relying on the previous two Lemmas, we prove finiteness of integrals that arise in Definition 4.6.
**Proposition 4.5**.: _Consider the setting of Problem 2 and suppose that Assumption 1 holds. Let \(v\) be a corresponding solution to Problem 2. Then, for all \(\varphi\in H^{1}_{0}(\Omega)\) the following integral is finite_
\[\int_{\Omega}D\varphi(ADv^{T}-f_{2})+(f_{1}v+f_{3}Dv+f_{4})\varphi\,dx. \tag{4.2}\]
Proof.: Assumption 1 implies that the first term of (4.2) is bounded. Therefore, it remains to prove that
\[\int_{\Omega}\left(f_{1}v\varphi+f_{3}Dv\varphi+f_{4}\varphi\right)\mathrm{d}x \tag{4.3}\]
is finite for all \(\varphi\in H^{1}_{0}(\Omega)\). The first term is finite because \(f_{1}\in C^{0,\alpha}(\Omega)\) and \(\varphi,v\in H^{1}(\Omega)\). Moreover, because \(\varphi,v\in H^{1}(\Omega)\), Lemma 4.3 yields \(v,\varphi\in L^{2,2}(\Omega)\). Then, taking into account that \(f_{3}\in(L^{2,\lambda}(\Omega))^{d}\), by Lemma 4.4 we get \(\varphi f_{3}\in(L^{2,4+\lambda-d}(\Omega))^{d}\), which implies that \(\varphi f_{3}Dv\in L^{1}(\Omega)\). Accordingly, the second term in (4.3) is finite. It remains to prove that the third term in (4.3) is also finite. So, recalling that \(f_{4}\in L^{1,\lambda}(\Omega)\), we have \(|f_{4}|^{\frac{1}{2}}\in L^{2,\lambda}(\Omega)\). Hence, using Lemma 4.4, we obtain \(|f_{4}|^{\frac{1}{2}}\varphi\in L^{2}(\Omega)\), which yields \(f_{4}\varphi=\big(|f_{4}|^{\frac{1}{2}}\operatorname{sign}(f_{4})\big)\big(|f_{4}|^{\frac{1}{2}}\varphi\big)\in L^{1}(\Omega)\).
Proposition 4.5 ensures the following definition of weak solutions to (4.1) makes sense as all integrals are finite.
**Definition 4.6**.: _We say that \(v\in H^{1}_{0}(\Omega)\) solves (4.1) in the weak sense if for all \(\varphi\in H^{1}_{0}(\Omega)\) following equality holds_
\[\int_{\Omega}D\varphi(ADv^{T}-f_{2})+(f_{1}v+f_{3}Dv+f_{4})\varphi\mathrm{d}x=0.\]
### Preliminary results
We review here some preliminary results regarding equations of the following two forms:
\[-\operatorname{div}(ADw^{T})=-\operatorname{div}(f_{2}) \tag{4.4}\]
and
\[-\operatorname{div}(ADw^{T})=f. \tag{4.5}\]
The first result concerns (4.4).
**Theorem 4.7**.: [12, Theorem 3.3 ] Let \(v_{1}\in H^{1}\) solve (4.4) and suppose that Assumptions 1 and 3 hold. Further, assume that \(f_{2}\in(L^{2,\nu}(\Omega))^{d}\) with \(d-2<\nu<d\). Then, \(v_{1}\in C^{0,\alpha}(\Omega)\) for some \(0<\alpha<1\).
The following theorem is proven in [13].
**Theorem 4.8**.: [13, Theorem 4.4] Let \(w\in H^{1}(\Omega)\) be a very weak solution to (4.5) and \(f\in L^{1,\nu}(\Omega)\) with \(0<\nu<d-2\). Suppose that Assumptions 1 and 3 hold. Then, there exists a positive constant \(C\) such that
\[\|w\|_{L^{q_{\nu},\nu}_{w}(\Omega)}\leqslant C\,\|f\|_{L^{1,\nu}(\Omega)},\]
where \(\frac{1}{q_{\nu}}=1-\frac{2}{d-\nu}\).
As a consequence, we obtain the following result.
**Corollary 4.9**.: _[_24_, Remark 2]_ _Consider the setting of Theorem 4.8. Then, there exists a positive constant \(C\) such that_
\[||w||_{L^{p,\nu}(\Omega)}\leqslant C,\]
_where \(1\leqslant p<q_{\nu}\) with \(\frac{1}{q_{\nu}}=1-\frac{2}{d-\nu}\). Moreover,_
\[||w||_{L^{1,2+\nu}(\Omega)}\leqslant C.\]
Proof.: The first estimate follows from Theorem 4.8 and the following embeddings of Morrey spaces
\[L_{w}^{p,\lambda}(\Omega)\subset L^{q,\lambda}(\Omega),\quad 1\leqslant q<p.\]
Next, we prove the estimate for \(\|w\|_{L^{1,2+\nu}(\Omega)}\). Note that
\[\int_{B_{R}\cap\Omega}|w|\mathrm{d}x =\int_{0}^{+\infty}|\{x\in B_{R}\cap\Omega:|w|>t\}|\mathrm{d}t\] \[=\int_{0}^{\tau}|\{x\in B_{R}\cap\Omega:|w|>t\}|\mathrm{d}t+\int_ {\tau}^{+\infty}|\{x\in B_{R}\cap\Omega:|w|>t\}|\mathrm{d}t, \tag{4.6}\]
for all \(\tau>0\). On the other hand, from the definitions of Morrey, weak Morrey spaces, and Theorem 4.8, we have
\[t^{q_{\nu}}|\{x\in B_{R}\cap\Omega:|w|>t\}|\leqslant CR^{\nu}.\]
Consequently, by (4.6) and the fact, \(\nu<d-2\), we deduce
\[\int_{B_{R}\cap\Omega}|w|\mathrm{d}x\leqslant C\left(R^{d}\int_{0}^{\tau}\mathrm{d}t+R^{\nu}\int_{\tau}^{+\infty}t^{-q_{\nu}}\mathrm{d}t\right)\leqslant C\left(R^{d}\tau+\frac{R^{\nu}}{q_{\nu}-1}\tau^{1-q_{\nu}}\right).\]
Taking \(\tau=R^{-(d-\nu-2)}\) in the preceding inequality, we complete the proof.
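The exponents indeed balance: since \(\frac{1}{q_{\nu}}=1-\frac{2}{d-\nu}\), we have \(q_{\nu}-1=\frac{2}{d-\nu-2}\), so with \(\tau=R^{-(d-\nu-2)}\),
\[R^{d}\tau=R^{\nu+2},\qquad R^{\nu}\tau^{1-q_{\nu}}=R^{\nu+(d-\nu-2)(q_{\nu}-1)}=R^{\nu+2},\]
giving \(\int_{B_{R}\cap\Omega}|w|\mathrm{d}x\leqslant CR^{\nu+2}\), that is, \(\|w\|_{L^{1,2+\nu}(\Omega)}\leqslant C\).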
The following result provides Holder continuity for solutions to (4.5).
**Theorem 4.10**.: [13, Theorem 4.13 ] Let \(w\) be a solution to (4.5) and suppose that Assumptions 1 and 3 hold. Further, assume \(f\in L^{1,\nu}(\Omega)\) where \(d-2<\nu<d\). Then, \(w\in C^{0,\alpha}(\Omega)\) for some \(0<\alpha<1\).
### Regularity of Weak Solutions
In this section, we obtain Holder continuity of weak solutions to Problem 2. More precisely, if \(v\) solves (4.1), then \(v\in C^{0,\alpha}(\Omega)\) for some \(\alpha\in(0,1)\). This is achieved by combining results from the previous sections with the following lemmas on elliptic PDEs in Morrey spaces (see, e.g., [13], [29] and [30]).
We start by splitting the equation in (4.1) into two problems and analyzing their solutions separately.
**Remark 4.11**.: Consider the setting of Problem 2. Let \(v\in H^{1}(\Omega)\) solve (4.1) and let \(v_{1}\in H^{1}(\Omega)\) solve (4.4). Then, \(v_{2}=v-v_{1}\) solves
\[-\operatorname{div}(ADv_{2}^{T})=-f_{1}v-f_{3}Dv-f_{4}. \tag{4.7}\]
The following Lemma provides an interpolation result for Morrey spaces.
**Lemma 4.12**.: _[_24_, Lemma 3]_ _Let \(0\leqslant\nu,\mu<d\), \(u\in L^{\frac{2d}{d-2},\nu}(\Omega)\cap L^{1,\mu}(\Omega)\). Then, \(u\in L^{2,\theta}(\Omega)\), where \(\theta=\frac{4\mu+\nu(d-2)}{d+2}\)._
By applying the Holder inequality, we get the next lemma, which provides estimates for products of functions in Morrey spaces.
**Lemma 4.13**.: _Suppose that \(u\in L^{2,\nu}(\Omega)\) and \(f\in L^{2,\mu}(\Omega)\). Then, \(fu\in L^{1,\frac{\nu+\mu}{2}}(\Omega)\) and_
\[\|fu\|_{L^{1,\frac{\nu+\mu}{2}}}\leqslant\|f\|_{L^{2,\mu}}\|u\|_{L^{2,\nu}}.\]
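The proof is a direct application of the Cauchy-Schwarz inequality on each ball:
\[\int_{B(x,r)\cap\Omega}|fu|\,dy\leqslant\left(\int_{B(x,r)\cap\Omega}|f|^{2}dy\right)^{\frac{1}{2}}\left(\int_{B(x,r)\cap\Omega}|u|^{2}dy\right)^{\frac{1}{2}}\leqslant\|f\|_{L^{2,\mu}}\|u\|_{L^{2,\nu}}\,r^{\frac{\mu+\nu}{2}}.\]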
The following result is a Morrey space version of the Sobolev theorem.
**Lemma 4.14**.: _Suppose that \(u\in H^{1}(\Omega)\) and \(Du\in L^{2,\nu}(\Omega)\) for \(0\leqslant\nu<d-2\). Then, \(u\in L^{\frac{2d}{d-2},\,\nu\frac{d}{d-2}}(\Omega)\)._
Proof.: By Poincare inequality (see Theorem 4.9 in [11]) for \(2^{*}=\frac{2d}{d-2}\), we have
\[\left(\frac{1}{|B_{R}|}\int_{B_{R}}|u-\bar{u}_{R}|^{2^{*}}\mathrm{ d}x\right)^{\frac{1}{2^{*}}} \leqslant CR\left(\frac{1}{|B_{R}|}\int_{B_{R}}|Du|^{2}\mathrm{d}x \right)^{\frac{1}{2}}\] \[\leqslant CR^{\frac{2-d+\nu}{2}}\|Du\|_{L^{2,\nu}}.\]
Hence,
\[\int_{B_{R}}|u-\bar{u}_{R}|^{2^{*}}\mathrm{d}x \leqslant CR^{\nu\frac{2^{*}}{2}}\|Du\|_{L^{2,\nu}}^{2^{*}}.\]
Consequently,
\[\|u\|_{\mathcal{L}^{2^{*},\nu\frac{2^{*}}{2}}}\leqslant C\|Du\|_{L^{2,\nu}}.\]
This implies that \(u\in\mathcal{L}^{\frac{2d}{d-2},\nu\frac{d}{d-2}}(\Omega)\). Recalling that \(\nu\frac{d}{d-2}<d\), we conclude the proof by using well-known relations between Morrey and Campanato spaces (see Theorem 4.3 in [26]).
In the next lemma, we bound the norm of the gradient of the solution in terms of the norm of the solution itself, via the Caccioppoli-type inequalities (4.8) and (4.9).
**Lemma 4.15**.: _Consider the setting of Problem 2 and suppose that Assumption 1 holds. Let \(v_{1}\) and \(v_{2}\) be solutions to (4.4) and (4.7), respectively. Then, there exists a constant \(C\) such that for all \(R>0\)_
\[\int_{B_{R}}|Dv_{1}|^{2}\mathrm{d}x\leqslant\frac{C}{R^{2}}\int_{B_{2R}}|v_{1} -\bar{v}_{1}|^{2}\mathrm{d}x+C\int_{B_{2R}}|f_{2}|^{2}\mathrm{d}x, \tag{4.8}\]
_and_
\[\int_{B_{R}}|Dv_{2}|^{2}\mathrm{d}x \leqslant C\int_{B_{R}}|Dv_{1}|^{2}\mathrm{d}x+\frac{C}{R^{2}} \int_{B_{2R}}(|v_{1}|^{2}+|v_{2}|^{2})\mathrm{d}x\] \[+C\int_{B_{2R}}(|f_{3}|^{2}+|f_{4}|)(|v_{1}|^{2}+|v_{2}|^{2}) \mathrm{d}x. \tag{4.9}\]
Proof.: By the definition of weak solution to (4.4), we have
\[\int_{\Omega}Dv_{1}A(x)D\psi^{T}\mathrm{d}x=\int_{\Omega}f_{2}D\psi\mathrm{d}x, \tag{4.10}\]
for all \(\psi\in H^{1}(\Omega).\) Let \(\eta:\mathbb{R}^{d}\to\mathbb{R}_{0}^{+}\) be a smooth \(B_{R}-B_{2R}\) cut-off function whose gradient is controlled by \(C/R\), where \(C\) is a universal constant. Setting
\[\psi_{1}(x)=\eta^{2}(x)(v_{1}(x)-\bar{v}_{1})\]
and taking \(\psi=\psi_{1}\) in (4.10), we obtain
\[\int_{B_{2R}}\eta^{2}Dv_{1}A(x)Dv_{1}^{T}\mathrm{d}x =-2\int_{B_{2R}}\eta(v_{1}(x)-\bar{v}_{1})Dv_{1}A(x)D\eta^{T} \mathrm{d}x\] \[+\int_{B_{2R}}\eta f_{2}\eta Dv_{1}\mathrm{d}x+2\int_{B_{2R}}(v_{ 1}(x)-\bar{v}_{1})\eta f_{2}D\eta\mathrm{d}x.\]
Estimating the right-hand side with Young's inequality and absorbing the gradient term into the left-hand side yields (4.8). The inequality (4.9) is obtained analogously, by testing the weak form of (4.7) with \(\eta^{2}v_{2}\): after Young's inequality, the lower-order terms are controlled by \(\frac{C}{R^{2}}\int_{B_{2R}}(|v_{1}|^{2}+|v_{2}|^{2})\mathrm{d}x\) together with
\[C\int_{B_{2R}}(|f_{3}|^{2}+|f_{4}|)\eta^{2}(|v_{1}|^{2}+|v_{2}|^{2})\mathrm{d}x.\]
This completes the proof.
Now, we are ready to prove Holder continuity for weak solutions to (4.1).
**Theorem 4.16**.: Let \(v\) solve Problem 2 and suppose that Assumptions 1 and 3 hold. Then, \(v\in C^{0,\alpha}(\Omega)\) for some \(0<\alpha<1\).
Proof.: Let \(v_{1}\) be a solution to (4.4). By Remark 4.11, \(v_{2}=v-v_{1}\) solves (4.7). We prove Holder continuity for \(v\) by showing it for \(v_{1}\) and \(v_{2}\).
We split the proof into claims.
**Claim 1:** Estimate for \(Dv_{1}\) in \(L^{2}\)
\[\int_{B_{R}}|Dv_{1}|^{2}\mathrm{d}x\leqslant C\left(R^{d-2+2\alpha}+R^{\lambda }\right)\leqslant CR^{\tau}, \tag{4.13}\]
which implies \(Dv_{1}\in L^{2,\tau}(\Omega)\), where \(\tau=\min\{d-2+2\alpha,\lambda\}\).
First, we obtain a uniform estimate for the gradient of \(v_{1}\). For that, we estimate the right-hand side in (4.8). Since \(f_{2}\in(L^{2,\lambda}(\Omega))^{d}\) with \(d-2<\lambda<d\), by Theorem 4.7, we have \(v_{1}\in C^{0,\alpha}(\Omega)\) for some \(0<\alpha<1\). Thus, in any ball with radius \(R\), we have
\[|v_{1}-\bar{v}_{1}|\leqslant CR^{\alpha}.\]
Using this, and recalling that \(f_{2}\in(L^{2,\lambda}(\Omega))^{d}\), from (4.8) we deduce **Claim 1**.
**Claim 2:** The following estimate for \(v_{2}\) holds true
\[\|v_{2}\|_{L^{1,2+\nu_{0}}(\Omega)}\leqslant C, \tag{4.14}\]
where \(\nu_{0}=\min\{\lambda,\frac{\lambda}{2},\frac{d+2}{2}\}=\frac{\lambda}{2}\).
To establish the claim, we set
\[f_{0}(x)=-f_{1}(x)v(x)-f_{3}(x)Dv(x)-f_{4}(x).\]
Note that Remark 4.11 yields \(-\operatorname{div}(ADv_{2}^{T})=f_{0}(x)\). Next, we estimate all terms of \(f_{0}\). Lemma 4.3 and the embedding properties of Morrey spaces imply \(v\in L^{2,2}(\Omega)\), and Lemma 4.13 then gives \(f_{3}Dv\in L^{1,\frac{\lambda}{2}}(\Omega)\). Using these estimates, by Corollary 4.9 we obtain (4.14).
Because \(v_{2}\in H^{1}(\Omega)\), Lemma 4.14 implies \(v_{2}\in L^{2^{*}}(\Omega)\). This estimate combined with Lemma 4.12, yields
\[\|v_{2}\|_{L^{2,2+\tau_{0}}(\Omega)}\leqslant C, \tag{4.15}\]
where \(\tau_{0}=\frac{2(2\nu_{0}-(d-2))}{d+2}\). Note that \(\tau_{0}\) is strictly positive because \(\nu_{0}=\frac{\lambda}{2}>\frac{d-2}{2}\).
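For the record, \(\tau_{0}\) comes from Lemma 4.12 applied with \(\nu=0\) (since \(v_{2}\in L^{2^{*}}(\Omega)=L^{\frac{2d}{d-2},0}(\Omega)\)) and \(\mu=2+\nu_{0}\):
\[\theta=\frac{4(2+\nu_{0})}{d+2}=2+\frac{2(2\nu_{0}-(d-2))}{d+2}=2+\tau_{0}.\]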
**Claim 3:** For the gradient of \(v_{2}\), we have
\[\int_{B_{R}}|Dv_{2}|^{2}\mathrm{d}x\leqslant CR^{\rho_{1}}, \tag{4.16}\]
where \(\rho_{1}=\min\{\tau_{0},\mu_{0},d-2+2\alpha\}\).
To establish the claim, we note that \(f_{3}\in(L^{2,\lambda}(\Omega))^{d}\), \(|f_{4}|^{\frac{1}{2}}\in L^{2,\lambda}(\Omega)\), \(v_{1},v_{2}\in L^{2,2}(\Omega)\), and \(Dv_{1},Dv_{2}\in L^{2}(\Omega)\). By using Lemma 4.4, we estimate the last term in (4.9) as follows
\[\int_{B_{2R}}(|f_{3}|^{2}+|f_{4}|)(|v_{1}|^{2}+|v_{2}|^{2})\mathrm{d}x \leqslant CR^{\mu_{0}}, \tag{4.17}\]
where \(\mu_{0}=\lambda-d+2\). Using (4.13), (4.15) and (4.17) in (4.9), we deduce
\[\int_{B_{R}}|Dv_{2}|^{2}\mathrm{d}x\leqslant C\left(R^{d-2+2\alpha}+R^{d-2}+R ^{\tau_{0}}+R^{\mu_{0}}\right),\]
which completes the proof.
**Claim 4:** For \(v\), the following estimate holds
\[\|v\|_{L^{2,2+\rho_{1}}(\Omega)}+\|Dv\|_{L^{2,\rho_{1}}(\Omega)}\leqslant C. \tag{4.18}\]
By (4.16) in **Claim 3** and Lemma 4.14, we deduce that \(v_{2}\in L^{2,2+\rho_{1}}(\Omega)\). Using this, together with the facts that \(v_{1}\in L^{2,d}(\Omega)\), \(Dv_{1}\in L^{2,\tau}(\Omega)\) (by **Claim 1**), and \(Dv_{2}\in L^{2,\rho_{1}}(\Omega)\), we obtain (4.18).
Next, using **Claims 1-4**, we improve the estimate of \(\int_{B_{R}}|Dv_{2}|^{2}\,dx\) from **Claim 3**. By the Caccioppoli inequality (4.9), we have
\[\begin{split}\int_{B_{R}}|Dv_{2}|^{2}\,dx\leqslant& \int_{B_{R}}|Dv_{1}|^{2}\,dx+\frac{1}{R^{2}}\int_{B_{2R}}(|v_{1}|^{2}+|v_{2}|^{ 2})\,dx\\ &+\int_{B_{2R}}(|f_{3}|^{2}+|f_{4}|)(|v_{1}|^{2}+|v_{2}|^{2})\,dx.\end{split} \tag{4.19}\]
By **Claim 1** and the Holder and Sobolev inequalities, we have \(\int_{B_{2R}}|v_{1}|^{2}\,dx\leqslant CR^{\tau+2}\), while by **Claim 3** and the same inequalities it follows that \(\int_{B_{2R}}|v_{2}|^{2}\,dx\leqslant CR^{\rho_{1}+2}\).
For the terms with \(|f_{4}|\) of the last integral in (4.19) by Fefferman-Poincare inequality (see [8]), we have
\[\int_{B_{2R}}|f_{4}|v_{1}^{2}\,\mathrm{d}x\leqslant CR^{\lambda-d+2}\int_{B_{2R}}|Dv_{1}|^{2}\,\mathrm{d}x\leqslant CR^{\lambda-d+2+\tau}\]
and
\[\int_{B_{2R}}|f_{4}|v_{2}^{2}\,\mathrm{d}x\leqslant CR^{\lambda-d+2}\int_{B_{2 R}}|Dv_{2}|^{2}\,\mathrm{d}x.\]
In the same way, we proceed with \(|f_{3}|^{2}\) in place of \(|f_{4}|\). Combining the preceding estimates, we get
\[\int_{B_{R}}|Dv_{2}|^{2}\,dx\leqslant C\left(R^{\tau}+R^{\rho_{1}}+R^{\lambda- d+2+\tau}\right)+CR^{\lambda-d+2}\int_{B_{2R}}|Dv_{2}|^{2}\,dx.\]
Finally, for sufficiently small \(R\) (for more details see e.g. algebraic Lemma in [28]), we obtain
\[\int_{B_{R}}|Dv_{2}|^{2}\,dx\leqslant C\left(R^{\tau}+R^{\rho_{1}}+R^{\lambda-d+2+\tau}\right),\]
and this means that \(|Dv_{2}|\) belongs to \(L^{2,\chi}(\Omega)\), where \(\chi=\min(\tau,\rho_{1},\lambda-d+2+\tau)\). Our last obstacle is that, in general, \(\chi<d-2\); consequently, at this stage, we cannot apply Morrey's lemma to get Holder continuity of \(v_{2}\). We overcome this obstacle by increasing the value of the parameter \(\chi\) through an iteration argument, which enables us to apply Morrey's lemma and conclude the proof. The iteration of our arguments is done as follows.
The argument of **Claim 2** now gives \(f_{0}\in L^{1,\lambda/2+\chi/2}(\Omega)\) instead of \(L^{1,\lambda/2}(\Omega)\), and then (4.14) improves to \(v_{2}\in L^{1,2+\lambda/2+\chi/2}(\Omega)\). The estimate (4.15) then becomes \(v_{2}\in L^{2,\theta}(\Omega)\), where
\[\theta=\frac{2(\lambda+\chi)+\nu(d-2)}{d+2}.\]
In a similar way, (4.17) becomes
\[\int_{B_{2R}}(|f_{3}|^{2}+|f_{4}|)(|v_{1}|^{2}+|v_{2}|^{2})\mathrm{d}x\leqslant CR^{\mu_{1}},\]
where the new exponent is \(\mu_{1}=\lambda-d+2+\chi/2\). As a consequence, \(Dv_{2}\) belongs to \(L^{2,\rho_{2}}(\Omega)\), where \(\rho_{2}=\rho_{1}+\chi/2\). Note that \(\rho_{2}\) is strictly greater than \(\rho_{1}\). After a finite number of steps, we get \(Dv_{2}\) in the same space as \(Dv_{1}\), and the result follows.
### Further Holder regularity
In Proposition 3.6, we proved the existence of \(C^{0,\alpha}\) solutions to (3.1). Here, we prove \(C^{1,\alpha}\) Holder regularity. For that, a crucial step is to prove that the gradient of the solution belongs to a suitable Morrey space. Then, the regularity result for linear elliptic equations in Theorem 4.16 leads to Holder continuity of the gradient.
By Theorem 3.5 and Proposition 3.6 there exists \(u\in H^{1}(\mathbb{T}^{d})\cap C^{0,\alpha}(\mathbb{T}^{d})\) solving (3.1) in the sense of distributions. Next, we prove an estimate for the gradient by a Caccioppoli inequality.
**Proposition 4.17**.: _Let \(u\in H^{1}(\mathbb{T}^{d})\cap C^{0,\alpha}(\mathbb{T}^{d})\) solve (3.1) and suppose that Assumptions 1-2 hold. Then, there exists a positive constant \(C\) depending only on data such that \(\|Du\|_{L^{2,\lambda}(\mathbb{T}^{d})}\leqslant C\) for some \(d-2<\lambda<d\)._
Proof.: By definition of weak solution to (3.1), we have
\[\int_{\mathbb{T}^{d}}DuA(x)D\varphi^{T}\mathrm{d}x=-\frac{1}{2}\int_{\mathbb{ T}^{d}}DuA(x)Du^{T}\varphi\mathrm{d}x+\int_{\mathbb{T}^{d}}(g(-u)-V_{ \overline{H}})\varphi\mathrm{d}x, \tag{4.20}\]
for all \(\varphi\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\).
Let \(0<R<R_{0}\), where \(R_{0}\) will be chosen later. Because \(u\in H^{1}(\mathbb{T}^{d})\cap C^{0,\alpha}(\mathbb{T}^{d})\), we have
\[|u-u_{2R}|\leqslant CR^{\alpha}, \tag{4.21}\]
where \(u_{2R}=\frac{1}{|B(2R)|}\int_{B(2R)}u\ \mathrm{d}x\). Now, let
\[\varphi(x)=\eta^{2}(x)(u(x)-u_{2R}),\]
where \(\eta\) is a smooth cut-off function, supported in \(B_{2R}\) and identically \(1\) on \(B_{R}\). We observe that \(\varphi\in H^{1}(\mathbb{T}^{d})\cap C^{0,\alpha}(\mathbb{T}^{d})\) and
\[D\varphi=2\eta(u-u_{2R})D\eta+\eta^{2}Du. \tag{4.22}\]
By (4.22) and (4.20), we obtain
\[\int_{B(2R)}\eta^{2}DuA(x)Du^{T}\mathrm{d}x= -2\int_{B(2R)}\eta(u-u_{2R})DuA(x)D\eta^{T}\mathrm{d}x\] \[-\frac{1}{2}\int_{B(2R)}DuA(x)Du^{T}\eta^{2}(u-u_{2R})\mathrm{d}x\] \[+\int_{B(2R)}(g(-u)-V_{\overline{H}})\eta^{2}(u-u_{2R})\mathrm{d}x\,. \tag{4.23}\]
From Assumption 1, we have
\[\max_{x\in\mathbb{T}^{d}}|a_{ij}(x)|\leqslant\theta_{1}. \tag{4.24}\]
Thus,
\[2\left|\int_{B(2R)}\eta(u-u_{2R})DuA(x)D\eta^{T}\mathrm{d}x\right|\leqslant \frac{\theta_{0}}{2}\int_{B(2R)}\eta^{2}|Du|^{2}\mathrm{d}x+C\int_{B(2R)}|u-u _{2R}|^{2}|D\eta|^{2}\mathrm{d}x. \tag{4.25}\]
By (4.24), we get
\[\frac{1}{2}\Big{|}\int_{B(2R)}DuA(x)Du^{T}\eta^{2}(u-u_{2R})\mathrm{d}x\Big{|} \leqslant\frac{\theta_{1}}{2}\int_{B(2R)}\eta^{2}|Du|^{2}|u-u_{2R}|\mathrm{d}x. \tag{4.26}\]
Again using Assumption 1, taking into account the estimates in (4.25), (4.26), (4.21), and (4.23), and recalling that \(|D\eta|\leqslant\frac{C}{R}\) (which follows from the definition of \(\eta\)), we deduce
\[\theta_{0}\int_{B(2R)}\eta^{2}|Du|^{2}\mathrm{d}x\leqslant\frac{\theta_{0}}{ 2}\int_{B(2R)}\eta^{2}|Du|^{2}\mathrm{d}x+CR^{d-2+2\alpha}+\frac{\theta_{1}}{ 2}R^{\alpha}\int_{B(2R)}\eta^{2}|Du|^{2}\mathrm{d}x+CR^{d+\alpha}.\]
Then, fixing \(R_{0}>0\) such that \(\frac{\theta_{1}}{2}R^{\alpha}\leqslant\frac{\theta_{0}}{4}\) for all \(0<R\leqslant R_{0}\) and using the preceding inequality, we get
\[\int_{B(R)}|Du|^{2}\mathrm{d}x\leqslant C(R^{d-2+2\alpha}+R^{d+\alpha}) \leqslant CR^{d-2+2\alpha},\]
i.e. \(Du\in(L^{2,\lambda}(\mathbb{T}^{d}))^{d}\) for \(\lambda=d-2+2\alpha\).
Next, relying on the preceding proposition and the results from Section 3, we prove that the solution to (3.1) is continuously differentiable.
**Proposition 4.18**.: _Let \(u\in H^{1}(\mathbb{T}^{d})\cap L^{\infty}(\mathbb{T}^{d})\) be a solution to (3.1) and suppose that Assumptions 1-3 hold. Then, \(u\in H^{1}(\mathbb{T}^{d})\cap C^{1,\gamma}(\mathbb{T}^{d})\) for some \(\gamma>0\)._
Proof.: Differentiating (3.1) with respect to \(x_{k}\), \(k\in\{1,\ldots,d\}\) and setting \(u_{x_{k}}=v\), we obtain
\[-\operatorname{div}(ADv^{T})-\operatorname{div}(A_{x_{k}}Du^{T})+\frac{1}{2}Dv ADu^{T}+\frac{1}{2}DuADv^{T}+\]
\[\int_{\mathbb{R}^{d}}D\psi AD(\eta_{2}v^{*})^{T}-D\psi(v^{*}A(D\eta_{2})^{T}+\eta_ {2}f_{2}+v^{*}(D\eta_{2}A)^{T})+\psi f_{3}D(\eta_{2}v^{*})\] \[\quad-(D\eta_{2}f_{2}+f_{3}v^{*}D\eta_{2}+v^{*}\operatorname{div}( D\eta_{2}A)-f_{1}(\eta_{2}v^{*})-\eta_{2}f_{4})\psi\,\mathrm{d}x=0.\]
Recalling that \(v_{\eta}=\eta_{2}v^{*}\), we notice that \(v_{\eta}\) solves
\[\begin{cases}-\operatorname{div}(ADv_{\eta}^{T})+f_{1}v_{\eta}+ \operatorname{div}(\tilde{f}_{2})+f_{3}Dv_{\eta}+\tilde{f}_{4}=0&\text{in}\quad \Omega_{2}\\ v_{\eta}=0&\text{on}\quad\partial\Omega_{2}\end{cases} \tag{4.31}\]
where
\[\tilde{f}_{2}=v^{*}A(D\eta_{2})^{T}+f_{2}\eta_{2}+v^{*}(D\eta_{2}A)^{T},\] \[\tilde{f}_{4}=-v^{*}\operatorname{div}(D\eta_{2}A)-f_{2}D\eta_{2 }-f_{3}v^{*}D\eta_{2}+f_{4}\eta_{2}.\]
By Proposition 4.17, we have \(v=u_{x_{k}}\in L^{2,\lambda}(\mathbb{T}^{d})\) with \(\lambda=d-2+2\alpha\). Therefore, \(v^{*}\in L^{2,\lambda}(\Omega_{2})\). Hence, combining the Lipschitz continuity of \(A\) with (4.27), we obtain
\[f_{1}\in C^{0,\alpha}(\mathbb{T}^{d}),\quad\tilde{f}_{2},f_{3}\in(L^{2, \lambda}(\mathbb{T}^{d}))^{d},\quad\tilde{f}_{4}\in L^{1,\lambda}(\mathbb{T}^ {d}).\]
Consequently, because \(\lambda>d-2\), we apply Theorem 4.16 to (4.31) and deduce that \(v_{\eta}\in C^{0,\alpha}(\Omega_{2})\). Using this and recalling that \(\eta_{2}\) is smooth with \(\eta_{2}(x)=1\) for all \(x\in\Omega_{1}\), we obtain \(v^{*}\in C^{0,\alpha}(\Omega_{1})\) and this completes the proof.
### Uniqueness of solutions
Finally, we address the uniqueness of \(C^{1,\alpha}\) solutions to (3.1). We begin by proving a lemma.
**Lemma 4.19**.: Suppose that Assumptions 1 and 3 hold. Let \(u\in H^{1}(\mathbb{T}^{d})\cap C^{1,\alpha}(\mathbb{T}^{d})\) solve
\[\operatorname{div}(Du(x)A(x))=f(x),\]
where \(f\) is a continuous function. Suppose that \(u\) has a maximum at a point \(x_{M}\). Then,
\[f(x_{M})\leqslant 0.\]
Similarly, \(f(x_{m})\geqslant 0\) for a point of minimum, \(x_{m}\), of \(u\).
Proof.: Let \(x_{M}\) be a point of maximum of \(u\). Without loss of generality, we assume that \(x_{M}=0\). Because \(u\in C^{1,\alpha}(\mathbb{T}^{d})\), \(Du(0)=0\). Let \(\bar{A}=A(0)\) and \(v(y)=u(\bar{A}^{1/2}y)\). Let \(B_{\tau}\) denote the ball centered at the origin with radius \(\tau\) and \(0<\tau<r\), for some sufficiently small \(r\). Finally, we let \(\nu\) be the outer unit normal to \(\partial(\bar{A}^{1/2}B_{\tau})\). Then,
\[\begin{split}\int_{\bar{A}^{1/2}B_{\tau}}\operatorname{div}(Du(x)A(x))dx &=\int_{\bar{A}^{1/2}B_{\tau}}\operatorname{div}(Du(x)(A(x)-\bar{A}))+\operatorname{div}(Du(x)\bar{A})dx\\ &=\int_{\partial(\bar{A}^{1/2}B_{\tau})}Du(x)(A(x)-\bar{A})\nu+\int_{\bar{A}^{1/2}B_{\tau}}\operatorname{div}(Du(x)\bar{A})dx\\ &=O(\tau^{d+\alpha})+\int_{\bar{A}^{1/2}B_{\tau}}u_{x_{i}x_{j}}(x)\bar{A}_{ij}dx\\ &=O(\tau^{d+\alpha})+c_{\bar{A}}\int_{B_{\tau}}u_{x_{i}x_{j}}(\bar{A}^{1/2}y)\bar{A}_{ij}dy\\ &=O(\tau^{d+\alpha})+c_{\bar{A}}\int_{B_{\tau}}\operatorname{div}(Dv)dy\\ &=O(\tau^{d+\alpha})+c_{\bar{A}}\int_{\partial B_{\tau}}Dv(y)\frac{y}{\tau}dS(y)\\ &=O(\tau^{d+\alpha})+c_{\bar{A}}\int_{\partial B_{\tau}}Du(\bar{A}^{1/2}y)\bar{A}^{1/2}\frac{y}{\tau}dS(y)\\ &=O(\tau^{d+\alpha})+c_{\bar{A}}\tau^{d-1}\int_{\partial B_{1}}Du(\tau\bar{A}^{1/2}z)\bar{A}^{1/2}z\,dS(z)\\ &=O(\tau^{d+\alpha})+c_{\bar{A}}\tau^{d-1}\frac{d}{d\tau}\int_{\partial B_{1}}u(\tau\bar{A}^{1/2}z)dS(z).\end{split}\]
Therefore, we have
\[\frac{\tau}{|\bar{A}^{1/2}B_{\tau}|}\int_{\bar{A}^{1/2}B_{\tau}} \operatorname{div}(Du(x)A(x))dx =O(\tau^{1+\alpha})+\tilde{c}_{\bar{A}}\frac{d}{d\tau}\int_{ \partial B_{1}}u(\tau\bar{A}^{1/2}z)dS(z).\]
Integrating from \(0\) to \(r\), and taking into account that
\[\int_{\partial B_{1}}\left(u(r\bar{A}^{1/2}z)-u(0)\right)\mathrm{d}S(z) \leqslant 0,\]
we obtain
\[\int_{0}^{r}\frac{\tau}{|\bar{A}^{1/2}B_{\tau}|}\int_{\bar{A}^{1/2}B_{\tau}} \operatorname{div}(Du(x)A(x))\mathrm{d}x\leqslant O(r^{2+\alpha}).\]
From this, it follows that
\[\limsup_{r\to 0}\frac{1}{r^{2}}\int_{0}^{r}\frac{\tau}{|\bar{A}^{1/2}B_{\tau}|} \int_{\bar{A}^{1/2}B_{\tau}}\operatorname{div}(Du(x)A(x))\mathrm{d}x \leqslant 0. \tag{4.32}\]
Notice that
\[\frac{1}{r^{2}}\int_{0}^{r}\frac{\tau}{|\bar{A}^{1/2}B_{\tau}|}\int_{\bar{A}^{1/2}B_{\tau}}\operatorname{div}(Du(x)A(x))\mathrm{d}x=\frac{1}{r^{2}}\int_{0}^{r}\frac{\tau}{|\bar{A}^{1/2}B_{\tau}|}\int_{\bar{A}^{1/2}B_{\tau}}f(x)\mathrm{d}x. \tag{4.33}\]

Recalling that \(f\) is continuous, for some \(\hat{c}>0\) we have

\[\frac{1}{r^{2}}\int_{0}^{r}\frac{\tau}{|\bar{A}^{1/2}B_{\tau}|}\int_{\bar{A}^{1/2}B_{\tau}}f(x)dx\to\hat{c}f(0).\]
Using this and (4.33) in (4.32), we establish the first part of the lemma. The case of a minimum is analogous.
**Proposition 4.20**.: Suppose that Assumptions 1-3 hold. Then, there exists at most one solution \(u\in H^{1}(\mathbb{T}^{d})\cap C^{1,\alpha}(\mathbb{T}^{d})\) to (3.1).
Proof.: Suppose there are two solutions to (3.1), \(u\) and \(\tilde{u}\). Let \(x_{M}\) be a point of maximum of \(u-\tilde{u}\). Note that \(Du(x_{M})=D\tilde{u}(x_{M})\). Then, the preceding Lemma implies
\[-g(-u(x_{M}))+g(-\tilde{u}(x_{M}))\leqslant 0.\]
By the monotonicity of \(g\), \(u(x_{M})-\tilde{u}(x_{M})\leqslant 0\). Thus, \(u\leqslant\tilde{u}\). Exchanging the roles of \(u\) and \(\tilde{u}\), we conclude that \(u=\tilde{u}\).
**Corollary 4.21**.: Suppose that Assumptions 1-3 hold. Then, \(\overline{H}\mapsto u_{\overline{H}}\) is continuous as a map from \(\mathbb{R}\) to \(C^{1,\alpha}(\mathbb{T}^{d})\).
Proof.: Consider a sequence \(\overline{H}_{n}\) converging to some value \(\overline{H}\). Let \(u_{\overline{H}_{n}}\) be the corresponding solutions to (3.1) with \(\overline{H}_{n}\). We claim that \(u_{\overline{H}_{n}}\) converges to \(u_{\overline{H}}\), where \(u_{\overline{H}}\) is the solution to (3.1) for \(\overline{H}\). Arguing as in the proof of Theorem 3.5, we deduce that any subsequence of \(u_{\overline{H}_{n}}\) converges to a solution of (3.1). Because of the uniqueness result in Proposition 4.20, this limit is unique. Hence, the whole sequence converges.
## 5. Existence and Uniqueness of Solutions to the Stationary MFG
This section discusses the existence and uniqueness of solutions to the stationary MFG problem given in Problem 1, thereby proving Theorem 1.2.
### Proof of Theorem 1.2
Based on the results of the previous sections, for any \(\overline{H}\), there exists a solution to the equations in Problem 1. However, this solution may fail to satisfy the normalization condition \(\int_{\mathbb{T}^{d}}m\,\mathrm{d}x=1\).
In this section, we show the existence of a constant \(\overline{H}\) such that there exists a normalized solution to Problem 1. This finalizes the proof of our main result, Theorem 1.2.
Proof of Theorem 1.2.: First, we prove the uniqueness for Problem 1. This follows the standard Lasry-Lions monotonicity arguments.
Let \((u_{1},m_{1},\bar{H}_{1}),(u_{2},m_{2},\bar{H}_{2})\in C^{1}(\mathbb{T}^{d}) \times C^{1}(\mathbb{T}^{d})\times\mathbb{R}\) solve Problem 1. That is, for \(i=1,2\) and for any \(\varphi\in H^{1}(\mathbb{T}^{d})\),
\[\begin{cases}\int_{\mathbb{T}^{d}}D\varphi ADu_{i}^{T}+\left(\frac{1}{2}Du_{i}ADu_{i}^{T}+V(x)-\bar{H}_{i}\right)\varphi\mathrm{d}x=\int_{\mathbb{T}^{d}}g\left(\log m_{i}\right)\varphi\mathrm{d}x\\ \int_{\mathbb{T}^{d}}D\varphi ADm_{i}^{T}+m_{i}D\varphi ADu_{i}^{T}\mathrm{d}x=0.\end{cases} \tag{5.1}\]
Because \(m_{1},m_{2}\) are probability density functions, we have
\[\int_{\mathbb{T}^{d}}(m_{1}-m_{2})\bar{H}_{i}\mathrm{d}x=0.\]
Therefore, recalling that \(m_{1},m_{2}\in C^{1}(\mathbb{T}^{d})\), we take \((m_{1}-m_{2})\) as a test function in the first equation in (5.1) and subtract the corresponding equations for \(i=1,2\) to get
\[\int_{\mathbb{T}^{d}}(Dm_{1}-Dm_{2})A(Du_{1}^{T}-Du_{2}^{T}) +\frac{(m_{1}-m_{2})}{2}\left(Du_{1}ADu_{1}^{T}-Du_{2}ADu_{2}^{T} \right)\mathrm{d}x\] \[=\int_{\mathbb{T}^{d}}(g\left(\log m_{1}\right)-g\left(\log m_{2} \right))(m_{1}-m_{2})\mathrm{d}x. \tag{5.2}\]
We argue similarly for \(u_{1},u_{2}\in C^{1}(\mathbb{T}^{d})\). Taking \((u_{1}-u_{2})\) as a test function in the second equation in (5.1) and subtracting the corresponding equations for \(i=1,2\), we obtain
\[\int_{\mathbb{T}^{d}}(Du_{1}-Du_{2})A(Dm_{1}^{T}-Dm_{2}^{T})+(Du_{1}-Du_{2}) \left(m_{1}ADu_{1}^{T}-m_{2}ADu_{2}^{T}\right)\mathrm{d}x=0. \tag{5.3}\]
From (5.2) and (5.3), we get
\[\int_{\mathbb{T}^{d}}\frac{(m_{1}+m_{2})}{2}(Du_{1}-Du_{2})A(Du_{1 }^{T}-Du_{2}^{T})\\ +(g\left(\log m_{1}\right)-g\left(\log m_{2}\right))(m_{1}-m_{2}) \mathrm{d}x=0.\]
Assumptions 1 and 4 yield \(Du_{1}=Du_{2}\) and \(m_{1}=m_{2}\). Using these in the first equation in (5.1) with the test function \(\varphi\equiv 1\) for \(i=1,2\), we deduce that \(\bar{H}_{1}=\bar{H}_{2}\). Therefore, there exists at most one triple \((u,m,\bar{H})\in C^{1}(\mathbb{T}^{d})\times C^{1}(\mathbb{T}^{d})\times \mathbb{R}\) that solves Problem 1.
Next, we address the existence part. Fix \(\bar{H}\in\mathbb{R}\). Theorem 3.5 and Proposition 4.18 imply the existence of a function \(u_{\bar{H}}\in C^{1}(\mathbb{T}^{d})\) solving (1.2). Let \(m_{\bar{H}}=e^{-u_{\bar{H}}}\in C^{1}(\mathbb{T}^{d})\). We note that \((u_{\bar{H}},m_{\bar{H}},\bar{H})\) satisfies
\[\begin{cases}\int_{\mathbb{T}^{d}}D\varphi ADu_{\bar{H}}^{T}+\left(\frac{1}{2} Du_{\bar{H}}ADu_{\bar{H}}^{T}+V(x)-\bar{H}\right)\varphi\mathrm{d}x=\int_{ \mathbb{T}^{d}}g\left(\log m_{\bar{H}}\right)\varphi\mathrm{d}x\\ \int_{\mathbb{T}^{d}}D\varphi ADm_{\bar{H}}^{T}+m_{\bar{H}}D\varphi ADu_{\bar{ H}}^{T}\mathrm{d}x=0,\end{cases}\]
for all \(\varphi\in H^{1}(\mathbb{T}^{d})\). Accordingly, by Definition 1.1, it is enough to prove that there exists a constant \(\bar{H}\) such that \(m_{\bar{H}}=e^{-u_{\bar{H}}}\) satisfies
\[\int_{\mathbb{T}^{d}}m_{\bar{H}}\mathrm{d}x=\int_{\mathbb{T}^{d}}e^{-u_{\bar{ H}}}\mathrm{d}x=1.\]
To do so, we define \(\mathcal{H}:\mathbb{R}\rightarrow\mathbb{R}\) by
\[\mathcal{H}(\bar{H})=\int_{\mathbb{T}^{d}}e^{-u_{\bar{H}}}\mathrm{d}x.\]
By Corollary 4.21, we have that \(\mathcal{H}\) is continuous.
To complete the proof, we show that there exist constants \(\bar{H}_{low}\) and \(\bar{H}_{up}\) such that
\[\mathcal{H}(\bar{H}_{low})<1,\quad\mathcal{H}(\bar{H}_{up})>1.\]
Once such constants are found, the continuity of \(\mathcal{H}\) (Corollary 4.21) and the intermediate value theorem yield a constant \(\bar{H}\) between \(\bar{H}_{up}\) and \(\bar{H}_{low}\) with \(\mathcal{H}(\bar{H})=1\), which completes the existence argument.
We begin by proving the existence of \(\bar{H}_{up}\). We recall that \(u_{\bar{H}}\in C^{1,\alpha}(\mathbb{T}^{d})\) satisfies
\[-\operatorname{div}(ADu_{\bar{H}}^{T})+\frac{1}{2}Du_{\bar{H}}ADu_{\bar{H}}^{T} +V(x)-\bar{H}-g\left(-u_{\bar{H}}\right)=0.\]
Consider a maximum point \(x_{M}\) for \(u_{\bar{H}}\). Note that \(Du_{\bar{H}}(x_{M})=0\). By Lemma 4.19, we get
\[V(x_{M})-\bar{H}-g\left(-u_{\bar{H}}(x_{M})\right)\leqslant 0.\]
Therefore,
\[g(-\max u_{\bar{H}})\geqslant C(-\|V\|_{\infty}-\bar{H}).\]
Taking \(\bar{H}_{up}<-\|V\|_{\infty}-C-\frac{C^{-1}}{C_{g}}\) in the preceding inequality, we get
\[g(-\max u_{\bar{H}_{up}})>\frac{1}{C_{g}}. \tag{5.4}\]
Notice that by Assumption 2, for all \(q\geqslant 0\), we have \(g(-q)\leqslant\frac{1}{C_{g}}\). Furthermore, Assumption 2 implies that there exists \(q_{0}>0\) such that \(g(q)>\frac{1}{C_{g}}\) if and only if \(q\geqslant q_{0}>0\). This with (5.4), yields \(\max u_{\bar{H}_{up}}<0\). Hence,
\[\mathcal{H}(\bar{H}_{up})\geqslant e^{-\max u_{\bar{H}_{up}}}>1.\]
Next, we prove the existence of \(\bar{H}_{low}\). Arguing as in the case of \(\bar{H}_{up}\), we conclude that
\[g(-\min u_{\bar{H}})\leqslant C(\|V\|_{\infty}-\bar{H}-C). \tag{5.5}\]
Similar to the previous case, Assumption 2 implies that there exists \(q_{0}<0\) such that \(g(q)<-\frac{1}{C_{g}}\) if and only if \(q<q_{0}\). This with (5.5) implies that by taking \(\bar{H}_{low}>\|V\|_{\infty}-C+\frac{C^{-1}}{C_{g}}\), we get \(-\min u_{\bar{H}_{low}}<0\). Thus,
\[\mathcal{H}(\bar{H}_{low})\leqslant e^{-\min u_{\bar{H}_{low}}}<1.\qed\]
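The normalization step above is, in effect, an intermediate value argument for the continuous map \(\bar{H}\mapsto\mathcal{H}(\bar{H})\). Purely as an illustration of this argument (and not part of the analysis), the following bisection sketch locates the normalizing \(\bar{H}\) numerically, assuming a hypothetical black-box solver `solve_u(H_bar)` that returns \(u_{\bar{H}}\) sampled on a uniform grid of the torus; all names here are illustrative.

```python
import numpy as np

def normalize_H(solve_u, H_up, H_low, tol=1e-8):
    """Bisection sketch for the intermediate value argument above.

    solve_u(H_bar) is an assumed black-box returning u_{H_bar} on a uniform
    grid of the unit torus; H_up and H_low satisfy H(H_up) > 1 > H(H_low),
    as established in the proof. Continuity of H_bar -> u_{H_bar}
    (Corollary 4.21) guarantees convergence to some H_bar with H(H_bar) = 1.
    """
    def mass(H_bar):
        # Quadrature of e^{-u} over the torus (|T^d| = 1).
        return float(np.mean(np.exp(-solve_u(H_bar))))

    a, b = H_up, H_low  # invariant: mass(a) > 1 > mass(b)
    while abs(b - a) > tol:
        mid = 0.5 * (a + b)
        if mass(mid) > 1.0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)
```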
|
2301.01104
|
KoopmanLab: machine learning for solving complex physics equations
|
Numerous physics theories are rooted in partial differential equations
(PDEs). However, the increasingly intricate physics equations, especially those
that lack analytic solutions or closed forms, have impeded the further
development of physics. Computationally solving PDEs by classic numerical
approaches suffers from the trade-off between accuracy and efficiency and is
not applicable to the empirical data generated by unknown latent PDEs. To
overcome this challenge, we present KoopmanLab, an efficient module of the
Koopman neural operator family, for learning PDEs without analytic solutions or
closed forms. Our module consists of multiple variants of the Koopman neural
operator (KNO), a kind of mesh-independent neural-network-based PDE solvers
developed following dynamic system theory. The compact variants of KNO can
accurately solve PDEs with small model sizes while the large variants of KNO
are more competitive in predicting highly complicated dynamic systems governed by
unknown, high-dimensional, and non-linear PDEs. All variants are validated by
mesh-independent and long-term prediction experiments implemented on
representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers
equation in fluid mechanics) and ERA5 (i.e., one of the largest high-resolution
global-scale climate data sets in earth physics). These demonstrations suggest
the potential of KoopmanLab to be a fundamental tool in diverse physics studies
related to equations or dynamic systems.
|
Wei Xiong, Muyuan Ma, Xiaomeng Huang, Ziyang Zhang, Pei Sun, Yang Tian
|
2023-01-03T13:58:39Z
|
http://arxiv.org/abs/2301.01104v3
|
# KoopmanLab: machine learning for solving complex physics equations
###### Abstract
Numerous physics theories are rooted in partial differential equations (PDEs). However, the increasingly intricate physics equations, especially those that lack analytic solutions or closed forms, have impeded the further development of physics. Computationally solving PDEs by classic numerical approaches suffers from the trade-off between accuracy and efficiency and is not applicable to the empirical data generated by unknown latent PDEs. To overcome this challenge, we present KoopmanLab, an efficient module of the Koopman neural operator family, for learning PDEs without analytic solutions or closed forms. Our module consists of multiple variants of the Koopman neural operator (KNO), a kind of mesh-independent neural-network-based PDE solvers developed following dynamic system theory. The compact variants of KNO can accurately solve PDEs with small model sizes while the large variants of KNO are more competitive in predicting highly complicated dynamic systems governed by unknown, high-dimensional, and non-linear PDEs. All variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation in fluid mechanics) and ERA5 (i.e., one of the largest high-resolution global-scale climate data sets in earth physics). These demonstrations suggest the potential of KoopmanLab to be a fundamental tool in diverse physics studies related to equations or dynamic systems.
## I Introduction
### The rising of partial differential equation solvers
Solving partial differential equations (PDEs) essentially requires characterizing an appropriate solution operator \(\mathcal{F}\) that relates \(\Phi=\Phi\left(D;\mathbb{R}^{d_{\phi}}\right)\), a Banach space of inputs (i.e., initial values), with \(\Gamma=\Gamma\left(D;\mathbb{R}^{d_{\gamma}}\right)\), a Banach space of solutions (i.e., target values), for a typically time-dependent PDE family defined on a bounded open set \(D\subset\mathbb{R}^{d}\)
\[\partial_{t}\gamma\left(x_{t}\right) =\left(\mathcal{L}_{\phi}\gamma\right)\left(x_{t}\right)+\kappa \left(x_{t}\right),\;x_{t}\in D\times T, \tag{1}\] \[\gamma\left(x_{t}\right) =\gamma_{B},\;x_{t}\in\partial D\times T,\] (2) \[\gamma\left(x_{0}\right) =\gamma_{I},\;x_{0}\in D\times\{0\}. \tag{3}\]
In Eq. (1-3), set \(T=[0,\infty)\) denotes the time domain. Notion \(\mathcal{L}_{\phi}\) is a differential operator characterized by \(\phi\). Mapping \(\kappa\left(\cdot\right)\) is a function that lives in a function space determined by \(\mathcal{L}_{\phi}\). Mapping \(\gamma\left(\cdot\right)\) is the solution of the PDE family that we attempt to obtain. The boundary and initial conditions are denoted by \(\gamma_{B}\) and \(\gamma_{I}\), respectively. Mathematically, deriving an accurate solution operator \(\mathcal{F}:\left(\phi,\gamma_{B},\gamma_{I}\right)\mapsto\gamma\) is the key step to obtain the PDE solution \(\gamma\left(\cdot\right)\). However, even in the case where the boundary and initial conditions are constant (i.e., the solution operator \(\mathcal{F}:\left(\phi,\gamma_{B},\gamma_{I}\right)\mapsto\gamma\) reduces to \(\mathcal{F}:\phi\mapsto\gamma\)), deriving an analytic expression of the solution operator \(\mathcal{F}\) can be highly non-trivial [1; 2].
The absence of analytic solutions of various important PDEs in science and engineering naturally calls for the rapid development of computational solvers, which attempt to approximate a parametric counterpart \(\mathcal{F}_{\theta}\simeq\mathcal{F}\) parameterized by \(\theta\) to derive solution \(\gamma\left(\cdot\right)\)[1; 3; 4]. To date, the joint efforts of physics, mathematics, and computer science have given birth to two mainstream families of PDE solvers [5]:
1. The first family of solvers are classic numerical ones. Typical instances of these solvers include finite element (FEM) [6], finite difference (FDM) [7], and finite volume (FVM) [8] methods. In general, these methods discretize space and time domains following specific mesh designs and solve parameterized PDEs on meshes by certain iterative algorithms. Specifically, FEM subdivides the original domain into a set of sub-domains defined by a collection of element equations and recombines these element equations to derive the global solution [6]. FDM approximates derivatives as finite differences measured on local values [7]. FVM transforms the original problem into a series of surface flux calculations on local volumes [8].
2. The second family of solvers are neural-network-based ones. With a pursuit of accelerating PDE solving and improving the applicability on real data, three kinds of neural-network-based solvers have been proposed:
1. One kind of solvers discretize domains \(D\) and \(T\) into \(x\) and \(y\) meshes and approximate a finite-dimensional and mesh-dependent solution operator \(\mathcal{F}_{\theta}\) by a parameterized neural network between finite Euclidean spaces, i.e., \(\mathcal{F}_{\theta}:\mathbb{R}^{x}\times\mathbb{R}^{y}\times\Theta\to\mathbb{R }^{x}\times\mathbb{R}^{y}\) (e.g., see Refs. [9; 10; 11]). Given an arbitrary input \(\gamma\left(x_{t}\right)\), the trained neural network can function as a solution operator to predict \(\gamma\left(x_{t+\tau}\right)=\mathcal{F}_{\theta}\left(\gamma\left(x_{t} \right)\right)\) for a certain time difference \(\tau\).
2. Another kind of solvers directly parameterize equation solution \(\gamma\left(\cdot\right)\) as a neural network, i.e., \(\mathcal{F}_{\theta}:D\times T\times\Theta\to\mathbb{R}\) (e.g., see Refs. [12; 13; 14; 15]). These solvers are mesh-independent and accurate in learning a given PDE because they can directly transform arbitrary domain and parameter setting to target equation solution \(\gamma\left(\cdot\right)\).
3. The last kind of solvers, including neural operators, attempt to parameterize a mesh-dependent and infinite-dimensional solution operator with neural networks, i.e, \(\mathcal{F}_{\theta}:\Phi\times\Theta\to\Gamma\) (e.g., see Refs. [16; 17; 18; 19; 20; 5; 5; 21]). These mesh-independent solvers can be flexibly implemented on different discretization schemes and only need to be trained once for a given PDE family. The equation solution \(\gamma\left(\cdot\right)\) of different instances of the PDE family can be generated by a computationally reusable forward pass of the network [19; 5], which can be further accelerated by fast Fourier transform [19]. Representative demonstrations of this kind of solver are Fourier neural operator [19] and its variants (e.g., adaptive Fourier neural operator [22] and FourCastNet [23; 24]). These frameworks not only solve PDEs with known expressions but also be able to predict complex dynamic systems governed by unknown PDEs on real data sets (e.g., climate system [23; 24]).
### The limitation of previous partial differential equation solvers
Although substantial progress has been accomplished by existing PDE solvers from various perspectives, there remain critical challenges in this booming direction.
In practice, the mesh-dependent property of classic numerical solvers implies an inevitable trade-off between computation accuracy and efficiency, i.e., fine-grained meshes ensure accuracy yet coarse-grained meshes are favorable for efficiency [19; 25]. However, in many cases, the applications of PDE solving (e.g., numerical weather forecasting [26; 27]) require timely and accurate computation. To ensure both accuracy and speed, each computation in a downstream application supported by classic numerical solvers frequently costs large amounts of computing resources. In cases with limited computing power, a significant time delay may occur. Moreover, all numerical solvers require the explicit definitions of target PDEs as _a priori_ knowledge and are less applicable to predicting real data generated by unknown PDEs [19].
As for neural-network-based solvers, challenges still arise from multiple perspectives, even though these solvers have significantly outperformed the classic numerical ones in prediction efficiency. Type (a) solvers, as we have suggested, are mesh-dependent and lack generalization capacities across different mesh designs [5]. Type (b) solvers are limited to learning a concrete instance of the PDE rather than the entire family and, consequently, require restarted training given a different instance and cannot handle data generated by unknown PDEs [5]. Although type (c) solvers can learn the entire PDE family in a mesh-independent manner [19; 5], they may face challenges in characterizing the long-term behaviour of equation solution \(\gamma\left(\cdot\right)\). To understand these challenges, let us consider the iterative update strategy of neural operators for any \(x_{t}\in D\times\left\{t\right\}\) [5]
\[\widehat{\gamma}\left(x_{t+\varepsilon}\right)\] \[= \sigma\left(W\widehat{\gamma}\left(x_{t}\right)+\int_{D\times \left\{t\right\}}\kappa_{\theta}\left(x_{t},y_{t},\phi\left(x_{t}\right),\phi \left(y_{t}\right)\right)\widehat{\gamma}\left(y_{t}\right)\mathsf{d}y_{t} \right), \tag{4}\]
in which \(\varepsilon\in\left(0,\infty\right)\) denotes the time difference, notion \(\sigma:\mathbb{R}\to\mathbb{R}\) is an arbitrary element-wise non-linear activation function, notion \(W:\mathbb{R}^{d_{\varepsilon}}\to\mathbb{R}^{d_{\varepsilon}}\) stands for a linear layer, function \(\kappa_{\theta}:\mathbb{R}^{2\left(d+d_{\phi}\right)}\to\mathbb{R}^{d_{\varepsilon}}\) is a neural network parameterized by \(\theta\), and mapping \(\widehat{\gamma}:D\times T\to\mathbb{R}^{d_{\varepsilon}}\) denotes the parameterized counterpart of equation solution \(\gamma\) generated by the neural network (e.g., by embedding) [5]. In Eq. (4), the integral term associated with \(\kappa_{\theta}\) defines a kernel integral operator to parameterize the Green function \(\mathcal{J}_{\phi}:\left(D\times T\right)\times\left(D\times T\right)\to\mathbb{R}\)
\[\widehat{\gamma}\left(x_{t+\varepsilon}\right)=\int_{D\times\left\{t \right\}}\mathcal{J}_{\phi}\left(x_{t},y_{t}\right)\eta\left(y_{t}\right) \mathsf{d}y_{t},\ \forall\,x_{t}\in D\times\left\{t\right\}, \tag{5}\]
where the Green function is determined by \(\phi\) as well. One can see a similar form of Eq. (5) in Ref. [5]. Computationally, the iteration of Eq. (4) can be significantly accelerated by Fourier transform, which leads to the well-known Fourier neural operator [19].
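Computationally, this Fourier-domain parameterization can be made concrete with a short sketch. The following is a minimal PyTorch illustration of a spectral (Fourier-domain) kernel integral layer in the spirit of Ref. [19]; the class name `SpectralConv2d`, the single low-frequency corner kept after truncation, and the initialization scale are simplifying assumptions rather than the reference implementation.

```python
import torch
import torch.nn as nn

class SpectralConv2d(nn.Module):
    """Sketch of a Fourier-domain kernel integral layer (cf. Eq. (4))."""

    def __init__(self, channels: int, modes: int):
        super().__init__()
        scale = 1.0 / (channels * channels)
        # Learnable complex weights acting on the retained low-frequency modes.
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, modes, dtype=torch.cfloat)
        )
        self.modes = modes

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width); assumes height, width >= 2 * modes.
        x_ft = torch.fft.rfft2(x)                        # to the Fourier domain
        out_ft = torch.zeros_like(x_ft)
        m = self.modes
        # Mix channels on the truncated low-frequency block only.
        out_ft[:, :, :m, :m] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, :m, :m], self.weights
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])  # back to physical space
```

In a full solver, such a layer would be combined with the pointwise linear term \(W\) and the non-linearity \(\sigma\) of Eq. (4).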
From a dynamic system perspective, Eq. (4) is similar to the iterative dynamics of an infinite-dimensional non-linear dynamic system of equation solution \(\gamma_{t}=\gamma\left(D\times\left\{t\right\}\right)\), where each snapshot \(\gamma\left(D\times\left\{t\right\}\right)\) is generated after function \(\gamma\) acts on all elements in set \(D\times\left\{t\right\}\). Mathematically, the dynamics is defined as
\[\gamma_{t+\varepsilon}=\gamma_{t}+\int_{t}^{t+\varepsilon}\zeta\left(\gamma_{ \tau},\tau\right)\mathsf{d}\tau,\ \forall t\in T, \tag{6}\]
or equivalently
\[\partial_{t}\gamma_{t}=\zeta\left(\gamma_{t},t\right),\;\forall\gamma_{t}\in \mathbb{R}^{d_{\gamma}}\times T, \tag{7}\]
in which \(\zeta:\mathbb{R}^{d_{\gamma}}\times T\rightarrow\mathbb{R}^{d_{\gamma}}\) denotes the associated infinite-dimensional evolution mapping.
The challenge faced by type (c) solvers lies in the fact that the evolution mapping \(\zeta\left(\cdot,\cdot\right)\) may be even more intricate than the equation solution \(\gamma\left(\cdot\right)\) itself. Let us consider the cocycle property of the flow mapping \(\theta\) associated with \(\zeta\left(\cdot,\cdot\right)\) according to modern dynamic system theory [28]
\[\theta_{t}^{t+\varepsilon}=\theta_{t+\tau}^{t+\varepsilon}\circ\theta_{t}^{t+ \tau},\;\forall t\leq t+\tau\leq t+\varepsilon\in T. \tag{8}\]
Operator \(\circ\) denotes the composition of mappings. In general, Eq. (8) determines how equation solution \(\gamma\left(\cdot\right)\) evolves across adjoining time intervals. In a special case where \(\zeta\left(\cdot,\cdot\right)\) is time-independent, i.e., \(\partial_{t}\zeta\left(\cdot,t\right)\equiv 0\), Eq. (8) reduces to the autonomous case
\[\theta^{t+\varepsilon}=\theta^{\varepsilon}\circ\theta^{t},\;\forall t, \varepsilon\in T. \tag{9}\]
Otherwise, Eq. (8) generally corresponds to the non-autonomous case where the underlying mechanisms governing the evolution of \(\gamma\left(\cdot\right)\) vary across time. Consequently, a large \(\varepsilon\) may correspond to a highly non-trivial evolution process of \(\gamma\left(\cdot\right)\), making \(\widehat{\gamma}\left(x_{t+\varepsilon}\right)\) less predictable during iterative updating and reducing the precision of Eq. (4) significantly. This phenomenon inevitably impedes the accurate prediction of the long-term dynamics (i.e., \(\varepsilon\rightarrow\infty\)) of diverse non-linear PDE families (e.g., see those in epidemic prevention [29], economic modelling [30], and weather forecast [23; 24]). To overcome this obstacle, existing models are forced to improve accuracy at the cost of efficiency.
### Our contributions to partial differential equation solvers
In this paper, we build on the Koopman neural operator (KNO), one of our latest works [31], to develop an efficient module for PDE solving and to overcome the limitation in characterizing the long-term behaviours of complicated PDE families. As a study on computational physics programs, our research makes the following contributions compared with our previous work [31].
First, we generalize the original KNO to four kinds of variants. Beyond the original KNO, these differentiated variants offer more possibilities for data-specific and task-oriented solver designs. Specifically, the compact variants of KNO realized by multi-layer perceptrons and convolutional neural networks can accurately solve PDE with small model sizes. The large variants of KNO implemented on visual transformers can predict highly intricate dynamic systems governed by unknown, high-dimensional, and non-linear PDEs (e.g., climate system).
Second, we propose KoopmanLab, a PyTorch module of Koopman neural operator family, as a self-contained and user-friendly platform for PDE solving. All necessary tools, such as those for data loading, model construction, parameter manipulation, output visualization, and performance quantification, are offered in a user-friendly manner to support customized applications.
Third, we offer comprehensive validation of the proposed module on representative data sets, including those generated by important PDEs in fluid mechanics (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) or obtained by global meteorological recording research (e.g., atmospheric, land, and oceanic climate fields in the ERA5 data set) [32]. By measuring accuracy, quantifying efficiency, and comparing all KNO variants with other state-of-the-art alternatives (e.g., the Fourier neural operator [19] and FourCastNet [23; 24]), we suggest the potential of our module to serve as an ideal choice for PDE solving and dynamic system prediction.
## II The initial version of Koopman neural operator
Although the original Koopman neural operator has been proposed in our earlier work [31], here we elaborate on its mechanisms for completeness. We further present more mathematical details that are not covered in Ref. [31] to analyze the convergence of the original Koopman neural operator.
### The original Koopman neural operator: Objective
The Koopman neural operator (KNO) is proposed to deal with the non-linear, and potentially non-autonomous, dynamic system in Eqs. (6-7). The idea underlying KNO arises from the pursuit of transforming the non-linear system in Eqs. (6-7) into a sufficiently simple linear one
\[\partial_{t}\mathbf{g}\left(\gamma_{t}\right)=\mathcal{A}\mathbf{g}\left( \gamma_{t}\right),\;\forall t\in T, \tag{10}\]
where \(\mathbf{g}\left(\cdot\right)\) is an appropriate transform and \(\mathcal{A}\) is a linear operator. In modern dynamic system theory [28], this pursuit may be achieved if we can develop an approach to characterize the Koopman operator \(\mathcal{K}\), an infinite-dimensional linear operator governing all possible observations of the dynamic system of equation solution \(\gamma\left(\cdot\right)\), to act on the flow mapping \(\theta\) and linearize the dynamics of \(\gamma\left(\cdot\right)\) in an appropriate observation space. This idea has been extensively applied in plasma physics [33], fluid dynamics [34], robot kinetics [35], and neuroscience [36].
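A classical finite-dimensional illustration from the Koopman literature (not specific to Ref. [31]) may clarify this idea: for the system \(\partial_{t}\gamma_{1}=\mu\gamma_{1}\), \(\partial_{t}\gamma_{2}=\lambda\left(\gamma_{2}-\gamma_{1}^{2}\right)\), choosing the observations \(\mathbf{g}\left(\gamma\right)=\left(\gamma_{1},\gamma_{2},\gamma_{1}^{2}\right)\) yields the exactly linear dynamics

\[\partial_{t}\begin{bmatrix}\gamma_{1}\\ \gamma_{2}\\ \gamma_{1}^{2}\end{bmatrix}=\begin{bmatrix}\mu&0&0\\ 0&\lambda&-\lambda\\ 0&0&2\mu\end{bmatrix}\begin{bmatrix}\gamma_{1}\\ \gamma_{2}\\ \gamma_{1}^{2}\end{bmatrix},\]

so these three observables span a finite invariant subspace on which the Koopman dynamics reduces to a matrix.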
Mathematically, we need to find a set of observation functions (also called measurement functions) [28]
\[\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)=\left\{\mathbf{g}| \mathbf{g}:\mathbb{R}^{d_{\gamma}}\times T\rightarrow\mathbb{C}^{d_{\gamma}}\right\} \tag{11}\]
such that a family of Koopman operators can be identified for the autonomous (i.e., \(\mathcal{K}^{\varepsilon}:\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\rightarrow\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\)) or the non-autonomous (i.e., \(\mathcal{K}_{t}^{t+\varepsilon}:\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\rightarrow\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\)) case. These Koopman operators can function on the observations of \(\gamma\left(\cdot\right)\) to update them
\[\mathcal{K}^{\varepsilon}\mathbf{g}\left(\gamma_{t}\right) =\mathbf{g}\left(\theta^{\varepsilon}\left(\gamma_{t}\right)\right)=\mathbf{g}\left(\gamma_{t+\varepsilon}\right),\;\forall t\in T, \tag{12}\] \[\mathcal{K}^{t+\varepsilon}_{t}\mathbf{g}\left(\gamma_{t}\right) =\mathbf{g}\left(\theta^{t+\varepsilon}_{t}\left(\gamma_{t}\right)\right)=\mathbf{g}\left(\gamma_{t+\varepsilon}\right),\;\forall t\leq t+\varepsilon\in T, \tag{13}\]
where Eqs. (12-13) correspond to the autonomous and non-autonomous cases, respectively. The updating is implemented in a linear manner, which can be illustrated by taking the non-autonomous case as an example
\[\partial_{t}\mathbf{g}\left(\gamma_{t}\right)=\lim_{\varepsilon\to 0} \frac{\mathcal{K}^{t+\varepsilon}_{t}\mathbf{g}\left(\gamma_{t}\right)- \mathbf{g}\left(\gamma_{t}\right)}{\varepsilon}. \tag{14}\]
Apart from the linear system of \(\mathbf{g}\left(\gamma_{t}\right)\) in Eq. (14), one may also consider the Lie operator (i.e., the Lie derivative of \(\mathbf{g}\left(\cdot\right)\) along the vector field \(\gamma\left(\cdot\right)\)), which is the generator of such a Koopman operator [37; 38; 39]
\[\mathcal{L}_{t}\mathbf{g}=\lim_{t+\varepsilon\to t}\frac{\mathcal{K}^{t+ \varepsilon}_{t}\mathbf{g}\left(\gamma_{t}\right)-\mathbf{g}\left(\gamma_{t} \right)}{t+\varepsilon-t}. \tag{15}\]
Eq. (15) defines a linear system of \(\mathbf{g}\left(\gamma_{t}\right)\) as well
\[\partial_{t}\mathbf{g}\left(\gamma_{t}\right)=\lim_{t+\varepsilon\to t} \frac{\mathcal{K}^{t+\varepsilon}_{t}\mathbf{g}\left(\gamma_{t}\right)- \mathbf{g}\left(\gamma_{t}\right)}{\varepsilon}=\mathcal{L}_{t}\mathbf{g} \left(\gamma_{t}\right), \tag{16}\]
which can also be considered in applications.
To understand the linearization of \(\mathbf{g}\left(\gamma_{t}\right)\) by the Koopman operator from the perspective of PDE solving, let us consider the Lax pair \(\left(\mathcal{M},\mathcal{N}\right)\) of an integrable version of Eqs. (1-3) [40]
\[\mathcal{M} =\mathsf{D}_{x}^{n}+\alpha\gamma\left(x_{t}\right)I,\;\alpha\in \mathbb{C}, \tag{17}\] \[\mathcal{M}\psi\left(x_{t}\right) =\lambda\psi\left(x_{t}\right),\;\lambda\in\mathbb{C},\] (18) \[\partial_{t}\psi\left(x_{t}\right) =\mathcal{N}\psi\left(x_{t}\right), \tag{19}\]
where \(\mathsf{D}_{x}^{n}\) denotes the \(n\)-th total derivative operator and \(I\) is an identity operator. Eq. (18) denotes an eigenvalue problem at moment \(t\). A relation between linear operators \(\mathcal{M}\) and \(\mathcal{N}\) can be identified if we calculate the time derivative of Eq. (18)
\[\left(\partial_{t}\mathcal{M}+\mathcal{M}\mathcal{N}-\mathcal{N}\mathcal{M} \right)\psi\left(x_{t}\right)=\partial_{t}\lambda\psi\left(x_{t}\right), \tag{20}\]
which directly leads to
\[\partial_{t}\mathcal{M}+\left[\mathcal{M},\mathcal{N}\right]=0, \tag{21}\]
where \(\left[\mathcal{M},\mathcal{N}\right]=\mathcal{M}\mathcal{N}-\mathcal{N} \mathcal{M}\) denotes the commutator of operators. Combining Eqs. (17-21) with Eq. (16), we can readily see the close relation between \(\mathcal{N}\) and \(\mathcal{K}^{t+\varepsilon}_{t}\)
\[\psi\left(D\times\left\{t\right\}\right)=\mathbf{g}\left(\gamma_{t}\right) \Rightarrow\mathcal{N}=\lim_{t+\varepsilon\to t}\frac{\mathcal{K}^{t+ \varepsilon}_{t}\mathbf{g}\left(\gamma_{t}\right)-\mathbf{g}\left(\gamma_{t} \right)}{\varepsilon}, \tag{22}\]
which holds in the autonomous case as well. In sum, the linearization of \(\mathbf{g}\left(\gamma_{t}\right)\) is intrinsically related to the Lax pair and the inverse scattering transform of integrable PDEs [40]. Note that similar ideas have been comprehensively explored in mathematics and physics [41; 42; 43; 44].
Once we find a way to derive the Koopman operator, we can reformulate Eq. (4) as
\[\widehat{\gamma}_{t+\varepsilon}=\mathbf{g}^{-1}\left[\mathcal{K}^{t+ \varepsilon}_{t}\mathbf{g}\left(\widehat{\gamma}_{t}\right)\right],\;\forall t \in T, \tag{23}\]
where \(\widehat{\gamma}_{t}=\widehat{\gamma}\left(D\times\left\{t\right\}\right)\). Certainly, an infinite-dimensional linear operator is not operable in practice. To enable neural networks to learn a potential Koopman operator, we need to consider \(\widehat{\mathcal{K}}\in\mathbb{R}^{r\times r}\), a finite matrix, as a counterpart of \(\mathcal{K}\) that acts on \(\mathbb{K}=\mathrm{span}\left(\widehat{\mathcal{G}}\right)\), a finite invariant sub-space spanned by \(\widehat{\mathcal{G}}=\left\{\mathbf{g}_{1},\ldots,\mathbf{g}_{r}\right\} \subset\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\)
\[\widehat{\mathcal{K}}\mathbf{g}_{i}=\left\langle\left[\nu_{1},\ldots,\nu_{r} \right],\left[\mathbf{g}_{1},\ldots,\mathbf{g}_{r}\right]\right\rangle,\; \forall\mathbf{g}_{i}\in\widehat{\mathcal{G}}, \tag{24}\]
where \(\left[\nu_{1},\ldots,\nu_{r}\right]\in\mathbb{R}^{r}\) and \(\left\langle\cdot,\cdot\right\rangle\) denotes the inner product. Mathematically, any finite set of eigenfunctions of the Koopman operator \(\mathcal{K}\) can span a finite invariant sub-space.
### The original Koopman neural operator: Mathematics
There exist numerous previous works that attempt to characterize the Koopman operator by machine-learning-based approaches. Some approaches are highly practical but limited to the autonomous case (e.g., the case in Eq. (12)) [45; 46; 47; 48]. Other approaches are more general in application but critically depend on _a priori_ knowledge about the eigenvalue spectrum (e.g., the numbers of real and complex eigenvalues) of the Koopman operator to deal with the continuous spectrum problem [49].
In practice, a balance should be reached between mathematical completeness and computational practicality. An ideal Koopman-operator-based PDE solver should fit both autonomous and non-autonomous cases and limit the dependence on _a priori_ knowledge as much as possible (even though these restraints inevitably reduce mathematical completeness). To explore such a balance, we introduced the Koopman neural operator (KNO), a flexible approach, in our previous work [31].
The formalization of KNO begins with the Krylov sequence [50] of the observable defined by a unit time step \(\varepsilon\in\left(0,\infty\right)\), which is used in the Krylov subspace approach for computing the eigenvalues of large matrices [50]. One can see its application in Koopman-operator-related algorithms such as the Hankel-DMD [51], sHankel-DMD [52], and HAVOK [53]. Specifically, the Krylov sequence is given as
\[\mathcal{R}_{n}=\left[\mathbf{g}\left(\gamma_{0}\right),\mathbf{g}\left(\gamma_{ \varepsilon}\right),\mathbf{g}\left(\gamma_{2\varepsilon}\right),\ldots, \mathbf{g}\left(\gamma_{n\varepsilon}\right)\right], \tag{25}\]
which is generated by \(\mathcal{K}\) and \(\mathbf{g}\left(\gamma_{0}\right)\)
\[\mathcal{R}_{n}\] \[= \left[\mathbf{g}\left(\gamma_{0}\right),\mathcal{K}_{0}^{\varepsilon }\mathbf{g}\left(\gamma_{0}\right),\mathcal{K}_{\varepsilon}^{2\varepsilon} \mathcal{K}_{0}^{\varepsilon}\mathbf{g}\left(\gamma_{0}\right),\ldots, \mathcal{K}_{(n-1)\varepsilon}^{n\varepsilon}\cdots\mathcal{K}_{0}^{ \varepsilon}\mathbf{g}\left(\gamma_{0}\right)\right]. \tag{26}\]
Computationally, the Krylov sequence can be sampled by a Hankel matrix of observations
\[\mathcal{H}_{m\times n}=\begin{bmatrix}\mathbf{g}\left(\gamma_{0} \right)&\mathbf{g}\left(\gamma_{\varepsilon}\right)&\cdots&\mathbf{g}\left( \gamma_{n\varepsilon}\right)\\ \vdots&\vdots&\vdots&\vdots\\ \mathbf{g}\left(\gamma_{(m-1)\varepsilon}\right)&\mathbf{g}\left(\gamma_{m \varepsilon}\right)&\cdots&\mathbf{g}\left(\gamma_{(m+n-1)\varepsilon} \right)\end{bmatrix}, \tag{27}\]
where \(m\in\mathbb{N}^{+}\) denotes the dimension of delay-embedding. In Eq. (27), each column is a sampled result that approximates a function in the Krylov subspace.
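As a small illustration of Eq. (27), a delay-embedding Hankel tensor can be assembled from a trajectory of observations as follows; `hankel_of_observations` is a hypothetical helper written for this sketch, not part of the paper's code.

```python
import torch

def hankel_of_observations(g: torch.Tensor, m: int) -> torch.Tensor:
    """Delay-embed a trajectory of observations into a Hankel tensor.

    g : tensor of shape (n_total, d) stacking g(gamma_0), g(gamma_eps), ...
    m : delay-embedding dimension (number of Hankel rows), as in Eq. (27).
    Returns shape (m, n_total - m + 1, d); column k stacks
    g(gamma_{k eps}), ..., g(gamma_{(k + m - 1) eps}).
    """
    n_cols = g.shape[0] - m + 1
    rows = [g[i : i + n_cols] for i in range(m)]  # row i starts at lag i
    return torch.stack(rows, dim=0)

# Toy usage: 10 snapshots of a 3-dimensional observation, embedding depth 4.
g = torch.randn(10, 3)
H = hankel_of_observations(g, m=4)  # shape: (4, 7, 3)
```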
If the Koopman operator has a discrete spectrum (e.g., has eigenvalues), there exists an invariant subspace \(\mathbb{K}\) of the Koopman operator, which can be spanned by the Krylov subspace
\[\mathbb{K}=\mathrm{span}\left(\mathcal{R}_{n}\right)\simeq\mathrm{span}\left( \mathcal{H}_{(m,n)}\right) \tag{28}\]
as long as \(n\geq\dim\left(\mathbb{K}\right)-1\) (here \(\dim\left(\cdot\right)\) denotes the dimensionality). This property suggests the possibility of approximating the restriction of the actual Koopman operator to \(\mathbb{K}\) by \(\widehat{\mathcal{K}}_{t}^{t+\varepsilon}:\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\rightarrow\mathbb{K}\), a finite Koopman operator restricted to \(\mathbb{K}\) for any \(t\in T\). Mathematically, matrix \(\widehat{\mathcal{K}}\) is required to satisfy the Galerkin projection relation
\[\langle\widehat{\mathcal{K}}_{t}^{t+\varepsilon}\mathbf{h}\left(\gamma_{t} \right),\mathbf{g}\left(\gamma_{i\varepsilon}\right)\rangle=\langle \mathcal{K}_{t}^{t+\varepsilon}\mathbf{h}\left(\gamma_{t}\right),\mathbf{g} \left(\gamma_{i\varepsilon}\right)\rangle,\;\forall i=0,\ldots,m, \tag{29}\]
where \(\mathbf{h}\left(\cdot\right)\in\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\) is an arbitrary function [54; 55]. If the target Koopman operator is bounded and \(\mathcal{H}_{(m,n)}\) spans its invariant subspace, the approximation can be realized by
\[\lim_{m\rightarrow\infty}\int_{\mathcal{G}\left(\mathbb{R}^{d_{ \gamma}}\times T\right)}\|\widehat{\mathcal{K}}_{t}^{t+\varepsilon}\mathbf{h} \left(\gamma_{t}\right)-\mathcal{K}_{t}^{t+\varepsilon}\mathbf{h}\left(\gamma _{t}\right)\|_{F}\mathrm{d}\mu=0,\] \[\forall\mathbf{h}\left(\cdot\right)\in\mathcal{G}\left(\mathbb{R} ^{d_{\gamma}}\times T\right), \tag{30}\]
where \(\mu\) is a measure on \(\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\) and \(\|\cdot\|_{F}\) denotes the Frobenius norm. Once a restricted Koopman operator is derived, we can obtain the following iterative dynamics
\[\mathcal{H}_{m\times n}\left(k+1\right)=\widehat{\mathcal{K}}_{k\varepsilon}^ {(k+1)\varepsilon}\mathcal{H}_{m\times n}\left(k\right),\;\forall k=1,\ldots,n, \tag{31}\]
in which \(\mathcal{H}_{m\times n}\left(k\right)\) is the \(k\)-th column of \(\mathcal{H}_{m\times n}\).
As for the case where the Koopman operator has a continuous spectrum (e.g., has no eigenvalue), there is no finite invariant subspace to support computational approximation. Such an ill-posed situation remains for future exploration.
The restricted Koopman operator \(\widehat{\mathcal{K}}\) can be learned efficiently if it corresponds to an autonomous system, i.e., \(\widehat{\mathcal{K}}_{t}^{t+\varepsilon}=\widehat{\mathcal{K}}^{\varepsilon}\). However, an online optimization will be necessary if it corresponds to a non-autonomous system, i.e., \(\widehat{\mathcal{K}}_{t}^{t+\varepsilon}\) is time-varying. Limited by computing resources or data size, expensive online optimization may not always be available during PDE solving. Therefore, we propose a compromised approach to realize off-line training under the ergodic assumption [56; 51] of the dynamic system of \(\gamma_{t}\), i.e., \(\gamma_{t}\) ultimately visits every possible system state as \(t\rightarrow\infty\). Under this assumption, the proportion of retention time of \(\gamma_{t}\) at a certain system state is equivalent to the probability of this state in the space, making the time-averaging equivalent to the actual expectation in the limit of infinite time. Based on this property, we can define an expectation of the restricted Koopman operator associated with \(\varepsilon\)
\[\overline{\mathcal{K}}_{\varepsilon} =\lim_{t\rightarrow\infty}\frac{1}{t}\int_{[0,t)}\mathbf{g}\left(\gamma_{\tau}\right)^{-1}\mathbf{g}\left(\gamma_{\tau+\varepsilon}\right)\mathrm{d}\tau, \tag{32}\] \[\simeq\operatorname*{argmin}_{P}\sum_{k=1}^{n}\|\mathcal{H}_{m\times n}\left(k+1\right)-P\mathcal{H}_{m\times n}\left(k\right)\|_{F}. \tag{33}\]
For a fixed time difference \(\varepsilon\), the expected Koopman operator \(\overline{\mathcal{K}}_{\varepsilon}:\mathcal{G}\left(\mathbb{R}^{d_{\gamma}} \times T\right)\rightarrow\mathbb{K}\) is a time-average of \(\widehat{\mathcal{K}}_{t}^{t+\varepsilon}\) that can be learned during offline optimization.
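As a minimal computational sketch of Eq. (33), assuming the (possibly Fourier-transformed) Hankel columns have been flattened into vectors, the expected operator admits a DMD-style closed-form least-squares estimate; `fit_koopman` is an illustrative helper, not the module's API.

```python
import torch

def fit_koopman(H: torch.Tensor) -> torch.Tensor:
    """Least-squares estimate of the expected Koopman matrix, cf. Eq. (33).

    H holds successive flattened Hankel columns as its columns, shape (d, n);
    the returned P minimizes ||H[:, 1:] - P @ H[:, :-1]||_F.
    """
    H_past, H_future = H[:, :-1], H[:, 1:]
    return H_future @ torch.linalg.pinv(H_past)  # DMD-style closed form

# Toy check on a trajectory generated by a known linear map.
torch.manual_seed(0)
P_true = 0.5 * torch.randn(5, 5, dtype=torch.float64)
H = torch.empty(5, 12, dtype=torch.float64)
H[:, 0] = torch.randn(5, dtype=torch.float64)
for k in range(11):
    H[:, k + 1] = P_true @ H[:, k]
print(torch.allclose(fit_koopman(H), P_true, atol=1e-6))  # True
```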
### The original Koopman neural operator: Convergence
Given an ideal setting of \(m\rightarrow\infty\), we can ensure the convergence of the eigenvalues and eigenfunctions of \(\widehat{\mathcal{K}}\) to those of \(\mathcal{K}\) under the ergodicity assumption. Similar conclusions can be seen in Ref. [54]. To understand this convergence, we need to indicate several important properties. First, as we have mentioned, there exists an equivalence relation between time-averaging and the real expectation as time approaches infinity (i.e., the Birkhoff ergodic theorem [56; 51])
\[\lim_{m\rightarrow\infty}\frac{1}{m}\sum_{i=0}^{m-1}\mathbf{g}\left(\gamma_{i} \right)=\int_{\mathbb{K}}\mathbf{g}\mathrm{d}\mu. \tag{34}\]
Second, Eq. (34) directly implies that
\[\lim_{m\rightarrow\infty}\frac{1}{m}\langle\mathcal{H}_{m\times n }\left(i\right),\mathcal{H}_{m\times n}\left(j\right)\rangle\] \[= \int_{\mathbb{K}}\mathcal{H}_{m\times n}\left(i\right)\left[ \mathcal{H}_{m\times n}\left(j\right)\right]^{*}\mathrm{d}\mu. \tag{35}\] \[= \langle\mathcal{H}_{m\times n}\left(i\right),\mathcal{H}_{m\times n }\left(j\right)\rangle_{\mathbb{K}}, \tag{36}\]
where \(*\) denotes the complex conjugate and \(\langle\cdot,\cdot\rangle_{\mathbb{K}}\) stands for the inner product of functions in \(\mathbb{K}\). Given the learned
Koopman operator \(\overline{\mathcal{K}}_{\varepsilon}\), Eq. (36) coincides with the definition of the actual Gramian matrix \(\mathcal{V}\) associated with the inner product space \(\mathbb{K}\)
\[\mathcal{V}_{i,j} =\langle\overline{\mathcal{K}}_{\varepsilon}^{i-1}\mathcal{R}_{n}, \overline{\mathcal{K}}_{\varepsilon}^{j-1}\mathcal{R}_{n}\rangle_{\mathbb{K}}, \tag{37}\] \[=\langle\mathcal{H}_{m\times n}\left(i\right),\mathcal{H}_{m \times n}\left(j\right)\rangle_{\mathbb{K}}, \tag{38}\]
where we mark \(\overline{\mathcal{K}}_{\varepsilon}^{i-1}\mathcal{R}_{n}=\left[\overline{ \mathcal{K}}_{\varepsilon}^{i-1}\mathbf{g}\left(\gamma_{0}\right),\ldots, \overline{\mathcal{K}}_{\varepsilon}^{i-1}\mathbf{g}\left(\gamma_{n\varepsilon }\right)\right]\) for convenience. Eq. (38) is derived from the fact that \(\mathcal{H}_{m\times n}\) serves as the sampling of \(\mathcal{R}_{n}\). Meanwhile, the left side of Eq. (35) coincides with the empirical Gramian matrix \(\widehat{\mathcal{V}}\) associated with matrix \(\mathcal{H}_{m\times n}\)
\[\widehat{\mathcal{V}}_{i,j}=\frac{1}{m}\langle\mathcal{H}_{m\times n}\left(i \right),\mathcal{H}_{m\times n}\left(j\right)\rangle. \tag{39}\]
Our formal proof can be developed based on these two properties. Let us consider the first \(r<n\) elements of \(\mathcal{R}_{n}\)
\[\mathcal{R}_{r}=\left[\mathbf{g}\left(\gamma_{0}\right),\overline{\mathcal{K }}_{\varepsilon}\mathbf{g}\left(\gamma_{0}\right),\overline{\mathcal{K}}_{ \varepsilon}^{2}\mathbf{g}\left(\gamma_{0}\right),\ldots,\overline{\mathcal{K }}_{\varepsilon}^{r-1}\mathbf{g}\left(\gamma_{0}\right)\right], \tag{40}\]
which defines a possible basis of \(\mathbb{K}\).
Theoretically, the learned Koopman operator restricted to \(\mathbb{K}\) can be represented by a companion matrix
\[\mathcal{C}=\begin{bmatrix}0&0&\cdots&0&c_{0}\\ 1&0&\cdots&0&c_{1}\\ 0&1&\cdots&0&c_{2}\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\cdots&1&c_{r-1}\end{bmatrix}. \tag{41}\]
The last column of \(\mathcal{C}\) denotes the coordinate of \(\overline{\mathcal{K}}_{\varepsilon}^{r}\mathbf{g}\left(\gamma_{0}\right)\) defined by the basis, which should be calculated as
\[\mathcal{C}\left(r\right)=\mathcal{V}^{-1}\begin{bmatrix}\left\langle \mathbf{g}\left(\gamma_{0}\right),\overline{\mathcal{K}}_{\varepsilon}^{r} \mathbf{g}\left(\gamma_{0}\right)\right\rangle_{\mathbb{K}}\\ \left\langle\overline{\mathcal{K}}_{\varepsilon}\mathbf{g}\left(\gamma_{0} \right),\overline{\mathcal{K}}_{\varepsilon}^{r}\mathbf{g}\left(\gamma_{0} \right)\right\rangle_{\mathbb{K}}\\ \left\langle\overline{\mathcal{K}}_{\varepsilon}^{2}\mathbf{g}\left(\gamma_{0} \right),\overline{\mathcal{K}}_{\varepsilon}^{r}\mathbf{g}\left(\gamma_{0} \right)\right\rangle_{\mathbb{K}}\\ \vdots\\ \left\langle\overline{\mathcal{K}}_{\varepsilon}^{r-1}\mathbf{g}\left(\gamma_{0 }\right),\overline{\mathcal{K}}_{\varepsilon}^{r}\mathbf{g}\left(\gamma_{0} \right)\right\rangle_{\mathbb{K}}\end{bmatrix}. \tag{42}\]
Empirically, the learned Koopman operator restricted to \(\mathrm{span}\left(\mathcal{H}_{\left(m,n\right)}\right)\) can also be represented by a companion matrix, whose last column can be calculated as
\[\widehat{\mathcal{C}}\left(r\right)=\frac{1}{m}\widehat{\mathcal{V}}^{-1}\begin{bmatrix}\left\langle\mathcal{H}_{m\times n}\left(1\right),\overline{\mathcal{K}}_{\varepsilon}^{r}\mathcal{H}_{m\times n}\left(1\right)\right\rangle\\ \left\langle\overline{\mathcal{K}}_{\varepsilon}\mathcal{H}_{m\times n}\left(1\right),\overline{\mathcal{K}}_{\varepsilon}^{r}\mathcal{H}_{m\times n}\left(1\right)\right\rangle\\ \left\langle\overline{\mathcal{K}}_{\varepsilon}^{2}\mathcal{H}_{m\times n}\left(1\right),\overline{\mathcal{K}}_{\varepsilon}^{r}\mathcal{H}_{m\times n}\left(1\right)\right\rangle\\ \vdots\\ \left\langle\overline{\mathcal{K}}_{\varepsilon}^{r-1}\mathcal{H}_{m\times n}\left(1\right),\overline{\mathcal{K}}_{\varepsilon}^{r}\mathcal{H}_{m\times n}\left(1\right)\right\rangle\end{bmatrix}. \tag{43}\]
It is trivial to prove
\[\lim_{m\rightarrow\infty}\widehat{\mathcal{V}}^{-1}=\left(\lim_{m\to \infty}\widehat{\mathcal{V}}\right)^{-1}=\mathcal{V}^{-1} \tag{44}\]
applying Eq. (36) and Eqs. (38-39). Similarly, we can know
\[\lim_{m\rightarrow\infty}\frac{1}{m}\langle\overline{\mathcal{K}}_{\varepsilon}^{k}\mathcal{H}_{m\times n}\left(1\right),\overline{\mathcal{K}}_{\varepsilon}^{r}\mathcal{H}_{m\times n}\left(1\right)\rangle\] \[= \langle\overline{\mathcal{K}}_{\varepsilon}^{k}\mathbf{g}\left(\gamma_{0}\right),\overline{\mathcal{K}}_{\varepsilon}^{r}\mathbf{g}\left(\gamma_{0}\right)\rangle_{\mathbb{K}},\;\forall k\in\{0,\ldots,r-1\}. \tag{45}\]
Therefore, we can derive
\[\lim_{m\rightarrow\infty}\widehat{\mathcal{C}}_{ij}=\mathcal{C}_{ij},\;\forall i,j\in\{1,\ldots,r\}^{2} \tag{46}\]
based on Eqs. (44-45), implying that
\[\lim_{m\rightarrow\infty}\sum_{M\in\mathcal{PM}_{k}\left(\widehat{\mathcal{C }}\right)}M=\sum_{M\in\mathcal{PM}_{k}\left(\mathcal{C}\right)}M,\;\forall k \in\{1,\ldots,r\}. \tag{47}\]
In Eq. (47), notion \(\mathcal{PM}_{k}\left(\cdot\right)\) denotes the set of all the \(k\)-order principal minors of the corresponding matrix. Now, let us consider the characteristic polynomials of \(\widehat{\mathcal{C}}\) and \(\mathcal{C}\)
\[P_{\widehat{\mathcal{C}}}\left(z\right) =z^{r}+\sum_{i=1}^{r}\left(-1\right)^{i}\left(\sum_{M\in\mathcal{ PM}_{i}\left(\widehat{\mathcal{C}}\right)}M\right)z^{r-i}, \tag{48}\] \[P_{\mathcal{C}}\left(z\right) =z^{r}+\sum_{i=1}^{r}\left(-1\right)^{i}\left(\sum_{M\in\mathcal{ PM}_{i}\left(\mathcal{C}\right)}M\right)z^{r-i}, \tag{49}\]
whose distance at the limit of \(m\rightarrow\infty\) can be measured as
\[\lim_{m\rightarrow\infty}\left\|P_{\widehat{\mathcal{C}}}\left(z \right)-P_{\mathcal{C}}\left(z\right)\right\|\] \[= \lim_{m\rightarrow\infty}\max_{1\leq i\leq r}\Bigg{|}\left(-1\right) ^{i}\left(\sum_{M\in\mathcal{PM}_{i}\left(\widehat{\mathcal{C}}\right)}M-\sum_{M \in\mathcal{PM}_{i}\left(\mathcal{C}\right)}M\right)\Bigg{|}, \tag{50}\] \[= 0. \tag{51}\]
Because the roots of a given polynomial evolve continuously as a function of its coefficients, we know that \(\widehat{\mathcal{C}}\) and \(\mathcal{C}\) share the same set of eigenvalues at the limit of \(m\to\infty\), since their characteristic polynomials converge to the same polynomial. Moreover, the convergence of \(\widehat{\mathcal{C}}\) to \(\mathcal{C}\) and the convergence of the eigenvalues of \(\widehat{\mathcal{C}}\) to those of \(\mathcal{C}\) eventually imply the convergence of the eigenfunctions.
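To connect Eq. (41) with computation, the following sketch builds a companion matrix from given last-column coordinates and reads off its eigenvalues, which approximate Koopman eigenvalues on the Krylov subspace (Ritz values); the helper name and the sample coefficients are hypothetical.

```python
import torch

def companion_matrix(c: torch.Tensor) -> torch.Tensor:
    """Companion matrix of Eq. (41) from last-column coordinates c_0..c_{r-1}."""
    r = c.shape[0]
    C = torch.zeros(r, r, dtype=c.dtype)
    C[1:, : r - 1] = torch.eye(r - 1, dtype=c.dtype)  # ones on the sub-diagonal
    C[:, -1] = c  # coordinates of K^r g(gamma_0) in the Krylov basis
    return C

c = torch.tensor([0.1, -0.4, 0.8])
C = companion_matrix(c)
print(torch.linalg.eigvals(C))  # Ritz approximations of Koopman eigenvalues
```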
### The original Koopman neural operator: Computation
In Ref. [31], we have proposed an architecture to implement the original Koopman neural operator on neural networks. The details of architecture designs are presented below:
* **Part 1: Observation.** An encoder (e.g., a single non-linear layer with \(\tanh\left(\cdot\right)\) activation function in the original Koopman neural operator) serves as the observation function \(\mathbf{g}\left(\cdot\right)\) to transform \(\phi_{t}=\phi\left(D\times\left\{t\right\}\right)\), an arbitrary input of the PDE family (e.g., \(\phi_{t}\) can be directly set as \(\gamma_{t}\)), into \(\mathbf{g}\left(\widehat{\gamma_{t}}\right)\in\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\): \[\mathbf{g}\left(\widehat{\gamma_{t}}\right)=\text{Encoder}\left(\phi_{t}\right),\;\forall t\in T.\] (52) Please see **Fig. 1** for illustrations.
* **Part 2: Fourier transform.** Similar to the Fourier neural operator [19], the original Koopman neural operator also utilizes the Fourier transform during the iterative update of the Green function in Eq. (5). Given \(\mathbf{g}\left(\widehat{\gamma_{t}}\right)\), we derive the Fourier transform \(\mathbf{g}_{\mathcal{F}}\left(\cdot\right)=\mathcal{F}\circ\mathbf{g}\left( \cdot\right)\), where we truncate the Fourier series at \(\omega\), a maximum frequency \[\mathbf{g}_{\mathcal{F}}\left(\xi\right)=\chi_{\left[0,\omega \right]}\left(\xi\right)\int_{D\times\left\{t\right\}}\mathbf{g}\left(\widehat {\gamma}\left(x_{t}\right)\right)\exp\left(-2\pi i\langle x_{t},\xi\rangle \right)\text{d}x_{t}.\] (53) Note that \(\chi\left(\cdot\right)\) denotes the indicator function (i.e., \(\chi_{A}\left(a\right)=1\) if \(a\in A\), otherwise \(\chi_{A}\left(a\right)=0\)). Computationally, the above transform is implemented by fast Fourier transform. For convenience, we mark \[\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma_{t}}\right)=\mathcal{F}\circ \mathbf{g}\left(\widehat{\gamma_{t}}\right)=\left\{\mathbf{g}_{\mathcal{F}} \left(\xi\right)\left|\xi\in\left[0,\infty\right)\right\}\] (54) as the transformed result of \(\widehat{\gamma_{t}}\). Different from Ref. [19], our main motivation for using the truncated Fourier transform is to extract the low-frequency information (i.e., main system components) of the represented equation solution \(\mathbf{g}\left(\widehat{\gamma_{t}}\right)\). Certainly, frequency truncation inevitably causes the loss of high-frequency information (i.e., high-frequency perturbations or edges). In the original Koopman neural operator, **Part 5** is designed to complement the lost information associated with high-frequency. See **Fig. 1** for more details.
* **Part 3: Hankel representation and offline Koopman operator.** Once we have derived \(\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma_{t}}\right)\) for every \(t\in\mathbb{N}^{+}\), a Hankel matrix \(\widehat{\mathcal{H}}_{m\times n}\) of \(\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma_{t}}\right)\) will be generated following \(m\in\mathbb{N}\), a dimension of delay-embedding (note that \(n\in\mathbb{N}\) is the number of all accessible samples) \[\widehat{\mathcal{H}}_{m\times n}=\begin{bmatrix}\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma_{0}}\right)&\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma}_{\varepsilon}\right)&\cdots&\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma}_{n\varepsilon}\right)\\ \vdots&\vdots&\vdots&\vdots\\ \mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma}_{(m-1)\varepsilon}\right)&\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma}_{m\varepsilon}\right)&\cdots&\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma}_{(m+n-1)\varepsilon}\right)\end{bmatrix}.\] (55) We train an \(o\times o\) linear layer to learn a neural network representation of the Koopman operator \(\overline{\mathcal{K}}_{\varepsilon}:\mathcal{G}\left(\mathbb{R}^{d_{\gamma}}\times T\right)\rightarrow\widehat{\mathbb{K}}\) following Eqs. (32-33), where \(\widehat{\mathbb{K}}\) is spanned by \(\widehat{\mathcal{H}}_{m\times n}\). The learned \(\overline{\mathcal{K}}_{\varepsilon}\) can be used to predict the future state of \(\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma}_{(m+n-1)\varepsilon}\right)\) as \[\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right)=\left[\overline{\mathcal{K}}_{\varepsilon}^{r}\widehat{\mathcal{H}}_{m\times n}\left(n\right)\right]^{\mathsf{T}}\left(m\right),\;r\in\mathbb{N}^{+}\] (56) where notion \(\mathsf{T}\) denotes the transpose of a matrix. Please see **Fig. 1** for illustrations.
* **Part 4: Inverse Fourier transform.** After \(\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right)\) is predicted in **Part 3**, it is transformed from the Fourier space to \(\mathcal{G}\left(\mathbb{R}^{d_{\widehat{\gamma}}}\times T\right)\) by an inverse Fourier transform \[\mathbf{g}\left(\widehat{\gamma}\left(x_{t}\right)\right)=\frac{1}{\left(2\pi\right)^{d_{\widehat{\gamma}}}}\int_{-\infty}^{\infty}\mathbf{g}_{\mathcal{F}}\left(\xi\right)\exp\left(2\pi i\langle x_{t},\xi\rangle\right)\text{d}\xi,\] (57) where \(t=\left(m+n+r-1\right)\varepsilon\). For convenience, we mark \[\mathbf{g}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right)=\mathcal{F}^{-1}\circ\mathbf{g}_{\mathcal{F}}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right).\] (58) Please see **Fig. 1** for instances of **Part 4**.
* **Part 5: High-frequency information complement.** In the original Koopman neural operator, we use a convolutional layer to extract the high-frequency components of \(\mathbf{g}\left(\widehat{\gamma_{t}}\right)\) because convolutional layers can amplify high-frequency components according to Fourier analysis [57]. Therefore, we train a convolutional layer \(\mathcal{C}\) on the outputs of **Part 1** to extract their high-frequency information. As a complement to **Parts 2-4**, the convolutional layer realizes a forward prediction of high-frequency information \[\left[\mathbf{g}_{\mathcal{C}}\left(\widehat{\gamma}_{(j+r-1)\varepsilon}\right),\ldots,\mathbf{g}_{\mathcal{C}}\left(\widehat{\gamma}_{(j+m+r-1)\varepsilon}\right)\right]^{\mathsf{T}}=\mathcal{C}\left[\mathbf{g}\left(\widehat{\gamma}_{j\varepsilon}\right),\ldots,\mathbf{g}\left(\widehat{\gamma}_{(j+m)\varepsilon}\right)\right]^{\mathsf{T}},\;\forall j=1,\ldots,n.\] (59) See **Fig. 1** for illustrations.
* **Part 6: Inverse observation.** Once two future states, \(\mathbf{g}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right)\) and \(\mathbf{g}_{\mathcal{C}}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right)\), are predicted by **Parts 2-4** and **Part 5**, they are unified in a linear manner \[\mathbf{g}_{\mathcal{U}}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right)=\mathbf{g}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right)+\mathbf{g}_{\mathcal{C}}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right).\] (60) Given \(\mathbf{g}_{\mathcal{U}}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right)\), a non-linear decoder (e.g., a single non-linear layer with a \(\tanh\left(\cdot\right)\) activation function in the original Koopman neural operator) is trained to approximate the inverse of the observation function \[\mathbf{g}^{-1}\left(\cdot\right)\simeq\mathbf{g}_{\mathcal{U}}^{-1}\left(\cdot\right)\] (61) and derive \[\widehat{\gamma}_{(m+n+r-1)\varepsilon}=\text{Decoder}\left(\mathbf{g}_{\mathcal{U}}\left(\widehat{\gamma}_{(m+n+r-1)\varepsilon}\right)\right)\] (62) as the target state of the equation solution in space \(\mathbb{R}^{d_{\widehat{\gamma}}}\). Please see **Fig. 1** for illustrations.
**Parts 1-6** define the iterative update strategy of Eq. (23). For any \(t^{\prime}>t\in\varepsilon\mathbb{N}\), the iterative dynamics is given as
\[\widehat{\gamma}_{t^{\prime}}= \Big{[}\mathbf{g}^{-1}\Big{(}\underbrace{\mathcal{F}^{-1}\circ \overline{\mathcal{K}}_{\varepsilon}^{t^{\prime}-t}\circ\mathcal{F}\circ \mathbf{g}\left(\widehat{\gamma}_{[t-m\varepsilon,t]}\right)}_{\text{\bf Parts 1-4}}\] \[+\underbrace{\mathcal{C}\circ\mathbf{g}\left(\widehat{\gamma}_{[t -m\varepsilon,t]}\right)}_{\text{\bf Part 1 and part 5}}\Big{)}\Big{]}^{\mathsf{T}}\left(m\right), \tag{63}\]
in which \(\widehat{\gamma}_{[t-m\varepsilon,t]}\) denotes a vector \([\widehat{\gamma}_{t-m\varepsilon},\dots,\widehat{\gamma}_{t}]\). The loss function for optimizing Eq. (63) is defined as
\[\mathcal{L}=\lambda_{p}\|\widehat{\gamma}_{t^{\prime}}-\gamma_{t^{\prime}}\|_{F}+\lambda_{r}\sum_{i=0}^{m}\|\mathbf{g}^{-1}\circ\mathbf{g}\left(\widehat{\gamma}_{t-i\varepsilon}\right)-\gamma_{t-i\varepsilon}\|_{F}, \tag{64}\]

where \(\lambda_{p},\lambda_{r}\in(0,\infty)\) denote the weights of the prediction and reconstruction processes in the loss function.
The one-unit architecture of the original Koopman neural operator is visualized in **Fig. 1**. A multi-unit architecture can be readily constructed by cascading the copy of **Parts 2-5** multiple times.
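To make Eq. (63) concrete, the following is a minimal PyTorch sketch of a one-unit forward pass covering **Parts 1-6**. It is a sketch under simplifying assumptions rather than the exact KoopmanLab implementation: the input is a single state instead of the full delay-embedding window \(\widehat{\gamma}_{[t-m\varepsilon,t]}\), the real and imaginary frequency components share the Koopman weights, and the convolutional kernel size is an arbitrary choice.

```
import torch
import torch.nn as nn

class KNOUnit1d(nn.Module):
    """Illustrative one-unit sketch of Eq. (63) for a scalar 1-d field."""
    def __init__(self, o=32, f=16):
        super().__init__()
        self.f = f                                  # kept low-frequency modes (Part 2)
        self.encoder = nn.Linear(1, o)              # Part 1: observation function g
        self.K = nn.Linear(o, o, bias=False)        # Part 3: o x o Koopman operator
        self.conv = nn.Conv1d(o, o, 3, padding=1)   # Part 5: high-frequency complement
        self.decoder = nn.Sequential(nn.Linear(o, 1), nn.Tanh())  # Part 6: g^{-1}

    def forward(self, gamma, r=1):
        # gamma: (batch, n_grid, 1); assumes n_grid // 2 + 1 >= f
        g = self.encoder(gamma)                             # Part 1
        g_hat = torch.fft.rfft(g, dim=1)[:, :self.f, :]     # Part 2: truncated FFT
        for _ in range(r):                                  # Part 3: apply K r times
            g_hat = self.K(g_hat.real) + 1j * self.K(g_hat.imag)
        g_low = torch.fft.irfft(g_hat, n=g.size(1), dim=1)  # Part 4: inverse FFT
        g_high = self.conv(g.transpose(1, 2)).transpose(1, 2)  # Part 5
        return self.decoder(g_low + g_high)                 # Part 6: unify and decode
```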
## III The Koopman neural operator family
Beyond the original Koopman neural operator (KNO) [31], we generalize it to four kinds of variants to fit in with different application demands.
### The compact KNO sub-family: Definition
The compact KNO sub-family includes two compact variants of KNO realized by multi-layer perceptrons (MLP-KNO) and convolutional neural networks (CNN-KNO). These variants are proposed to accurately solve PDEs with small model sizes. Specifically, they are designed following Eqs. (52-62), where encoder and decoder are defined as
\[\text{Encoder}=\eta\circ\mathcal{E},\;\mathcal{E}\in\{\mathcal{W }_{e},\mathcal{C}_{e}\}, \tag{65}\] \[\text{Decoder}=\eta\circ\mathcal{D},\;\mathcal{D}\in\{\mathcal{W }_{d},\mathcal{C}_{d}\}. \tag{66}\]
In Eqs. (65-66), the mapping \(\eta\) denotes a non-linear activation function (e.g., we use \(\tanh\left(\cdot\right)\) in our research), \(\mathcal{W}_{e}\) and \(\mathcal{W}_{d}\) are two weight matrices of the corresponding sizes, and \(\mathcal{C}_{e}\) and \(\mathcal{C}_{d}\) are two convolutional layers.
Our proposed KoopmanLab module offers user-friendly tools to customize an MLP-KNO. A customized instance is presented below.
```
import koopmanlab as kp

MLP_KNO_1D = kp.model.koopman(backbone="KNO1d", autoencoder="MLP", o=o, f=f, r=r, device=device)

MLP_KNO_2D = kp.model.koopman(backbone="KNO2d", autoencoder="MLP", o=o, f=f, r=r, device=device)

MLP_KNO_1D.compile()
MLP_KNO_2D.compile()

# Parameter definitions:
# o: the dimension of the learned Koopman operator
# f: the number of frequency modes below the frequency truncation threshold
# r: the power of the Koopman operator in Eq. (56)
# device: whether CPU or GPU is used for computation
```
Figure 1: Conceptual illustrations of neural network architectures of the original Koopman neural operator. (a) summarizes the key mathematical transform in each part, where \(r\) is the prediction length. (b) visualizes a prediction instance on the 2-dimensional Navier-Stokes equation.
Similarly, a CNN-KNO can be customized using the following code, where we present 1-dimensional and 2-dimensional versions.
```
import koopmanlab as kp

CNN_KNO_1D = kp.model.koopman(backbone="KNO1d", autoencoder="Conv1d", o=o, f=f, r=r, device=device)

CNN_KNO_2D = kp.model.koopman(backbone="KNO2d", autoencoder="Conv2d", o=o, f=f, r=r, device=device)

CNN_KNO_1D.compile()
CNN_KNO_2D.compile()

# Parameter definitions:
# o: the dimension of the learned Koopman operator
# f: the number of frequency modes below the frequency truncation threshold
# r: the power of the Koopman operator in Eq. (56)
# device: whether CPU or GPU is used for computation
```
### The compact KNO sub-family: Validation
To validate the proposed compact KNO sub-family in PDE solving tasks, we design mesh-independent and long-term prediction tasks on representative PDEs (e.g., the 2-dimensional Navier-Stokes equation [58] and the 1-dimensional Bateman-Burgers equation [59]). The numerical data sets of both equations are provided by Ref. [19].
Specifically, the incompressible 2-dimensional Navier-Stokes equation has a vorticity form
\[\partial_{t}\gamma\left(x_{t}\right)+\chi\left(x_{t}\right)\nabla\gamma\left(x_{t}\right)=\nu\Delta\gamma\left(x_{t}\right)+\psi\left(x_{t}\right),\;x_{t}\in\left(0,1\right)^{2}\times\left(0,\infty\right), \tag{67}\]
\[\nabla\chi\left(x_{t}\right)=0,\;x_{t}\in\left(0,1\right)^{2}\times\left(0,\infty\right), \tag{68}\]
\[\gamma\left(x_{0}\right)=\gamma_{I},\;x_{0}\in\left(0,1\right)^{2}\times\left\{0\right\}, \tag{69}\]
in which \(\gamma\left(\cdot\right)\) denotes the vorticity, \(\chi\left(\cdot\right)\) measures the velocity, and \(\psi\left(\cdot\right)\) is a time-independent forcing term. The viscosity coefficient is \(\nu\in\left\{10^{-3},10^{-4}\right\}\) in our research. Given the data with the highest mesh resolution, one can further generate data with lower resolutions by direct down-sampling [19]. The data with the highest mesh resolution has \(2^{13}\) grids [19]. Our KoopmanLab module offers a function to load the data of the incompressible 2-dimensional Navier-Stokes equation.
```
import koopmanlab as kp

train_loader, test_loader = kp.data.navier_stokes(path, batch_size=10, T_in=10, T_out=40, type="1e-3", sub=1)

# Parameter definitions:
# path: the file path of the downloaded data set
# T_in: the duration length of the input data
# T_out: the duration length required to predict
# type: the viscosity coefficient
# sub: the down-sampling scaling factor. For instance, a scaling factor sub=2 acting on
#      2-dimensional data with spatial resolution 64*64 creates a down-sampled space of
#      32*32; the same factor acting on 1-dimensional data with spatial resolution 1*64
#      implies a down-sampled space of 1*32
```
The 1-dimensional Bateman-Burgers equation is defined as
\[\partial_{t}\gamma\left(x_{t}\right)+\partial_{x}\left(\frac{\gamma^{2}\left(x_{t}\right)}{2}\right)=\nu\partial_{xx}\gamma\left(x_{t}\right),\;x_{t}\in\left(0,1\right)\times\left(0,1\right], \tag{70}\]
\[\gamma\left(x_{0}\right)=\gamma_{I},\;x_{0}\in\left(0,1\right)\times\left\{0\right\}, \tag{71}\]
in which \(\gamma_{I}\) stands for a periodic initial condition \(\gamma_{I}\in L^{2}_{\text{periodic}}\left[\left(0,1\right);\mathbb{R}\right]\) and parameter \(\nu\in\left(0,\infty\right)\) is the viscosity coefficient. We set \(\nu=100\) in our research. The data with the highest mesh resolution has \(2^{16}\) grids [19]. To load this data set, one can use the following function.
```
import koopmanlab as kp

train_loader, test_loader = kp.data.burgers(path, batch_size=64, sub=32)

# Parameter definitions:
# path: the file path of the downloaded data set
# sub: the down-sampling scaling factor (defined as in the previous listing)
```
In **Fig. 2(a)**, we validate the mesh-independent property of the proposed compact KNO sub-family adopting the same setting used in our earlier work [31]. The mesh-independent property, as suggested by Refs. [16, 17, 18, 19, 20, 5, 21], arises from the fact that the neural operator is expected to learn the solution operator of an entire PDE family rather than be limited to a concrete parameterized instance. Specifically, we conduct the experiments on the data of 1-dimensional Bateman-Burgers equation associated with different mesh granularity conditions (i.e., spatial resolution of meshes). Different versions of the compact KNO sub-family are defined by changing hyperparameters (e.g., operator size \(o\), frequency mode number
\(f\), and the power of the Koopman operator \(r=\frac{t^{\prime}-t}{\varepsilon}\) in Eq. (56)). These models are trained on 1000 randomly selected samples with the lowest spatial resolution and conduct 1-second forward predictions on 200 samples associated with different resolutions. The batch size is set as 64, the learning rate is initialized as 0.001 and halved every 100 epochs, and the weights of prediction and reconstruction in Eq. (64) are set as \((\lambda_{p},\lambda_{r})=(5,0.5)\).
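Under this protocol, a training step implementing the loss of Eq. (64) might look like the following sketch; the objects `model` (a compact KNO exposing its encoder and decoder), `train_loader`, and the optimizer choice (Adam) are assumptions for illustration.

```
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.5)  # halve lr every 100 epochs
lam_p, lam_r = 5.0, 0.5   # (lambda_p, lambda_r) of Eq. (64)

for epoch in range(300):
    for x, y in train_loader:                    # x: input state(s), y: 1-second-ahead target
        pred = model(x)                          # iterative update of Eq. (63)
        recon = model.decoder(model.encoder(x))  # reconstruction term g^{-1} o g
        loss = lam_p * (pred - y).norm() + lam_r * (recon - x).norm()  # Frobenius norms
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
```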
Figure 2: Experimental validation of the compact KNO sub-family. (a) Results of the mesh-independent experiment on the Bateman–Burgers equation. (b) Results of the long-term prediction experiment on the 2-dimensional Navier-Stokes equation with a viscosity coefficient \(\nu=10^{-3}\). (c) Results of the same long-term prediction experiment on the 2-dimensional Navier-Stokes equation with \(\nu=10^{-4}\). (d) The prediction results and errors (RMSE) on the 2-dimensional Navier-Stokes equation with \(\nu=10^{-3}\). (e) The prediction results and errors (RMSE) on the 2-dimensional Navier-Stokes equation with \(\nu=10^{-4}\).
As shown in **Fig. 2(a)**, the prediction errors of all versions of the compact KNO sub-family remain constant across different spatial resolutions, confirming the mesh-independence of the compact KNO sub-family. Mesh-independence is important for PDE solving because it allows one to train a neural-network-based PDE solver on data with low spatial resolution and directly apply the solver to data with high spatial resolution, which breaks the trade-off between accuracy and efficiency in PDE solving. In our earlier work [31], one can further see a detailed comparison between the original KNO and FNO [19] in the mesh-independent prediction task, where the original KNO outperforms FNO with a much smaller model size (e.g., a size of \(5\times 10^{3}\) for KNO and a size of \(2\times 10^{7}\) for FNO). Other neural operator models, such as the graph neural operator (GNO) [5] and the multipole graph neural operator (MGNO) [60], are not considered because they have been demonstrated to be less accurate than FNO, as reported by Ref. [19].
In **Fig. 2(b-e)**, we validate the compact KNO sub-family in a long-term prediction task designed on the 2-dimensional Navier-Stokes equation data sets with viscosity coefficients \(\nu=10^{-3}\) (**Fig. 2(b)**) and \(\nu=10^{-4}\) (**Fig. 2(c)**). A down-sampling scaling factor of 2 is used to generate the data sets with \(2^{12}\) grids. For comparison, a one-unit FNO is defined following the default setting introduced in Ref. [19]. A 40-time-interval prediction task is conducted on the data set with \(\nu=10^{-3}\), where models are trained on 1000 samples of \(\gamma\left(\left(0,1\right)^{2}\times\left[0,10\right)\right)\) and tested on 200 samples of \(\gamma\left(\left(0,1\right)^{2}\times\left(10,50\right]\right)\). Similarly, a more challenging 10-time-interval prediction task is conducted on the data set with \(\nu=10^{-4}\), in which models are trained on 8000 samples of \(\gamma\left(\left(0,1\right)^{2}\times\left[0,10\right)\right)\) and tested on 200 samples of \(\gamma\left(\left(0,1\right)^{2}\times\left(10,20\right]\right)\). **Fig. 2(b-c)** report the prediction performance of all models as a function of increasing prediction duration. **Fig. 2(d-e)** visualize predicted instances and errors in the cases with \(\nu=10^{-3}\) (**Fig. 2(d)**) and \(\nu=10^{-4}\) (**Fig. 2(e)**). All experimental results suggest the strong potential of the compact KNO sub-family in characterizing the long-term evolution of PDE solutions. Combining these results with the model sizes measured in **Table 1**, we suggest that the compact KNO sub-family realizes a better balance between accuracy and efficiency, because a KNO variant with a smaller model size can still outperform FNO significantly.
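The long-term predictions in **Fig. 2(b-c)** are produced autoregressively. The following is a minimal sketch of such a rollout; the windowed model interface and tensor layout are assumptions for illustration.

```
import torch

@torch.no_grad()
def rollout(model, window, steps):
    """Autoregressive long-term prediction: each predicted state is fed back.
    Assumed interface: `model` maps a window (batch, H, W, T_in) to the next
    state (batch, H, W, 1)."""
    preds = []
    for _ in range(steps):
        nxt = model(window)
        preds.append(nxt)
        window = torch.cat([window[..., 1:], nxt], dim=-1)  # slide the time window
    return torch.cat(preds, dim=-1)   # (batch, H, W, steps)
```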
### The ViT-KNO sub-family: Definition
Different from the compact KNO sub-family, the ViT-KNO sub-family is proposed for dealing with more intricate situations (here ViT stands for Vision Transformer [61]). Numerous applications of PDE solving (e.g., global climate forecasting) require the solver to be able to capture the underlying patterns of ultra-large data sets that may be related to certain unknown PDEs. Meanwhile, there may exist multiple variables of interest that are governed by a group of coupled PDEs. To fit in with these situations, we follow the main idea of the compact KNO sub-family to develop a kind of transformer-based PDE solver. The mechanism underlying the proposed ViT-KNO sub-family is not completely the same as Eqs. (52-62) because some mathematical details are modified to improve model applicability on noisy real data sets. We suggest the benefits of our modifications based on an experiment on ERA5, one of the largest data sets of global atmospheric, land, and oceanic climate fields [32]. Nevertheless, more in-depth mathematical analyses of these modifications remain for future studies.
Let us consider a case where there exist \(h\) coupled variables, \(\{\gamma^{1},\ldots,\gamma^{h}\}\), defined on domain \(D\times T\). The dynamics of these variables are governed by a group of PDEs with unknown expressions. The objective is to learn the equation solutions of these latent PDEs such that the dynamics of \(\{\gamma^{1},\ldots,\gamma^{h}\}\) can be accurately characterized.
The architecture of the ViT-KNO sub-family consists of seven parts.
\begin{table}
\begin{tabular}{l|l|l} \hline \hline Models & Settings & Parameter number \\ \hline FNO & default settings, one-unit [19] & 233897 \\ MLP-KNO & \((o,f,r)=(32,10,12)\) & 206538 \\ MLP-KNO & \((o,f,r)=(32,16,8)\) & 526026 \\ MLP-KNO & \((o,f,r)=(48,10,12)\) & 464170 \\ CNN-KNO & \((o,f,r)=(32,10,12)\) & 206538 \\ CNN-KNO & \((o,f,r)=(32,16,8)\) & 526026 \\ CNN-KNO & \((o,f,r)=(48,10,12)\) & 464170 \\ \hline \hline \end{tabular}
\end{table}
Table 1: The parameter numbers of all implemented models in **Figs. 2(b-c)** counted by the tool provided by Ref. [23; 24].
Below, we present a detailed computational implementation of each part.
* **Part 1: Observation.** Similar to the encoder design in the compact KNO sub-family, an encoder component is implemented in ViT-KNO to serve as the observation function \(\mathbf{g}\left(\cdot\right)\) and transform \(\phi_{t}^{s}=\phi^{s}\left(D\times\left\{t\right\}\right)\) into \(\mathbf{g}\left(\widehat{\gamma}_{t}^{s}\right)\) for each \(s\in\left\{1,\ldots,h\right\}\). Specifically, the encoder is realized by the token embedding layer in the Vision Transformer (ViT) [61]. Given a joint input \(\left[\phi_{t}^{1},\ldots,\phi_{t}^{h}\right]\in\mathbb{R}^{d_{\phi}\times h}\), we first transform it into a 3-dimensional token tensor by a convolutional layer \(\mathcal{C}_{e}\) \[\mathcal{C}_{e}\left(\left[\phi_{t}^{1},\ldots,\phi_{t}^{h}\right]\right)=\mathbf{g}\left(\widehat{\Gamma}_{t}\right)\in\mathbb{R}^{u\times v\times l},\] (72) where domain \(D\) is reorganized into \(u\times v\) patches (i.e., tokens). Each patch is a square, non-overlapping macro-mesh. If domain \(D\) has already been discretized into multiple meshes, then the size of a patch equals the number of meshes it covers. Parameter \(l\) denotes a customized embedding dimension, which is not necessarily the same as \(h\). The derived tensor \(\mathbf{g}\left(\widehat{\Gamma}_{t}\right)\) denotes the joint representation of \(\left[\mathbf{g}\left(\widehat{\gamma}_{t}^{1}\right),\ldots,\mathbf{g}\left(\widehat{\gamma}_{t}^{l}\right)\right]\). Please see **Fig. 3** for illustrations.
* **Part 2: Fourier transform.** Similar to the adaptive Fourier neural operator [23; 24; 62], a truncated Fourier transform is applied on the first two dimensions of \(\mathbf{g}\left(\widehat{\Gamma}_{t}\right)\) to derive the Fourier series of each embedded variable \(s\in\left\{1,\ldots,l\right\}\) \[\mathbf{g}_{\mathcal{F}}^{s}\left(\xi\right)=\chi_{\left[0,\omega\right]}\left(\xi\right)\int_{\left[u\right]\times\left[v\right]\times\left\{t\right\}}\mathbf{g}\left(\widehat{\gamma}^{s}\left(x_{t}\right)\right)\exp\left(-2\pi i\langle x_{t},\xi\rangle\right)\mathsf{d}x_{t},\] (73) where \(\left[u\right]=\left\{1,\ldots,u\right\}\). For convenience, we mark \[\mathbf{g}_{\mathcal{F}}^{s}\left(\widehat{\gamma}_{t}\right)=\mathcal{F}\circ\mathbf{g}\left(\widehat{\gamma}_{t}^{s}\right)=\left\{\mathbf{g}_{\mathcal{F}}^{s}\left(\xi\right)\,\middle|\,\xi\in\left[0,\infty\right)\right\},\] (74) \[\mathbf{g}_{\mathcal{F}}\left(\widehat{\Gamma}_{t}\right)=\left[\mathbf{g}_{\mathcal{F}}^{1}\left(\widehat{\gamma}_{t}\right),\ldots,\mathbf{g}_{\mathcal{F}}^{l}\left(\widehat{\gamma}_{t}\right)\right],\] (75) in which \(s\in\left\{1,\ldots,l\right\}\). Similar to the compact KNO sub-family, frequency truncation leads to the loss of high-frequency information. In the ViT-KNO sub-family, **Part 5** is designed for complementing high-frequency information. See **Fig. 3** for details.
* **Part 3: Koopman-operator-associated component.** After deriving \(\mathbf{g}_{\mathcal{F}}\left(\widehat{\Gamma}_{t}\right)\) for every \(t\in\varepsilon\mathbb{N}^{+}\), a Koopman-operator-associated component is designed to function on the third dimension of every token in \(\mathbf{g}_{\mathcal{F}}\left(\widehat{\Gamma}_{t}\right)\) and realize the iterative dynamics \[\mathbf{g}_{\mathcal{F}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)=\overline{\mathcal{K}}_{\varepsilon}\mathcal{S}\mathbf{g}_{\mathcal{F}}\left(\widehat{\Gamma}_{t}\right)=\left[\mathbf{g}_{\mathcal{F}}^{1}\left(\widehat{\gamma}_{t+\varepsilon}\right),\ldots,\mathbf{g}_{\mathcal{F}}^{l}\left(\widehat{\gamma}_{t+\varepsilon}\right)\right],\] (76) in which the Koopman operator \(\overline{\mathcal{K}}_{\varepsilon}\) is learned by a linear transform and layer \(\mathcal{S}\) is constructed by a non-linear activation function \(\eta\) acting on a linear layer \(\mathcal{W}\) \[\mathcal{S}=\eta\circ\mathcal{W}.\] (77) Although \(\mathcal{S}\) is not a part of the original Koopman neural operator [31], including it can efficiently enhance the capacity of this component to characterize intricate large-scale data. In KoopmanLab, the leaky rectified linear unit (Leaky ReLU) [63] is suggested as a default choice of \(\mathcal{S}\), which can also reduce to the ReLU function as a special case. Please see **Fig. 3** for illustrations of **Part 3**.
* **Part 4: Inverse Fourier transform.** Once \(\mathbf{g}_{\mathcal{F}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)\) is derived in **Part 3**, the inverse Fourier transform is applied on the first two dimensions of \(\mathbf{g}_{\mathcal{F}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)\) to transform \(\mathbf{g}_{\mathcal{F}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)\) back to the observation space \[\mathbf{g}\left(\widehat{\gamma}^{s}\left(x_{t+\varepsilon}\right)\right)\] \[= \frac{1}{\left(2\pi\right)^{d_{\widehat{\gamma}}}}\int_{-\infty}^ {\infty}\mathbf{g}_{\mathcal{F}}^{s}\left(\xi\right)\exp\left(2\pi i\langle x _{t+\varepsilon},\xi\rangle\right)\mathsf{d}\xi,\] (78) based on which, we can define \[\mathbf{g}\left(\widehat{\Gamma}_{t+\varepsilon}\right) =\mathcal{F}^{-1}\circ\mathbf{g}_{\mathcal{F}}\left(\widehat{ \Gamma}_{t+\varepsilon}\right)\] \[=\left[\mathbf{g}\left(\widehat{\gamma}^{1}\left(x_{t+ \varepsilon}\right)\right),\ldots,\mathbf{g}\left(\widehat{\gamma}^{l}\left(x _{t+\varepsilon}\right)\right)\right].\] (79) Please see instances in **Fig. 3**.
* **Part 5: High-frequency information complement.** As in the compact KNO sub-family, there is a component for complementing high-frequency information in ViT-KNO. This component is also realized by a convolutional layer \(\mathcal{C}\) that acts on the outputs of **Part 1** to learn the dynamics of high-frequency information \[\mathbf{g}_{\mathcal{C}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)=\mathcal{C}\mathbf{g}\left(\widehat{\Gamma}_{t}\right).\] (80) See **Fig. 3** for illustrations.
* **Part 6: Variable coupling.** Given two predicted states, \(\mathbf{g}\left(\widehat{\Gamma}_{t+\varepsilon}\right)\) and \(\mathbf{g}_{\mathcal{C}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)\), by **Parts 2-4** and **Part 5**, we combine them in a linear form \[\mathbf{g}_{\mathcal{U}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)=\mathbf{g}\left(\widehat{\Gamma}_{t+\varepsilon}\right)+\mathbf{g}_{\mathcal{C}}\left(\widehat{\Gamma}_{t+\varepsilon}\right).\] (81) Because ViT-KNO is designed to learn multivariate systems governed by unknown coupled PDEs, we need to characterize the coupling relation among variables.
Because we lack _a priori_ knowledge about these underlying PDEs, we suggest capturing the coupling relation by optimizing a non-linear layer \(\mathcal{M}\) \[\mathbf{g}_{\mathcal{M}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)=\mathcal{M}\mathbf{g}_{\mathcal{U}}\left(\widehat{\Gamma}_{t+\varepsilon}\right).\] (82) Following the idea of the adaptive Fourier neural operator [23, 24, 62], we use the Gaussian Error Linear Unit (GELU) as the activation function in this non-linear layer. Please see **Fig. 3** for illustrations.
* **Part 7: Inverse observation.** Given \(\mathbf{g}_{\mathcal{M}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)\), a decoder is implemented to function as the inverse of the observation function \[\left[\widehat{\gamma}_{t+\varepsilon}^{1},\ldots,\widehat{\gamma}_{t+\varepsilon}^{h}\right]\simeq\mathbf{g}^{-1}\left(\mathbf{g}_{\mathcal{M}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)\right)=\text{Decoder}\left(\mathbf{g}_{\mathcal{M}}\left(\widehat{\Gamma}_{t+\varepsilon}\right)\right).\] (83) Similar to the compact KNO sub-family, there are two kinds of decoders included in our proposed KoopmanLab module \[\text{Decoder}\in\{\mathcal{W}_{d},\mathcal{C}_{d}\},\] (84) in which \(\mathcal{W}_{d}\) and \(\mathcal{C}_{d}\) denote linear and convolutional layers, respectively. These two kinds of decoder designs distinguish between two variants of the ViT-KNO sub-family. See **Fig. 3** for illustrations.
**Parts 1-7** define the iterative update strategy of Eq. (23) in a multi-variate case. For any \(t\in T\), the iterative dynamics is given as
\[\widehat{\Gamma}_{t+\varepsilon}\] \[= \mathbf{g}^{-1}\circ\mathcal{M}\circ\left(\mathcal{F}^{-1}\circ \overline{\mathcal{K}}_{\varepsilon}\mathcal{S}\circ\mathcal{F}\circ\mathbf{g }\left(\widehat{\Gamma}_{t}\right)+\mathcal{C}\circ\mathbf{g}\left(\widehat{ \Gamma}_{t}\right)\right). \tag{85}\]
Multi-step prediction can be realized in an iterative manner. The loss function for optimizing Eq. (85) is
\[\mathcal{L}=\lambda_{p}\sum_{s=1}^{h}\|\widehat{\gamma}_{t+ \varepsilon}^{s}-\gamma_{t+\varepsilon}^{s}\|_{F}+\lambda_{r}\sum_{s=1}^{h}\| \mathbf{g}^{-1}\circ\mathbf{g}\left(\widehat{\gamma}_{t}^{s}\right)-\gamma_{t }^{s}\|_{F}, \tag{86}\]
where \(\lambda_{p},\lambda_{r}\in(0,\infty)\) are the weights of prediction and reconstruction.
Several computational tricks can be considered in the application. First, a LASSO regularization [64] can be included to improve the robustness and sparsity of the Koopman operator \(\overline{\mathcal{K}}_{\varepsilon}\) in Eq. (76). This trick has been demonstrated as effective in the adaptive Fourier neural operator [23, 24, 62] and is applicable to the ViT-KNO sub-family as well. Second, the transformer architecture supports a parallel design of the ViT-KNO sub-family. Specifically, the third dimension of the output of **Part 1** can be subdivided into multiple parts (e.g., \(\mathbf{g}\left(\widehat{\Gamma}_{t}\right)\in\mathbb{R}^{u\times v\times l}\) is subdivided into \(k\) parts such that each part is an element in \(\mathbb{R}^{u\times v\times\frac{l}{k}}\)). Then, **Parts 2-6** are copied \(k\times j\) times, where each group of \(j\) copies is organized into a sequence. Each sequence of \(j\) copies is referred to as a head in the transformer, processes a corresponding \(\frac{1}{k}\) part of \(\mathbf{g}\left(\widehat{\Gamma}_{t}\right)\in\mathbb{R}^{u\times v\times l}\), and shares parameters during optimization (see **Fig. 3**). Computationally, parameters \(k\) and \(j\) are referred to as the number and the depth of heads. The processed outputs of these \(k\) parallel heads are unified by **Part 7** to derive the final prediction result. In our proposed KoopmanLab, these two tricks are included to improve computational efficiency.
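The parallel (multi-head) trick can be sketched as follows, with `head_block` standing in for one copy of **Parts 2-6**; the reshape-based split and the parameter-sharing choices shown here are illustrative assumptions.

```
import torch

def run_parallel_heads(g, head_block, k, depth):
    """Split the token tensor g of shape (u, v, l) into k heads of width l // k
    (l must be divisible by k), apply `depth` iterations of `head_block` to
    each head, then reunify the outputs for Part 7."""
    u, v, l = g.shape
    heads = g.reshape(u, v, k, l // k)
    outs = []
    for i in range(k):
        h = heads[..., i, :]
        for _ in range(depth):       # a head is a sequence of depth copies of Parts 2-6
            h = head_block(h)
        outs.append(h)
    return torch.cat(outs, dim=-1)   # (u, v, l), passed to Part 7
```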
Our KoopmanLab module supports customizing ViT-KNO frameworks. Below, we present an instance of ViT-KNO with a multi-layer perceptron as the decoder
```
import koopmanlab as kp

ViT_KNO = kp.model.koopman_vit(decoder="MLP", resolution=(1440, 720), patch_size=(2, 2), in_chans=20, out_chans=20, head_num=20, embed_dim=768, depth=16, parallel=True, high_freq=True, device=device)

ViT_KNO.compile()

# Parameter definitions:
# resolution: the spatial resolution of the input data
# patch_size: the size of each patch (i.e., token)
# in_chans: the number of target variables in the data set
# out_chans: the number of variables predicted by ViT-KNO, usually the same as in_chans
# head_num: the number of heads
# embed_dim: the embedding dimension denoted by l in Eq. (72)
# depth: the depth of each head
# parallel: whether the parallel design is applied
# high_freq: whether the high-frequency information complement is applied
# device: whether CPU or GPU is used for computation
```
Figure 3: Conceptual illustrations of neural network architectures of the ViT-KNO sub-family. (a) summarizes the computational design of each part. (b) illustrates an instance of the parallel design (i.e., multi-head design) of the ViT-KNO sub-family, where the depth of each head is 1.
Similarly, a ViT-KNO whose decoder is a convolutional layer can be defined as the following
```
import koopmanlab as kp

ViT_KNO = kp.model.koopman_vit(decoder="Conv2d", resolution=(1440, 720), patch_size=(2, 2), in_chans=20, out_chans=20, head_num=20, embed_dim=768, depth=16, parallel=True, high_freq=True, device=device)

ViT_KNO.compile()

# Parameter definitions: identical to the previous listing
```
Please note that some detailed model parameters are not covered by the above code because they are highly coupled during computation or less important in our theoretical derivations. Users are advised to adjust them directly in the source code of ViT-KNO.
### The ViT-KNO sub-family: Validation
To validate the proposed ViT-KNO sub-family, we implement a large-scale experiment on ERA5, one of the largest high-resolution data sets of global-scale multivariate climate fields [32]. This data set has been extensively applied in weather forecasting tasks (e.g., see FourCastNet [23; 24]), ensuring the reproducibility and comparability of our results.
Twenty important climate variables are considered in our research, including mean sea level pressure (MSLP), relative humidity at 500 hPa (R500), relative humidity at 850 hPa (R850), surface pressure (SP), 2m temperature (T2M), temperature at 500 hPa (T500), temperature at 850 hPa (T850), total column water vapour (TCWV), the 10m u-component of wind (U10), the u-component of wind at 500 hPa (U500), the u-component of wind at 850 hPa (U850), the u-component of wind at 1000 hPa (U1000), the 10m v-component of wind (V10), the v-component of wind at 500 hPa (V500), the v-component of wind at 850 hPa (V850), the v-component of wind at 1000 hPa (V1000), the geopotential at 50 hPa (Z50), the geopotential at 500 hPa (Z500), the geopotential at 850 hPa (Z850), and the geopotential at 1000 hPa (Z1000).
We test a ViT-KNO whose decoder is a multi-layer perceptron in a long-term prediction task. Given samples of initial conditions, the ViT-KNO is required to predict the future states of all 20 climate variables after \(t\in\{6,12,18,\ldots,192\}\) hours. The training data set includes the samples recorded from 1979 to 2015. The validation data set includes the samples recorded during 2016 and 2017. The test data set includes the samples recorded in 2018. The spatial resolution of all samples is set as \(1440\times 720\). All data is pre-processed in a standard manner following Refs. [23; 24; 32], where a \(Z\)-transform is applied to normalize all variables. The training of our ViT-KNO is implemented in a multi-GPU environment with \(128\times 16\) GB of memory in total (128 GPUs with 16 GB each). The actual memory cost of training is 1250.56 GB. The testing of the trained ViT-KNO is implemented in a single 16-GB GPU environment.
Key model settings are summarized below. The batch size is set as 128, the learning rate is \(5\times 10^{-4}\) and updated by a cosine annealing approach, the patch size is set as \(8\times 8\), the number of heads is 8, the depth of heads is 12, the embedding dimension \(l\) in Eq. (72) is 768, the relative weights of prediction and reconstruction in the loss function are \(\lambda_{p}=0.9\) and \(\lambda_{r}=0.1\), and the number of kept low-frequency modes after the fast Fourier transform is 32. The defined model has 74691840 parameters in total. All 20 climate variables are learned and predicted jointly rather than separately. Please note that land-form information is not provided to the model; the model is required to learn the climate variables without additional information. The maximum number of training epochs is set as 300 to explore when the model converges, which costs about 92.5 hours in our environment. Convergence is observed to emerge after \(\simeq 150\) epochs. Therefore, users can consider a 150-epoch training in application, which costs about 2 days under the same hardware conditions. No additional tricks are applied during training.
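For reference, these optimization settings map onto standard PyTorch utilities as sketched below; the optimizer choice (Adam) and the object `vit_kno` are our assumptions, since the text specifies only the learning rate and the cosine annealing schedule.

```
import torch

optimizer = torch.optim.Adam(vit_kno.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)
lam_p, lam_r = 0.9, 0.1   # prediction / reconstruction weights of Eq. (86)

for epoch in range(300):
    # ... one pass over the ERA5 training set with the loss of Eq. (86) ...
    scheduler.step()
```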
In **Fig. 4**, we visualize several instances of the predicted climate fields during testing, accompanied by corresponding true values. High consistency can be seen between these ground truths and their predicted counterparts derived by ViT-KNO. Quantitatively, the prediction accuracy of each climate variable during testing is measured by the anomaly correlation coefficient (ACC) in **Fig. 5**. In the same prediction task reported by Refs. [23; 24], the trained ViT-KNO significantly outperforms the baseline state-of-the-art deep learning models for weather forecasting proposed by Ref. [65].
Compared with the FourCastNet trained with multiple numerical tricks (e.g., multi-stage training with large memory consumption) [23; 24], ViT-KNO achieves a similar accuracy during testing. Limited by computing resources, we are unable to precisely compare ViT-KNO and FourCastNet under the same hardware and training conditions yet (the FourCastNet is reported to be trained on 3808 NVIDIA A100 GPUs with numerous computational optimizations [23]).
Figure 4: Visualization of three instances of the experiment results on the ERA5 data set. (a-c) respectively show the time-dependent predicted results of the global wind speed, U10, and V10 variables at selected moments, accompanied by corresponding ground truths. Please note that wind speed is not originally included in the 20 climate variables selected from ERA5. It is calculated as wind speed \(=\sqrt{\text{U}10^{2}+\text{V}10^{2}}\).
More detailed comparisons may be considered in future studies. We suggest that ViT-KNO has the potential to become a competitive alternative to FourCastNet. Moreover, the time cost of a single prediction by ViT-KNO is observed to be \(\simeq 0.0277\) seconds on a single 16-GB GPU. Compared with classic numerical weather forecasting systems (e.g., the ECMWF Integrated Forecasting System), whose prediction inevitably requires a multi-GPU environment (e.g., more than 1000 NVIDIA Selene nodes, where each node consists of 8 NVIDIA A100 GPUs) [66; 67; 68], ViT-KNO is orders of magnitude faster in application (e.g., the Integrated Forecasting System L91 18 km model is expected to cost about 9840 node seconds for a prediction on a NVIDIA Selene node [68]). Therefore, our ViT-KNO has the potential to serve as a unit in ensemble weather forecasting frameworks to realize efficient prediction of global weather.
## IV Conclusion
In this paper, we have presented KoopmanLab, an efficient module of the Koopman neural operator family for solving partial differential equations. The models included in this module, such as the compact KNO sub-family and the ViT-KNO sub-family, are provided with mathematical foundations, computational designs, and validations in solving concrete PDEs or predicting intricate dynamic systems governed by unknown coupled PDEs. All models are shown to be competitive with other state-of-the-art approaches in the corresponding tasks. Compared with classic numerical and neural-network-based PDE solvers, the proposed KNO variants achieve significant acceleration, more robust mesh-independence, higher generalization capacity under changed conditions, more flexibility in characterizing latent PDEs with unknown forms, and a better balance between accuracy and efficiency. Therefore, we suggest the potential of KoopmanLab to be applied in diverse downstream tasks related to PDE solving. Users can download this module via
Figure 5: Time-dependent prediction accuracy (measured by anomaly correlation coefficient, ACC) of ViT-KNO on 20 variables of ERA5 data set. The colored area denotes the interval of accuracy whose boundaries are fractiles. The dashed line denotes the average accuracy.
```
pip install koopmanlab
```
or
```
git clone https://github.com/Koopman-Laboratory/KoopmanLab.git
cd KoopmanLab
pip install -e .
```
Several important questions remain for future studies. First, one may consider more specialized computational optimization of models in KoopmanLab (e.g., consider multi-stage training as suggested in Refs. [23, 24] or multi-objective balancing by Pareto theory [69, 70]). Second, one can explore a more detailed comparison between the ViT-KNO sub-family and FourCastNet under the equivalent hardware and training conditions. Third, one can analyze the errors of our models caused by the potential continuous spectrum of the Koopman operator or the absence of ergodic property in real cases.
## Acknowledgements
This project is supported by the Artificial and General Intelligence Research Program of Guo Qiang Research Institute at Tsinghua University (2020GQG1017) as well as the Tsinghua University Initiative Scientific Research Program.
|
2307.11099
|
Solving multiphysics-based inverse problems with learned surrogates and
constraints
|
Solving multiphysics-based inverse problems for geological carbon storage
monitoring can be challenging when multimodal time-lapse data are expensive to
collect and costly to simulate numerically. We overcome these challenges by
combining computationally cheap learned surrogates with learned constraints.
Not only does this combination lead to vastly improved inversions for the
important fluid-flow property, permeability, it also provides a natural
platform for inverting multimodal data including well measurements and
active-source time-lapse seismic data. By adding a learned constraint, we
arrive at a computationally feasible inversion approach that remains accurate.
This is accomplished by including a trained deep neural network, known as a
normalizing flow, which forces the model iterates to remain in-distribution,
thereby safeguarding the accuracy of trained Fourier neural operators that act
as surrogates for the computationally expensive multiphase flow simulations
involving partial differential equation solves. By means of carefully selected
experiments, centered around the problem of geological carbon storage, we
demonstrate the efficacy of the proposed constrained optimization method on two
different data modalities, namely time-lapse well and time-lapse seismic data.
While permeability inversions from both these two modalities have their pluses
and minuses, their joint inversion benefits from either, yielding valuable
superior permeability inversions and CO2 plume predictions near, and far away,
from the monitoring wells.
|
Ziyi Yin, Rafael Orozco, Mathias Louboutin, Felix J. Herrmann
|
2023-07-18T00:55:37Z
|
http://arxiv.org/abs/2307.11099v2
|
# Solving multiphysics-based inverse problems with learned surrogates and constraints
###### Abstract
Solving multiphysics-based inverse problems for geological carbon storage monitoring can be challenging when multimodal time-lapse data are expensive to collect and costly to simulate numerically. We overcome these challenges by combining computationally cheap learned surrogates with learned constraints. Not only does this combination lead to vastly improved inversions for the important fluid-flow property, permeability, it also provides a natural platform for inverting multimodal data including well measurements and active-source time-lapse seismic data. By adding a learned constraint, we arrive at a computationally feasible inversion approach that remains accurate. This is accomplished by including a trained deep neural network, known as a normalizing flow, which forces the model iterates to remain in-distribution, thereby safeguarding the accuracy of trained Fourier neural operators that act as surrogates for the computationally expensive multiphase flow simulations involving partial differential equation solves. By means of carefully selected experiments, centered around the problem of geological carbon storage, we demonstrate the efficacy of the proposed constrained optimization method on two different data modalities, namely time-lapse well and time-lapse seismic data. While permeability inversions from both these two modalities have their pluses and minuses, their
|
2310.03052
|
Memoria: Resolving Fateful Forgetting Problem through Human-Inspired
Memory Architecture
|
Making neural networks remember over the long term has been a longstanding
issue. Although several external memory techniques have been introduced, most
focus on retaining recent information in the short term. Regardless of its
importance, information tends to be fatefully forgotten over time. We present
Memoria, a memory system for artificial neural networks, drawing inspiration
from humans and applying various neuroscientific and psychological theories.
The experimental results prove the effectiveness of Memoria in the diverse
tasks of sorting, language modeling, and classification, surpassing
conventional techniques. Engram analysis reveals that Memoria exhibits the
primacy, recency, and temporal contiguity effects which are characteristics of
human memory.
|
Sangjun Park, JinYeong Bak
|
2023-10-04T09:40:46Z
|
http://arxiv.org/abs/2310.03052v3
|
# Memoria: Hebbian Memory Architecture
###### Abstract
Transformers have demonstrated their success in various domains and tasks. However, Transformers struggle with long input sequences due to their limited capacity. While one solution is to increase input length, endlessly stretching the length is unrealistic. Furthermore, humans selectively remember and use only relevant information from inputs, unlike Transformers which process all raw data from start to end. We introduce Memoria, a general memory network that applies Hebbian theory which is a major theory explaining human memory formulation to enhance long-term dependencies in neural networks. Memoria stores and retrieves information called engram at multiple memory levels of working memory, short-term memory, and long-term memory, using connection weights that change according to Hebb's rule. Through experiments with popular Transformer-based models like BERT and GPT, we present that Memoria significantly improves the ability to consider long-term dependencies in various tasks. Results show that Memoria outperformed existing methodologies in sorting and language modeling, and long text classification.
## 1 Introduction
Humans possess an incredible ability to retain relevant details over extended periods. Humans extract major information from the flood of data, classify this information into long-term and short-term memory based on importance and utility, retrieve helpful information when needed, and gradually forget useless and unemployed information (Nairne and Pandeirada, 2008; Atkinson and Shiffrin, 1968a; Craik and Lockhart, 1972; Atkinson and Shiffrin, 1968b; Waugh and Norman, 1965; Brown, 1958; Underwood and Postman, 1960). This memorization is a fundamental skill for humans that is essential for learning and completing various tasks. Even when reading a book, we can form a condensed understanding of prior occurrences, such as the characters and plot progresses, despite passing many pages or chapters. Memorization is also associated with problem-solving or language skills since it permits individuals to apply previously learned information to solve novel issues.
Hebbian theory (Hebb, 1949) of human memorization is a prominent theory that explains how the brain forms connections between neurons to store and retrieve information. This theory proposes that when two neurons are repeatedly activated at the same time, the connections between them become strengthened. This phenomenon is commonly referred to as the "fire together, wire together" principle. The more frequently the neurons fire together, the stronger the connection becomes, which results in more robust and stable memory formation.
Memorization is critical for neural networks to perform well on a wide range of tasks, such as language modeling and long-document classification. To solve these problems successfully, models must remember long-term dependencies in the data, such as the context of a sentence or the relationships between pronouns in text. Transformer (Vaswani et al., 2017) has found extensive use in diverse domains and tasks (Devlin et al., 2019; Radford et al., 2018; Brown et al., 2020; Lewis et al., 2020). Self-attention mechanism, which is the key component of Transformer, facilitates the fusion of information from all sequence elements into the comprehensive contextual representation of whole sequence.
However, the downside of Transformer model is that, unlike recurrent neural networks (Rumelhart and McClelland, 1987; Hochreiter and Schmidhuber, 1997; Chung et al., 2014), it requires the entire sequential data at the same time. Most publicly available Transformer-based models are pre-trained with a limited context length, and dealing with long length data is generally difficult due to the time and space complexity of \(O(L^{2})\) where \(L\) is the input length (Vaswani et al., 2017). Moreover, it is significantly different from the mechanisms of human memory.
We propose a Hebbian memory architecture, Memoria, which grants memory management capabilities to deep neural network. Memoria is a separate module that can be used with various sequence processing models. It stores the information processed by the neural network as three level memories according to the Multi-Store model (Atkinson and Shiffrin, 1968b): working memory, short-term memory, and long-term memory, and retrieves it as necessary. This process is quite similar to the way of humans. Each piece of information, called an engram, is connected to one another, and the connection weight changes according to Hebb's rule (Hebb, 1949). We evaluated Memoria with the most widely used Transformer-based encoder and decoder models, such as BERT and GPT (Devlin et al., 2019; Brown et al., 2020). As a result, we confirmed that Memoria enhances the ability to consider long-term dependencies in sorting, language modeling, and text classification tasks. The implementation of Memoria and experiment code are available on Github.1
Footnote 1: The code is available at [https://github.com/cosmoquester/memoria](https://github.com/cosmoquester/memoria).
**Contributions**
1. We designed Memoria, an independent memory module that satisfies Hebbian learning rule which is the core theory of human memory construction, by applying various theories related to memorization and forgetting.
2. We developed effective strategies to integrate Memoria with diverse Transformer-based models, including BERT and GPT while taking into account the properties of their architectures.
3. We show that Memoria outperforms other existing methodologies in language modeling, sorting, and text classification for long sequences through extensive experiments.
## 2 Related Work
Memory-augmented neural networks have a rich history in the field of machine learning. Recurrent Neural Networks (RNNs) (Rumelhart and McClelland, 1987; Hochreiter and Schmidhuber, 1997; Chung et al., 2014) were introduced as a neural network architecture capable of processing sequential data with memory. Neural Turing Machines (NTMs) (Graves et al., 2014) have a storage system for vector representations that can be accessed using an attention mechanism. NTMs were further developed into Differentiable Neural Computer (DNC) (Graves et al., 2016) and Sparse DNC (Rae et al., 2016). Transformer model (Vaswani et al., 2017) has gained popularity for its ability to achieve state-of-the-art results on various domains, especially natural language processing. However, Transformer suffers from a limitation in processing long sequences due to its quadratic time and space complexity (Vaswani et al., 2017).
To address this limitation, two major approaches have been proposed. Firstly, the sparse attention approach uses various techniques such as local attention, reversible layers, and hashing to reduce the computational cost of attention while maintaining the ability to model long-range dependencies. The models like Longformer (Beltagy et al., 2020), BigBird (Zaheer et al., 2020), and Reformer (Kitaev et al., 2020) adopted the approach. However, this approach still has the limitation of processing only a restricted size of consecutive inputs, even though it has capability to handle longer lengths with the same amount of resources. The second approach is segmentation and recurrent processing, which includes models such as Transformer-XL (Dai et al., 2019), Compressive Transformer (Rae et al., 2020), \(\infty\)-Transformer (Martins et al., 2021), Memory Transformer (Burtsev and Sapunov, 2020). Recurrent Memory Transformer (Bulatov et al., 2022) focused on using the small number of memory tokens for efficiency, and Memorizing Transformers (Wu et al., 2022) attempted to use \(k\)-NN cache as memory. These models split inputs into multiple segments and incorporate them to better maintain long-term dependencies in sequential data. However, these methods have a drawback in that, no matter how significant the past information may be, it inevitably becomes diluted gradually. While Memoria follows the second approach, Memoria preserves crucial past information ensuring that
the information remains unchanged, just as it was initially encoded, as long as it stays important enough.
Hebbian theory (Hebb, 1949) is a principle in neuroscience that states, "fire together, wire together", meaning that if two neurons are activated at the same time, the connection between them will strengthen. According to Hebb, an engram is a representation of a memory in the brain, consisting of a group of neurons and their connections that are activated together during the encoding of a memory. In recent years, there has been growing interest in applying Hebbian learning to deep learning (Kuriscak et al., 2015; Movellan, 1991; Journe et al., 2023). These works have shown promising results and highlight the potential for Hebbian learning to improve the unsupervised learning of deep neural networks. We adopted the concept of Hebbian engrams for Memoria.
Hebbian learning rule (Caporale & Dan, 2008; Gerstner & Kistler, 2002; Song et al., 2000), a specific mathematical formulation of a fundamental principle in neuroscience, describes how synapses between neurons can be modified based on their activity, enabling the network to learn and memorize information over time by changing its connections in response to input patterns. Gerstner & Kistler (2002) suggested six important aspects for the formulation of a useful plasticity model: locality, cooperativity, synaptic depression, boundedness, competition, and long-term stability. We show that Memoria satisfies all six of these attributes. (See Appendix A for details.)
Memoria categorizes memories into three levels according to the Multi-Store model (Atkinson & Shiffrin, 1968b), using the term working memory instead of sensory memory. Furthermore, to account for forgetting in short-term memory, we applied the displacement mechanism (Waugh & Norman, 1965), which replaces old information with new information when the short-term memory is full. For forgetting in both short-term and long-term memory, we incorporated the concept of trace decay theory (Brown, 1958; Peterson & Peterson, 1959), which suggests that memories that are not actively recalled gradually fade away.
## 3 Memoria
There are three stages in utilizing Memoria. The first stage is the _remind_ stage, in which working memory is used to remind engrams from short-term and long-term memory. The second stage is the _exploit_ stage, where a model uses the reminded engrams to solve the task. The last stage is _memorize & forget_: each reminded engram gains lifespan in proportion to its usefulness, and every engram's lifespan then decreases by one. We provide visualizations of the connection changes in Appendix D to help understand these processes.
Figure 1: Working Memory retains the most recent information, while Short-term Memory also holds a fixed number of recent engrams but its size can be adjusted. The number of engrams in Long-term Memory is not predetermined. The arrows in the diagram represent the connections between each engram.
### Component
Memoria has three types of memory, working memory (WM), short-term memory (STM), and long-term memory (LTM). Engrams, which are the smallest unit of memory information, constitute each memory. These engrams have their own lifespan and are eliminated when their lifespan reaches zero. Figure 1 shows the structure of the three types of memory.
**Memory Encoder.** A memory encoder \(f_{e}\) is needed to transform the input \(X_{t}\) at a particular time step. The design of the memory encoder can vary, and \(X_{t}\) could be defined as the input for a task-solving model, the model hidden states of the previous time step, or other values. The output of \(f_{e}\) is a set of engrams \(M=\{e_{1},e_{2},\dots,e_{N}\}\).
**Working Memory.** Working memory \(M_{wm}\) corresponds to human sensory memory. It represents the most recent memory and serves as a reference to retrieve associated engrams from short-term and long-term memory. Working memory uses a queue structure with a fixed size, which is equivalent to the memory length of a single time step. After every time step, the working memory is updated.
**Short-term Memory.** Short-term memory \(M_{stm}\), like human short-term memory, holds recent information. Engrams that were previously in working memory are transferred to short-term memory after one time step. Similar to working memory, short-term memory employs a queue data structure with a fixed size, which can be defined as a parameter.
**Long-term Memory.** Long-term memory \(M_{ltm}\) is equivalent to human long-term memory and has the capacity to store an indefinite number of engrams. Engrams that were dequeued from short-term memory are transferred to long-term memory.
**Memory Graph.** Engrams in any memory can be linked together, represented as a directed weighted graph data structure, where each vertex corresponds to an engram. A directed edge weight \(E_{i\to j}\) denotes the empirical conditional probability that the engram \(e_{j}\) will be reminded when the engram \(e_{i}\) is reminded, with \(M^{rem}\) representing the set of all reminded engrams. This empirical probability can be calculated by dividing the number of times \(e_{i}\) and \(e_{j}\) were reminded together by the number of times \(e_{i}\) was reminded; \(Count_{i,j}\) represents the number of times \(e_{i}\) and \(e_{j}\) were reminded together. The edge is utilized to search for engrams in long-term memory, and its weight is adjusted based on the "fire together, wire together" principle.
\[E_{i\to j} =P(e_{j}\in M^{rem}\mid e_{i}\in M^{rem})\] \[=\frac{Count_{i,j}}{Count_{i,i}}\]
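To make this bookkeeping concrete, here is a minimal numpy sketch of the co-activation counts and the resulting edge weights; the dense matrix layout and the engram indexing are illustrative assumptions, not Memoria's actual implementation.

```
import numpy as np

N = 8                                   # total number of engrams (illustrative)
count = np.zeros((N, N))                # count[i, j]: times e_i and e_j reminded together

def fire_together(reminded):
    # Hebbian update: every pair of co-reminded engrams strengthens its link.
    for i in reminded:
        for j in reminded:
            count[i, j] += 1

def edge_weight(i, j):
    # E_{i->j} = Count_{i,j} / Count_{i,i}: empirical P(e_j reminded | e_i reminded)
    return count[i, j] / count[i, i] if count[i, i] > 0 else 0.0

fire_together([0, 2, 3])
fire_together([0, 3])
print(edge_weight(0, 2))   # 0.5: e_2 co-fired in 1 of 2 reminders of e_0
print(edge_weight(0, 3))   # 1.0
```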
### Remind
Remind is the process of reminding engrams from short-term and long-term memory. Figure 2 shows the entire reminding process; a code sketch of the correlation-based selection follows the steps below.
1. Using the encoder function \(f_{e}\) with input \(X\), create engrams \(M_{wm}\) and put into the working memory. All the engrams in the working memory will have the same initial lifespan. \[M_{wm}=f_{e}(X)=\{e_{wm,1},e_{wm,2},\dots,e_{wm,N_{wm}}\}\]
2. By utilizing the correlation function \(f_{c}\), calculate the correlation weight \(C_{stm}\) for each \(e_{stm,i}\) within the short-term memory \(M_{stm}\) by averaging all the correlation weights for the engram. The distance function \(f_{d}\) used is L2 distance. Here, \(i\) represents the index of \(M_{stm}\) and \(j\) represents the index of \(M_{wm}\). \[f_{c}(e_{i},e_{j}) =\exp(-(f_{d}(e_{i},e_{j}))^{2})\] \[C_{stm,i} =\frac{1}{N_{wm}}\sum_{j=1}^{N_{wm}}f_{c}(e_{stm,i},e_{wm,j})\]
3. Select only the top \(N^{rem}_{stm}\) engrams by \(C_{stm}\) value to remind. Denote the selected engrams as \(M^{rem}_{stm}\).
4. For each \(e_{i}\in M^{rem}_{stm}\), select an engram in \(M_{ltm}\) having highest edge weight from \(e_{i}\). Denote the selected engrams as \(M^{init}_{ltm}\). \[M^{init}_{ltm}=\operatorname*{arg\,max}_{e_{j}\in M_{ltm}}E_{i\to j},\text{ where }e_{i}\in M^{rem}_{stm}\]
5. Using the engrams \(M^{init}_{ltm}\) as a starting point, traverse the \(M_{ltm}\) graph using the depth-first search (DFS) algorithm with a search depth of \(N_{depth}\). The exploration direction should be based on the edge weight, toward the highest edge weight. Gather all the unique engrams that were encountered during the search, including \(M^{init}_{ltm}\), and refer to them as \(M^{found}_{ltm}\). \[M^{0}_{ltm}=M^{init}_{ltm}\] \[M^{k}_{ltm}=\operatorname*{arg\,max}_{e_{j}\in M_{ltm}}E_{i\to j}, \text{ where }e_{i}\in M^{k-1}_{ltm},\;e_{j}\notin M^{found,k-1}_{ltm}\] \[M^{found,k}_{ltm}=\bigcup_{l=0}^{k}M^{l}_{ltm}\] \[M^{found}_{ltm}=M^{found,N_{depth}}_{ltm}\]
6. Calculate the correlation weight \(C_{ltm}\) from \(M_{wm}\) for \(M^{found}_{ltm}\) and select the top \(N^{rem}_{ltm}\) engrams, as was done for STM. Denote these engrams as \(M^{rem}_{ltm}\).
7. Use \(M_{wm},M^{rem}_{stm},M^{rem}_{ltm}\) as activated memory. \[M^{rem}=M^{rem}_{stm}\cup M^{rem}_{ltm}\] \[M^{act}=M_{wm}\cup M^{rem}\]
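Putting steps 2 to 7 together, the following sketch assumes each memory store is an id-to-vector dict and reuses the `EngramGraph` sketch above; function and variable names are illustrative:

```python
import numpy as np

def correlation(e_i, e_j):
    # f_c(e_i, e_j) = exp(-(f_d(e_i, e_j))^2) with f_d the L2 distance.
    return np.exp(-np.sum((e_i - e_j) ** 2))

def remind(wm, stm, ltm, graph, n_rem_stm, n_rem_ltm, depth):
    # Steps 2-3: average correlation of each STM engram with working memory,
    # then keep the top n_rem_stm STM engrams.
    c_stm = {i: np.mean([correlation(e, w) for w in wm.values()])
             for i, e in stm.items()}
    stm_rem = sorted(c_stm, key=c_stm.get, reverse=True)[:n_rem_stm]

    # Steps 4-5: from each reminded STM engram, hop to its strongest LTM
    # neighbour, then follow the strongest unvisited edges for `depth` hops.
    found = set()
    for i in stm_rem:
        node = max(ltm, key=lambda j: graph.edge_weight(i, j), default=None)
        for _ in range(depth):
            if node is None or node in found:
                break
            found.add(node)
            rest = [j for j in ltm if j not in found]
            node = max(rest, key=lambda j: graph.edge_weight(node, j),
                       default=None)

    # Step 6: rank the found LTM engrams by correlation with working memory.
    c_ltm = {i: np.mean([correlation(ltm[i], w) for w in wm.values()])
             for i in found}
    ltm_rem = sorted(c_ltm, key=c_ltm.get, reverse=True)[:n_rem_ltm]

    # Step 7: the reminded set is used alongside working memory.
    return stm_rem, ltm_rem
```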
### Exploit
Exploit all reminded engrams to aid in solving the task, and evaluate each engram's contribution towards the solution. A cross-attention mechanism is applied to use the information in the engrams. After the self-attention layer, the working memory engrams are attended to first, followed by the short-term and long-term memory engrams, using the exact same cross-attention layer. The average attention weight \(w_{i}\) for each engram \(e_{i}\) is regarded as its contribution towards the solution.
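A minimal PyTorch sketch of this exploit step, assuming the engrams have already been stacked into a single tensor; the module is illustrative rather than the released Memoria code:

```python
import torch.nn as nn

class MemoryCrossAttention(nn.Module):
    """Cross-attends token states to memory engrams and returns each
    engram's average attention weight (its contribution w_i)."""

    def __init__(self, dim, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hidden, engrams):
        # hidden:  (batch, seq, dim) token states after self-attention
        # engrams: (batch, n_engrams, dim) WM, then reminded STM/LTM engrams
        attended, attn_w = self.attn(hidden, engrams, engrams)
        w = attn_w.mean(dim=1)   # (batch, n_engrams): contribution w_i
        return self.norm(hidden + attended), w
```

The returned per-engram weights play the role of \(w_{i}\) in the memorize stage below.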
Figure 2: Remind process.
### Memorize & Forget
There are two important principles for memorizing. First, useful engrams should be long-lived. Second, related engrams should be strongly connected. These principles are applied in the memorize stage as follows; Figure 3 shows the overall process, and a code sketch follows the steps below.
1. Increase \(Count_{i,j}\), the number of times \(e_{i}\) and \(e_{j}\) were reminded together, by one for all engrams in \(M^{act}\). \[\mathcal{N} =\{1,2,\ldots,|M^{act}|\}\] \[Count_{i,j} :=Count_{i,j}+1,\forall i,j\in\mathcal{N}\]
2. Increase the lifespan of reminded engrams by the increment \(Inc_{i}\) for engram \(e_{i}\). \(Inc_{i}\) is calculated as follows, where \(\alpha\) is a hyperparameter that scales the lifespan extension. If \(\alpha\) is 1.0, each engram \(e\in M^{rem}\) gains a lifespan of 1.0 on average. \[Inc_{i}=\frac{w_{i}}{\sum_{k=1}^{|M^{rem}|}w_{k}}\times|M^{rem}|\times\alpha\]
3. Decrease lifespan of all engrams by 1.0.
4. Remove engrams having a lifespan of 0 or less.
5. Move \(e_{wm}\) into STM.
6. Move the oldest engrams exceeding the STM capacity from STM into LTM.
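A minimal sketch of these six steps, reusing the `EngramGraph` sketch above and treating each memory store as an insertion-ordered dict; all argument names are illustrative:

```python
def memorize_and_forget(wm, stm, ltm, lifespan, graph, weights, alpha, stm_cap):
    # wm/stm/ltm: insertion-ordered dicts id -> engram vector;
    # lifespan: dict id -> remaining lifespan;
    # weights: dict mapping reminded engram ids to attention weights w_i.
    reminded = list(weights)

    # Step 1: "fire together, wire together" for all activated engrams.
    graph.fire_together(list(wm) + reminded)

    # Step 2: extend lifespans in proportion to each engram's contribution,
    # Inc_i = w_i / sum_k(w_k) * |M_rem| * alpha.
    total = sum(weights.values()) or 1.0
    for i in reminded:
        lifespan[i] += weights[i] / total * len(reminded) * alpha

    # Steps 3-4: age every engram and remove the expired ones.
    for mem in (wm, stm, ltm):
        for i in list(mem):
            lifespan[i] -= 1.0
            if lifespan[i] <= 0:
                del mem[i], lifespan[i]

    # Step 5: move working-memory engrams into STM.
    stm.update(wm)
    wm.clear()

    # Step 6: move the oldest engrams exceeding STM capacity into LTM.
    while len(stm) > stm_cap:
        oldest = next(iter(stm))
        ltm[oldest] = stm.pop(oldest)
```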
## 4 Experiments
We evaluated how well Memoria maintains long-term connections in various tasks using the Transformer (Vaswani et al., 2017) architecture. We integrated Memoria with Transformer by appending encoder-decoder attention over memory engrams; the method for creating engrams differs slightly depending on the architecture. We provide figures with descriptions of how to apply Memoria to Transformer in Appendix C. The first task is sorting. Martins et al. (2021) evaluated a model's ability to remember long-term information about the occurrence of numbers by generating a sorted sequence of numbers based on their frequency of occurrence. In the second group of experiments, we focused on token-level language modeling on WikiText-103 (Merity et al., 2017) and PG-19 (Rae et al., 2020), and character-level language modeling on enwik8 (Mahoney, 2006). For the Wikitext-103 dataset, since the word-level dataset contains <unk> in the texts, the raw dataset was used. Similar to Martins et al. (2021), only the first 2,000 books of the training dataset were used for PG-19. We compared Memoria with other models such as Transformer (Brown et al., 2020), Transformer-XL (Dai et al., 2019), Compressive Transformer (Rae et al., 2020), and \(\infty\)-former (Martins et al., 2021). Lastly, we conducted a classification task on the long document classification dataset, Hyperpartisan (Kiesel et al., 2019).
Figure 3: All the engrams in working memory and the reminded engrams become more strongly connected. The reminded engrams gain lifespan depending on their contribution. End-of-life engrams are removed, like \(e_{ltm,7}\). The engrams in the gray area are the activated engrams \(M^{act}\).
### Sorting
As Memoria is an independent module that enhances long-term dependencies, applying it to Transformer required defining the memory encoder \(f_{e}\) and a method that utilizes the reminded engram data. We used an attention-based abstractor as \(f_{e}\), with the hidden states \(h_{t-1}\) of the previous time step serving as the input \(X_{t}\). The query \(Q\) and the projections \(W_{k}\) and \(W_{v}\) are trainable parameters. FFN is the same feed-forward network as in Transformer (Vaswani et al., 2017). The number of working memory engrams \(N_{wm}\) is determined by the number of queries \(Q\), so the number of queries is a hyperparameter.
\[X_{t}=h_{t-1}\] \[f_{e}(X_{t})=\text{Abstract}(X_{t})=\text{FFN}(\text{Attention}(Q,W_{k}X_{t},W_{v}X_{t}))=\text{FFN}(\text{softmax}(QW_{k}h_{t-1})W_{v}h_{t-1})=M_{wm}\]
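A minimal PyTorch sketch of such an abstractor, mirroring the formula above; the released implementation may differ in layer sizes and normalization:

```python
import torch
import torch.nn as nn

class Abstractor(nn.Module):
    """Attention-based memory encoder f_e: compresses the previous step's
    hidden states h_{t-1} into N_wm working-memory engrams."""

    def __init__(self, dim, num_engrams):
        super().__init__()
        # Learned queries Q: their count fixes the number of engrams N_wm.
        self.queries = nn.Parameter(torch.randn(num_engrams, dim))
        self.w_k = nn.Linear(dim, dim, bias=False)
        self.w_v = nn.Linear(dim, dim, bias=False)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))

    def forward(self, h_prev):
        # h_prev: (batch, seq, dim) hidden states of the previous time step.
        # softmax(Q W_k h) W_v h, unscaled to mirror the paper's formula.
        scores = self.queries @ self.w_k(h_prev).transpose(-2, -1)
        engrams = torch.softmax(scores, dim=-1) @ self.w_v(h_prev)
        return self.ffn(engrams)           # (batch, N_wm, dim) = M_wm
```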
This task takes a sequence of numbers and outputs the numbers in descending order of their frequency of occurrence (Martins et al., 2021). The vocabulary consists of 20 number tokens, and we experimented with sequences of various lengths ranging from 1K to 32K,\({}^{2}\) with segment lengths of 256, 512, and 1024. We compared Transformer-XL, Compressive Transformer, \(\infty\)-former, and Memoria Transformer.
Footnote 2: We used the script of \(\infty\)-former at [https://github.com/deep-spin/infinite-former/blob/main/sorting/generate_data.py](https://github.com/deep-spin/infinite-former/blob/main/sorting/generate_data.py) to generate the dataset.
Figure 4 demonstrates the performance in the sorting task as the sequence length increases for each segment length. The memory length was set to the same value as the segment length. Generally, as the sequence length increased, performance tended to decrease, because longer context information must be maintained. Compared to the other baselines, Memoria exhibited the least performance degradation as the sequence length increased, showcasing its ability to maintain long-term memory that preserves extended context. (See Appendix B.1 for details on hyperparameters.)
### Language Modeling
In language modeling as well, Memoria was applied to the Transformer architecture using the same approach as in the sorting task. We trained Transformer, Transformer-XL, Compressive Transformer, \(\infty\)-former, and Memoria Transformer from scratch. As publicly available pre-trained models were trained on different datasets and parameters, we conducted this experiment by training the models from scratch. We also experimented with pre-trained language models augmented with Memoria and show the results in Appendix B.2. We utilized the GPT-2 architecture for the implementation of
Figure 4: Results of sorting task. Memoria shows more robust performance than other baselines as the input sequence length increases. The entire raw scores are specified in Table 4.
Transformer. We chose hyperparameters of 12 layers and 768 dimensions. The pre-trained GPT-2 tokenizer was used for all token-level experiments. We set the segment length to 150 tokens for token-level experiments and 512 for character-level experiments, following Bulatov et al. (2022). (See Appendix B.2 for details on hyperparameters.)
Table 1 shows the results. All other models demonstrated improved performance compared to Transformer. Among them, Memoria Transformer achieved the best performance on all three datasets. This result demonstrates that Memoria outperforms not only Transformer but also existing competitors that model long-term dependency. Moreover, since Memoria is an independent module, it can be used in conjunction with other techniques if desired.
Table 2 presents the performance when the length of each segment is decreased to 50 tokens, aiming to handle longer long-term dependencies by increasing the number of segments. Compared with the results in Table 1, there is a significantly larger performance gap between the plain Transformer and the memory-utilizing models. Even in situations where longer long-term dependencies must be considered, Memoria demonstrated the best performance.
We validated whether Memoria effectively utilizes long-term memory. Figure 5 shows the average age of reminded engrams in long-term memory at each step on the test dataset. The age represents the number of steps that have passed since the engram was created. If the model only referred to the most recent engrams in long-term memory, the memory would not truly serve as long-term memory, and the age of reminded engrams would remain constant on the graph. On the contrary, if the model can refer to past information continuously, that information gradually ages, leading to an increase in the average age of reminded engrams over time. The graph indicates that as the step increases, the average age also increases, demonstrating Memoria's ability to refer to important past information even after a significant number of time steps.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Model & Wikitext-103 (PPL) & PG-19 (PPL) & Enwik8 (BPC) \\ \hline Transformer & 26.755 & 31.631 & 1.28 \\ Transformer-XL & 24.543 & 29.945 & 1.19 \\ Compressive Transformer & 24.794 & 29.603 & **1.16** \\ \(\infty\)-former & 24.685 & 29.154 & 1.21 \\ Memoria Transformer & **23.471** & **29.149** & **1.16** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Language Modeling Performance. Perplexity (PPL) is shown for Wikitext-103 and PG-19, while bits-per-character (BPC) is shown for Enwik8. All of them had the same memory length as the segment length, and Wikitext-103 and PG-19 use 150 while Enwik8 used 512. Memoria outperformed Transformer and other baselines that consider long-term dependency.
\begin{table}
\begin{tabular}{l c} \hline \hline Model [Memory Length] & Wikitext-103 \\ \hline Transformer & 39.287 \\ Transformer-XL [50] & 31.459 \\ Compressive Transf. [50] & 31.644 \\ \(\infty\)-former [50] & 31.790 \\ Memoria Transformer [48] & **30.007** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Perplexity with smaller segment length of 50. Memoria outperformed other baselines in the shorter context and memory setting.
Figure 5: Average age of engrams in LTM per step. The average age of recalled long-term memory engrams gradually increases as steps pass.
### Classification
Utilizing information from the current time step could lead to causal leakage in language modeling, so previous time steps were used as working memory instead. However, with masked language models such as BERT, it is possible to use information from the current time step as working memory without causing causal leakage. The memory encoder \(f_{e}\) utilized the hidden states \(h_{t}^{l}\) as the memory representation. Here, \(t\) denotes the current time step, and \(l\) represents the memory layer index. Memory is obtained from the hidden state of BERT layer \(l\) with the abstractor; working memory engrams and reminded engrams are then utilized in the subsequent layers through cross-attention mechanisms.
Hyperpartisan is a widely used news classification dataset for the long document classification task. To validate the effectiveness of Memoria in encoder-based architectures, we applied Memoria to BERT and RoBERTa and compared their performance on the Hyperpartisan dataset. Pre-trained models were fine-tuned for all the classification experiments. All models were 12-layer, base-sized.
Table 3 presents the classification performance of the models. Memoria-applied models show a meaningful performance gain compared to the plain models, although it is not easy to compare the performance of different base pre-trained models directly. Memoria RoBERTa achieved the highest score on all metrics. (See Appendix B.3 for details on hyperparameters.)
## 5 Conclusion and Future Work
We propose Memoria as a general memory network that follows Hebb's rule, which explains how humans form memories. Memoria is a separate module that learns the strength of the connections between different engrams according to the utility of those connections. Memoria replicates the human process of encoding information, selectively remembering, and forgetting. We applied Memoria to the widely used Transformer-based neural network and demonstrated its strong performance compared to other methodologies on sorting, language modeling, and classification tasks. Memoria demonstrates the potential to revolutionize the way deep neural networks process and retain information, opening avenues for improved performance in a wide range of tasks that rely on long-term dependencies.
While Memoria strives to actively incorporate the structure and mechanisms of human memory, there are still discrepancies in many aspects. We categorized memories into three types using the Multi-store model (Atkinson & Shiffrin, 1968b), but the Levels of Processing theory (Craik & Lockhart, 1972) proposed a more continuous structure of memory based on the depth of processing rather than discrete categories. Additionally, we only utilized trace decay (Brown, 1958; Peterson & Peterson, 1959) and displacement (Waugh & Norman, 1965) as mechanisms of forgetting, but Interference theory (Underwood & Postman, 1960) suggests that interference effects between existing memories and new information are significant forgetting mechanisms in long-term memory. Our future research will incorporate these mechanisms, enabling neural networks to better reflect the ways human memory operates.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Model [Input Length + Memory Length]} & \multicolumn{2}{c}{Validation} & \multicolumn{2}{c}{Test} \\ \cline{2-5} & F1 & Acc. & F1 & Acc. \\ \hline BERT [512] & 75.62 & 76.77 & 91.67 & 93.05 \\ RoBERTa [512] & 82.96 & 84.06 & 95.24 & 95.38 \\ Bigbird [4096] & 81.22 & 82.81 & 93.24 & 93.54 \\ Longformer [4096] & 78.33 & 79.69 & 94.56 & 94.77 \\ Memoria BERT [512 + 192] & 78.24 & 80.00 & 94.59 & 94.77 \\ Memoria RoBERTa [512 + 192] & **86.39** & **87.19** & **96.51** & **96.62** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Text classification performance on Hyperpartisan. The metrics are the average macro F1-score and accuracy over five runs. We report both validation and test set results because of a data distribution discrepancy between them. Memoria RoBERTa achieves the highest performance among the models.
## Reproducibility
The structure of Memoria is described in detail in Section 3 of the main text. We provide the architectural details of Memoria Transformer and Memoria BERT in Appendix C. Additionally, all the code used for the experiments is publicly available.
The core module, Memoria, has been implemented as an independent Python package, allowing future researchers to install Memoria using pip and utilize it for their research. The model settings can be found in the main text of the paper Section 4.2 for language modeling, Section 4.1 for sorting, and Section 4.3 for classification.
The hyperparameters used during training are all specified in Appendix B. To ensure reproducibility, we fixed random seeds for all experiments. The datasets were loaded through libraries in the code and preprocessed there, so except for the sorting task, which requires data generation, this paper and the source code are enough to reproduce our experimental results.
|
2304.01902
|
Affective Robotics For Wellbeing: A Scoping Review
|
Affective robotics research aims to better understand human social and
emotional signals to improve human-robot interaction (HRI), and has been widely
used during the last decade in multiple application fields. Past works have
demonstrated, indeed, the potential of using affective robots (i.e., that can
recognize, or interpret, or process, or simulate human affects) for healthcare
applications, especially wellbeing. This paper systematically reviews the last
decade (January 2013 - May 2022) of HRI literature to identify the main
features of affective robotics for wellbeing. Specifically, we focused on the
types of wellbeing goals affective robots addressed, their platforms, their
shapes, their affective capabilities, and their autonomy in the surveyed
studies. Based on this analysis, we list a set of recommendations that emerged,
and we also present a research agenda to provide future directions to
researchers in the field of affective robotics for wellbeing.
|
Micol Spitale, Hatice Gunes
|
2023-04-04T15:51:03Z
|
http://arxiv.org/abs/2304.01902v1
|
# Affective Robotics For Wellbeing: A Scoping Review
###### Abstract
Affective robotics research aims to better understand human social and emotional signals to improve human-robot interaction (HRI), and has been widely used during the last decade in multiple application fields. Past works have demonstrated, indeed, the potential of using affective robots (i.e., that can recognize, or interpret, or process, or simulate human affects) for healthcare applications, especially wellbeing. This paper systematically reviews the last decade (January 2013 - May 2022) of HRI literature to identify the main features of affective robotics for wellbeing. Specifically, we focused on the types of wellbeing goals affective robots addressed, their platforms, their shapes, their affective capabilities, and their autonomy in the surveyed studies. Based on this analysis, we list a set of recommendations that emerged, and we also present a research agenda to provide future directions to researchers in the field of affective robotics for wellbeing.
survey, affective computing, robots, affective robotics, wellbeing +
Footnote †: publicationid: pubid: 978-1-6654-5490-2/22/531.00 ©2022 IEEE
## I Introduction
The number of people with wellbeing related concerns has increased in the last decade. In addition, the current COVID-19 pandemic has exacerbated this growth leading to societal changes (such as social isolation and work-from-home arrangements) that have severely impacted mental and physical wellbeing. This has resulted in a more urgent need to support people's wellbeing.
Affective robotics is a promising venue to support people and help improve their wellbeing. Affective robots can recognize human emotions and show affective behaviors [1], key factors for a successful interaction to promote human wellbeing. Also, past works have largely used affective robots to improve and maintain both mental (e.g., to aid the evaluation of children's wellbeing related concerns [2], cognitive therapy for people with dementia [3]), and physical human wellbeing (e.g., to promote exercise activities for the elderly [4]).
However, making an affective robot that is able to recognize, interpret, process, and simulate human affect is still an open challenge, because of several technical challenges (e.g., adaptation to human behavior, personalization of the interaction) and the social and ethical implications of developing and deploying such robots.
This paper aims at investigating the current state of the art of affective robots for wellbeing. Specifically, our main research question is: _"How have affective robots been used to promote people's wellbeing, and to what extent are their affective capabilities suitable to promote wellbeing?"_. We focused on the following sub-questions:
* RQ1. What are the wellbeing goals of affective robots?
* RQ2. What are the affective robots' platforms (e.g., Nao, Pepper) that have been used for wellbeing?
* RQ3. What are the shapes (humanoid, non-humanoid, animal-like) of the affective robots for wellbeing?
* RQ4. What are the affective capabilities (e.g., emotion recognition) that the affective robots for wellbeing are endowed with?
* RQ5. What are the levels of autonomy (non-autonomous, semi-autonomous, autonomous) of the affective robots for wellbeing?
To answer our research questions, we ran a scoping literature review following the PRISMA schema [5] to avoid bias in the identification, screening, eligibility, and inclusion phases. We reviewed the last decade (from January 2013 to May 2022) of HRI literature to provide a detailed picture of the affective robotics for wellbeing field.
This paper contributes the following:
1. we provide the community with a scoping review of the last decade of HRI works in the affective robotics for wellbeing field;
2. from the data synthesized, we formulate a list of recommendations for future research in affective robotics;
3. we identify a research agenda for the affective robotics research field to promote wellbeing.
The rest of the paper is structured as follows. First, Section II defines affective robotics. Next, Sections III, IV, and V are dedicated to the scoping review, describing the methodology, presenting the main findings, and discussing those findings, respectively. We then present limitations and future work in Section VI and conclude the paper in Section VII.
## II Definition of Affective Robotics And Challenges
With the term affective robotics, previous work has referred to the use of affective computing in human-robot interaction. In fact, authors in [1] defined affective robots as "robots that can recognize human emotions and show affective behaviors". Also, in [6], they claimed that affective robotics
focus on "understanding human socio-emotional signals to enhance HRI". Paiva et al. [7] defined the affective loop of emotional robots as composed of emotion adaptation, emotion expression, and emotion synthesis. However, we acknowledge that having a robot that is able to adapt, express, and synthetise emotions is still difficult, due to several open challenges. First, affective robotics research has to understand the fundamental mechanisms of human behaviour in real-life circumstances - including nonverbal behavioural cues - and model these for human-inspired behaviours in robots. Second, robots are expected to dynamically adapt to human behaviour, meeting the needs of each individual and personalising their behavior accordingly. Third, although affective robots take advantage of the advances in affective computing, the generalisation of those results into real-world context is not straightforward because of the controlled settings usually adopted for creating datasets that inform those affective models. All those challenges make it even more difficult to design affective and intelligent social robots that can support people in promoting their wellbeing.
Alongside these technical challenges, researchers must also consider the social and ethical implications of developing and deploying such robots, at both the individual and societal levels.
In this survey, we refer to the term affective robots as _robots that can recognize, interpret, process, or simulate human affect_.
## III Method
This survey aims to understand how affective robots have been used to promote people's wellbeing. To address this research question, we followed the PRISMA schema [5] to identify, screen, select, and include the surveyed papers. The steps of this process are shown in Fig. 1.
### _Search Query_
We identified the surveyed papers searching in the ACM Digital Library, IEEE Explore, and Scopus databases. To identify the search query, we exploited the SPIDER [8] framework. We used similar queries for different databases (i.e., they differ only for the database's search requirements). For the sake of clarity, we present an example of the Scopus search query as follows:
TITLE-ABS-KEY ( ( "affective robotic*" OR "social robot*" OR "emotional robot*" OR "socially assistive robot*" ) AND ( "wellbeing" OR "well-being" OR "mental health" OR "health" ) ) AND PUBYEAR > 2012
We collected the papers to review by searching via queries in the databases selected, then we removed the duplicates and stored all the resulted references into a CSV file.
### _Eligibility Criteria_
We followed the guidelines of [9] for selecting papers in engineering to identify the set of inclusion and exclusion criteria for this survey.
We included the papers that:
* defined as "mantaining healthy quality of life", such as eating well, exercising, getting enough sleep, staying hydrated
- and mental);
* employ a physical robot with affective or emotional capabilities (i.e., affective robot);
* model or analyse affective capabilities of a robot;
* have their title, abstract, and keywords containing at least one keyword describing such technology and one keyword from the Search Query Keywords.
We excluded the papers that:
* were published before January 2013 or after the day the search was actually run, i.e., May 16, 2022;
* are not in English;
* employ a virtual robot (e.g., in VR, or on mobile-based applications);
* do not involve human-robot interaction in any form (e.g., running a study, analysing HRI data);
* are not in peer-reviewed journals and conference proceedings;
* survey another topic or are purely theoretical papers;
* are inaccessible to the authors;
* do not report the details necessary to evaluate their eligibility.
### _Selection Process_
We first screened the papers collected at a high level, and then we selected the ones to include in this survey after in-depth analysis. One reviewer screened the titles and abstracts based on the eligibility criteria listed in Section III-B. The full text of the remaining papers was analyzed and assessed by two reviewers.
### _Data Extraction and Analysis_
To extract data from the surveyed papers, we assigned a variable to each of the five research questions. The types of the variables were either categorical (e.g., robot's autonomy) or qualitative (e.g., robot's goal). Tab. I collects the variables
Fig. 1: PRISMA schema for this survey.
assigned to each research question. For the categorical data, we defined the classes a priori based on previous literature; e.g., for the robot's autonomy variable we identified three classes - non-autonomous, semi-autonomous, and fully-autonomous - based on [10]. For the qualitative data, we exploited a pattern-based method to extract the main themes of the surveyed papers.
## IV Results
Following the PRISMA schema, 25 papers were included in this review. The following sections collect the data synthesized and the corresponding research questions addressed. Tab. IV collects the survey results.
### _Affective Robot's Goal (RQ1)_
Overall, 17 studies used affective robots to promote mental wellbeing, while in 8 studies the robot was used for physical wellbeing. Fig. 2 details the robots' goals; specifically, we identified 9 different goals for affective robots that promote wellbeing among the surveyed papers. Eight studies focused on physical wellbeing: five of them (20%) focused on physical stimulation, two (8%) explored the use of affective robots to promote healthy food (or drink), and one (4%) used the affective robot to detect falls. The remaining 17 studies focused on promoting mental wellbeing: 11 studies (44%) aimed to provide emotional support - including reducing stress or anxiety - to promote the mental wellbeing of the participants, 2 (8%) investigated affective robots to facilitate cognitive stimulation, 1 (4%) aimed at promoting self-disclosure, 1 (4%) adopted affective robots to assess participants via clinical interviews, 1 (4%) used the robot to entertain participants, and 1 (4%) exploited the affective robot to provide a mindfulness session.
To sum up, to date the HRI community focused more on the application of affective robotics to promote mental wellbeing. Some of them exploited affective robots to facilitate physical wellbeing as well.
### _Affective Robot's Platform (RQ2)_
From the surveyed studies, we identified 17 different robotic platforms, as depicted in Fig. 3. Specifically, 5 of the studies (19.5%) used a Nao robot, 3 studies (11%) Pepper, 2 studies (7.5%) Cozmo, 2 studies (7.5%) Reeti, one study (3.7%) Alice, one study (3.7%) PR2, one study (3.7%) a non-humanoid robot by Hoffman [34], one study (3.7%) Stevie, one study (3.7%) Darwin-mini, one study (3.7%) Blossom, one study (3.7%) Jibo, one study (3.7%) Side-bot, one study (3.7%) Emarv-4, one study (3.7%) IRS, one study (3.7%) Huggable, and one study (3.7%) a 3D printed social robot.
Those results show that the HRI community is gradually exploring different robotics platforms that are now more available in the market. Still, the robotics platforms from SoftBank (Nao and Pepper) are the most commonly used within the HRI community.
### _Affective Robot's Shape (RQ3)_
To cluster the affective robot's shape of the surveyed papers, we used the definition by [35] and [36], who defined the robot agent shape as bio-inspired (e.g., animal-like, humanoid), and non bio-inspired (artificial, i.e., showing artificial characteristics, and functional, i.e., performing specific functional tasks).
Among the surveyed papers, 16 out of 26 (65%) used a bio-inspired robot (note that one study [16] adopted two robotic platforms). Particularly, 14 studies (54%) adopted humanoid affective robots, while 3 studies (11%) employed animal-like affective robots. The remaining studies (9 out of 26,
Fig. 3: Affective robot’s platform (RQ2) in the surveyed papers among January 2013 to May 2022
Fig. 2: Affective robot’s goal (RQ1) in the surveyed papers among January 2013 to May 2022
35%) adopted affective robots with non-humanoid shape. Fig. 4 depicts the results of the affective robot's shape for the surveyed papers.
In summary, those findings show that HRI researchers opted for affective robots with bio-inspired shape (specifically humanoids), just a few of them investigated the non bio-inspired shapes.
### _Affective Capabilities (RQ4)_
We clustered the affective capabilities of the robots included in the surveyed papers into four main classes (see Fig. 5): emotion recognition, semantic emotional understanding, facial expressions, and emotional movements.
11 of the surveyed studies endowed the robot with both facial expression and emotional movement capabilities. Particularly, 14 studies (39%) equipped the robot with the capability of expressing emotional movements, while 15 (42%) endowed the robots with the capability of expressing emotions through facial expressions. 4 studies (11%) adopted robots that were able to automatically recognize participants' emotions. Finally, 3 studies (8%) used robots that were able to semantically understand the participants' emotions.
To sum up, our findings show that the HRI community has mainly focused on endowing their robots for wellbeing with the capability of expressing emotions (via facial expressions or movements). More recently, the HRI researchers started to equip their robots with automatic user affect detection capabilities to promote wellbeing.
### _Affective Robot's Autonomy (RQ5)_
To identify the level of autonomy of the robots, we followed the definition provided by [10], in case the authors did not provide any detailed specifications.
Among the surveyed studies, 6 (25%) adopted a non-autonomous affective robot, where the researcher acted as a "wizard" in the interaction (aka the Wizard-of-Oz method). 9
Fig. 4: Affective robot’s shape (RQ3) in the surveyed papers among January 2013 to May 2022
Fig. 5: Affective robot’s capabilities (RQ4) in the surveyed papers among January 2013 to May 2022
of the studies (37.5%) exploited a semi-autonomous affective robot. For example, in [14], the robot could automatically execute the task interaction, but the researcher controlled its navigation in the care center. The other 9 studies (37.5%) adopted a fully autonomous affective robot (see Fig. 6).
Our findings showed that the HRI community is moving towards more autonomous robots, although some researchers prefer to exploit tele-operated (i.e., non-autonomous) robots to better control the design variables in their studies.
## V Discussion
The next sections discuss the results gathered in this survey. Specifically, we extrapolated a set of recommendations, listed in Section V-A, and a research agenda, presented in Section V-B.
### _Recommendations_
We list a set of observations and subsequent recommendations as follows.
First (from RQ1), we found that the surveyed papers exploited affective robots to promote both mental (e.g., [12, 13, 37]) and physical wellbeing (e.g., [17, 20]). Specifically, we observed that affective robots that addressed mental wellbeing adopted both expressive behaviors (e.g., facial expressions, emotional movements) and emotion recognition capabilities (e.g., semantic understanding), while the affective robots for physical wellbeing were mostly endowed with expressive capabilities only. Also, our results showed that the authors who focused on affective robots for physical wellbeing opted mostly for a human-like shape. The humanoid shape is extremely important when delivering physical exercise, so that human participants can replicate the robot movements, as in [17]. On the other hand, affective robots for mental wellbeing used both humanoid and non-humanoid robots to deliver tasks that can aid emotional support [16] or cognitive stimulation [14]. Finally, most of the papers on physical wellbeing chose an automated or semi-automated affective robot, while the ones that focused on mental wellbeing exploited both autonomous and non-autonomous robots with a Wizard-of-Oz approach.
Second (from RQ2 and RQ3), we found that most of the surveyed papers adopted a bio-inspired robot form, especially humanoid (e.g., [1, 26]). Past works demonstrated that the human-like appearance of the robot influences the expectations of the users. For example, participants attributed human-like behaviors to humanoid robots, because they associated their shape with their functionalities [38]. On the other hand, the remaining papers in the review adopted a non-humanoid shape. In fact, a previous work [38] showed that participants who were asked to rank different robotic platforms (e.g., Jibo, Pepper, Miro) provided contradictory responses: some participants preferred a more humanoid robot shape, while others chose an abstract shape as the most suitable for a robotic coach delivering wellbeing interventions.
Then (from RQ4), our results showed that the affective capabilities of the robots focused more on generating expressive behaviors (e.g., [24, 27]). Just a few of them (e.g., [22]) displayed affect detection capabilities. In the context of promoting wellbeing, a robot endowed with the capabilities of both affect synthesis and affect detection is a key factor for a successful interaction. In fact, in [38], the authors reported that participants in a participatory design study remarked on the need to endow the wellbeing robot coach with both emotion/empathy generation and emotion detection capabilities to be able to adapt to users.
Finally (from RQ5), we found that most of the surveyed papers adopted autonomous or semi-autonomous affective robots, which is likely because of the advances in the fields of natural language processing [39], computer vision [40], and speech recognition [41] in the last decade. Despite the increasing interest in autonomous robots, many researchers exploited non-autonomous robots that are directly controlled or tele-operated by a human operator. This method allows researchers to overcome technical issues that can compromise the interaction and the user's perception, as well as the unpredictable events typical of user studies.
As a result of these observations we provide the following four recommendations:
* **Recommendation 1 -** We recommend to use autonomous humanoid robots to deliver exercises that promote physical wellbeing to enable users to imitate the robot movement, and endow the robots with detection capabilities to check the user's affective state during the activity. To promote mental wellbeing, we suggest that the researchers design the affective robot as autonomous and endowed with both affect expression and detection capabilities. Both humanoid and non-humanoid shape can be exploited depending on the task or exercise that the robot has to deliver.
* **Recommendation 2 -** We recommend to choose the shape of the affective robot to employ in the study according to its functionalities. For example, in physical rehabilitation, it could be more useful to use a humanoid robot that can display movement to perform physical exercises, while for reducing loneliness in elderly, an animal-like shape could be more appropriate to resemble the function of a pet-companion. Both humanoid and non-humanoid robots seem to be appropriate for delivering
Fig. 6: Affective robot’s autonomy (RQ5) in the surveyed papers among January 2013 to May 2022
mental wellbeing exercises.
* **Recommendation 3 -** We then recommend to endow the affective robots with both capabilities of generating expressions and detecting the affective state of their user to be able to deliver wellbeing exercises in more effective and adaptive ways.
* **Recommendation 4 -** We recommend to lean toward the autonomous affective robots to advance further the field of robotics and provide evidence of the efficacy of real robots in the real world.
### _Research Agenda_
Future research should focus on the design features of affective robots to promote wellbeing. Specifically, much work needs to be done to investigate which robot form is more appropriate for which specific task (e.g., physical exercise, cognitive stimulation) to guide researchers in the design choice of the robotic platform. For instance, [42] demonstrated that humans have a form-function attribution bias which affects their perception of robots: people take a cognitive shortcut, attributing functionality to the robot based on visual information. Within the HRI literature, many efforts have been made to better understand how the robot's form - in terms of size, gender, and appearance - affects the user's perception of the robot [42, 43, 44]. However, future research should specifically focus on better understanding how form impacts user perception of the robots utilised for wellbeing related applications, and how robot form can make a difference in the efficacy of the delivered intervention.
Then, we believe that future research should focus on empirical studies that adopt affective robots endowed with the full spectrum of capabilities, i.e., to recognize, understand, and generate expressions, to advance the affective robotics field, also providing evidence on the current technology readiness level for the real world. Deploying social robots in real human-robot interaction settings is still an open challenge [45], mainly due to the need for real-time processing capabilities and the lack of computational power of the robotic platforms available in the market. The lack of cross-fertilization between the affective computing and social robotics fields also contributes to this problem [46]. Future efforts should focus on how to overcome those technological limitations, for example, by using cloud computing or external/environmental sensors, as suggested in [46].
Finally, affective robotics should lean towards creating and/or using autonomous robots. To this end, we acknowledge that future research should address some of the technological limitations related to robotic deployment [37]. In parallel to the advances in computing power, the field of affective computing has seen rapid progress; however, there is still a lack of real-time studies with robots endowed with, for example, affect recognition capabilities that can adapt and personalize to each user during the interaction [6]. A survey of 10 years of HRI studies [47] showed that the most common types of Wizard control employed were natural language processing and non-verbal behaviors, including the affective capabilities of the robot. Also, the Wizard-of-Oz technique has been widely used in the HRI community [48], because deploying autonomous robots is still an open problem. In fact, the main challenges are:
1. programming autonomous social behavior for a robot is very difficult and time consuming;
2. researchers usually program human-robot interactions as a one-off experience, for a limited scope and very short interaction durations (usually no longer than 20 minutes);
3. the off-the-shelf robotic platforms usually fail to meet user expectations in terms of robot capabilities.
To overcome those limitations, the next generation of HRI works should focus on how to make robots more autonomous using data-driven approaches to fully understand the dynamics of human and autonomous robot interactions.
## VI Limitations and Future Work
One of the main limitations of this survey is that the screening of the papers has been conducted by a single reviewer. This could have introduced bias into the paper inclusion. We plan to involve at least one additional researcher to provide a more solid method for including the papers to review. Another issue is that the set of research questions explored is limited. There are several aspects that we have not covered in this paper that are known to impact HRI, such as context [49]. With a deeper analysis, we can identify missing points relevant to the HRI community that can further inform our research agenda. The recommendations of this paper were also mentioned previously in different articles in the literature, but we have not reported all of them. Moreover, many of these recommendations are not based on the findings of the review alone, but are grounded in the literature. Finally, in the recommendations we have not mentioned the difficulties involved in developing affective robots, which could have impacted the works reviewed. In our future work, we will extend this survey with a broader set of research questions (e.g., study design, participant population, etc.) addressing the above-mentioned limitations.
## VII Conclusions
This paper reviewed the last decade (2013-2022) of HRI literature on affective robotics for wellbeing utilising the PRISMA schema. We aimed to understand how affective robots have been used in previous studies to promote human wellbeing and to what extent their affective capabilities were useful. Our findings showed that the HRI community: i) focused mostly on affective robots to promote mental wellbeing, ii) explored different robotic platforms, iii) opted for human-like affective robots, iv) endowed robots mostly with the capability of generating expressive behaviors, and v) adopted mostly autonomous and semi-autonomous robots. The results from this review enabled us to list a set of recommendation guidelines and a research agenda for future research in the affective robotics field.
## Ethical Impact Statement
We acknowledge that our paper did not survey the ethical implications of using affective robots; however, this is out of the scope of this paper. In our future work, we will also address ethical concerns in the field of affective robotics for wellbeing.
## Acknowledgment
This work is supported by the EPSRC project AROeEQ under grant ref. EP/R030782/1. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
|
2302.13050
|
Experimental Investigation on Particle-Laden Flows of Viscoelastic
Fluids in Micro-Channels Using Optical Coherence Tomography
|
Considering the nonlinear response of non-Newtonian fluids to the local shear
exerted on the bulks of fluid, the initially quasi-uniform distribution of the
particles might be subject to alteration as well, due to the unbalanced force
distribution on the particles. The current research investigates such particle
migrations for flows of Viscoelastic Fluids (VEFs) in a straight micro-channel
with a 1 by 3.25 mm rectangular cross-section. Polyacrylamide polymer in
concentrations of 210 and 250 ppm have been used, where the heavy, linear,
long-chain structure of the polymer introduces elasticity to the fluid. The
flow measurements are performed using the state-of-the-art Optical Coherence
Tomography (OCT) in 2D acquisition and doppler modes (D-OCT) to simultaneously
resolve tomographic velocity field, and the transition of particles through the
monitored cross-sections. Through the implementation of the experimental method
in the current manuscript, the capability and convenience of using OCT for the
problem at hand are demonstrated, as the abovementioned obtained data were to
be equivalently captured by simultaneous use of Particle Image Velocimetry
(PIV), for the ambient medium velocity field, and Lagrangian Particle Tracking
(LPT) schemes, for identification and tracking the position of the particles.
The velocity field is obtained with a spatial resolution of 2.58 µm in the
depth direction, and through sub-pixel image processing, highly accurate
positioning of the particles is realized. The experimental results are then
used for statistical calculations, such as the Probability Distribution
Function (PDF) of the cross-sectional map of the space frequented by the
particles to explain the underlying physics.
|
Kasra Amini, Gustaf Martensson, Outi Tammisola, Fredrik Lundell
|
2023-02-25T10:29:00Z
|
http://arxiv.org/abs/2302.13050v1
|
Experimental Investigation on Particle-Laden Flows of Viscoelastic Fluids in Micro-Channels Using Optical Coherence Tomography
###### Abstract
The introduction of particles to fluid flow is considered as a source of alterations in the viscosity-based behavior of the macroscopic flow field. The bilateral interactions between the solid particles and the fluid elements would both lead to changes in effective viscosity and, thereby, the velocity field. Considering the nonlinear response of non-Newtonian fluids to the local shear exerted on the bulks of fluid, the initially quasi-uniform distribution of the particles might be subject to alteration as well, due to the unbalanced force distribution on the particles. The current research investigates such particle migrations for flows of Viscoelastic Fluids (VEFs) in a straight micro-channel with a 1\(\times\)3.25 mm\({}^{2}\) rectangular cross-section. Aqueous solutions of Polyacrylamide polymer in concentrations of 210 and 250 ppm have been used, where the heavy, linear, long-chain structure of the polymer introduces elasticity to the fluid. The rheological measurements are presented for the characterization of the viscoelastic behavior of the fluid samples, and the results are compared with similar flow conditions, however, in particle-laden glycerol flow as a Newtonian reference case. The flow measurements are performed using the state-of-the-art Optical Coherence Tomography (OCT) in 2D acquisition and doppler modes (D-OCT) to simultaneously resolve tomographic velocity field, and the transition of particles through the monitored cross-sections. Through the implementation of the experimental method in the current manuscript, the capability and convenience of using OCT for the problem at hand are demonstrated, as the abovementioned obtained data were to be equivalently captured by simultaneous use of Particle Image Velocimetry (PIV), for the ambient medium velocity field, and Lagrangian Particle Tracking (LPT) schemes, for identification and tracking the position of the particles. The velocity field is obtained with the spatial resolution of 2.58 \(\upmu\)m in the depth direction, and through sub-pixel image processing, highly accurate positioning of the particles is realized. The experimental results are then used for statistical calculations, such as the Probability Distribution Function (PDF) of the cross-sectional map of the space frequented by the particles to explain the underlying physics.
## Introduction
The interplay of viscous and elastic effects in duct flows of Viscoelastic Fluids (VEFs) results in a distinct distribution of rigid particles compared to Newtonian flow fields [1]. Parameter studies show sub-regimes for the dynamics of particle migration based on flow/fluid geometry and properties [2]. The shear thinning effects render the field uneven in terms of local viscosity distribution, damping the inertial dynamics. On the other hand, elastic forces rooted in the fluid's molecular structure introduce an unbalanced force distribution on the particle peripheries. In highly elastic flows under low-inertia conditions, particle focusing along the channel centreline occurs, as elasticity drives the particles towards the duct axis, whereas shear thinning, alongside the presence of secondary flows, moves the particles away from the centreline. The equilibrium between these effects therefore determines the eventual positioning of the particles [3].
From a methodological viewpoint, Optical Coherence Tomography has shown great capability in flow field and tomographic measurements across a range of flow conditions. The near-wall errors associated with methods such as LDV and HWA\({}^{4}\) do not exist in OCT. There is no dependence on faithful tracer particles, and thereby the particle concentration issues that PIV and PTV face in high-shear regions, such as near-wall\({}^{5}\), are not crucial in OCT. Doppler OCT (D-OCT) not only works better with, but also requires, a certain level of opaqueness in the media under measurement. This makes it a perfectly compatible measurement technique for non-Newtonian fluids, which are mostly opaque\({}^{6}\). Finally, as the illumination source is a continuous coherent light beam, there are no synchronization issues between the light source and the image acquisition system, and the ambient light noise level is negligible.
## FLUID PREPARATION AND RHEOLOGY
As a purely Viscoelastic Fluid (VEF), aqueous solutions of Polyacrylamide PAA (FLoPAM AN934SH, SNF) have been used. The long-chain polymer is prepared by dissolving its dry powder in water. However, as the resulting viscoelastic and shear thinning behaviors of the fluid are extremely dependent on the preparation process, the exact same protocol has been maintained for all samples. This protocol consists of mixing at 800 rpm for 24 hours, 12 hours of rest, and 2 hours of mixing at 100 rpm for deaeration.
As explained in the following section, for velocity field measurements, Rhodamine powder at the rheologically insignificant concentration of 0.0285% has been added and mixed with the polymer for 8 hours at 400 rpm. This serves as the contrast agent required for Doppler velocity measurements with OCT. All fluids are then kept at rest for a minimum of 24 hours before any measurement.
The rheometry of the fluids has been performed with an Anton Paar MCR 702e. As the flow measurements are done at room temperature, the rheometer has also been set to 25\({}^{\circ}\)C. The concentric cylinder configuration (Bob and Cup) has been used, with a bob diameter of 43 mm and a cup diameter of 45 mm, leaving a 1 mm gap in between. **Figure 1** shows the result of rheometry for 210 and 250 ppm Polyacrylamide.
Figure 1: Rheological measurements of Polyacrylamide 210 and 250 ppm.
## Methodology
The test rig is a milli-fluidic straight duct with a rectangular cross-section. The total length of the duct is 90 mm, with a constant height of 1 mm and width of 3.25 mm along the axis. The fabrication of the duct, as shown in **Figure 2**, is based on a sandwich structure: the top and bottom surfaces are made from 3 mm thick transparent PMMA and are separated by a 1 mm thick stainless-steel plate forming the side walls of the duct; the assembly is pressed between the top- and bottom-most aluminium plates with a large number of screws. A syringe pump drives the fluid into the inlet, and the fluid is then drained to atmospheric conditions at the outlet.
A Thorlabs Telesto II Spectral Domain Optical Coherence Tomography (SD-OCT) system has been used as the measurement device, with Doppler OCT (D-OCT) as the main recording mode for the current study. For the particle-laden cases, however, all fluids have been used in their natural state without any contrast agent. Therefore, the OCT beam does not record any velocity for the flow, other than at the points/instances where a particle crosses the plane of measurement. The intensity field captured through OCT has then been used for particle identification through image processing algorithms.
Equations (1) to (5) present the formulations of the non-dimensional number groups used in this study. The Reynolds number (Re) has been defined based on the half-height of the duct and the bulk velocity. The coefficient of viscosity for all calculations is based on the rheological measurements performed at the shear rates corresponding to the aforementioned parameters. The Weissenberg number (Wi) uses the relaxation time obtained by fitting the rheometric data to the Carreau model. The same parameters used for Re are also used for Wi to calculate the flow time scales. The Elastic number (El) has been used to capture the effects of elasticity as a function of the fluid and the geometry, regardless of the dynamics imposed on the flow.
The Stokes number (Stk) has been considered as the ratio between the timescale of the particle dynamics and that of the flow field, hence giving insight into the fidelity of the particles. In a turbulent flow field, the flow timescales are defined based on the eddy size and velocity, which adequately represent the physical time required for the flow to undergo such a trajectory. However, the conventional use of the bulk velocity and the half-height of a channel does not
Figure 2: Experimental setup - OCT apparatus
render the accurate timescale of the flow for the case at hand, since no bulk of the flow actually passes along the mentioned distance at the bulk velocity. In this light, two distinct, yet complementary, definitions of Stk have been used: first, based on the axial location of the tomographic measurement along the duct axis and the bulk velocity (Eq. 4), as the time required for average bulks of the flow to reach the test section after entering the inlet; and second, based on the duct half-height and the maximum velocity at its centreline (Eq. 5), as an indicator of the wall momentum diffusion in the cross-stream direction. This leads to Stk values ranging from 1.8\(\times\)10\({}^{-6}\) to 9.1\(\times\)10\({}^{-5}\) based on Eq. (4) and from 4.3\(\times\)10\({}^{-4}\) to 2.1\(\times\)10\({}^{-2}\) based on Eq. (5). It should be mentioned that all values are well below the acceptable threshold\({}^{5}\) of 0.1 for the validity of the particle-fidelity premise. Consequently, the migration and focusing of the particles captured in this study are purely due to the force distribution on their peripheries, and not to any incapability of the particles to follow the flow trajectories.
\[Re=\frac{\rho U_{Bulk}h}{\mu_{rheo.}} \tag{1}\]
\[Wi=\frac{\lambda U_{Bulk}}{h} \tag{2}\]
\[El=\frac{Wi}{Re}=\frac{\lambda\mu}{\rho h^{2}} \tag{3}\]
\[Stk=\frac{\tau_{p}}{\tau_{f}}=\frac{t_{p}u_{0}}{\ell_{0}}=d_{p}^{2}\,\frac{\rho_{p}}{18\mu}\Big{/}\frac{\mathcal{L}_{meas.loc.}}{U_{Bulk}} \tag{4}\]
\[Stk=\frac{\tau_{p}}{\tau_{f}}=\frac{t_{p}u_{0}}{\ell_{0}}=d_{p}^{2}\,\frac{\rho_{p}}{18\mu}\Big{/}\frac{h}{U_{max}} \tag{5}\]
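As an illustration, the following sketch evaluates Eqs. (1) to (5) for one case (PAA 250 ppm at 10 ml/min); the fluid density, particle density, and centreline-velocity factor are assumed values for illustration, while viscosity, relaxation time, geometry, and particle size follow the values reported here:

```python
# Illustrative evaluation of Eqs. (1)-(5) for PAA 250 ppm at 10 ml/min.
rho = 998.0           # fluid density [kg/m^3] (assumed, water-like)
mu = 2.75e-3          # rheometric viscosity [Pa.s] (Table 1)
lam = 0.35            # Carreau relaxation time [s] (Table 1)
h = 0.5e-3            # duct half-height [m]
u_bulk = 51.28e-3     # bulk velocity [m/s]
u_max = 1.5 * u_bulk  # centreline velocity (assumed parabolic-profile factor)
d_p = 56e-6           # particle diameter [m]
rho_p = 1050.0        # particle density [kg/m^3] (assumed, silicon-based)
L_meas = 65e-3        # axial measurement location [m]

Re = rho * u_bulk * h / mu              # Eq. (1)
Wi = lam * u_bulk / h                   # Eq. (2)
El = Wi / Re                            # Eq. (3)
tau_p = d_p**2 * rho_p / (18 * mu)      # particle response time
Stk_axial = tau_p / (L_meas / u_bulk)   # Eq. (4)
Stk_wall = tau_p / (h / u_max)          # Eq. (5)
print(Re, Wi, El, Stk_axial, Stk_wall)
```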
## 3 Results and Discussions
The results presented in this section follow the case combinations summarized in **Table 1**. Two concentrations of Polyacrylamide, i.e., 210 and 250 ppm, have been used as the purely viscoelastic fluid. For velocity field recordings of the Newtonian fluids through 2D B-scan D-OCT, milk has been used for its adequate contrast and scattering behavior. For the Newtonian particle-laden measurements, however, where the invisibility of the medium fluid to the D-OCT is a prerequisite, Glycerol 20% has been used instead.
| Fluid | Relaxation time [s] | Flow rate [ml/min] | Bulk velocity [mm/s] | Viscosity [Pa.s] | Wi [-] | Re [-] | El [-] |
|---|---|---|---|---|---|---|---|
| Glycerol 20% | 0.00 | 0.4 | 2.05 | 2.08E-03 | 0.00 | 0.52 | 0.00 |
| Glycerol 20% | 0.00 | 5 | 25.64 | 2.08E-03 | 0.00 | 6.47 | 0.00 |
| Glycerol 20% | 0.00 | 10 | 51.28 | 2.08E-03 | 0.00 | 12.94 | 0.00 |
| Glycerol 20% | 0.00 | 20 | 102.56 | 2.08E-03 | 0.00 | 25.89 | 0.00 |
| PAA 210 ppm | 0.17 | 0.4 | 2.05 | 3.20E-03 | 0.70 | 0.32 | 2.18 |
| PAA 210 ppm | 0.17 | 5 | 25.64 | 2.80E-03 | 8.72 | 4.58 | 1.90 |
| PAA 210 ppm | 0.17 | 10 | 51.28 | 2.55E-03 | 17.44 | 10.56 | 1.65 |
| PAA 210 ppm | 0.17 | 20 | 102.56 | 2.40E-03 | 34.87 | 22.44 | 1.55 |
| PAA 250 ppm | 0.35 | 0.4 | 2.05 | 3.60E-03 | 1.44 | 0.28 | 5.04 |
| PAA 250 ppm | 0.35 | 5 | 25.64 | 3.00E-03 | 17.95 | 4.27 | 4.20 |
| PAA 250 ppm | 0.35 | 10 | 51.28 | 2.75E-03 | 35.90 | 9.79 | 3.67 |
| PAA 250 ppm | 0.35 | 20 | 102.56 | 2.35E-03 | 71.79 | 22.91 | 3.13 |

Table 1: Case descriptions, fluid properties and flow conditions.
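A small consistency check one can run on Table 1: by Eq. (3) the tabulated El should equal Wi/Re, up to rounding of the tabulated digits.

```python
# El = Wi/Re (Eq. 3) recomputed from the tabulated Wi and Re of Table 1.
cases = [
    ("PAA 210, 0.4 ml/min", 0.70, 0.32, 2.18),
    ("PAA 210, 20 ml/min", 34.87, 22.44, 1.55),
    ("PAA 250, 0.4 ml/min", 1.44, 0.28, 5.04),
    ("PAA 250, 20 ml/min", 71.79, 22.91, 3.13),
]
for name, wi, re, el in cases:
    # small deviations reflect rounding of the tabulated values
    print(f"{name}: Wi/Re = {wi/re:.2f}  (tabulated El = {el})")
```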
Flow fields depicted in **Figure 3** show the comparative trend of increasing elasticity effects (Wi) in two ranges of Reynolds number. Following the left column, i.e. the low-Re cases, down towards higher-Wi flows, one can see the initiation of instabilities, both in the central regions of the duct, where the elastic effects are strongly in play, and propagated through the field from surface effects. Such surface effects are more vividly observed in the trend that the right-column cases follow from low (Newtonian, at Wi\(=\)0) to high Wi numbers. A strong asymmetry is seen in both the cross-stream and streamwise planes. The maximum velocity might not occur at the centreline, and the effects of the side-walls are more prominently witnessed in deviating the whole flow field away from the Newtonian paraboloid profiles prescribed by analytical solutions such as Boussinesq's solution for a rectangular duct.
Figure 4: Near-wall velocity profile - Newtonian and viscoelastic fluids, and Boussinesq analytical solution for rectangular ducts.
Figure 3: Tomographic velocity field out of cross-sectional plane for Newtonian and viscoelastic fluids.
Figure 5: Probability Distribution Function (PDF) of the particles’ migration pattern captured at 65 mm from the duct inlet. Left) Glycerol 20%, Middle) Polyacrylamide 210 ppm, and Right) Polyacrylamide 250 ppm.
**Figure 4** shows the near-wall velocity profile for all of the above cases. The Newtonian cases, regardless of the Re, show close agreement with the analytical solution. The shear-thinning effects of the Polyacrylamide lead to higher velocity gradients in near-wall regions, where the highest shear rates are exerted on the flow field. Drastic fluctuations and instabilities appear in the cases with higher Wi numbers, and asymmetric velocity profiles are recognizable.
Given the described flow fields, the initially homogeneous distribution of rigid particles of 56 \(\mu\)m in diameter has been re-evaluated at the axial position of 65 mm from the duct entrance. This axial position has been chosen to fall inside the developed region of the duct flow, and yet not so close to the outlet orifice that the flow patterns are disturbed. The particle concentration in the fluids is set to a 0.8% volume fraction to minimize particle collisions and close interactions.
As mentioned above, the fluid media have been configured to be invisible to the OCT, so that the rigid silicon-based particles (originally used as PIV/PTV tracer particles) produce the sole signal in the tomographic recordings. For each fluid/flow case, 500 consecutive frames have been recorded using D-OCT to obtain a probability space for the migration of the particles. An image processing scheme identifies the particles in each frame, superimposes the locations of all particles over the duration of the experiment, and obtains a tomographic Probability Distribution Function (PDF) of the particles (**Figure 5**).
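Assuming the detection stage yields one array of particle centroids per frame, the superposition-and-normalisation step described above could be sketched in Python/numpy as follows; the bin count and the synthetic input are illustrative placeholders, not the values used in the study.

```python
import numpy as np

def particle_pdf(frames, bins=(64, 64), extent=None):
    """Superimpose particle centroids from all frames and normalise
    the counts into a cross-sectional PDF (integrates to one).

    frames -- iterable of (N_i, 2) arrays of (y, z) centroid positions,
              one array per recorded B-scan.
    """
    pts = np.vstack(list(frames))                  # temporal superposition
    hist, ye, ze = np.histogram2d(pts[:, 0], pts[:, 1],
                                  bins=bins, range=extent)
    cell = np.diff(ye)[0] * np.diff(ze)[0]         # bin area
    return hist / (hist.sum() * cell), ye, ze      # normalised PDF

# Synthetic stand-in for 500 D-OCT frames of detected centroids:
rng = np.random.default_rng(0)
frames = [rng.uniform(0.0, 1.0, size=(40, 2)) for _ in range(500)]
pdf, ye, ze = particle_pdf(frames, extent=[[0, 1], [0, 1]])
```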
Particle-laden Newtonian fluid flows have been shown to maintain a quasi-uniform distribution of the particles throughout the duct. The effect of the Reynolds number is relatively weak; however, the tendency of the particles to be circulated by the secondary flows in the rectangular duct is recognizable as the Re increases. As there is no counter-acting force in such fluids, the eventual observation is that the particles remain continuously mixed up to the axial location of the measurements. It should be noted that the lower concentration of the particles at greater depths of the duct is solely due to the fact that the OCT beam loses intensity as it penetrates into the medium, and the particles in those regions are essentially shadowed by those flowing on top of them.
The VEFs at low Weissenberg numbers behave similarly to the Newtonian fluids; the upper row of **Figure 5** attests to this in terms of the migration of the particles as well. As the Weissenberg number increases, however, the first stage of alterations in the distribution of the particles occurs when there is some level of elasticity while the Re is still at the lower end of the spectrum. A focusing trend is seen as the elastic forces overcome the weak inertial forces in moving the particles towards the centreline of the duct. Keeping the Re relatively constant and increasing the Wi, and thereby the El, by a factor of two, a sharper and more precise focusing is noticeable in the measurements at the same axial location in the duct.
The elastic force decreases as the particles get closer to the centreline. The inertial forces, which increase with Re, tend to move the particles towards the walls. The interaction between these force groups determines the final position of the particles. In weakly elastic cases, equilibrium occurs where the two force groups balance. However, a much stronger elasticity field will result in a precise focusing of the particles on the centreline.
It should be noted that the other parameters playing effective roles in the distribution of the particles are the effects of shear thinning and those of the secondary flows, both of which tend to push the particles towards the walls. Increasing the Reynolds number to \(\sim\)10 shows a redistribution of the particles, previously concentrated in the central regions of the duct, towards the walls as a result of more dominant secondary flows. This pattern is more recognizable as the Re is increased to twice the previous value, where the near-wall regions become the most frequented parts of the duct cross-section. Interestingly, however, owing to the conventional lift force exerted on the particles as they enter regions whose distance from the surface is comparable with their own dimensions, an unoccupied band is observed in the direct vicinity of the walls.
## Conclusions
The dynamics of particle migration in milli-channel flows of VEFs is studied experimentally using Doppler Optical Coherence Tomography (D-OCT). Polyacrylamide has been used as the primary VEF, and a dilution of Glycerol has served as the Newtonian reference fluid. Spherical particles of 56 \(\upmu\)m in diameter have been introduced into the flow homogeneously, and their distribution patterns have been captured using the aforementioned tomographic technique. At low Re numbers, elasticity drives a particle focusing around the duct centreline, which is then disrupted as higher levels of inertia strengthen the secondary flows that, in turn, push the particles towards the walls.
## Acknowledgement
This project has received funding from European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 955605 YIELDGAP.
|
2307.07823
|
Automorphisms of Veronese subalgebras of polynomial algebras and free
Poisson algebras
|
The Veronese subalgebra $A_0$ of degree $d\geq 2$ of the polynomial algebra
$A=K[x_1,x_2,\ldots,x_n]$ over a field $K$ in the variables
$x_1,x_2,\ldots,x_n$ is the subalgebra of $A$ generated by all monomials of
degree $d$ and the Veronese subalgebra $P_0$ of degree $d\geq 2$ of the free
Poisson algebra $P=P\langle x_1,x_2,\ldots,x_n\rangle$ is the subalgebra
spanned by all homogeneous elements of degree $kd$, where $k\geq 0$. If $n\geq
2$ then every derivation and every locally nilpotent derivation of $A_0$ and
$P_0$ over a field $K$ of characteristic zero is induced by a derivation and a
locally nilpotent derivation of $A$ and $P$, respectively. Moreover, we prove
that every automorphism of $A_0$ and $P_0$ over a field $K$ closed with respect
to taking all $d$-roots of elements is induced by an automorphism of $A$ and
$P$, respectively.
|
Bakhyt Aitzhanova, Leonid Makar-Limanov, Ualbai Umirbaev
|
2023-07-15T15:05:09Z
|
http://arxiv.org/abs/2307.07823v1
|
# Automorphisms of Veronese Subalgebras of Polynomial Algebras and Free Poisson Algebras
###### Abstract.
The Veronese subalgebra \(A_{0}\) of degree \(d\geq 2\) of the polynomial algebra \(A=K[x_{1},x_{2},\ldots,x_{n}]\) over a field \(K\) in the variables \(x_{1},x_{2},\ldots,x_{n}\) is the subalgebra of \(A\) generated by all monomials of degree \(d\) and the Veronese subalgebra \(P_{0}\) of degree \(d\geq 2\) of the free Poisson algebra \(P=P\langle x_{1},x_{2},\ldots,x_{n}\rangle\) is the subalgebra spanned by all homogeneous elements of degree \(kd\), where \(k\geq 0\).
If \(n\geq 2\) then every derivation and every locally nilpotent derivation of \(A_{0}\) and \(P_{0}\) over a field \(K\) of characteristic zero is induced by a derivation and a locally nilpotent derivation of \(A\) and \(P\), respectively. Moreover, we prove that every automorphism of \(A_{0}\) and \(P_{0}\) over a field \(K\) closed with respect to taking all \(d\)-roots of elements is induced by an automorphism of \(A\) and \(P\), respectively.
**Mathematics Subject Classification (2020):** 14R10, 14J50, 13F20.
**Key words:** Automorphism, derivation, free Poisson algebra, polynomial algebra.
## 1. Introduction
Let \(K\) be an arbitrary field and let \(\mathbb{A}^{n}\) and \(\mathbb{P}^{n}\) be the affine and the projective \(n\)-spaces over \(K\), respectively. The _Veronese map_ of degree \(d\) is the map
\[\nu_{n,d}:\mathbb{P}^{n}\rightarrow\mathbb{P}^{m}\]
that sends \([x_{0}:\ldots:x_{n}]\) to the point whose homogeneous coordinates are all \(m+1\) monomials of total degree \(d\) in \(x_{0},\ldots,x_{n}\), where
\[m=\binom{n+d}{d}-1.\]
It is well known that the image \(V_{n,d}\) of the Veronese map \(\nu_{n,d}\) is a projective variety and is called the _Veronese variety_[7].
If \(Y_{i_{0}\ldots i_{n}}=x_{0}^{i_{0}}\ldots x_{n}^{i_{n}}\), \(i_{0}+\ldots+i_{n}=d\), then the Veronese variety is determined by the set of quadratic relations
\[Y_{i_{0}\ldots i_{n}}Y_{j_{0}\ldots j_{n}}=Y_{k_{0}\ldots k_{n}}Y_{r_{0} \ldots r_{n}}, \tag{1}\]
where \(i_{0}+j_{0}=k_{0}+r_{0},\ldots,i_{n}+j_{n}=k_{n}+r_{n}\)[19].
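For instance, for \(n=1\) and \(d=2\) one has \(m=\binom{3}{2}-1=2\), the Veronese map is
\[\nu_{1,2}:\mathbb{P}^{1}\rightarrow\mathbb{P}^{2},\quad[x_{0}:x_{1}]\mapsto[x_{0}^{2}:x_{0}x_{1}:x_{1}^{2}],\]
and its image is the conic cut out by the single relation of type (1), namely \(Y_{20}Y_{02}=Y_{11}^{2}\).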
The _affine cone of \(V_{n,d}\)_ or the _affine Veronese variety_ is the affine subvariety of \(\mathbb{A}^{m+1}\) determined by the set of relations (1). The algebra of polynomial functions on the affine
Veronese cone \(V_{n,d}\) is isomorphic to the subalgebra of \(K[x_{0},\ldots,x_{n}]\) generated by all monomials of degree \(d\)[21].
Veronese surfaces play an important role in the description of quasihomogeneous affine surfaces given by M.H. Gizatullin [4] and V.L. Popov [18]. They form one of the main examples of the so-called _Gizatullin surfaces_[10]. The structure of the automorphism groups of Veronese surfaces is studied in [1, 2, 5, 6, 11, 12]. The derivations and locally nilpotent derivations of affine Veronese surfaces are described in [1].
Recently J. Kollar devoted two interesting papers [8, 9] to the study of automorphism groups of some more general affine varieties. In particular, he described the group of automorphisms of all affine Veronese varieties.
Let \(A=K[x_{1},x_{2},\ldots,x_{n}]\) be the polynomial algebra over a field \(K\) in the variables \(x_{1},x_{2},\ldots,x_{n}\). Consider the grading
\[A=A_{0}\oplus A_{1}\oplus\ldots\oplus A_{d-1},\]
where \(d\geq 2\) and \(A_{i}\) is the subspace of \(A\) generated by all monomials of degree \(kd+i\) for all \(k\geq 0\). This is a \(\mathbb{Z}_{d}\)-grading of \(A\), i.e., \(A_{i}A_{j}\subseteq A_{i+j}\) for all \(i,j\in\mathbb{Z}_{d}\). The subalgebra \(A_{0}\) is called the _Veronese subalgebra of \(A\) of degree \(d\)_.
Similarly, let \(P=P\langle x_{1},x_{2},\ldots,x_{n}\rangle\) be the free Poisson algebra over \(K\) in the variables \(x_{1},x_{2},\ldots,x_{n}\). The grading
\[P=P_{0}\oplus P_{1}\oplus\ldots\oplus P_{d-1},\]
where \(P_{i}\) is the subspace of \(P\) generated by all homogeneous elements of degree \(kd+i\), is a \(\mathbb{Z}_{d}\)-grading of \(P\), i.e., \(P_{i}P_{j}\subseteq P_{i+j}\) and \(\{P_{i},P_{j}\}\subseteq P_{i+j}\) for all \(i,j\in\mathbb{Z}_{d}\). The subalgebra \(P_{0}\) is called the _Veronese subalgebra of \(P\) of degree \(d\)_.
We prove that every derivation and every locally nilpotent derivation of \(P_{0}\) (of \(A_{0}\)) over a field \(K\) of characteristic zero is induced by a derivation and a locally nilpotent derivation of \(P\) (of \(A\)), respectively. Moreover, we prove that every automorphism of \(P_{0}\) (of \(A_{0}\)) over a field \(K\), closed with respect to taking all \(d\)-roots of elements, is induced by an automorphism of \(P\) (of \(A\)).
This paper is organized as follows. In Section 2 we describe a basis of free Poisson algebras and give some elementary definitions that are necessary in the sequel. Derivations of the Veronese subalgebras are studied in Section 3 and automorphisms in Section 4. All results are proven for Poisson algebras. Analogues of these results for polynomial algebras are formulated in Section 5. In the same section we give some counter-examples showing that the analogous results do not hold for polynomial algebras in one variable and for free associative algebras.
## 2. Free Poisson algebra \(P\langle x_{1},\ldots,x_{n}\rangle\)
A vector space \(P\) over a field \(K\) endowed with two bilinear operations \(x\cdot y\) (a multiplication) and \(\{x,y\}\) (a Poisson bracket) is called a _Poisson algebra_ if \(P\) is a commutative associative algebra under \(x\cdot y\), \(P\) is a Lie algebra under \(\{x,y\}\), and \(P\) satisfies the following identity (the Leibniz identity):
\[\{x,y\cdot z\}=y\cdot\{x,z\}+\{x,y\}\cdot z.\]
Symplectic Poisson algebras \(P_{n}\) appear in many areas of algebra. For each natural \(n\geq 1\) the symplectic Poisson algebra \(P_{n}\) of index \(n\) is a polynomial algebra \(K[x_{1},y_{1},\ldots,x_{n},y_{n}]\) endowed with the Poisson bracket defined by
\[\{x_{i},y_{j}\}=\delta_{ij},\ \ \{x_{i},x_{j}\}=0,\ \ \{y_{i},y_{j}\}=0,\]
where \(\delta_{ij}\) is the Kronecker symbol and \(1\leq i,j\leq n\).
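As a computational aside (a minimal sketch assuming Python with sympy; it is not part of the paper's argument), the canonical bracket of \(P_{2}\) can be realized through partial derivatives, and the defining relations and the Leibniz identity checked on sample polynomials:

```python
from sympy import symbols, diff, expand, simplify

x1, y1, x2, y2 = symbols('x1 y1 x2 y2')
X, Y = (x1, x2), (y1, y2)

def bracket(f, g):
    # {f, g} = sum_i (df/dx_i * dg/dy_i - df/dy_i * dg/dx_i)
    return expand(sum(diff(f, X[i])*diff(g, Y[i]) - diff(f, Y[i])*diff(g, X[i])
                      for i in range(2)))

# Defining relations: {x_i, y_j} = delta_ij, {x_i, x_j} = {y_i, y_j} = 0
assert bracket(x1, y1) == 1 and bracket(x1, y2) == 0 and bracket(x1, x2) == 0

# Leibniz identity {f, g*h} = g*{f, h} + {f, g}*h on sample elements
f, g, h = x1*y2 + x2**2, y1**2 - x1*y1, x2*y2
assert simplify(bracket(f, g*h) - (g*bracket(f, h) + bracket(f, g)*h)) == 0
```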
Let \(P\) be a Poisson algebra. A linear map \(D:P\to P\) is called a _derivation_ of the Poisson algebra \(P\) if
\[D(xy)=D(x)y+xD(y) \tag{2}\]
and
\[D(\{x,y\})=\{D(x),y\}+\{x,D(y)\} \tag{3}\]
for all \(x,y\in P\).
We refer to a linear map \(D:P\to P\) satisfying (2) as an _associative derivation_ of \(P\) and to a linear map \(D:P\to P\) satisfying (3) as a _Lie derivation_ of \(P\). Thus, a derivation of a Poisson algebra is simultaneously an associative and a Lie derivation. We often refer to them as Poisson derivations.
The Leibniz identity implies that for any \(x\in P\) the map
\[\operatorname{ad}_{x}:P\to P\ \left(y\mapsto\{x,y\}\right)\]
is an associative derivation of \(P\). The Jacobi identity implies that the map \(\operatorname{ad}_{x}\) is also a Lie derivation of \(P\). So for any \(x\in P\) the map \(\operatorname{ad}_{x}\) is a Poisson derivation of \(P\).
Let \(L\) be a Lie algebra with Lie bracket \([\,,]\) over a field \(K\) and let \(e_{1},e_{2}\ldots,e_{k},\ldots\) be a linear basis of \(L\). Then there exists a unique bracket \(\{\,,\}\) on the polynomial algebra \(K[e_{1},e_{2},\ldots,e_{k},\ldots]\) defined by
\[\{e_{i},e_{j}\}=[e_{i},e_{j}]\]
for all \(i,j\) and satisfying the Leibniz identity. With this bracket
\[P(L)=\langle K[e_{1},e_{2},\ldots],\cdot,\{\,,\}\rangle\]
becomes a Poisson algebra. This Poisson algebra \(P(L)\) is called the _Poisson enveloping algebra_[23] (or _Poisson symmetric algebra_[15]) of \(L\). Note that the bracket \(\{,\}\) of the algebra \(P(L)\) depends on the structure of \(L\) but does not depend on a chosen basis.
Let \(L=\operatorname{Lie}\langle x_{1},\ldots,x_{n}\rangle\) be the free Lie algebra with free generators \(x_{1},\ldots,x_{n}\). It is well-known (see, for example [20]) that \(P(L)\) is the free Poisson algebra over \(K\) in the variables \(x_{1},\ldots,x_{n}\). We denote it by \(P\langle x_{1},\ldots,x_{n}\rangle\).
Let us choose a multihomogeneous linear basis
\[x_{1},\ldots,x_{n},[x_{1},x_{2}],\ldots,[x_{1},x_{n}],\ldots,[x_{n-1},x_{n}], [[x_{1},x_{2}],x_{3}],\ldots\]
of a free Lie algebra \(L\) and denote the elements of this basis by
\[e_{1},e_{2},\ldots,e_{s},\ldots. \tag{4}\]
The algebra \(P=P\langle x_{1},\ldots,x_{n}\rangle\) coincides with the polynomial algebra on the elements (4). Consequently, the monomials
\[u=e_{i_{1}}e_{i_{2}}\ldots e_{i_{s}}, \tag{5}\]
where \(i_{1}\leq i_{2}\leq\ldots\leq i_{s}\) form a linear basis of \(P\).
Denote by \(\deg\) the standard degree function on \(P\), i.e., \(\deg(x_{i})=1\) for all \(1\leq i\leq n\). If \(u\) is an element of the form (5) then
\[\deg u=\deg e_{i_{1}}+\deg e_{i_{2}}+\ldots+\deg e_{i_{s}}.\]
Set also \(d(u)=s\) and call it the _polynomial length_ of \(u\). Note that
\[\deg\{f,g\}=\deg f+\deg g\]
if \(f\) and \(g\) are homogeneous and \(\{f,g\}\neq 0\).
Denote by \(Q(P)=P(x_{1},\ldots,x_{n})\) the field of fractions of the polynomial algebra \(K[e_{1},e_{2},\ldots]\) in the variables (4). The Poisson bracket \(\{\cdot,\cdot\}\) on \(K[e_{1},e_{2},\ldots]=P\) can be uniquely extended to a Poisson bracket on the field of its fractions \(Q(P)\) and
\[\Big{\{}\frac{a}{b},\frac{c}{d}\Big{\}}=\frac{\{a,c\}bd-\{a,d\}bc-\{b,c\}ad+\{ b,d\}ac}{b^{2}d^{2}} \tag{6}\]
for all \(a,b,c,d\in P\) with \(bd\neq 0\).
The field \(Q(P)=P(x_{1},x_{2},\ldots,x_{n})\) with this Poisson bracket is called the _free Poisson field_ over \(K\) in variables \(x_{1},\ldots,x_{n}\)[16].
Several combinatorial results on the structure of free Poisson algebras and free Poisson fields are proven in [13, 14, 15, 16, 17, 22].
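Identity (6) is forced by the Leibniz rule alone, so it can be sanity-checked with any concrete Poisson bracket. A minimal sympy sketch, with the canonical symplectic bracket on \(K[x,y]\) standing in (every Poisson bracket extends to fractions by the same formula):

```python
from sympy import symbols, diff, simplify

x, y = symbols('x y')
br = lambda f, g: diff(f, x)*diff(g, y) - diff(f, y)*diff(g, x)

a, b, c, d = x**2 + y, x*y, y**2 - x, x + 1     # sample elements, bd != 0
lhs = br(a/b, c/d)                               # bracket applied to fractions
rhs = (br(a, c)*b*d - br(a, d)*b*c - br(b, c)*a*d + br(b, d)*a*c)/(b**2*d**2)
assert simplify(lhs - rhs) == 0                  # identity (6) holds
```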
We fix a grading
\[P=P_{0}\oplus P_{1}\oplus\ldots\oplus P_{d-1} \tag{7}\]
of the free Poisson algebra \(P=P\langle x_{1},\ldots,x_{n}\rangle\), where \(P_{i}\) is the linear span of all elements of degree \(i+ds\), \(i=0,1,\ldots,d-1\), and \(s\) is an arbitrary nonnegative integer. This is a \(\mathbb{Z}_{d}\)-grading of \(P\), i.e.,
\[P_{i}P_{j}\subseteq P_{i+j},\ \ \{P_{i},P_{j}\}\subseteq P_{i+j},\]
where \(i,j\in\mathbb{Z}_{d}=\mathbb{Z}/d\mathbb{Z}\). For shortness we will refer to this grading as the \(d\)-grading.
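For instance, with \(d=2\) the elements \(x_{1}x_{2}\) and \(\{x_{1},x_{2}\}\) are homogeneous of degree \(2\) and lie in \(P_{0}\), while \(x_{1}\{x_{1},x_{2}\}\) has degree \(3\) and lies in \(P_{1}\).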
An automorphism \(\phi\in\operatorname{Aut}P\) is called a _graded automorphism_ with respect to grading (7) if \(\phi(x_{1}),\phi(x_{2}),\ldots,\phi(x_{n})\in P_{1}\). A graded automorphism is called _graded tame_ if it is a product of graded elementary automorphisms.
We will call a graded automorphism of \(P\) with respect to grading (7) a _d-graded automorphism_ for shortness. Obviously, every \(d\)-graded automorphism induces an automorphism of the algebra \(P_{0}\). A derivation \(D\) of \(P\) will be called a _d-graded derivation_ if \(D(x_{1}),D(x_{2}),\ldots,D(x_{n})\in P_{1}\).
## 3. Derivations of \(P_{0}\)
In this section we assume that \(K\) is an arbitrary field of characteristic zero.
**Lemma 1**.: _Every derivation of the Poisson algebra \(P=P\langle x_{1},x_{2},\ldots,x_{n}\rangle\) over \(K\) can be uniquely extended to a derivation of the Poisson field \(Q(P)=P(x_{1},x_{2},\ldots,x_{n})\)._
Proof. Let \(D\) be an arbitrary derivation of the free Poisson algebra \(P\). In particular, \(D\) is a derivation of the polynomial algebra \(P=K[e_{1},\ldots,e_{s},\ldots]\). It is well known [25, p. 120] that \(D\) can be uniquely extended to an associative derivation \(S\) of the quotient field
\(Q(P)=K(e_{1},\ldots,e_{s},\ldots)\). We will show that \(S\) is a Lie derivation of \(Q(P)\), i.e., that \(S\) satisfies (3). We have to check that
\[S\left(\left\{\frac{a}{b},\frac{c}{d}\right\}\right)=\left\{S\left(\frac{a}{b} \right),\frac{c}{d}\right\}+\left\{\frac{a}{b},S\left(\frac{c}{d}\right)\right\} \tag{8}\]
for all \(a,b,c,d\in P\langle x_{1},\ldots,x_{n}\rangle\) with \(bd\neq 0\). Using (6) we get
\[S\left(\left\{\frac{a}{b},\frac{c}{d}\right\}\right)=S\left(\frac{\{a,c\}bd- \{a,d\}bc-\{b,c\}ad+\{b,d\}ac}{b^{2}d^{2}}\right)\]
\[=S\left(\frac{\{a,c\}}{bd}\right)-S\left(\frac{\{a,d\}c}{bd^{2}}\right)-S \left(\frac{\{b,c\}a}{b^{2}d}\right)+S\left(\frac{\{b,d\}ac}{b^{2}d^{2}}\right)\]
\[=\frac{S(\{a,c\})bd-\{a,c\}S(bd)}{b^{2}d^{2}}-\frac{S(\{a,d\}c)bd^{2}-\{a,d\} cS(bd^{2})}{b^{2}d^{4}}\]
\[-\frac{S(\{b,c\}a)b^{2}d-\{b,c\}aS(b^{2}d)}{b^{4}d^{2}}+\frac{S(\{b,d\}ac)b^{ 2}d^{2}-\{b,d\}acS(b^{2}d^{2})}{b^{4}d^{4}}\]
\[=\frac{S(\{a,c\})}{bd}-\frac{\{a,c\}S(b)}{b^{2}d}-\frac{\{a,c\}S(d)}{bd^{2}}- \frac{S(\{a,d\})c}{bd^{2}}-\frac{\{a,d\}S(c)}{bd^{2}}+\frac{\{a,d\}cS(b)}{b^{2 }d^{2}}\]
\[+\frac{2\{a,d\}cS(d)}{bd^{3}}-\frac{S(\{b,c\})a}{b^{2}d}-\frac{\{b,c\}S(a)}{b ^{2}d}+\frac{2\{b,c\}aS(b)}{b^{3}d}+\frac{\{b,c\}aS(d)}{b^{2}d^{2}}\]
\[+\frac{S(\{b,d\})ac}{b^{2}d^{2}}+\frac{\{b,d\}S(a)c}{b^{2}d^{2}}+\frac{\{b,d \}aS(c)}{b^{2}d^{2}}-\frac{2\{b,d\}acS(b)}{b^{3}d^{2}}-\frac{2\{b,d\}acS(d)}{ b^{2}d^{3}}\]
\[=\frac{\{S(a),c\}}{bd}+\frac{\{a,S(c)\}}{bd}-\frac{\{a,c\}S(b)}{b^{2}d}-\frac {\{a,c\}S(d)}{bd^{2}}-\frac{\{S(a),d\}c}{bd^{2}}\]
\[-\frac{\{a,S(d)\}c}{bd^{2}}-\frac{\{a,d\}S(c)}{bd^{2}}+\frac{\{a,d\}cS(b)}{b^{ 2}d^{2}}+\frac{2\{a,d\}cS(d)}{bd^{3}}-\frac{\{S(b),c\}a}{b^{2}d}\]
\[-\frac{\{b,S(c)\}a}{b^{2}d}-\frac{\{b,c\}S(a)}{b^{2}d}+\frac{2\{b,c\}aS(b)}{b^ {3}d}+\frac{\{b,c\}aS(d)}{b^{2}d^{2}}+\frac{\{S(b),d\}ac}{b^{2}d^{2}}\]
\[+\frac{\{b,S(d)\}ac}{b^{2}d^{2}}+\frac{\{b,d\}S(a)c}{b^{2}d^{2}}+\frac{\{b,d \}aS(c)}{b^{2}d^{2}}-\frac{2\{b,d\}acS(b)}{b^{3}d^{2}}-\frac{2\{b,d\}acS(d)}{ b^{2}d^{3}}.\]
Direct calculations give that
\[\left\{S\left(\frac{a}{b}\right),\frac{c}{d}\right\}+\left\{\frac{a}{b},S\left(\frac{c}{d}\right)\right\}=\left\{\frac{S(a)}{b},\frac{c}{d}\right\}-\left\{\frac{aS(b)}{b^{2}},\frac{c}{d}\right\}+\left\{\frac{a}{b},\frac{S(c)}{d}\right\}-\left\{\frac{a}{b},\frac{cS(d)}{d^{2}}\right\}\]
\[=\frac{\{S(a),c\}}{bd}-\frac{\{S(a),d\}c}{bd^{2}}-\frac{\{b,c\}S(a)}{b^{2}d}+ \frac{\{b,d\}S(a)c}{b^{2}d^{2}}-\frac{\{a,c\}S(b)}{b^{2}d}\]
\[-\frac{\{S(b),c\}a}{b^{2}d}+\frac{\{a,d\}S(b)c}{b^{2}d^{2}}+\frac{\{S(b),d\} ac}{b^{2}d^{2}}+\frac{2\{b,c\}aS(b)}{b^{3}d}-\frac{2\{b,d\}aS(b)c}{b^{3}d^{2}}\]
\[+\frac{\{a,S(c)\}}{bd}-\frac{\{a,d\}S(c)}{bd^{2}}-\frac{\{b,S(c)\}a}{b^{2}d}+ \frac{\{b,d\}aS(c)}{b^{2}d^{2}}-\frac{\{a,c\}S(d)}{bd^{2}}\]
\[-\frac{\{a,S(d)\}c}{bd^{2}}+\frac{2\{a,d\}cS(d)}{bd^{3}}+\frac{\{b,c\}S(d)a}{b ^{2}d^{2}}+\frac{\{b,S(d)\}ac}{b^{2}d^{2}}-\frac{2\{b,d\}acS(d)}{b^{2}d^{3}}.\]
These two equalities imply (8). \(\Box\)
Consider the grading (7) of \(P\). A Poisson derivation \(D\) of \(P\) will be called a _d-graded Poisson derivation_ if \(D(x_{i})\in P_{1}\) for all \(i=1,\ldots,n\). Obviously, every \(d\)-graded Poisson derivation of \(P\) induces a Poisson derivation of \(P_{0}\). The converse is also true.
**Lemma 2**.: _Every Poisson derivation of \(P_{0}\) can be uniquely extended to a \(d\)-graded Poisson derivation of \(P=P\langle x_{1},x_{2},\ldots,x_{n}\rangle\)._
Proof. Let \(D\) be a Poisson derivation of \(P_{0}\). In particular, \(D\) is a derivation of the associative and commutative algebra \(P_{0}\). Since \(P_{0}\) is a domain, \(D\) can be uniquely extended [25, p. 120] to a derivation \(T\) of the field of fractions \(Q(P_{0})\) of \(P_{0}\). The field extension
\[Q(P_{0})\subseteq Q(P)\]
is algebraic since every \(e_{i}\) is a root of the polynomial \(p(t)=t^{d}-e_{i}^{d}\in Q(P_{0})[t]\) for all \(i\). This extension is separable since \(K\) is a field of characteristic zero. By Corollaries 2 and \(2^{\prime}\) in [25, pages 124-125], the associative derivation \(T\) of the field \(Q(P_{0})\) can be uniquely extended to an associative derivation \(S\) of the field \(Q(P)\).
Suppose that
\[S(e_{j})=\frac{f_{j}}{g_{j}},\]
where \(f_{j}\in P\), \(0\neq g_{j}\in P\), and the pairs \(f_{j},g_{j}\) are relatively prime for all \(j\). Notice that \(e_{j}^{d}\in P_{0}\) and
\[D(e_{j}^{d})=S(e_{j}^{d})=de_{j}^{d-1}\frac{f_{j}}{g_{j}}\in P_{0}.\]
Consequently,
\[g_{j}|e_{j}^{d-1} \tag{9}\]
since the pair \(f_{j},g_{j}\) is relatively prime, that is, \(g_{j}\) is a power of \(e_{j}\).
If \(d|\deg e_{j}\) then \(e_{j}\in P_{0}\) and \(S(e_{j})=D(e_{j})\in P_{0}\), i.e., we may assume that \(g_{j}=1\). If \(d\not|\deg e_{j}\) then there exist \(1\leq i\leq n\) and \(1\leq k<d\) such that \(e_{i}=x_{i}\neq e_{j}\) and \(x_{i}^{k}e_{j}\in P_{0}\). Then
\[D(x_{i}^{k}e_{j})=S(x_{i}^{k}e_{j})=kx_{i}^{k-1}\frac{f_{i}}{g_{i}}e_{j}+x_{i} ^{k}\frac{f_{j}}{g_{j}}\in P_{0}.\]
Consequently,
\[g_{i}g_{j}|kx_{i}^{k-1}f_{i}g_{j}e_{j}+x_{i}^{k}f_{j}g_{i}.\]
This implies that \(g_{j}|x_{i}^{k}f_{j}g_{i}\). Since \(f_{j},g_{j}\) are relatively prime, we get \(g_{j}|x_{i}^{k}g_{i}\). By (9) \(g_{i}\) is a power of \(e_{i}=x_{i}\) and \(g_{j}\) is a power of \(e_{j}\). Since \(i\neq j\), this implies that \(g_{j}\in K\) for all \(j\). Consequently, \(S(e_{j})\in P\) and \(S(P)\subseteq P\).
Let us now show that the restriction of \(S\) to \(P\) is a Lie derivation, i.e.,
\[S(\{u,v\})=\{S(u),v\}+\{u,S(v)\} \tag{10}\]
for all \(u,v\) of the form (5). We prove (10) by induction on the polynomial length \(d(u)+d(v)\). Suppose that \(u=e_{i}\) and \(v=e_{j}\). Since \(e_{i}^{d},e_{j}^{d}\in P_{0}\), we get
\[S(\{e_{i}^{d},e_{j}^{d}\})=D(\{e_{i}^{d},e_{j}^{d}\})=\{D(e_{i}^{d}),e_{j}^{d}\} +\{e_{i}^{d},D(e_{j}^{d})\}\]
\[=\{S(e_{i}^{d}),e_{j}^{d}\}+\{e_{i}^{d},S(e_{j}^{d})\}=\{de_{i}^{d-1}S(e_{i}),e_ {j}\}de_{j}^{d-1}+de_{i}^{d-1}\{e_{i},de_{j}^{d-1}S(e_{j})\}\]
\[=d^{2}(d-1)e_{i}^{d-2}e_{j}^{d-1}S(e_{i})\{e_{i},e_{j}\}+d^{2}e_{i}^{d-1}e_{j}^ {d-1}\{S(e_{i}),e_{j}\}\]
\[+d^{2}(d-1)e_{i}^{d-1}e_{j}^{d-2}S(e_{j})\{e_{i},e_{j}\}+d^{2}e_{i}^{d-1}e_{j} ^{d-1}\{e_{i},S(e_{j})\}.\]
On the other hand,
\[\{e_{i}^{d},e_{j}^{d}\}=d^{2}e_{i}^{d-1}e_{j}^{d-1}\{e_{i},e_{j}\}\]
and
\[S(\{e_{i}^{d},e_{j}^{d}\})=S(d^{2}e_{i}^{d-1}e_{j}^{d-1}\{e_{i},e_{j}\})\]
\[=d^{2}S(e_{i}^{d-1})e_{j}^{d-1}\{e_{i},e_{j}\}+d^{2}e_{i}^{d-1}S(e_{j}^{d-1})\{ e_{i},e_{j}\}+d^{2}e_{i}^{d-1}e_{j}^{d-1}S(\{e_{i},e_{j}\})\]
\[=d^{2}(d-1)e_{i}^{d-2}e_{j}^{d-1}S(e_{i})\{e_{i},e_{j}\}+d^{2}(d-1)e_{i}^{d-1}e_ {j}^{d-2}S(e_{j})\{e_{i},e_{j}\}+d^{2}e_{i}^{d-1}e_{j}^{d-1}S(\{e_{i},e_{j}\}).\]
Comparing two values of \(S(\{e_{i}^{d},e_{j}^{d}\})\), we get
\[S(\{e_{i},e_{j}\})=\{S(e_{i}),e_{j}\}+\{e_{i},S(e_{j})\}.\]
Suppose that \(d(v)\geq 2\) and \(v=v_{1}v_{2}\). Then
\[S(\{u,v\})=S(\{u,v_{1}v_{2}\})=S(v_{1}\{u,v_{2}\}+\{u,v_{1}\}v_{2})\]
\[=S(v_{1})\{u,v_{2}\}+v_{1}S(\{u,v_{2}\})+S(\{u,v_{1}\})v_{2}+\{u,v_{1}\}S(v_{2 }).\]
By the induction hypothesis, (10) is true for the pairs \(u,v_{1}\) and \(u,v_{2}\), i.e.,
\[S(\{u,v_{1}\})=\{S(u),v_{1}\}+\{u,S(v_{1})\},\quad S(\{u,v_{2}\})=\{S(u),v_{2} \}+\{u,S(v_{2})\}.\]
Then
\[S(\{u,v\})=S(v_{1})\{u,v_{2}\}+v_{1}\{S(u),v_{2}\}+v_{1}\{u,S(v_{2})\}\]
\[+\{S(u),v_{1}\}v_{2}+\{u,S(v_{1})\}v_{2}+\{u,v_{1}\}S(v_{2})\]
\[=\{S(u),v_{1}v_{2}\}+\{u,S(v_{1})v_{2}+v_{1}S(v_{2})\}=\{S(u),v\}+\{u,S(v)\}.\]
Consequently, \(S\) is a derivation of a Poisson algebra \(P\) and induces \(D\) on \(P_{0}\). \(\Box\)
**Lemma 3**.: _Every locally nilpotent derivation of the Poisson algebra \(P_{0}\) is induced by a locally nilpotent \(d\)-graded derivation of the Poisson algebra \(P=P\langle x_{1},x_{2},\ldots,x_{n}\rangle\)._
Proof. Let \(D\) be a locally nilpotent derivation of \(P_{0}\) and let \(S\) be a unique extension of \(D\) to \(P\). We have to show that \(S\) is a locally nilpotent derivation of \(P\). Notice that
\[P_{0}\subset P\]
is an integral extension of domains since \(e_{i}^{d}\in P_{0}\) for all \(i\geq 1\). According to a result of W.V. Vasconcelos [24] (see also Proposition 1.3.37 from [3, p. 41]), \(S\) is locally nilpotent. \(\Box\)
## 4. Automorphisms of \(P_{0}\)
As we noticed above, every \(d\)-graded automorphism of \(P\langle x_{1},x_{2},\ldots,x_{n}\rangle\) induces an automorphism of \(P_{0}\). In this section we prove the converse of this statement for \(n>1\).
**Theorem 1**.: _Let \(K\) be a field closed with respect to taking all \(d\)-roots of elements. Then every automorphism of \(P_{0}\) over \(K\) is induced by a \(d\)-graded automorphism of \(P=P\langle x_{1},x_{2},\ldots,x_{n}\rangle\) if \(n>1\)._
Proof. Let \(\alpha\) be an automorphism of \(P_{0}\). Denote the extension of \(\alpha\) to the quotient field \(Q(P_{0})\) by the same symbol. We have \(\frac{x_{2}}{x_{1}}\in Q(P_{0})\). Suppose that
\[\alpha\left(\frac{x_{2}}{x_{1}}\right)=\frac{f_{2}}{f_{1}}, \tag{11}\]
where \(f_{1},f_{2}\) are relatively prime. Then
\[\alpha\left(\frac{x_{2}^{d}}{x_{1}^{d}}\right)=\alpha\left(\frac{x_{2}}{x_{1} }\right)^{d}=\frac{f_{2}^{d}}{f_{1}^{d}}.\]
Since \(f_{1},f_{2}\) are relatively prime it follows that \(\alpha(x_{1}^{d})=vf_{1}^{d}\) and \(\alpha(x_{2}^{d})=vf_{2}^{d}\) for some \(v\in P\). Moreover, \(\alpha(x_{1}^{i}x_{2}^{d-i})=vf_{1}^{i}f_{2}^{d-i}\) for all \(0\leq i<d\).
We have \(vf_{1}^{d},vf_{2}^{d}\in P_{0}\). If \(K\) is a field of characteristic \(p>0\) and \(p\) divides \(d\), then \(f_{1}^{d},f_{2}^{d}\in P_{0}\). Consequently, \(v\in P_{0}\). Assume that \(K\) is a field of characteristic \(0\) or of characteristic \(p>0\) and \(p\) does not divide \(d\). Let \(\epsilon\) be a primitive \(d\)-root of unity. Consider the automorphism \(\varepsilon\) of \(Q(P)\) such that \(\varepsilon(x_{i})=\epsilon x_{i}\) for all \(i\). Notice that for any \(f\in Q(P)\) we have \(f\in Q(P_{0})\) if and only if \(\varepsilon(f)=f\). Then
\[\frac{f_{2}}{f_{1}}=\varepsilon(\frac{f_{2}}{f_{1}})=\frac{\varepsilon(f_{2}) }{\varepsilon(f_{1})}\]
and \(\varepsilon(f_{1}),\varepsilon(f_{2})\) are relatively prime. Hence \(f_{1}\varepsilon(f_{2})=\varepsilon(f_{1})f_{2}\), so \(f_{1}\) divides \(\varepsilon(f_{1})\) and \(\varepsilon(f_{1})\) divides \(f_{1}\). Consequently, \(f_{1}\) and \(\varepsilon(f_{1})\) are proportional, which is possible only if \(f_{1}\) is a \(d\)-homogeneous element. Similarly, \(f_{2}\) is a \(d\)-homogeneous element. Then \(f_{1}^{d},f_{2}^{d}\in P_{0}\) and, consequently, \(v\in P_{0}\).
This implies that
\[x_{1}^{d}=\alpha^{-1}(v)\alpha^{-1}(f_{1}^{d}),\ \ x_{2}^{d}=\alpha^{-1}(v) \alpha^{-1}(f_{2}^{d}).\]
Since \(x_{1}^{d}\) is irreducible in \(P_{0}\), this is possible only if \(v\in K\). Let \(\mu\in K\) be a \(d\)-root of \(v\), i.e., \(\mu^{d}=v\). Replacing \(f_{1}\) and \(f_{2}\) by \(\mu f_{1}\) and \(\mu f_{2}\), we may assume that
\[\alpha(x_{1}^{d})=f_{1}^{d},\ \ \alpha(x_{2}^{d})=f_{2}^{d}. \tag{12}\]
By (11) and (12), we get
\[\alpha(x_{1}^{i_{1}}x_{2}^{i_{2}})=f_{1}^{i_{1}}f_{2}^{i_{2}},\,\mbox{if}\,x_{ 1}^{i_{1}}x_{2}^{i_{2}}\in P_{0}.\]
Consider an arbitrary \(e_{i}\) with \(i\geq 3\). Suppose that \(\deg e_{i}=s\). Then \(y_{i}=\frac{e_{i}}{x_{1}^{s}}\in Q(P_{0})\). Suppose that
\[\alpha(y_{i})=\alpha\left(\frac{e_{i}}{x_{1}^{s}}\right)=\frac{f_{i}}{g_{i}},\]
where \(f_{i},g_{i}\) are relatively prime. Then
\[\alpha\left(\frac{e_{i}^{d}}{x_{1}^{sd}}\right)=\alpha\left(\frac{e_{i}}{x_{1}^{s }}\right)^{d}=\frac{f_{i}^{d}}{g_{i}^{d}}.\]
Again \(\alpha(e_{i}^{d})=vf_{i}^{d}\) and \(\alpha(x_{1}^{sd})=vg_{i}^{d}\) for some \(v\in P\). As above, we get that \(f_{i}^{d},g_{i}^{d}\in P_{0},v\in K\), and we can assume that
\[\alpha(e_{i}^{d})=f_{i}^{d},\ \ \alpha(x_{1}^{sd})=g_{i}^{d}.\]
Then \(f_{1}^{sd}=g_{i}^{d}\) and \(g_{i}=\lambda f_{1}^{s}\), where \(\lambda\) is a \(d\)-root of unity. After rescaling, we can assume that \(g_{i}=f_{1}^{s}\) and
\[\alpha(e_{i}^{d})=f_{i}^{d},\ \ \alpha(y_{i})=\alpha\left(\frac{e_{i}}{x_{1}^{s }}\right)=\frac{f_{i}}{f_{1}^{s}}, \tag{13}\]
where \(s=\deg e_{i}\) and \(i\geq 3\). This is true for \(i=2\) by (11) and (12).
Let \(u=e_{i_{1}}\ldots e_{i_{k}}\) be an arbitrary element of \(P\) of the form (5). We have
\[u=x_{1}^{s}\frac{e_{i_{1}}}{x_{1}^{s_{i_{1}}}}\ldots\frac{e_{i_{k}}}{x_{1}^{s _{i_{k}}}}=x_{1}^{s}y_{i_{1}}\ldots y_{i_{k}}, \tag{14}\]
where \(s=s_{i_{1}}+\ldots+s_{i_{k}}\). We have \(d|s\) since \(u\in P_{0}\). Then
\[\alpha(u)=f_{1}^{s}\frac{f_{i_{1}}}{f_{1}^{s_{i_{1}}}}\ldots\frac{f_{i_{k}}}{f_{1}^{s_{i_{k}}}}=f_{i_{1}}\ldots f_{i_{k}}\]
by (11), (12), and (13).
Consequently, the polynomial endomorphism \(\beta\) of \(P\), determined by \(\beta(e_{i})=f_{i}\) for all \(i\geq 1\), induces \(\alpha\) on \(P_{0}\). First we show that \(\beta\) is a polynomial automorphism of \(P\). The elements (4) are algebraically independent and, consequently, the elements \(e_{1}^{d},\ldots,e_{s}^{d},\ldots\) are algebraically independent. Since \(\alpha\) is an automorphism and \(\alpha(e_{i}^{d})=f_{i}^{d}\) for all \(i\) by (12) and (13), the elements \(f_{1}^{d},\ldots,f_{s}^{d},\ldots\) are algebraically independent. Therefore the elements \(f_{1},\ldots,f_{s},\ldots\) are algebraically independent and \(\beta\) is an injective endomorphism. Then \(\beta\) can be uniquely extended to an endomorphism of the quotient field \(P(x_{1},x_{2},\ldots,x_{n})\) and we denote this extension also by \(\beta\).
The restriction of \(\beta\) to \(Q(P_{0})\) is an automorphism since it coincides with \(\alpha\). Consider the space
\[V=Q(P_{0})P\langle x_{1},x_{2},\ldots,x_{n}\rangle.\]
By (14) every element \(f\in P\) can be written as
\[f=f_{0}+f_{1}x_{1}+\ldots+f_{d-1}x_{1}^{d-1},\]
where \(f_{0},f_{1},\ldots,f_{d-1}\in K[t,y_{2},\ldots,y_{s},\ldots]\) and \(t=x_{1}^{d}\). Hence \(V\) is the \(Q(P_{0})\)-span of the elements \(1,x_{1},x_{1}^{2},\ldots,x_{1}^{d-1}\). If
\[V=b_{1}Q(P_{0})\oplus\ldots\oplus b_{k}Q(P_{0}),\]
then
\[\beta(V)=\beta(b_{1})Q(P_{0})+\ldots+\beta(b_{k})Q(P_{0})\]
since \(\beta(Q(P_{0}))=Q(P_{0})\). Notice that \(\beta(V)\subseteq V\). If \(\beta(V)\neq V\) then \(\dim_{Q(P_{0})}\beta(V)<k\) and \(\operatorname{Ker}\beta\neq 0\). This is impossible for a nonzero field endomorphism. Consequently, \(\beta(V)=V\)
and \(e_{i}\in\beta(V)\) for all \(i\). Therefore \(\beta\) is an automorphism of the field \(P(x_{1},x_{2},\ldots,x_{n})\) and of the polynomial algebra \(P\langle x_{1},x_{2},\ldots,x_{n}\rangle\).
It remains to show that \(\beta\) is a Lie automorphism of \(P\), i.e.,
\[\beta(\{u,v\})=\{\beta(u),\beta(v)\} \tag{15}\]
for all \(u,v\) of the form (5). We prove (15) by induction on the polynomial length \(d(u)+d(v)\). Suppose that \(u=e_{i}\) and \(v=e_{j}\). Since \(e_{i}^{d},e_{j}^{d}\in P_{0}\), we get
\[\beta(\{e_{i}^{d},e_{j}^{d}\})=\alpha(\{e_{i}^{d},e_{j}^{d}\})=\{\alpha(e_{i}^ {d}),\alpha(e_{j}^{d})\}=\{\beta(e_{i}^{d}),\beta(e_{j}^{d})\}\]
\[=\{\beta(e_{i})^{d},\beta(e_{j})^{d}\}=d^{2}\beta(e_{i})^{d-1}\beta(e_{j})^{d- 1}\{\beta(e_{i}),\beta(e_{j})\}.\]
On the other hand,
\[\beta(\{e_{i}^{d},e_{j}^{d}\})=\beta(d^{2}e_{i}^{d-1}e_{j}^{d-1}\{e_{i},e_{j} \})=d^{2}\beta(e_{i})^{d-1}\beta(e_{j})^{d-1}\beta(\{e_{i},e_{j}\}).\]
Comparing two values of \(\beta(\{e_{i}^{d},e_{j}^{d}\})\), we get that (15) holds for \(u=e_{i}\) and \(v=e_{j}\).
Suppose that \(d(v)\geq 2\) and \(v=v_{1}v_{2}\). Then
\[\beta(\{u,v\})=\beta(\{u,v_{1}v_{2}\})=\beta(v_{1}\{u,v_{2}\}+\{u,v_{1}\}v_{2})\]
\[=\beta(v_{1})\beta(\{u,v_{2}\})+\beta(\{u,v_{1}\})\beta(v_{2}).\]
By the induction hypothesis, we may assume that (15) is true for the pairs \(u,v_{1}\) and \(u,v_{2}\). Then
\[\beta(\{u,v\})=\beta(v_{1})\{\beta(u),\beta(v_{2})\}+\{\beta(u),\beta(v_{1}) \}\beta(v_{2})\]
\[=\{\beta(u),\beta(v_{1})\beta(v_{2})\}=\{\beta(u),\beta(v)\}.\]
Consequently, \(\beta\) is an automorphism of \(P\) and induces \(\alpha\) on \(P_{0}\). \(\Box\)
Let \(\operatorname{Aut}_{d}\!P\) be the group of all \(d\)-graded automorphisms of the free Poisson algebra \(P\).
**Corollary 1**.: _Let \(K\) be a field closed with respect to taking all \(d\)-roots of elements and let \(E=\{\lambda\mathrm{id}|\lambda^{d}=1,\lambda\in K\}\), where \(\mathrm{id}\) is the identity automorphism of \(P\). Then_
\[\operatorname{Aut}P_{0}\cong\operatorname{Aut}_{d}\!P/E.\]
Proof. Consider the homomorphism
\[\psi:\operatorname{Aut}_{d}\!P\to\operatorname{Aut}P_{0} \tag{16}\]
defined by \(\psi(\alpha)=\overline{\alpha}\), where \(\overline{\alpha}\) is the automorphism of \(P_{0}\) induced by the \(d\)-graded automorphism \(\alpha\) of \(P\).
By Theorem 1, \(\psi\) is an epimorphism. Let \(\alpha\in\operatorname{Ker}\psi\). Then \(\alpha(x_{1})^{d}=x_{1}^{d}\). Consequently, \(\alpha(x_{1})=\lambda x_{1}\) for some \(d\)th root of unity \(\lambda\in K\). Extending \(\alpha\) to \(Q(P_{0})\), we get \(\alpha(x_{i}/x_{1})=x_{i}/x_{1}\). Consequently, \(\alpha(x_{i})=\lambda x_{i}\) for all \(i\) and \(\alpha=\lambda\mathrm{id}\), i.e., \(\alpha\in E\). Obviously, \(E\subseteq\operatorname{Ker}\psi\). \(\Box\)
## 5. Veronese subalgebras of polynomial algebras
Let \(A=K[x_{1},x_{2},\ldots,x_{n}]\) be the polynomial algebra over a field \(K\) in the variables \(x_{1},x_{2},\ldots,x_{n}\). Consider the grading
\[A=A_{0}\oplus A_{1}\oplus\ldots\oplus A_{d-1},\]
where \(d\geq 2\) and \(A_{i}\) is the subspace of \(A\) generated by all monomials of degree \(kd+i\) for all \(k\geq 0\). This is a \(\mathbb{Z}_{d}\)-grading of \(A\), i.e., \(A_{i}A_{j}\subseteq A_{i+j}\) for all \(i,j\in\mathbb{Z}_{d}\). The subalgebra \(A_{0}\) is called the _Veronese subalgebra of \(A\) of degree \(d\)_.
**Corollary 2**.: _Let \(A=K[x_{1},x_{2},\ldots,x_{n}]\) be the polynomial algebra over a field \(K\) of characteristic zero in \(n\geq 2\) variables \(x_{1},x_{2},\ldots,x_{n}\). Then every derivation of the Veronese subalgebra \(A_{0}\) can be uniquely extended to a \(d\)-graded derivation of \(K[x_{1},x_{2},\ldots,x_{n}]\)._
**Corollary 3**.: _Let \(A=K[x_{1},x_{2},\ldots,x_{n}]\) be the polynomial algebra over a field \(K\) of characteristic zero in \(n\geq 2\) variables \(x_{1},x_{2},\ldots,x_{n}\). Then every locally nilpotent derivation of the Veronese subalgebra \(A_{0}\) is induced by a locally nilpotent \(d\)-graded derivation of the polynomial algebra \(K[x_{1},x_{2},\ldots,x_{n}]\)._
**Corollary 4**.: _Let \(A=K[x_{1},x_{2},\ldots,x_{n}]\) be the polynomial algebra in \(n\geq 2\) variables \(x_{1},x_{2},\ldots,x_{n}\) over a field \(K\) closed with respect to taking all \(d\)-roots of elements. Then every automorphism of the Veronese subalgebra \(A_{0}\) of degree \(d\) is induced by a \(d\)-graded automorphism of \(K[x_{1},x_{2},\ldots,x_{n}]\)._
This result is also proven in [8].
Let \(\operatorname{Aut}_{d}A\) be the group of all \(d\)-graded automorphisms of the polynomial algebra \(A\).
**Corollary 5**.: _Let \(K\) be a field closed with respect to taking all \(d\)-roots of elements and let \(E=\{\lambda\mathrm{id}|\lambda^{d}=1,\lambda\in K\}\), where \(\mathrm{id}\) is the identity automorphism of \(A\). Then_
\[\operatorname{Aut}A_{0}\cong\operatorname{Aut}_{d}A/E.\]
The proofs of Corollary 2, Corollary 3, Corollary 4, and Corollary 5 repeat the polynomial parts of the proofs of Lemma 2, Lemma 3, Theorem 1, and Corollary 1, respectively.
Notice that these statements are not true for the polynomial algebra \(A=K[x]\) in one variable \(x\). In this case, the Veronese subalgebra \(A_{0}\) of degree \(d\) is the polynomial algebra in one variable \(x^{d}\). Then the locally nilpotent derivation of \(A_{0}\) determined by
\[x^{d}\mapsto 1\]
cannot be induced by any derivation of \(A\) and the automorphism of \(A_{0}\) determined by
\[x^{d}\mapsto x^{d}+1\]
cannot be induced by any automorphism of \(A\).
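One way to see this, say over a field of characteristic zero: every derivation of \(A=K[x]\) has the form \(f(x)\frac{d}{dx}\) and sends \(x^{d}\) to \(dx^{d-1}f(x)\), which is never \(1\), since it is either \(0\) or of degree at least \(d-1\geq 1\); and every automorphism of \(A\) has the form \(x\mapsto ax+b\) with \(a\neq 0\), so it sends \(x^{d}\) to \((ax+b)^{d}\), and comparing the coefficients of \(x^{d-1}\) and \(x^{d}\) shows that \((ax+b)^{d}=x^{d}+1\) has no solution.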
In addition, analogues of these results are not true for free associative algebras. In fact, if \(B=K\langle x,y\rangle\) is the free associative algebra in the variables \(x,y\) and \(d=2\) then the Veronese subalgebra \(B_{0}\) of degree \(d\) is the free associative algebra in the variables \(x^{2},xy,yx,y^{2}\). It is easy to check that the locally nilpotent derivation of \(B_{0}\) determined by
\[x^{2}\mapsto 1,xy\mapsto 0,yx\mapsto 0,y^{2}\mapsto 0\]
cannot be induced by any derivation of \(B\) and the automorphism of \(B_{0}\) determined by
\[x^{2}\mapsto x^{2}+1,xy\mapsto xy,yx\mapsto yx,y^{2}\mapsto y^{2}\]
cannot be induced by any automorphism of \(B\).
## Acknowledgments
The second and third authors are grateful to the Max-Planck-Institut für Mathematik for its hospitality and excellent working conditions, where part of this work was done.
The third author is supported by the grant of the Ministry of Education and Science of the Republic of Kazakhstan (project AP14872073).
|
2305.11838
|
On the Andrews-Curtis groups: non-finite presentability
|
The Andrews-Curtis conjecture remains one of the outstanding open problems in
combinatorial group theory. It claims that every normally generating $r$-tuple
of a free group $F_r$ of rank $r\geq 2$ can be reduced to a basis by means of
Nielsen transformations and arbitrary conjugations. These transformations
generate the so-called Andrews-Curtis group AC($F_r$). The groups AC($F_r$) ($r
= 2, 3, \ldots$) are actively investigated and allows various generalizations,
for which there are a number of results. At the same time, almost nothing is
known about the structure and properties of the original groups AC($F_r$). In
this paper we define a class $\{A_{r, s}: r, s \geq 1\}$ of generalized
Andrews-Curtis groups in which any group $A_{r,r}$ is isomorphic to the
Andrews-Curtis group AC($F_r$). We prove that every group $A_{2,s}$\, ($s \geq
1$) is non-finitely presented. Hence the Andrews-Curtis group AC($F_2$) $\simeq
A_{2,2}$ is non-finitely presented. Thus, we give a partial answer to the
well-known question about the finite presentability of the groups AC($F_r$),
explicitly stated by J. Swan and A. Lisitsa in the Kourovka notebook \cite{KN}
(Question 18.89).
|
Vitaly Roman'kov
|
2023-05-19T17:26:48Z
|
http://arxiv.org/abs/2305.11838v1
|
# On the Andrews-Curtis groups: non-finite presentability
###### Abstract.
The Andrews-Curtis conjecture remains one of the outstanding open problems in combinatorial group theory. It claims that every normally generating \(r\)-tuple of a free group \(F_{r}\) of rank \(r\geq 2\) can be reduced to a basis by means of Nielsen transformations and arbitrary conjugations. These transformations generate the so-called Andrews-Curtis group \(\operatorname{AC}(F_{r})\). The groups \(\operatorname{AC}(F_{r})\) (\(r=2,3,\ldots\)) are actively investigated and allow various generalizations, for which there are a number of results. At the same time, almost nothing is known about the structure and properties of the original groups \(\operatorname{AC}(F_{r})\). In this paper we define a class \(\{A_{r,s}:r,s\geq 1\}\) of generalized Andrews-Curtis groups in which any group \(A_{r,r}\) is isomorphic to the Andrews-Curtis group \(\operatorname{AC}(F_{r})\). We prove that every group \(A_{2,s}\) (\(s\geq 1\)) is non-finitely presented. Hence the Andrews-Curtis group \(\operatorname{AC}(F_{2})\simeq A_{2,2}\) is non-finitely presented. Thus, we give a partial answer to the well-known question about the finite presentability of the groups \(\operatorname{AC}(F_{r})\), explicitly stated by J. Swan and A. Lisitsa in the Kourovka notebook [28] (Question 18.89).
Footnote 1: The research was funded by Sobolev Institute of Mathematics, project FWNF-22-0003.
## 1. Introduction
### Andrews-Curtis groups
In what follows, \(F_{r}\) denotes a free group of rank \(r\). In this paper we prove that the Andrews-Curtis group \(\operatorname{AC}(F_{2})\) is non-finitely presented. In fact, we first show that the widely considered general Andrews-Curtis groups \(\operatorname{GAC}(F_{r})\) are isomorphic to the corresponding groups \(\operatorname{AC}(F_{r})\). Second, we prove that any Andrews-Curtis group \(\operatorname{AC}(F_{r})\) is isomorphic to a member \(A_{r,r}\) of an infinite series of finitely generated groups \(\{A_{r,s}:r,s\geq 1\}\). Finally, we prove that each of the groups \(A_{2,s}\) is non-finitely presented. It follows that \(\operatorname{AC}(F_{2})\simeq A_{2,2}\) is non-finitely presented. We also discuss some interesting properties of the Andrews-Curtis groups \(\operatorname{AC}(F_{r}),\ r\geq 2\), and related open problems.
The Andrews-Curtis groups \(\operatorname{AC}(F_{r})\) for \(r\geq 2\) were introduced in their connection with the famous Andrews-Curtis Conjecture (ACC) named after James J. Andrews and Morton L. Curtis who proposed it in 1965 [1]. The ACC claims that every balanced presentation of the trivial group can be reduced to the standard one by a finite sequence of "elementary \(\operatorname{AC}\)-transformations", which are Nielsen transformations augmented by conjugations. This problem is of interest in topology as well as in group theory. A topological interpretation of this conjecture was given in the original paper by Andrews and Curtis [1].
See also [17]. For any \(r\geq 2\), the \(\operatorname{AC}\)-transformations generate the group \(\operatorname{AC}(F_{r})\), which acts on the set \(\operatorname{NGen}(F_{r})\) of all normally generating \(r\)-tuples \(u=(u_{1},\ldots,u_{r})\in F_{r}^{r}\). \(\operatorname{AC}\)-transformations also act on the set \(F_{r}^{r}\) of all \(r\)-tuples, where they generate the general Andrews-Curtis group \(\operatorname{GAC}(F_{r})\).
These definitions generalize naturally from the group \(F_{r}\) to any group \(G\).
To explain, fix a natural number \(r\geq 2\). The following are _elementary Andrews-Curtis transformations_ (or _elementary AC-moves_) on the set \(G^{r}\) of \(r\)-tuples of \(G\):
* \(R^{\pm 1}_{ij}\): \((u_{1},\ldots,u_{i},\ldots,u_{r})\mapsto(u_{1},\ldots,u_{i}u_{j}^{\pm 1}, \ldots,u_{r})\), \(i\neq j\);
* \(L^{\pm 1}_{ij}\): \((u_{1},\ldots,u_{i},\ldots,u_{r})\mapsto(u_{1},\ldots,u_{j}^{\pm 1}u_{i}, \ldots,u_{r})\), \(i\neq j\);
* \(I_{i}\) : \((u_{1},\ldots,u_{i},\ldots,u_{r})\mapsto(u_{1},\ldots,u_{i}^{-1},\ldots,u_{r})\);
* \(C_{i,w}\): \((u_{1},\ldots,u_{i},\ldots,u_{r})\mapsto(u_{1},\ldots,u_{i}^{w},\ldots,u_{r})\), \(w\in G\).
(For \(g,f\in G\), \(g^{f}\) means \(fgf^{-1}\).)
Transformations \(R^{\pm 1}_{ij},L^{\pm 1}_{ij},I_{i}\) are called _elementary Nielsen transformations_. The composition of a finite sequence of elementary Andrews-Curtis moves is called an _AC-transformation_. The inverse of an AC-transformation is again an AC-transformation.
Every such AC-move gives a bijection on the set \(G^{r}\) of all \(r\)-tuples of elements of \(G\), which is an element of the symmetric group \(Sym(G^{r})\). The subgroup generated by all the AC-moves in \(Sym(G^{r})\) is called the _general AC-group of \(G\) of dimension \(r\)_ and is denoted by \(GAC_{r}(G)\). The set \(NGen_{r}(G)\) of all \(r\)-tuples in \(G^{r}\) that generate \(G\) as a normal subgroup is invariant under all AC-moves, so every AC-move gives a bijection of the set \(NGen_{r}(G)\). The subgroup generated by all the AC-moves in \(Sym(NGen_{r}(G))\) is called the _AC-group of \(G\) of dimension \(r\)_ and is denoted by \(\mathrm{AC}_{r}(G)\). The group \(\mathrm{AC}_{r}(F_{r})\) is the true Andrews-Curtis group of \(F_{r}\), directly related to ACC in \(F_{r}\). However, the set \(\mathrm{NGen}_{r}(F_{r})\) may have a much more complicated structure than the set \(F_{r}^{r}\); in this regard it is easier to study the group \(\mathrm{GAC}_{r}(F_{r})\). Observe that the restriction of an element \(\alpha\in\mathrm{GAC}_{r}(F_{r})\), viewed as a bijection on \(F_{r}^{r}\), gives a bijection \(\bar{\alpha}\) on \(\mathrm{NGen}_{r}(F_{r})\), which is an element of \(\mathrm{AC}_{r}(F_{r})\). It is easy to see that the map \(\alpha\to\bar{\alpha}\) gives a homomorphism \(\phi:\mathrm{GAC}_{r}(F_{r})\to\mathrm{AC}_{r}(F_{r})\), which is onto. Of course, we are interested only in the case when the set \(NGen_{r}(G)\) is not empty.
Note that the set \(\mathrm{Gen}_{r}(G)\) of generating \(r\)-tuples of \(G\) is closed under Nielsen moves, while the set \(\mathrm{NGen}_{r}(G)\) of normal-generating \(r\)-tuples of \(G\) is closed under AC-moves.
Two tuples \(u,v\in G^{r}\) are called _AC-equivalent_ (symbolically \(u\sim_{AC}v\)) if there is an AC-transformation which moves \(u\) to \(v\).
If \(G\) is finitely generated, then it suffices to have a finite number of elementary AC-moves, where the element \(w\) in \(C_{i,w}\) or its inverse \(w^{-1}\) belongs to a fixed finite set generating \(G\).
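To make the moves concrete, here is a minimal Python sketch (an illustrative toy, not tied to any existing implementation): a word in a free group is a string in which an upper-case letter stands for the inverse of the corresponding generator, and each elementary move acts on a tuple of freely reduced words.

```python
def reduce_word(w):
    """Freely reduce a word; inverses are encoded by case swap."""
    out = []
    for c in w:
        if out and out[-1] == c.swapcase():
            out.pop()            # cancel an adjacent inverse pair
        else:
            out.append(c)
    return ''.join(out)

def inv(w):
    """Inverse of a word: reverse it and swap the case of each letter."""
    return w.swapcase()[::-1]

def R(u, i, j, e=1):             # R_ij^{+-1}: u_i -> u_i u_j^{+-1}
    u = list(u); u[i] = reduce_word(u[i] + (u[j] if e == 1 else inv(u[j])))
    return tuple(u)

def L(u, i, j, e=1):             # L_ij^{+-1}: u_i -> u_j^{+-1} u_i
    u = list(u); u[i] = reduce_word((u[j] if e == 1 else inv(u[j])) + u[i])
    return tuple(u)

def I(u, i):                     # I_i: u_i -> u_i^{-1}
    u = list(u); u[i] = inv(u[i])
    return tuple(u)

def C(u, i, w):                  # C_{i,w}: u_i -> w u_i w^{-1}
    u = list(u); u[i] = reduce_word(w + u[i] + inv(w))
    return tuple(u)

# Every elementary move is invertible, e.g.:
u0 = ('xyX', 'yx')               # X denotes x^{-1}
assert C(C(u0, 0, 'y'), 0, 'Y') == u0
assert I(I(u0, 1), 1) == u0
```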
### Andrews-Curtis Conjecture
_Let \(F_{r}\) be a free group with basis \(f=(f_{1},\ldots,f_{r})\). Then a tuple of elements \(u=(u_{1},\ldots,u_{r})\) generates \(F_{r}\) as a normal subgroup if and only if \(u\sim_{AC}f\)._
To state ACC in the original form recall that a presentation is called _balanced_ if the number of generators is equal to the number of relators. Furthermore, a balanced presentation \(\langle f\mid u\rangle\), where \(f=(f_{1},\ldots,f_{r}),u=(u_{1},\ldots,u_{r})\) is called _AC-trivializable_ if \(u\sim_{AC}f\). The original ACC states that every balanced presentation of the trivial group is trivializable.
In 1985 Akbulut and Kirby [2] came up with a series of "potential counterexamples" to ACC:
\[AK(n)=\langle x,y|\ x^{n}=y^{n+1},\ xyx=yxy\rangle,\ n\geq 2.\]
Presentations \(AK(n)\) are balanced presentations of the trivial group. Akbulut and Kirby conjectured that they are not trivializable, i.e., that the pair of relators \((x^{n}y^{-n-1},xyxy^{-1}x^{-1}y^{-1})\) is not AC-equivalent to the pair of generators \((x,y)\). It turned out later that the presentation \(AK(2)\) is AC-trivializable (see [18]), so \(AK(2)\) is not a counterexample to ACC. The question whether or not the presentations \(AK(n)\) with \(n>2\) are trivializable is still open despite an ongoing effort by the research community (see [19]). Note that currently \(AK(3)\) is the shortest (in the total length of relators) potential counterexample to ACC. Indeed, in [11] Havas and Ramsay proved that if \(\langle x,y\mid u=1,v=1\rangle\) is a presentation of the trivial group with \(|u|+|v|\leq 13\) then either \((u,v)\sim_{AC}(x,y)\) or \((u,v)\sim_{AC}(x^{3}y^{-4},xyxy^{-1}x^{-1}y^{-1})\).
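In the string encoding of the sketch above, the Akbulut-Kirby relator pairs are straightforward to generate, e.g.:

```python
def ak_pair(n):
    """Relator pair of AK(n): (x^n y^{-(n+1)}, xyx(yxy)^{-1})."""
    return ('x' * n + 'Y' * (n + 1), 'xyxYXY')

print(ak_pair(3))   # ('xxxYYYY', 'xyxYXY'): total relator length 13
```

Such a pair can then be fed to the elementary moves above to explore small parts of its AC-orbit by brute-force search.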
Other infinite series of potential counterexamples are given in [6], [20] and in [19]. Finally, we mention a positive solution of a similar problem for free solvable groups by A. Myasnikov [21]. See papers [6] and [19] for more details and some particular results.
Currently there are several approaches to resolve ACC. One is based on the following observation: if \(r\)-tuples \(u\) and \(v\) in a free group \(F_{r}\) are AC-equivalent then for any group homomorphism \(\phi:F_{r}\to G\) one has \(\phi(u)\sim_{AC}\phi(v)\) in \(G\). Therefore, to show, for example, that \(AK(3)\) is a counterexample to ACC it suffices to find a group \(G\) and a pair of elements \(a,b\in G\) such that \((a,b)\not\sim_{AC}(a^{3}b^{-4},abab^{-1}a^{-1}b^{-1})\) in \(G\). This brought up interesting research on the Andrews-Curtis Conjecture in arbitrary groups \(G\). Observe that ACC can be easily reformulated in a direct way for an arbitrary group \(G\), by saying that any two \(r\)-tuples of elements from \(G\) that generate \(G\) as a normal subgroup are AC-equivalent in \(G\). However, this straightforward version of ACC immediately fails for some groups, in particular, for some finitely generated abelian groups with torsion (see [4]), and as a consequence, for some groups \(G\) with torsion in their abelianization. Fortunately, one can slightly adjust the formulation of ACC to accommodate torsion in the abelianization of \(G\). It turns out that the resulting version of ACC, termed the _generalized Andrews-Curtis Conjecture_, holds in nilpotent and solvable groups [21], [10], finite groups [5], and some other groups. Up to now, this approach provides more and more groups satisfying the generalized ACC, and no counterexamples to the original one.
Another approach calls for studying the group structure of AC-transformations and applying this knowledge to ACC. Almost nothing is known in this direction, and our goal here is to present some results on the algebraic structure of the group of AC-transformations of \(\mathrm{NGen}_{r}(F_{r})\).
The group \(\mathrm{AC}_{r}(F_{r})\) for \(r\geq 2\) is the classical AC-group that was introduced in relation with ACC. In Section 2 we prove the following result on AC-groups.
**Theorem A**.: _For any \(r\geq 2\) the homomorphism \(\phi:\operatorname{GAC}_{r}(F_{r})\rightarrow\operatorname{AC}(F_{r})\) is an isomorphism._
In Section 3 we define a class \(\{A_{r,s}:r,s\geq 1\}\) of generalized Andrews-Curtis groups in which any group \(A_{r,r}\) is isomorphic to the Andrews-Curtis group \(\mathrm{AC}(F_{r})\). Section 4 is devoted to preliminary results. In Section 5 the following main results are proved.
**Theorem B**.: _The groups \(A_{2,s},s\geq 1,\) are not finitely presentable._
Since \(AC(F_{2})\) is antiisomorphic to \(A_{2,2}\) we get the following result, which solves Problem 18.89 posed by J. Swan and A.P. Lisitsa in the Kourovka Notebook [28].
**Corollary C**.: _The group \(AC(F_{2})\) is not finitely presentable._
Section 6 contains open problems proposed in collaboration with A.G. Myasnikov.
## 2. AC-groups and automorphisms of free groups
There is a deep and interesting relationship between the AC-groups \(AC(F_{r})\) and automorphisms of free groups.
Let \(G\) be a group, and \(\operatorname{Aut}(G)\) the automorphism group of \(G.\) Denote by \(\operatorname{IAut}(G)\) the subgroup of \(\operatorname{Aut}(G)\) consisting of those automorphisms of \(G\) which induce the identity map on the commutator quotient group (_abelianization_) \(G/G^{\prime}.\)
As above, \(F_{r}\) is a free group of rank \(r\) with basis \(f=(f_{1},...,f_{r})\). J. Nielsen, in 1924 (compare [22], [15]), using hyperbolic geometry, gave finite sets of generators and relations for the automorphism group \(\operatorname{Aut}(F_{r})\). Specifically, \(\operatorname{Aut}(F_{r})\) is generated by automorphisms of the following three forms (all indexes \(i,j,l\) range over the set \(\{1,...,r\}\), subject only to the condition that \(i\neq j\); every automorphism changes exactly one free generator):
* \(\rho_{i,j}:f_{i}\mapsto f_{i}f_{j},f_{l}\mapsto f_{l}\) for \(l\neq i,\)
* \(\lambda_{i,j}:f_{i}\mapsto f_{j}f_{i},f_{l}\mapsto f_{l}\) for \(l\neq i,\)
* \(\iota_{i}\quad:f_{i}\mapsto f_{i}^{-1},f_{l}\mapsto f_{l}\) for \(l\neq i.\)
It was generally agreed that both Nielsen's methods and his results were complicated (see [15], p. 164), and the subject did not progress for a long time. In 1974 J. McCool [16] found a new finite presentation for \(\operatorname{Aut}(F_{r})\). McCool's generators were the Whitehead automorphisms and his relations were of two types: the relations among the integral monomial \(r\times r\) matrices and relations he called R1 - R6 according to two types of the Whitehead automorphisms [29]. See details in [13].
It was shown by W. Magnus [14], using a work of J. Nielsen [23], that \(\operatorname{IAut}(F_{r})\) has a finite generating set of the following two forms (all indexes \(i,j,k,l\) range over the set \(\{1,...,r\}\), subject only to the conditions that \(i\neq j,k\) and \(j\neq k\); every automorphism changes exactly one free generator):
* \(\xi_{i,j}:f_{i}\mapsto f_{i}^{f_{j}},f_{l}\mapsto f_{l}\) for \(l\neq i,\)
* \(\rho_{i,j,k}:f_{i}\mapsto f_{i}[f_{j},f_{k}],f_{l}\mapsto f_{l}\) for \(l\neq i\)
Here \([g,h]\) denotes commutator \(ghg^{-1}h^{-1}\) of elements \(g,h.\) The statement includes the result of J. Nielsen [22] that \(\operatorname{IAut}(F_{2})=\operatorname{Inn}(F_{2}),\) the subgroup of inner automorphisms of \(F_{2}.\)
Since the group \(\operatorname{IAut}(F_{2})\) is isomorphic to \(\operatorname{Inn}(F_{2})\) and so to \(F_{2},\) it is finitely presented. W. Magnus [14] raised the question on the finite presentability of \(\operatorname{IAut}(F_{r})\) for \(r\geq 3.\) The question attracted the attention of researchers, among them O. Chein [7], S. Bachmuth [3], J. Smillie and K. Vogtman [26].
The first result in solving this problem has been obtained by S. Krstic and J. McCool.
**Theorem 2.1**.: _([12].) Let \(G=F_{3}/N\), where \(N\) is a characteristic subgroup of \(F_{3}\) such that \(N\) is contained in the second derived subgroup \(F_{3}^{\prime\prime}\) of \(F_{3}.\) Let \(\bar{\xi}_{1,2},\bar{\xi}_{1,3},\bar{\rho}_{1,2,3}\) be the elements of \(Aut(G)\) induced by \(\xi_{1,2},\xi_{1,3},\rho_{1,2,3}\) respectively. Then any subgroup of \(\operatorname{IAut}(G)\) which contains the set \(\{\bar{\xi}_{1,2},\bar{\xi}_{1,3},\bar{\rho}_{1,2,3}\}\) is not finitely presentable._
This statement covers the cases of the free group \(F_{3},\) of the free metabelian group \(M_{3}=F_{3}(\mathcal{A}^{2}),\) and more generally every relatively free group \(F_{3}(\mathcal{C})\) of rank \(3\) where \(\mathcal{C}\) is a variety of groups containing the variety \(\mathcal{A}^{2}\) of all metabelian groups.
Let \(F_{r,s}\) be a free group of finite rank \(r+s\) with a set of free generators \(f\cup y\), where \(f=\{f_{1},\ldots,f_{r}\},y=\{y_{1},\ldots,y_{s}\}\). We assume that \(r\geq 1\) and \(s\geq 0\). The group \(\tilde{A}_{r,s}\) is defined as a subgroup of the automorphism group \(\operatorname{Aut}(F_{r,s})\) generated by the automorphism group \(\operatorname{Aut}(F_{r})\), more precisely, the subgroup of all automorphisms fixing the generators \(y\) in \(F_{r,s}\), and mapping the subgroup generated by the elements of \(f\) (isomorphic to the group \(F_{r}\)) onto itself, together with automorphisms \(\xi_{i,k}\) for \(i=1,...,r\) and \(k=1,...,s\), that are defined on the basic elements as follows:
\[\xi_{i,k}:f_{i}\mapsto f_{i}^{y_{k}},f_{l}\mapsto f_{l}\text{ and }y_{t}\mapsto y_{t}, \text{ for }i,l=1,...,r;\ l\neq i,\text{ and }k,t=1,...,s. \tag{2.1}\]
Thus the group \(\tilde{A}_{r,s}\) is isomorphic to a subgroup of the group \(\operatorname{Aut}(F_{r,s})\). Then \(\tilde{A}_{r,s}\) acts on its orbit \(O_{r,s}\) generated by all tuples of the form \((u_{1},\ldots,u_{r},y_{1},\ldots,y_{s})\), where \(u=(u_{1},\ldots,u_{r})\in F_{r}^{r}\). Denote by \(O_{r}\) the set of the corresponding truncated tuples \(u\). Then \(\tilde{A}_{r,s}\) also acts on the set \(O_{r}\). Of course, \(\tilde{A}_{r,s}\) also acts on the set \(F_{r,s}^{r}\supseteq O_{r}\).
Let us define the AC-transformation group \(A_{r,s}\), which naturally corresponds to the group \(\tilde{A}_{r,s}\). To explain this we need a more detailed notation for the AC-moves in \(F_{r,s}\). For any tuple of the form \(u=(u_{1},\ldots,u_{r},y_{1},\ldots,y_{s})\in F_{r,s}^{r+s}\) we put (each move changes exactly one component of the tuple):
* \(\operatorname{AC}_{1}(i,j)\) replaces \(u_{i}\) by \(u_{i}u_{j}\), where \(1\leq i,j\leq r,i\neq j\).
* \(\operatorname{AC}_{2}(i,j)\) replaces \(u_{i}\) by \(u_{j}u_{i}\), where \(1\leq i,j\leq r,i\neq j\).
* \(\operatorname{AC}_{3}(i)\) replaces \(u_{i}\) by \(u_{i}^{-1}\), where \(1\leq i\leq r\).
* \(\operatorname{AC}_{4}(i,k)\) replaces \(u_{i}\) by \(u_{i}^{y_{k}}\), where \(1\leq i\leq r,\ 1\leq k\leq s\).
By definition, the group \(A_{r,s}\) is generated by these transformations. This group also acts on the orbit \(O_{r,s}\), on \(O_{r}\), and on \(F_{r,s}^{r}\). It is well known that \(A_{r,s}\) is anti-isomorphic to \(\tilde{A}_{r,s}\) via the following map:
* \(\operatorname{AC}_{1}(i,j)\mapsto\rho_{i,j}\),
* \(\operatorname{AC}_{2}(i,j)\mapsto\lambda_{i,j}\),
* \(\operatorname{AC}_{3}(i)\mapsto\iota_{i}\),
* \(\operatorname{AC}_{4}(i,k)\mapsto\xi_{i,k}\).
The proof can be found in the monograph [15] (p. 130) or in the monograph [13] (section "Automorphisms of free groups").
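The moves \(\operatorname{AC}_{1}\)-\(\operatorname{AC}_{4}\) are elementary enough to be modelled directly. The following minimal Python sketch (an illustration only, not taken from the sources cited above) represents words in \(F_{r,s}\) as tuples of (letter, exponent) pairs and reads \(u^{y}\) as \(yuy^{-1}\); the opposite convention \(y^{-1}uy\) would differ only in the order of multiplication in the last move.

```python
# A minimal model of the AC-moves of this section (illustrative sketch).
# A word in F_{r,s} is a tuple of (letter, exponent) pairs with exponent +-1,
# e.g. (("f1", 1), ("y1", -1)) stands for f_1 y_1^{-1}.

def reduce_word(w):
    """Freely reduce a word by cancelling adjacent mutually inverse letters."""
    out = []
    for letter, e in w:
        if out and out[-1] == (letter, -e):
            out.pop()                    # cancellation (the stack handles cascades)
        else:
            out.append((letter, e))
    return tuple(out)

def mul(u, v):
    return reduce_word(tuple(u) + tuple(v))

def inv(u):
    return tuple((letter, -e) for letter, e in reversed(u))

# Tuple components are 0-indexed here, unlike the 1-indexed text above.
def ac1(t, i, j):                        # AC_1(i,j): u_i -> u_i u_j
    t = list(t); t[i] = mul(t[i], t[j]); return tuple(t)

def ac2(t, i, j):                        # AC_2(i,j): u_i -> u_j u_i
    t = list(t); t[i] = mul(t[j], t[i]); return tuple(t)

def ac3(t, i):                           # AC_3(i): u_i -> u_i^{-1}
    t = list(t); t[i] = inv(t[i]); return tuple(t)

def ac4(t, i, k):                        # AC_4(i,k): u_i -> u_i^{y_k}
    y = ((f"y{k}", 1),)                  # we read u^y as y u y^{-1}
    t = list(t); t[i] = mul(mul(y, t[i]), inv(y)); return tuple(t)

f1, f2 = (("f1", 1),), (("f2", 1),)
t = (f1, f2)
t = ac1(t, 0, 1)                         # (f1 f2, f2)
t = ac4(t, 0, 1)                         # (y1 f1 f2 y1^{-1}, f2)
t = ac3(t, 0)                            # (y1 f2^{-1} f1^{-1} y1^{-1}, f2)
print(t)
```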
## 3. Proof of Theorem A
Let \(s=r\), and let \(A_{r}\) denote the restriction of \(A_{r,r}\) to \(O_{r}.\) Theorem A is obtained as a consequence of the following assertions, which are of independent interest.
**Lemma 3.1**.: _For any \(\alpha\in A_{r}\), if \(\alpha(x)=x\), where \(x=(x_{1},\ldots,x_{r})\) is a basis of \(F_{r}\), then \(\alpha\) acts identically on \(F_{r,s}^{r}\), and in particular on \(O_{r}\). Moreover, if in each transformation of the form \(\operatorname{AC}_{4}(i,k)\) occurring in \(\alpha\) we replace the conjugator \(y_{k}\) with \(x_{k}\), and denote the resulting transformation by \(\beta\), then \(\beta\) is the identity on \(F_{r}^{r}.\) Hence \(\nu:\alpha\mapsto\beta\) is a homomorphism \(A_{r}\to\operatorname{GAC}_{r}(F_{r})\)._
Proof.: Let \(v=(v_{1},\ldots,v_{r})\in F_{r,s}^{r}\). We define a homomorphism \(\varphi:F_{r}^{r}\to F_{r,s}^{r}\) by \(x_{i}\mapsto v_{i}\,(i=1,\ldots,r).\) Then
\[\alpha(v)=\alpha(\varphi(x))=\varphi(\alpha(x))=\varphi(x)=v.\]
Moreover, since \(\alpha(x)=x\), all the conjugators \(y_{k}\) occurring in \(\alpha\) cancel among themselves. Then the elements \(x_{k}\) corresponding to the \(y_{k}\) in \(\beta\) cancel among themselves too. Now let \(v\in F_{r}^{r}\) and define a homomorphism \(\mu:F_{r,r}\to F_{r}\) such that \(\mu(x_{i})=v_{i}\) and \(\mu(y_{k})=x_{k}\) for \(i,k=1,\ldots,r.\) Then \(v=\mu(x)=\mu(\alpha(x))=\beta(\mu(x))=\beta(v).\) Hence \(\beta\) is the identity as an element of \(\operatorname{GAC}_{r}(F_{r})\), and \(\nu\) is a homomorphism.
**Lemma 3.2**.: _If \(\alpha\in A_{r}\) is not identical on \(O_{r}\), then the transformation \(\beta=\nu(\alpha)\) derived from \(\alpha\) as in Lemma 3.1 is not identical on \(F_{r}^{r}\). Moreover, \(\beta\) is not identical on \(\text{NGen}_{r}(F_{r})\). It follows that \(A_{r}\) is isomorphic to \(\text{GAC}_{r}(F_{r})\) and to \(\text{AC}(F_{r})\). Therefore, \(\text{GAC}_{r}(F_{r})\simeq\text{AC}(F_{r})\)._
Proof.: Since \(\alpha\) is not identical on \(O_{r}\), by Lemma 3.1 \(\alpha\) is not identical on \(F_{r}^{r}\). Then \(\alpha(f)\neq f.\) Suppose that all occurrences of \(y_{1},\ldots,y_{r}\) in the formal expression of \(\alpha\) cancel among themselves. Then all the corresponding occurrences of \(f_{1},\ldots,f_{r}\) in \(\beta=\nu(\alpha)\) also cancel among themselves. Then \(\beta(f)\neq f\) and \(\beta\) is non-trivial.
Now suppose that not all occurrences of \(y_{1},\ldots,y_{r}\) in the formal expression of \(\alpha\) cancel among themselves, and that \(\beta(f)=f\). Suppose that \(\alpha\) includes \(m\) occurrences of \(\operatorname{AC}_{4}\)-transformations or their inverses. Denote \(u=(u_{1},\ldots,u_{r}),\) where \(u_{1}=f_{2}^{-2m}f_{1}f_{2}^{2m},u_{2}=f_{3}^{-2m}f_{2}f_{3}^{2m},\ldots,u_{r-1}=f_{r}^{-2m}f_{r-1}f_{r}^{2m},u_{r}=f_{1}^{-2m}f_{r}f_{1}^{2m}\). Then the elements \(u\) generate a free subgroup of rank \(r.\) Obviously, the reduction process for the components of the elements coming from the \(\operatorname{AC}_{4}\)-transformations of \(\alpha(u)\) cancels at most \(m\) letters from either side of any entry of \(u_{i}\), for every \(i.\) It follows that \(\alpha(u)\neq u.\) Note that \(u\in\text{NGen}(F_{r})\). Thus \(\beta\) is non-trivial as an element of \(\text{AC}(F_{r}).\) Therefore \(A_{r}\simeq\text{GAC}_{r}(F_{r})\simeq\text{AC}(F_{r}).\)
Therefore Theorem A is proved.
## 4. Auxiliary results
### Essentially infinite sets of relations
In this section we follow [12]. Let \(F(X)\) be the free group with basis \(X,\) and let \(\mu:F(X)\to G\) be a group epimorphism. If \(S\) is a subset of \(X,\) then the elements of \(\ker(\mu)\cap\text{gp}(S)\) are called _relations of \(G\) on the generators \(S\)_. If \(Q\) is a set of such relations, we say that \(Q\) is _essentially infinite_ if there is no finite subset \(W\) of \(\ker(\mu)\) such that \(Q\) is contained in the normal closure \(\text{ncl}(W)\) of \(W\) in \(F(X).\)
It was shown in [12] that this notion is independent of the choice of the generating set \(X\) of \(G\) containing \(S.\) Also, it was noted that if \(Q\) is an essentially infinite set of relations of \(G\) on the finite set \(S,\) then no subgroup \(H\) of \(G\) containing the image \(\mu(S)\) can be finitely presentable. Indeed, if \(H\) were finitely presentable, then there would be a finite subset of \(F(X)\), containing \(S,\) mapping onto a generating set of \(H\). Since \(H\) is finitely presentable on this set of generating elements, it would follow that \(Q\) is contained in the normal closure of a finite set of relations of \(G,\) which is a contradiction.
Furthermore, it was proved in [12] that the following holds: suppose we have two free groups \(F(X_{1})\) and \(F(X_{2})\), two subsets \(S_{1}\subseteq X_{1}\) and \(S_{2}\subseteq X_{2},\) and a commutative diagram of groups
\[\begin{array}{ccc}\psi^{\prime}:F(S_{1})&\to&F(S_{2})\\ \downarrow&&\downarrow\\ \psi:G_{1}&\to&G_{2}\end{array} \tag{4.1}\]
If \(Q_{1}\subseteq F(S_{1})\) is a set of relations of \(G_{1}\) such that \(Q_{2}=\psi^{\prime}(Q_{1})\) is an essentially infinite set of relations of \(G_{2},\) then \(Q_{1}\) is an essentially infinite set of relations of \(G_{1}.\)
### Fox derivatives
For a given positive integer \(r,\) we consider the free group \(F_{r}\) with basis \(\{f_{1},...,f_{r}\}\) and the integral group ring \(\mathbb{Z}F_{r}.\) We use the partial derivatives introduced by Fox [9]. An excellent introduction to the theory of Fox derivatives and their possible applications can be found in [8] (see also [25] and [27]). In our notation, they are defined as follows.
For \(j=1,...,r,\) the (left) Fox derivative associated with \(f_{j}\) is the linear map \(D_{j}:\mathbb{Z}F_{r}\rightarrow\mathbb{Z}F_{r}\) satisfying the conditions
\[D_{j}(f_{j})=1,D_{j}(f_{i})=0\text{ for }i\neq j \tag{4.2}\]
and
\[D_{j}(uv)=D_{j}(u)+uD_{j}(v)\text{ for all }u,v\in F_{r}. \tag{4.3}\]
Obviously, an element \(u\in F_{r}\) is trivial if and only if \(D_{i}(u)=0\) for all \(i=1,...,r.\) Also note that for an arbitrary element \(g\) of \(F_{r}\) and every \(j=1,...,r,\) we have \(D_{j}(g^{-1})=-g^{-1}D_{j}(g).\)
The _trivialization_ homomorphism \(\varepsilon:\mathbb{Z}F_{r}\rightarrow\mathbb{Z}\) is defined on the generators of \(F_{r}\) by \(\varepsilon(f_{i})=1\) for all \(i=1,...,r\) and extended linearly to the group ring \(\mathbb{Z}F_{r}.\)
The Fox derivatives appear in another setting as well. Let \(\Delta F_{r}\) denote the fundamental ideal of the group ring \(\mathbb{Z}F_{r}.\) It is a free left \(\mathbb{Z}F_{r}\)-module with a free basis consisting of \(\{f_{1}-1,...,f_{r}-1\}.\) This leads us to the following formula, which is called the _main identity_ for the Fox derivatives:
\[\sum_{i=1}^{r}D_{i}(\alpha)(f_{i}-1)=\alpha-\varepsilon(\alpha), \tag{4.4}\]
where \(\alpha\in\mathbb{Z}F_{r}.\) Conversely, if for any element \(f\in F_{r}\) and \(\alpha_{i}\in\mathbb{Z}F_{r}\) we have equality
\[\sum_{i=1}^{r}\alpha_{i}(f_{i}-1)=f-1, \tag{4.5}\]
then \(D_{i}(f)=\alpha_{i}\) for \(i=1,...,r.\)
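As a worked illustration of the main identity (4.4), take \(w=f_{2}f_{1}f_{2}^{-1}\in F_{2}\). By (4.3) and the rule \(D_{j}(g^{-1})=-g^{-1}D_{j}(g)\) noted above,

\[D_{1}(w)=f_{2},\qquad D_{2}(w)=1+f_{2}f_{1}D_{2}(f_{2}^{-1})=1-f_{2}f_{1}f_{2}^{-1},\]

and indeed

\[D_{1}(w)(f_{1}-1)+D_{2}(w)(f_{2}-1)=f_{2}f_{1}f_{2}^{-1}-1=w-\varepsilon(w).\]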
More generally, we call a linear map \(D:\mathbb{Z}F_{r}\rightarrow\mathbb{Z}F_{r}\) a _free Fox derivative_ if \(D\) satisfies the property
\[D(uv)=D(u)+uD(v) \tag{4.6}\]
for all \(u,v\in F_{r}.\) Every such derivative has the form
\[D=\alpha_{1}D_{1}+...+\alpha_{r}D_{r}, \tag{4.7}\]
where \(\alpha_{i}=D(f_{i})\) for \(i=1,...,r.\) By definition, \((\alpha D)(u)=D(u)\alpha\) for any \(\alpha\in\mathbb{Z}F_{r},u\in F_{r}.\) Conversely, we can define a derivative \(D=\sum_{i=1}^{r}\alpha_{i}D_{i}\) for an arbitrary tuple of elements \(\alpha_{i}\in\mathbb{Z}F_{r}.\)
The notion of the free Fox derivatives, defined above for free groups, can be generalized to groups of the type \(F_{r}/R^{\prime},\) where \(R\) is any normal subgroup of \(F_{r}.\) First we give some notation.
Let \(F_{r}\) be a free group with basis \(\{f_{1},...,f_{r}\}.\) Let \(R\) be a normal subgroup of \(F_{r}\), and let \(R^{\prime}\) be its derived subgroup (it is also a normal subgroup of \(F_{r}\)). Let \(\bar{G}=F_{r}/R^{\prime},\)\(G=F_{r}/R\), and let \(\mathbb{Z}G\) be the group ring of \(G.\) By \(\mu:F_{r}\to G\) we denote the standard epimorphism, as well as its linear extension \(\mu:\mathbb{Z}F_{r}\rightarrow\mathbb{Z}G\). By \(\bar{\mu}:\mathbb{Z}\bar{G}\rightarrow\mathbb{Z}G\) we mean the epimorphism induced by \(\mu\). Also, we denote by \(\mu^{\prime}:F_{r}\rightarrow\bar{G}\) the other standard epimorphism. Obviously, \(\mu\) is the composition of \(\mu^{\prime}\) and \(\bar{\mu}\).
An abelian normal subgroup \(\bar{R}=R/R^{\prime}\) of the group \(\bar{G}\) is considered as a module over the group ring \(\mathbb{Z}G\) where action of \(\mathbb{Z}G\) on \(\bar{R}\) is induced by conjugation in \(\bar{G}.\) This module is called the _relation module_ of \(\bar{G}.\)
Every free Fox partial derivative \(D_{j}\) induces a linear map \(d_{j}:\mathbb{Z}F_{r}\rightarrow\mathbb{Z}G\) via \(\mu.\) These linear maps \(d_{j}\) are also called the _free Fox derivatives_ (or the _free partial derivatives_). Every \(d_{j}\) is well defined on \(\mathbb{Z}\bar{G}\) via \(\bar{\mu}.\)
An arbitrary element \(u\) in \(\bar{G}\) is trivial if and only if \(d_{i}(u)=0\) for all \(i=1,...,r.\) Thus two elements \(u,v\) are equal in \(\bar{G}\) if and only if \(d_{i}(u)=d_{i}(v)\) for all \(i=1,...,r.\)
The main identity for the Fox derivatives (4.4) gives a similar one in the considered case too:
\[\sum_{i=1}^{r}d_{i}(\alpha)(\mu(f_{i})-1)=\bar{\mu}(\alpha)-\varepsilon(\bar{ \mu}(\alpha)), \tag{4.8}\]
where \(\alpha\in\mathbb{Z}\bar{G}\) and \(\varepsilon:\mathbb{Z}G\rightarrow\mathbb{Z}\) is the trivialization homomorphism on \(\mathbb{Z}G.\)
Let \(u\in\bar{R}\) and \(\alpha\in\mathbb{Z}G.\) Then
\[d_{i}(u^{\alpha})=\alpha d_{i}(u)\text{ for all }i=1,...,r. \tag{4.9}\]
### Matrix representations
**Magnus representations for automorphism groups.** Let \(F_{r}\) be a free group with basis \(\{f_{1},...,f_{r}\}.\)
**Definition 4.1**.: The _Magnus representation_ for \(\mathrm{Aut}(F_{r})\) is the map
\[\alpha:\mathrm{Aut}(F_{r})\to GL_{r}(\mathbb{Z}F_{r}) \tag{4.10}\]
assigning to \(\varphi\in\mathrm{Aut}(F_{r})\) the Jacobi matrix
\[J(\varphi)=(D_{j}(\varphi(f_{i}))), \tag{4.11}\]
where the \(D_{j}\), \(j=1,...,r,\) are the free Fox derivatives, and \(D_{j}(\varphi(f_{i}))\) is the \(ij\)-th entry of the matrix.
The Magnus representation \(\alpha\) for \(\operatorname{Aut}(F_{r})\) is injective, since for each \(\varphi\in\operatorname{Aut}(F_{r})\) the element \(\varphi(f_{i})\) is recovered from \(J(\varphi)\) by applying the main identity for the free Fox derivatives to the \(i\)-th row.
The Magnus representation is not a homomorphism, as is seen from the following assertion.
**Proposition 4.2**.: _For \(\varphi,\psi\in\mathrm{Aut}(F_{r}),\) the equality_
\[J(\varphi\psi)=\psi(J(\varphi))\cdot J(\psi) \tag{4.12}\]
_holds, where \(\psi(J(\varphi))\) means that \(\psi\) is applied to every entry in \(J(\varphi).\)_
In particular, it follows that the image of \(\alpha\) is contained in the group \(GL_{r}(\mathbb{Z}F_{r})\) of invertible matrices.
To obtain a genuine representation, that is, a homomorphism, we need to replace \(F_{r}\) by a group \(\bar{G}\) of the type \(F_{r}/R^{\prime},\) where \(R\) is a normal subgroup of \(F_{r}.\) Let \(G=F_{r}/R\) and \(\bar{G}=F_{r}/R^{\prime}.\) Then \(\bar{R}=R/R^{\prime}\) is a normal abelian subgroup of \(\bar{G},\) which is considered as a module over the group ring \(\mathbb{Z}G.\) Let \(\{x_{1},...,x_{r}\}\) be the generators of \(\bar{G}\) corresponding to the basis elements \(\{f_{1},...,f_{r}\}\) of \(F_{r}.\)
**Definition 4.3**.: The _Magnus representation for \(\mathrm{Aut}(\bar{G})\)_ is the map
\[\alpha_{A}:\mathrm{Aut}(\bar{G})\to GL_{r}(\mathbb{Z}G) \tag{4.13}\]
assigning to \(\varphi\in\mathrm{Aut}(\bar{G})\) the Jacobi matrix \(J(\varphi)=(d_{j}(\varphi(x_{i})))\) of \(\varphi\) over \(\mathbb{Z}G.\) Here the \(d_{j}\) are the induced free Fox derivatives on \(\bar{G}\) with values in \(\mathbb{Z}G\), taken with respect to the generators \(x_{i}\) for \(i,j=1,...,r,\) and \(d_{j}(\varphi(x_{i}))\) is the \(ij\)-th entry of the matrix.
For \(R\) a normal subgroup of a group \(H,\) let \(\mathrm{IRAut}(H)\) denote the group of all automorphisms of \(H\) that leave \(R\) invariant and induce the identity map on the quotient \(H/R.\)
Then the map \(\alpha_{AR}:\mathrm{IRAut}(\bar{G})\to GL_{r}(\mathbb{Z}G)\) induced by the map \(\alpha_{A}\) is a homomorphism.
Let \(M_{r}\) be the free metabelian group of rank \(r\) with basis \(\{x_{1},...,x_{r}\},\) and let \(A_{r}=M_{r}/M_{r}^{\prime}\) be the abelianization of \(M_{r}\) with the corresponding basis \(\{a_{1},...,a_{r}\}.\) The group ring \(\mathbb{Z}A_{r}\) can be considered as the Laurent polynomial ring \(\Lambda_{r}=\mathbb{Z}[a_{1}^{\pm 1},...,a_{r}^{\pm 1}].\) Let \(\psi\) be an automorphism of \(M_{r}.\) We define the Jacobi matrix \(J(\psi)\) corresponding to \(\psi\) as above. Then the map
\[\beta:\psi\to J(\psi) \tag{4.14}\]
gives an injective homomorphism (embedding) of \(\mathrm{IAut}(M_{r})\) into the group of matrices of size \(r\) over the Laurent polynomial ring \(\Lambda_{r}.\) This homomorphism is called _Bachmuth's embedding_.
Every automorphism \(\varphi\in\mathrm{Aut}(F_{r})\) induces an automorphism \(\bar{\varphi}\in\mathrm{Aut}(M_{r}).\) Let \(r=3.\) We compute the matrices \(J(\bar{\xi}_{1,2}),J(\bar{\xi}_{1,3})\) and \(J(\bar{\rho}_{1,2,3})\), where \(\bar{\xi}_{1,2},\bar{\xi}_{1,3},\bar{\rho}_{1,2,3}\) are induced by \(\xi_{1,2},\xi_{1,3},\rho_{1,2,3}\in\mathrm{IAut}(F_{3}),\) respectively.
\[J(\bar{\xi}_{1,2})=\left(\begin{array}{ccc}a_{2}&1-a_{1}&0\\ 0&1&0\\ 0&0&1\end{array}\right), \tag{4.15}\]
\[J(\bar{\xi}_{1,3})=\left(\begin{array}{ccc}a_{3}&0&1-a_{1}\\ 0&1&0\\ 0&0&1\end{array}\right), \tag{4.16}\]
\[J(\bar{\rho}_{1,2,3})=\left(\begin{array}{ccc}1&a_{1}(1-a_{3})&a_{1}(a_{2} -1)\\ 0&1&0\\ 0&0&1\end{array}\right). \tag{4.17}\]
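These matrices can be checked mechanically; below is a short sympy sketch (an illustrative sketch, not code from the cited sources). Since the coefficients live in \(\Lambda_{3}\), commuting symbols make the abelianization automatic; the displayed matrices are consistent with reading \(f^{g}\) as \(gfg^{-1}\), which is the convention assumed in the sketch.

```python
# Verifying the first rows of (4.15)-(4.17) with sympy (illustrative sketch).
# Commuting symbols a1, a2, a3 realise the abelianization A_3, so the Fox
# derivative below is automatically reduced modulo M_3'. We read x^g as
# g x g^{-1}, the convention the displayed matrices agree with.
from sympy import symbols, simplify

a = symbols("a1 a2 a3")

def fox(word, j):
    """Abelianised Fox derivative d_j of a word given as (index, +-1) pairs."""
    prefix, total = 1, 0
    for i, e in word:
        if e == 1:
            if i == j:
                total += prefix          # D_j(f_j) = 1
            prefix *= a[i]
        else:
            if i == j:
                total -= prefix / a[i]   # D_j(f_j^{-1}) = -f_j^{-1}
            prefix /= a[i]
    return simplify(total)

xi12 = [(1, 1), (0, 1), (1, -1)]                     # x1 -> x2 x1 x2^{-1}
xi13 = [(2, 1), (0, 1), (2, -1)]                     # x1 -> x3 x1 x3^{-1}
rho = [(0, 1), (1, 1), (2, 1), (1, -1), (2, -1)]     # x1 -> x1 [x2, x3]

print([fox(xi12, j) for j in range(3)])  # a2, 1 - a1, 0      : row 1 of (4.15)
print([fox(xi13, j) for j in range(3)])  # a3, 0, 1 - a1      : row 1 of (4.16)
print([fox(rho, j) for j in range(3)])   # 1, a1*(1 - a3), a1*(a2 - 1) : (4.17)
```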
Let \(\eta:\Lambda_{3}\rightarrow\mathbb{Z}[t,t^{-1}]\) be a homomorphism of the Laurent rings defined by the map
\[\eta:a_{i}\mapsto 1\text{ for }i=1,2;a_{3}\mapsto t. \tag{4.18}\]
Now, using (4.15)-(4.18), we can define a homomorphism \(\bar{\eta}:\mathrm{IAut}(F_{3})\to GL_{2}(\mathbb{Z}[t,t^{-1}])\) such that
\[\bar{\eta}(\xi_{1,2})=1,\bar{\eta}(\xi_{1,3})=\left(\begin{array}{cc}t&0\\ 0&1\end{array}\right),\bar{\eta}(\rho_{1,2,3})=\left(\begin{array}{cc}1&1-t \\ 0&1\end{array}\right). \tag{4.19}\]
The following lemma, which has been proved in [12], provides essentially infinite sets of relations of \(PGL_{2}(\mathbb{Z}[t,t^{-1}]).\) We write \(d,e_{0},u\) for the matrices
\[d=\left(\begin{array}{cc}t&0\\ 0&1\end{array}\right),e_{0}=\left(\begin{array}{cc}1&1\\ 0&1\end{array}\right),u=\left(\begin{array}{cc}1&t-1\\ 0&1\end{array}\right). \tag{4.20}\]
**Lemma 4.4**.: _([12].) Let \(K\) be a field and let \(Q\) be any infinite subset of_
\[Q_{1}=\{[e_{0},d^{k}e_{0}d^{-k}]|k\geq 1\} \tag{4.21}\]
_or_
\[Q_{2}=\{d^{k}(ud)^{-k}d^{k}(u^{-1}d)^{-k}|k\geq 1\}. \tag{4.22}\]
_Then \(Q\) is an essentially infinite set of relations in \(PGL_{2}(K[t,t^{-1}]).\)_
Lemma 4.4 has been used in [12] in establishing Theorem 2.1. We will use it in establishing our main result on the non-finite presentability of the generalized Andrews-Curtis groups.
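Before using the lemma, it is easy to confirm that the listed words are indeed relations, i.e., that they are trivial already in \(GL_{2}(\mathbb{Z}[t,t^{-1}])\) and hence in \(PGL_{2}\); a small sympy check (an illustrative sketch, not code from [12]):

```python
# Sanity check that the words in Q_1 and Q_2 are trivial in GL_2(Z[t,t^{-1}]).
from sympy import symbols, Matrix, eye, zeros, simplify

t = symbols("t")
d  = Matrix([[t, 0], [0, 1]])
e0 = Matrix([[1, 1], [0, 1]])
u  = Matrix([[1, t - 1], [0, 1]])

def comm(x, y):
    """[x, y] = x y x^{-1} y^{-1}, as in Section 2."""
    return x * y * x.inv() * y.inv()

for k in range(1, 5):
    q1 = comm(e0, d**k * e0 * d**-k)                          # a word of Q_1
    q2 = d**k * (u*d).inv()**k * d**k * (u.inv()*d).inv()**k  # a word of Q_2
    assert simplify(q1 - eye(2)) == zeros(2, 2)
    assert simplify(q2 - eye(2)) == zeros(2, 2)
print("Q_1 and Q_2 words are trivial for k = 1, ..., 4")
```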
## 5. Proof of Theorem B
The homomorphism \(\bar{\eta}\) defined by (4.19) can be naturally extended, via the Magnus representation and by setting \(\xi_{i,j}\mapsto 1\) for \(i=1,2;j=2,...,m\), to a homomorphism
\[\nu:A_{2,m}\to PGL_{2}(\mathbb{Z}[t,t^{-1}]). \tag{5.1}\]
Let \(\lambda_{1,2}\) be the automorphism in \(A_{2,m}\) which maps \(x_{1}\mapsto x_{2}x_{1}\) and fixes the other generators. Also let \(\rho_{1,2}\) be the automorphism in \(A_{2,m}\) which maps \(x_{1}\mapsto x_{1}x_{2}\) and fixes the other generators. Then \(\lambda_{1,2}\) commutes with every automorphism of the form \(\xi_{2,3}^{k}\rho_{1,2}\xi_{2,3}^{-k}\), for every integer \(k\).
By direct computation we get that
\[\nu(\lambda_{1,2})=e_{0},\nu(\xi_{2,3}^{k}\rho_{1,2}\xi_{2,3}^{-k})=d^{k}e_{0 }d^{-k}\mbox{ for every integer }\ k. \tag{5.2}\]
Since the set \(Q_{1}\) in (4.21) is essentially infinite, the set of relations \(\{[\lambda_{1,2},\xi_{2,3}^{k}\rho_{1,2}\xi_{2,3}^{-k}]\mid k\in\mathbb{N}\},\) which maps onto \(Q_{1}\) under \(\nu,\) is essentially infinite in \(A_{2,m}.\) Hence the group \(A_{2,m}\) is not finitely presentable.
Therefore Theorem B is proved.
**Remark**.: _For any \(m,\) the group \(A_{1,m}\) is finitely presentable, and for any \(n,\) the group \(A_{n,0}\) is finitely presentable._
Indeed, for any \(m,\)\(A_{1,m}\) is isomorphic to the direct product \(C_{2}\times F_{m},\) where \(C_{2}\simeq\operatorname{Aut}(F_{1})\) is the cyclic group of order \(2\) and \(F_{m}=\text{gp}(\xi_{1,1},...,\xi_{1,m})\) is the free group with basis \(\xi_{1,1},...,\xi_{1,m}.\) Hence every group \(A_{1,m}\) is finitely presentable. Similarly, for any \(n,\) the group \(A_{n,0}\) involves no transformations of the form \(\operatorname{AC}_{4}\), so it is anti-isomorphic to \(\operatorname{Aut}(F_{n})\), which is finitely presented by the results of Nielsen and McCool recalled in Section 2.
## 6. Open problems
In this section we state several open problems on Andrews-Curtis groups.
**Problem 6.1**.: _Are any of the groups \(AC(F_{r})\), \(r>2\), finitely presentable?_
**Problem 6.2**.: _For which normally \(r\)-generated groups \(G\) the canonical epimorphism \(\phi:GAC_{r}(G)\to AC(G)\) is an isomorphism? In particular:_
* _Is_ \(\phi:GAC_{r}(G)\to AC(G)\) _an isomorphism for a torsion-free non-elementary hyperbolic group_ \(G\)_?_
* _For which_ \(r\)_-generated partially commutative groups_ \(G=G(\Gamma)\) _the canonical epimorphism_ \(\phi:GAC_{r}(G)\to AC(G)\) _is an isomorphism?_
**Problem 6.3**.: _Find "good" (quasi-geodesic) normal forms of elements in \(AC(F_{r})\)._
A solution to this problem would enhance the efficiency of computations with ACC.
**Problem 6.4**.: _For which \(r\) does the group \(AC(F_{r})\) have Kazhdan property (T)?_
Acknowledgments.
The author is grateful to A. Myasnikov for invaluable help in preparing the paper.
|
2303.13379
|
Practical and Ethical Challenges of Large Language Models in Education:
A Systematic Scoping Review
|
Educational technology innovations leveraging large language models (LLMs)
have shown the potential to automate the laborious process of generating and
analysing textual content. While various innovations have been developed to
automate a range of educational tasks (e.g., question generation, feedback
provision, and essay grading), there are concerns regarding the practicality
and ethicality of these innovations. Such concerns may hinder future research
and the adoption of LLMs-based innovations in authentic educational contexts.
To address this, we conducted a systematic scoping review of 118 peer-reviewed
papers published since 2017 to pinpoint the current state of research on using
LLMs to automate and support educational tasks. The findings revealed 53 use
cases for LLMs in automating education tasks, categorised into nine main
categories: profiling/labelling, detection, grading, teaching support,
prediction, knowledge representation, feedback, content generation, and
recommendation. Additionally, we also identified several practical and ethical
challenges, including low technological readiness, lack of replicability and
transparency, and insufficient privacy and beneficence considerations. The
findings were summarised into three recommendations for future studies,
including updating existing innovations with state-of-the-art models (e.g.,
GPT-3/4), embracing the initiative of open-sourcing models/systems, and
adopting a human-centred approach throughout the developmental process. As the
intersection of AI and education is continuously evolving, the findings of this
study can serve as an essential reference point for researchers, allowing them
to leverage the strengths, learn from the limitations, and uncover potential
research opportunities enabled by ChatGPT and other generative AI models.
|
Lixiang Yan, Lele Sha, Linxuan Zhao, Yuheng Li, Roberto Martinez-Maldonado, Guanliang Chen, Xinyu Li, Yueqiao Jin, Dragan Gašević
|
2023-03-17T18:14:46Z
|
http://arxiv.org/abs/2303.13379v2
|
Practical and Ethical Challenges of Large Language Models in Education: A Systematic Literature Review
###### Abstract
Educational technology innovations that have been developed based on large language models (LLMs) have shown the potential to automate the laborious process of generating and analysing textual content. While various innovations have been developed to automate a range of educational tasks (e.g., question generation, feedback provision, and essay grading), there are concerns regarding the practicality and ethicality of these innovations. Such concerns may hinder future research and the adoption of LLMs-based innovations in authentic educational contexts. To address this, we conducted a systematic literature review of 118 peer-reviewed papers published since 2017 to pinpoint the current state of research on using LLMs to automate and support educational tasks. The practical and ethical challenges of LLMs-based innovations were also identified by assessing their technological readiness, model performance, replicability, system transparency, privacy, equality, and beneficence. The findings were summarised into three recommendations for future studies, including updating existing innovations with state-of-the-art models (e.g., GPT-3), embracing the initiative of open-sourcing models/systems, and adopting a human-centred approach throughout the developmental process. These recommendations could support future research to develop practical and ethical innovations for supporting diverse educational tasks and benefiting students, teachers, and institutions.
Keywords: Large Language Models · Pre-trained Language Models · Educational Data Mining · Artificial Intelligence Education · GPT · BERT · ChatGPT · Natural Language Processing
## Practitioner notes
What is currently known about this topic
* Generating and analysing text-based educational content are time-consuming and laborious tasks.
* Large language models are capable of efficiently analysing an unprecedented amount of textual content and completing complex natural language processing and generation tasks.
* Large language models have been increasingly used to develop educational technologies that aim to automate the generation and analysis of textual content.
What this paper adds
* A comprehensive list of different educational tasks that could potentially benefit from LLMs-based innovations through automation.
* A structured assessment of the practicality and ethicality of existing LLMs-based innovations from seven important aspects using established frameworks.
* Three recommendations that could potentially support future studies to develop LLMs-based innovations that are practical and ethical to implement in authentic educational contexts.
Implications for practitioners
* Updating existing innovations with state-of-the-art models may further reduce the amount of manual effort required for adapting existing models to different educational tasks.
* The reporting standards of empirical research that aims to develop educational technologies using large language models need to be improved.
* Adopting a human-centred approach throughout the developmental process could contribute to resolving the practical and ethical challenges of large language models in education.
## 1 Introduction
Advancements in artificial intelligence (AI) and large language models (LLMs) have fueled the development of many educational technology innovations that aim to automate the often time-consuming and laborious tasks of generating and analysing textual content (e.g., generating open-ended questions and analysing student feedback survey) [26, 64]. These LLMs, such as Bidirectional Encoder Representations from Transformers (BERT) [14] and Generative Pre-trained Transformer (GPT) [8], utilise deep learning and self-attention mechanisms [62] to selectively attend to the different parts of input texts, depending on the focus of the current tasks, allowing the model to learn complex patterns and relationships among textual contents, such as their semantic, contextual, and syntactic relationships [34]. As several LLMs (e.g., GPT-3 and Codex) have been pre-trained on massive amounts of data across multiple disciplines, they are capable of completing natural language processing tasks with little (few-shot learning) or no additional training (zero-shot learning) [8]. This could lower the technological barriers to LLMs-based innovations as researchers and practitioners can develop new educational technologies by fine-tuning LLMs on specific educational tasks without starting from scratch. The recent release of ChatGPT, an LLMs-based generative AI chatbot that requires only natural language prompts without additional model training or fine-tuning [38], has further lowered the barrier for
individuals without a technological background to leverage the generative powers of LLMs.
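To make the zero-shot idea concrete, the following is a minimal sketch (not drawn from any reviewed study) of classifying a student forum post with an off-the-shelf pre-trained model via the Hugging Face `transformers` zero-shot pipeline; the model choice and candidate labels are illustrative assumptions.

```python
# A minimal zero-shot classification sketch; no task-specific training is
# needed. The model and the candidate labels are illustrative choices only.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = ("I've re-read the lecture notes twice and I still don't understand "
        "how backpropagation updates the weights.")

result = classifier(post, candidate_labels=["confusion", "urgency", "off-task"])
print(result["labels"][0], result["scores"][0])  # top label and its score
```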
Although educational research that leverages LLMs to develop technological innovations for automating educational tasks is yet to achieve its full potential (i.e., most works have focused on improving model performances [29, 42]), a growing body of literature hints at how different stakeholders could potentially benefit from such innovations. Specifically, these innovations could play a vital role in addressing teachers' high levels of stress and burnout by automating time-consuming tasks that contribute to their heavy workloads [10], such as question generation [29], feedback provision [11], and the scoring of essays [42] and short answers [69]. These innovations could also benefit both students and institutions by improving the efficiency of often tedious administrative processes, such as learning resource recommendation, course recommendation, and student feedback evaluation [68, 64].
Despite the growing empirical evidence of LLMs' potential in automating a wide range of educational tasks, none of the existing work has systematically reviewed the practical and ethical challenges of these LLMs-based innovations. Understanding these challenges is essential for developing responsible technologies as LLMs-based innovations (e.g., ChatGPT) could contain human-like biases based on the existing ethical and moral norms of society, such as inheriting biased and toxic knowledge (e.g., gender and racial biases) when trained on unfiltered internet text data [48]. Prior systematic literature reviews have focused on investigating these issues related to one specific application scenario of LLMs-based innovations (e.g., question generation, essay scoring, chatbots, or automated feedback) [29, 11, 64, 42]. The practical and ethical challenges of LLMs in automating different types of educational tasks remain unclear. Understanding these challenges is essential for translating research findings into educational technologies that stakeholders (e.g., students, teachers, and institutions) can use in authentic teaching and learning practices [1].
The current study is the first systematic literature review that aimed to address this gap by reviewing the _current state of research_ on using LLMs to automate educational tasks and identify the _practical_ and _ethical_ challenges of adopting these LLMs-based innovations in authentic educational contexts. A total of 118 peer-reviewed publications from four prominent databases were included in this review following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) [39] protocol. An inductive thematic analysis was conducted to extract details regarding the different types of educational tasks, stakeholders, LLMs, and machine learning tasks investigated in prior literature. The practicality of LLMs-based innovations was assessed through the lens of technological readiness, model performance, and model replicability. Lastly, the ethicality of these innovations was assessed by investigating system transparency, privacy, equality, and beneficence.
The contribution of this paper to the educational technology community is threefold: 1) we systematically summarise a comprehensive list of 53 different educational tasks that could potentially benefit from LLMs-based innovations
through automation, 2) we present a structured assessment of the practicality and ethicality of existing LLMs-based innovations from seven important aspects using established frameworks (e.g., the transparency index [12]), and 3) we propose three recommendations that could potentially support future studies in developing LLMs-based innovations that are practical and ethical to implement in authentic educational contexts.
## 2 Background
In this section, we first establish the definitions for the key terminologies, specifically the definitions of practicality and ethicality in the context of educational technology. We then provided an overview of prior systematic literature reviews on LLMs in education. Then, we present the research questions based on the gaps identified in the existing literature.
### Practicality
Several theoretical frameworks have been proposed regarding the practicality of integrating technological innovations in educational settings. For example, Ertmer's [18] first- and second-order barriers to change focused on the external conditions of the educational system (e.g., infrastructure readiness) and teachers' internal states (e.g., personal beliefs). Becker [5] further suggested that for technological innovations to have actual benefits in supporting pedagogical practices, these innovations should be convenient to access, support constructivist pedagogical beliefs, be adaptable to changes in the curriculum, and be compatible with teachers' levels of knowledge and skills. These factors were also presented in an earlier framework of the practicality index [16], which summarised three critical components for integrating educational technologies, including the degree of adoption feasibility, the cost and benefit ratio, and the alignment with existing practices and beliefs. Based on these prior theoretical frameworks, and considering the recentness of LLMs-based innovations (which only emerged in the past five years), the practical challenges of LLMs-based innovations in automating educational tasks can be assessed from three primary perspectives. First, evaluating the technological readiness of these innovations is essential for determining whether there is empirical evidence to support successful integration and operation in authentic educational contexts. Second, assessing the model performance could contribute valuable insights into the costs and benefits of adopting these innovations, such as comparing the benefits of automation with the costs of inaccurate predictions. Finally, understanding whether these innovations are methodologically replicable could be important for future studies to investigate their alignment with different educational contexts and stakeholders. We elaborate on the evaluation items for each challenge in Section 3.2.
### Ethicality
Ethical AI is a prevalent topic of discussion in multiple communities, such as learning analytics, AI in education, educational data mining, and educational
technology communities [1, 40]. There are ongoing debates regarding AI ethics in education, with a mixture of focuses on algorithmic and human ethics among the educational data mining and AI in education communities [24]. As such debates continue, it is difficult to identify an established definition of ethical AI in these fields. However, ethicality has already been thoroughly investigated and addressed in a field closely related to AI in education, namely the field of learning analytics [40, 50]. Drawing on the established definition of ethicality from the field of learning analytics [40], the ethicality of LLMs-based innovations can thus be defined as the systematisation of appropriate and inappropriate functionalities and outcomes of these innovations, as determined by all stakeholders (e.g., students, teachers, parents, and institutions). For example, Khosravi et al. [27] explained that the ethicality of AI-powered educational technology systems needs to involve the consideration of accountability, explainability, fairness, interpretability, and safety of these systems. These different domains of ethical AI are all closely related and can be addressed by considering system transparency. Transparency is a subset of ethical AI that involves making all information, decisions, decision-making processes, and assumptions available to stakeholders, which in turn enhances their comprehension of the AI systems and related outputs [12]. Additionally, for LLMs-based innovations, Weidinger et al. [63] suggested six types of ethical risks, including 1) discrimination, exclusion, and toxicity, 2) information hazards, 3) misinformation harms, 4) malicious uses, 5) human-computer interaction harms, and 6) automation, access, and environmental harms. These risks can be further aggregated into three fundamental ethical issues: privacy concerns regarding educational stakeholders' personal data, equality concerns regarding accessibility for stakeholders with different backgrounds, and beneficence concerns about the potential harms and negative impacts that LLMs-based innovations may have on stakeholders [19]. These three fundamental ethical issues were considered in the analysis of the reviewed literature; further details are given in Section 3.2.
### Related Work
Prior systematic literature reviews have focused primarily on reviewing a specific application scenario (e.g., question generation, automated feedback, chatbots and essay scoring) of natural language processing and LLMs. For example, Kurdi et al. [29] have systematically reviewed empirical studies that aimed to tackle the problem of automatic question generation in educational domains. They comprehensively summarised the different generation methods, generation tasks, and evaluation methods presented in prior literature. In particular, LLMs could potentially benefit the semantic-based approaches for generating meaningful questions that are closely related to the source contents. Likewise, Cavalcanti et al. [11] have systematically reviewed different automated feedback systems regarding their impacts on improving students' learning performances and reducing teachers' workloads. Despite half of their reviewed studies showing no evidence of reducing teachers' workloads, as these automated feedback systems were mostly rule-based and required extensive manual efforts, they identified
that using natural language generation techniques could further enhance such systems' generalisability and potentially reduce manual workloads. On the other hand, Wollny et al. [64] have systematically reviewed areas of education where chatbots have already been applied. They concluded that there is still much to be done for chatbots to achieve their full potential, such as making them more adaptable to different educational contexts. A systematic literature review has also investigated the various automated essay scoring systems [42]. The findings have revealed multiple limitations of the existing systems based on traditional machine learning (e.g., regression and random forest) and deep learning algorithms (e.g., LSTM and BERT). In sum, these previous systematic literature reviews have identified room for improvement that can potentially be addressed using state-of-the-art LLMs (e.g., GPT-3 or Codex). However, none of the prior systematic literature reviews has investigated the practical and ethical issues related to LLMs-based innovations in education in general rather than in a particular application scenario (e.g., a specific task).
The recent hype around one of the latest LLMs-based innovations, ChatGPT, has intensified the discussion about the practical and ethical challenges related to using LLMs in education. For example, in a position paper, Kasneci et al. [26] provided an overview of some existing LLMs research and proposed several practical opportunities and challenges of LLMs from students' and teachers' perspectives. Likewise, Rudolph et al. [43] also provided an overview of the potential impacts, challenges, and opportunities that ChatGPT might have on future educational practices. Although these studies have not systematically reviewed the existing educational literature on LLMs, their arguments resonated with some of the pressing issues around LLMs and ethical AI, such as data privacy, bias, and risks. On the other hand, Sallam [44] systematically reviewed the implications and limitations of ChatGPT in healthcare education and identified potential utility around personalisation and automation. However, it is worth noting that most papers reviewed in Sallam's study were either editorials, commentaries, or preprints. This lack of peer-reviewed empirical studies on ChatGPT is understandable, as it was only released in late 2022 [38]. None of the existing work has systematically reviewed the peer-reviewed literature on prior LLMs-based innovations; such investigations could provide more reliable and empirically based evidence regarding the potential opportunities and challenges of LLMs for educational practices. Thus, the current study aimed to address this gap in the literature by conducting a systematic literature review of prior educational research on LLMs. Specifically, the following research questions were investigated to guide this review:
* **RQ1**: What is _the current state of research_ on using LLMs to automate educational tasks, specifically through the lens of educational tasks, stakeholders, LLMs, and machine-learning tasks (e.g., classification, prediction, and clustering)?
* **RQ2**: What are the _practical_ challenges of LLMs in automating educational tasks, specifically through the lens of technological readiness, model performance, and model replicability?
* **RQ3**: What are the _ethical_ challenges of LLMs in automating educational tasks, specifically through the lens of system transparency, privacy, equality, and beneficence?
## 3 Methods
### Review Procedures
We followed the PRISMA [39] protocol to conduct the current systematic literature review of LLMs. We searched four reputable bibliographic databases, including Scopus, ACM Digital Library, IEEE Xplore, and Web of Science, to find high-quality peer-reviewed publications. Additional searches were conducted through Google Scholar and Education Resources Information Center (ERIC) to identify peer-reviewed publications that have yet to be indexed by these databases, either recently published or not indexed (e.g., Journal of Educational Data Mining; prior to 2020). Our initial search query for the title, abstract, and keywords included terms such as "large language model", "pre*trained language model", "GPT-*", "BERT", "education", "student*", and "teacher*". A publication year constraint was also applied to restrict the search to studies published since 2017, specifically from 01/01/2017 to 12/31/2022, as the foundational architecture (Transformer) of LLMs was formally released in 2017 [62].
Two researchers independently reviewed the titles and abstracts of eligible articles based on five predetermined inclusion and exclusion criteria. First, we included studies that used large or pre-trained language models directly or built on top of such models, and excluded studies that used general machine-learning or deep-learning models with unspecified usage of LLMs. Second, we included empirical studies with detailed methodologies, such as a detailed description of the LLMs and research procedures, and excluded review, opinion, and scoping works. Third, we only included full-length peer-reviewed papers, and excluded short, workshop, and poster papers that were less than six and eight pages for double- and single-column layouts, respectively. Additionally, we included studies that used LLMs for the purpose of automating educational tasks (e.g., essay grading and question generation), and excluded studies that merely used LLMs as part of the analysis without educational implications. Finally, we only included studies that were published in English (both the abstract and the main text) and excluded studies that were published in other languages. Any conflicting decisions were resolved through further discussion between the two researchers or consulting with a third researcher to achieve a consensus.
The database search initially yielded 854 publications, with 191 duplicates removed, resulting in 663 publications for the title and abstract screening (see Figure 1). After the title and abstract screening, 197 articles were included for the full-text review with an inter-rater reliability (Cohen's kappa) of 0.75, indicating substantial agreement between the reviewers. A total of 118 articles were selected for data extraction after the full-text review with an inter-rater reliability (Cohen's kappa) of 0.73, indicating substantial agreement between the reviewers. Out of the initial 197 articles, 79 were excluded for various reasons,
including not being full papers (n=41), lack of educational automation (n=17), lack of pre-trained models or LLMs (n=12), merely using pre-trained models or LLMs as part of the analysis (n=3), non-English papers (n=2), and non-empirical papers (n=2).
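For reference, the agreement statistic used above can be computed directly; a minimal scikit-learn sketch follows, with fabricated reviewer decisions for illustration. The same function with `weights="quadratic"` yields the quadratic weighted kappa (QWK) reported for essay scoring in Section 4.2.2.

```python
# Cohen's kappa for two reviewers' include/exclude decisions (fabricated data).
from sklearn.metrics import cohen_kappa_score

reviewer_a = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
reviewer_b = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
print(cohen_kappa_score(reviewer_a, reviewer_b))

# Quadratic weighted kappa on ordinal grades, as used for essay scoring:
print(cohen_kappa_score([1, 2, 3, 4], [1, 2, 4, 4], weights="quadratic"))
```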
### Data Analysis
For the first research question (RQ1), we conducted an inductive thematic analysis to extract information regarding the current state of research on using LLMs to automate educational tasks. Specifically, we extracted four primary types of contextual information from each included paper: educational tasks, stakeholders, LLMs, and machine-learning tasks. This contextual information would provide a holistic view of the existing research and inform researchers and practitioners regarding the viable directions to explore with the state-of-the-art LLMs (e.g., GPT-3.5 and Codex).
Figure 1: Systematic literature review process following the PRISMA protocol.

A total of seven data extraction items were developed to address the second and third research questions. These items were developed as they are directly related to the definitions of practicality (RQ2: Items 1-3) and ethicality (RQ3: Items 4-7) given in the Background section (Section 2). The following list elaborates on the final set of items along with the corresponding guiding questions.
1. **Technology readiness**: What levels of technology readiness are the LLMs-based innovations at? We adopted the assessment tool from the Australian government, namely the Australian Department of Defence's Technology Readiness Levels (TRL) [49], which has been used to assess the maturity of educational technologies in a prior SLR [66]. There are nine different technological readiness levels: Basic Research (TRL-1), Applied Research (TRL-2), Critical Function or Proof of Concept Established (TRL-3), Lab Testing/Validation of Alpha Prototype Component/Process (TRL-4), Laboratory Testing of Integrated/Semi-Integrated System (TRL-5), Prototype System Verified (TRL-6), Integrated Pilot System Demonstrated (TRL-7), System Incorporated in Commercial Design (TRL-8), and System Proven and Ready for Full Commercial Deployment (TRL-9), as further explained in the Results section.
2. **Performance**: How accurately and reliably can the LLMs-based innovations complete the designated educational tasks? For example, what are the model performance scores for classification (e.g., AUC and F1 scores), generation (e.g., BLEU score), and prediction tasks (e.g., RMSE and Pearson's correlation)?
3. **Replicability**: Can other researchers or practitioners replicate the LLMs-based innovations without additional support from the original authors? This item evaluates whether the paper provided sufficient details about the LLMs (e.g., open-sourced algorithms) and the dataset (e.g., open-source data).
4. **Transparency**: What tiers of transparency index [12] are the LLMs-based innovations at? The transparency index proposed three tiers of transparency, including transparent to AI researchers and practitioners (Tier 1), transparent to educational technology experts and enthusiasts (Tier 2), and transparent to educators and parents (Tier 3). The tier of transparency increases as educational stakeholders become fully involved in developing and evaluating the AI system. These tiers were further elaborated on in the Results section.
5. **Privacy**: Has the paper mentioned or considered privacy issues of their innovations? This item explores potential issues related to informed consent, transparent data collection, individuals' control over personal data, and unintended surveillance [19, 61].
6. **Equality**: Has the paper mentioned or considered equal access to their innovations? This item explores potential issues related to limited access for students from low-income backgrounds or rural areas and the linguistic limitation of the innovations, such as their capability to analyse different languages [19].
7. **Beneficence**: Has the paper mentioned or considered potential issues that violate the ethical principle of beneficence? Such violations may include the
risks associated with labelling and profiling students, inadequate usage of machine-generated content for assessments, and algorithmic biases [19, 68].
## 4 Results
### The Current State -- RQ1
We identified nine different categories of educational tasks that prior studies have attempted to automate using LLMs (as shown in Table 1). Prior studies have used LLMs to automate the profiling and labelling of 17 types of education-related contents and concepts (e.g., forum posts, student sentiment, and discipline similarity), the detection of six latent constructs (e.g., confusion and urgency), the grading of five types of assessments (e.g., short answer questions and essays), the development of five types of teaching support (e.g., conversation agent and intelligent question-answering), the prediction of five types of student-orientated metrics (e.g., dropout and engagement), the construction of four types of knowledge representations (e.g., knowledge graph and entity recognition), the provision of four different forms of feedback (e.g., real-time and post-hoc feedback), the generation of four types of content (e.g., MCQs and open-ended questions), and the delivery of three types of recommendations (e.g., resource and course). Of the 118 reviewed studies, 85 studies aimed to automate educational tasks related to teachers (e.g., question grading and generation), 54 studies targeted student-related activities (e.g., feedback and resource recommendation), 20 studies focused on supporting institutional practices (e.g., course recommendations and discipline planning), and 14 studies empowered researchers with automated methods to investigate latent constructs (e.g., student confusion) and capture verbal data (e.g., speech recognition).
We identified five categories of LLMs used in prior studies to automate educational tasks. BERT and its variations (e.g., RoBERTa, DistilBERT, multilingual BERT, LaBSE, EstBERT, and Sentence-BERT) were the most predominant model used in 109 reviewed studies. However, they often required manual effort for fine-tuning (n=90). GPT-2 and GPT-3 have been used in five and three studies, respectively. OpenAI's Codex has been used in two prior studies, specifically for code generation tasks. T5 has also been used in two prior studies for classification and generation purposes. In terms of machine-learning tasks, 74 studies used LLMs to perform classification tasks. Generation and prediction tasks were investigated in 24 and 23 prior studies, respectively. In sum, LLMs-based innovations have already been used to automate a range of educational tasks, but most of these innovations were developed on older models, such as BERT and GPT-2. Although state-of-the-art models, such as GPT-3, have been introduced for over two years [8], they have yet to be widely applied to automate educational tasks. A potential reason for this lack of adoption could be these models' commercial and close-sourced nature, increasing the financial burdens of developing and operating educational technology innovations on top of such models.
Table 1: Educational Tasks in LLMs Research

| Categories | Educational Tasks |
| --- | --- |
| Profiling and Labelling | Forum post classification, dialogue act classification, classification of learning designs, review sentiment analysis, topic modelling, pedagogical classification of MOOCs, collaborative problem-solving modelling, paraphrase quality, speech tagging, labelling educational content with knowledge components, key sentence and keyword extraction, reflective writing analysis, multimodal representational thinking, discipline similarity, concept classification, cognitive level classification, essay arguments segmentation |
| Detection | Semantic analyses, detecting off-task messages, confusion detection, urgency detection, conversational intent detection, teachers' behaviour detection |
| Assessment and Grading | Formative and summative assessment grading, short answer grading, essay grading, subjective question grading, student self-explanation |
| Teaching Support | Classroom teaching, learning community support, online learning conversation agent, intelligent question-answering, teacher activity recognition |
| Prediction | Student performance prediction, student dropout prediction, emotional and cognitive engagement detection, growth and development indicators for college students, at-risk student identification |
| Knowledge Representation | Knowledge graph construction, knowledge entity recognition, knowledge tracing, cause-effect relation extraction |
| Feedback | Real-time feedback, post-hoc feedback, aggregated feedback, feedback on feedback (peer-review comments) |
| Content Generation | MCQs generation, open-ended question generation, code generation, reply (natural language) generation |
| Recommendation | English reference selection and recommendation, resource recommendation, course recommendation |
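To make the fine-tuning effort noted above concrete, the following is a minimal sketch of fine-tuning a BERT classifier with the Hugging Face `transformers` and `datasets` libraries; the three-example dataset, label scheme, and hyperparameters are invented purely for illustration and do not reproduce any reviewed study.

```python
# A minimal BERT fine-tuning sketch for forum-post classification.
# The tiny inline dataset and its labels are fabricated for illustration.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

data = Dataset.from_dict({
    "text": ["When is assignment 2 due?",
             "I finally understand recursion, thanks everyone!",
             "Can someone explain question 3b? I am completely lost."],
    "label": [0, 1, 2],   # e.g. 0 = logistics, 1 = sentiment, 2 = confusion
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=64)

train_set = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_set,
)
trainer.train()   # fine-tunes all BERT weights plus the classification head
```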
### Practical Challenges -- RQ2
#### 4.2.1 Technology readiness
According to the Technology Readiness Level scale [49], the LLMs-based innovations are still in the early development and testing stage. Over three-quarters of the LLMs studies (n=89) are in the applied research stage (TRL-2), which aims to experiment with the capability of LLMs in automating different educational tasks by developing different models and combining LLMs with other machine-learning and deep-learning techniques (e.g., RCNN [55]). Thirteen studies have established a proof of concept and demonstrated the feasibility of using LLMs-based innovations to automate certain processes of educational tasks (TRL-3). Nine studies have developed functional prototypes and conducted preliminary validation under controlled laboratory settings (TRL-4), often involving stakeholders (e.g., students and teachers) to test and evaluate the output of their innovations. Only seven studies have taken a further step and conducted validation studies in authentic learning environments, with most functional components integrated into the educational tasks (TRL-5), such as an intelligent virtual standard patient for medical student training [57] and an intelligent chatbot for university admission [37]. Yet, none of the existing LLMs-based innovations has been verified through successful operations (TRL-6). Together, these findings suggest that, although existing LLMs-based innovations can be used to automate certain educational tasks, they have yet to show evidence regarding improvements to teaching, learning, and administrative processes in authentic educational practices.
#### 4.2.2 Performance
The performance of LLMs-based innovations varies across different machine-learning and educational tasks. For classification tasks, LLMs-based innovations have shown high performance for simple educational tasks, such as modelling the topics from a list of programming assignments (best F1 = 0.95) [20], analysing the sentiment of student feedback (best F1 = 0.94) [59], constructing subject knowledge graph from teaching materials (best F1 = 0.94) [58], and classifying educational forum posts [53] (best F1 = 0.92). However, the classification performance of LLMs-based innovations decreases for other educational tasks. For example, the F1 scores for detecting student confusion in the course forum [22] and students' off-task messages in game-based collaborative learning [9] are around 0.77 and 0.67, respectively. Likewise, the F1 score for classifying short-answer responses varies between 0.61 to 0.82, with the lower performance on out-of-sample questions (best F1 = 0.61) [13]. Similar performances were also observed in classifying students' argumentative essays (best F1 = 0.66) [23].
For prediction tasks, LLMs-based innovations have demonstrated reliable performance compared to ground truth or human raters. For example, LLMs-based innovations have achieved high scores of quadratic weighted kappa (QWK) in essay scoring, specifically for off-topic (QWK = 0.80), gibberish (QWK = 0.80), and paraphrased answers (QWK = 0.94), indicating substantial to almost perfect agreements with human raters [15]. Similar performances on essay scoring have been observed in several other studies (e.g., 0.80 QWK in [6] and 0.81 QWK in [56]). Likewise, LLMs-based innovations' performances on automatic
short-answer grading were also highly correlated with human ratings (Pearson's correlation between 0.75 to 0.82) [2, 46].
Regarding generation tasks, LLMs-based innovations demonstrated high performance across different educational tasks. For example, LLMs-based innovations have achieved an F1 score of 0.92 for generating MCQs with single-word answers [28]. Educational technologies developed by fine-tuning Codex also demonstrated the capability of resolving 81% of the advanced mathematics problems [17]. Text summaries generated using BERT had no significant differences compared with student-generated summaries and could not be differentiated from them by graduate students [33]. Similarly, BERT-generated doctor-patient dialogues were also found to be indistinguishable from actual doctor-patient dialogues, which can be used to create virtual standard patients for medical students' diagnosis practice training [57]. Additionally, for introductory programming courses, the state-of-the-art LLM, Codex, could generate sensible and novel exercises for students along with an appropriate sample solution (around three out of four times) and accurate code explanations (67% accuracy) [45].
In sum, although the classification performance of LLMs-based innovations on complex educational tasks is far from suitable for practical adoption, LLMs-based innovations have already shown high performance on several relatively simple classification tasks that could potentially be deployed to automatically generate meaningful insights that could be useful to teachers and institutions, such as navigating through numerous student feedback and course review. Likewise, LLMs-based innovations' prediction and generation performance reveals a promising future of potentially automating the generation of educational content and the initial grading of student assessments. However, ethical issues must be considered for such implementations, which we covered in the findings for RQ3.
#### 4.2.2 Replicability
Most reviewed studies (n=107) have not disclosed sufficient details about their methodologies for other researchers and practitioners to replicate their proposed LLMs-based innovations. Among these studies, 12 studies have open-sourced the original code for developing the innovations but failed to open-source the data they used. In contrast, 20 studies have open-sourced the data they used but failed to release the actual code. Around two-thirds of the reviewed studies (n=75) released neither the original code nor the data they used, leaving only 11 studies publicly available for other researchers and practitioners to replicate without needing to contact the original authors. This lack of replicability could become a vital barrier to adoption, as 87 out of the 107 non-replicable studies required fine-tuning the LLMs to achieve the reported performance. This replication issue also limits others from further evaluating the generalisability of the proposed LLMs-based innovations on other datasets, constraining potential practical utilities.
### Ethical Challenges -- RQ3
#### 4.3.1 Transparency
Based on the transparency index and the three tiers of transparency [12], most of the reviewed studies reached at most Tier 1 (n=109), which is considered transparent merely to AI researchers and practitioners. Although these studies reported details regarding their machine learning models (e.g., optimisation and hyperparameters), such information is unlikely to be interpretable and considered transparent for individuals without a strong background in machine learning. The remaining nine studies reached at most Tier 2, as they often involved some form of human-in-the-loop elements. Specifically, making the LLMs innovations available for student evaluation has been found in three studies [37, 57, 33]. Such evaluations often involved students differentiating AI-generated from human-generated content [57, 33] and assessing student satisfaction with AI-generated responses [37]. Likewise, two studies have involved experts in evaluating specific features of the content generated by the LLMs-based innovations, such as informativeness [31] and cognitive level [36]. Surveys have been used to evaluate students' experience with LLMs-based innovations from multiple perspectives, such as the quality and difficulty of AI-generated questions [17, 30] and potential learning benefits of the systems [25]. Finally, semi-structured interviews have been conducted to understand students' perception of the LLM system after using the system in authentic computer-supported collaborative learning activities [70]. Although these nine studies had some elements of human-in-the-loop, stakeholders were often involved in a post-hoc evaluation manner instead of throughout the development process, and thus, have limited knowledge regarding the operating principles and potential weaknesses of the systems. Consequently, none of the existing LLMs-based innovations can be considered as being at Tier 3, which describes an AI system that is considered transparent to educational stakeholders (e.g., students, teachers, and parents).
#### 4.3.2 Privacy
The privacy issues related to LLMs-based innovations were rarely attended to or investigated in the reviewed studies. Specifically, for studies that have fine-tuned LLMs with textual data collected from students, none of these studies has explicitly explained their consenting strategies (e.g., whether students acknowledge the collection and intended usage of their data) and data protection measures (e.g., data anonymisation and sanitisation). This lack of attention to privacy issues is particularly concerning as LLMs-based innovations work with stakeholders' natural languages that may contain personal and sensitive information regarding their private lives and identities [7]. It is possible that stakeholders might not be aware of their textual data (e.g., forum posts or conversations) on digital platforms (e.g., MOOCs and LMS) being used in LLMs-based innovations for different purposes of automation (e.g., automated reply and training chatbots) as the consenting process is often embedded into the enrollment or signing up of these platforms [60]. This process can hardly be considered informed consent. Consequently, if stakeholders shared their personal information on these platforms in natural language (e.g., sharing phone numbers and addresses with group members via digital forums), such information could be used as training data for fine-tuning LLMs. This usage could potentially expose private information as LLMs are incapable of understanding the context
and sensitivity of text, and thus, could return stakeholders' personal information based on semantic relationships [7].
#### 4.3.3 Equality
Although most of the studies (n=95) used LLMs that only apply to English content, we also identified application scenarios of LLMs in automating educational tasks in 12 other languages. Specifically, 19 studies used LLMs that can be applied to Chinese content. Ten prior studies used LLMs for Vietnamese (n=3), Spanish (n=3), Italian (n=2), and German (n=2) content. Additionally, seven studies applied LLMs to Croatian, Indonesian, Japanese, Romanian, Russian, Swedish, and Hindi content. While the dominance of English-based innovations remains a concerning equality issue, the availability of innovations that support a variety of other languages, specifically in societies outside the western, educated, industrialized, rich, and democratic (WEIRD) category (e.g., Indonesia and Vietnam), may indicate a promising sign for LLMs-based innovations to have potential global impacts and help level such equality issues in the future. However, the financial burdens from adopting the state-of-the-art models (e.g., OpenAI's GPT-3 and Codex) could potentially exacerbate the equality issues, making the best-performing innovations only accessible and affordable to WEIRD societies.
#### 4.3.4 Beneficence
A total of seven studies have discussed potential issues related to the violation of the ethical principle of beneficence. For example, one study has discussed the potential risk of adopting underperforming models, which could negatively affect students' learning experiences [30]. Such issues could be minimised by deferring decisions made by such models [47] and labelling the AI-generated content with a warning message (e.g., teachers' manual revision is mandatory before determining the actual correctness) [3]. Apart from issues with adopting inaccurate models, two studies have suggested that potential bias and discrimination issues may occur if adopting a model that is accurate but unfair [54, 33]. This issue is particularly concerning as most existing studies focused solely on developing an accurate model. Only nine reviewed studies released information regarding the descriptive data of different sample groups, such as gender and ethnicity (e.g., [41]). Two studies have proposed potential approaches that could address such fairness issues. Specifically, using sampling strategies, such as balancing demographic distribution, has been found to be an effective approach to improve both model fairness and accuracy [52, 51]. These approaches are essential for ensuring that LLMs-based innovations will not perpetuate problematic and systematic biases (e.g., gender biases), especially as the best-performing LLMs are often black-boxed with little interpretability, traceability, and justification of the results [65].
## 5 Discussion
### Main Findings
The current study systematically reviewed 118 peer-reviewed empirical studies that used LLMs to automate educational tasks. For the first research question
(RQ1), we illustrated the current state of educational research on LLMs. Specifically, we identified 53 types of application scenarios of LLMs in automating educational tasks, summarised into nine general categories, including profiling and labelling, detection, assessment and grading, teaching support, prediction, knowledge representation, feedback, content generation, and recommendation. While some of these categories resonate with the utilities proposed in prior positioning works (e.g., feedback, content generation, and recommendation) [26, 43], novel directions such as using LLMs to automate the creation of knowledge graphs and entities further indicated the potential of LLMs-based innovations in supporting institutional practices (e.g., creating knowledge-based search engines across multiple disciplines). These identified directions could benefit from the state-of-the-art LLMs (e.g., GPT-3 and Codex) as most of the reviewed studies (92%) focused on using BERT-based models, which often required manual effort for fine-tuning. In contrast, the state-of-the-art LLMs could potentially achieve similar performance with a zero-shot approach [4]. While the majority of the reviewed studies (63%) focused on using LLMs to automate classification tasks, more future studies could aim to tackle the automation of prediction and generation tasks with the more capable LLMs [44]. Likewise, although supporting teachers is the primary focus (72%) of the existing LLMs-based innovations, students and institutions could also benefit from such innovations as novel utilities continue to emerge from the educational technology literature.
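To make the contrast between fine-tuning and the zero-shot approach concrete, the sketch below classifies a forum post without any task-specific training; the model name, candidate labels, and library choice are illustrative assumptions, not choices made by any reviewed study.

```python
# Minimal sketch of a zero-shot classification approach using the
# Hugging Face transformers pipeline; no fine-tuning data is needed.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

post = "I don't understand why my recursive solution never terminates."
labels = ["confusion", "question", "feedback", "off-task"]

# The model scores each candidate label against the post directly.
result = classifier(post, candidate_labels=labels)
print(result["labels"][0], result["scores"][0])
```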
Regarding the second research question (RQ2), we identified several practical challenges that need to be addressed for LLMs-based innovations to have actual educational benefits. The development and educational research on LLMs-based innovations are still in the early stages. Most of the innovations demonstrated a low level of technology readiness, where the innovations have yet to be fully integrated and validated in authentic educational contexts. This finding resonates with previous systematic literature reviews on related educational technologies, such as reviews on automated question generation [29], feedback provision [11], essay scoring [42], and chatbot systems [64]. There is a pressing need for in-the-wild studies that provide LLMs-based innovations directly to educational stakeholders for supporting actual educational tasks instead of testing on different datasets or in laboratory settings. Such authentic studies could also validate whether the existing innovations can achieve the reported high model performance in real-life scenarios, specifically in prediction and generation tasks, instead of being limited to prior datasets. This validation process is vital for preventing inadequate usage, such as adopting a subject-specific prediction model for unintended subjects. Researchers need to carefully examine the extent of generalisability of their innovations and inform stakeholders of these limitations [21]. However, addressing such needs could be difficult considering the current literature's poor replicability, which increases the barriers for others to adopt LLMs-based innovations in authentic educational contexts or validate with different samples. Similar replication issues have also been identified in other areas of educational technology research [66].
For the third research question (RQ3), we identified several ethical challenges regarding LLMs-based innovations. In particular, most of the existing LLMs-based innovations (92%) were only transparent to AI researchers and practitioners (Tier 1), with only nine studies that can be considered transparent to educational technology experts and enthusiasts (Tier 2). The primary reason behind this low transparency can be attributed to the lack of human-in-the-loop components in prior studies. This finding resonates with the call for explainable and human-centred AI, which stresses the vital role of stakeholders in developing meaningful and impactful educational technology [27, 67]. Involving stakeholders during the development and evaluation of LLMs-based innovations is essential for addressing both practical and ethical issues. For example, as the current findings revealed, LLMs-based innovations are subject to data privacy issues but were rarely mentioned or investigated in the literature [33], which may be due to the little voice that stakeholders had in prior research. The several concerning issues around beneficence also demand the involvement of stakeholders as their perspectives are vital for shaping the future directions of LLMs-based innovations, such as how responsible decisions can be made with these AI systems [47]. Likewise, the equality issue regarding the financial burdens that may occur when adopting innovations that leverage commercial LLMs (e.g., GPT-3 and Codex) can also be further studied with institutional stakeholders.
### Implications
The current findings have several implications for education research and practice with LLMs, which we have summarised into three recommendations that aim to support future studies to develop practical and ethical innovations that can have actual benefits to educational stakeholders. First, the wide range of application scenarios of LLMs-based innovations can further benefit from the improvements in the capability of LLMs. For example, updating existing innovations with state-of-the-art LLMs may further reduce the amount of manual effort required for fine-tuning and achieve similar performances [4]. However, researchers should also consider the additional financial and resource burdens on educational stakeholders when updating their innovations with the latest LLMs, especially the commercial ones (e.g., GPT-3 and ChatGPT). Second, for LLMs-based innovations to achieve a high level of technology readiness and performance, the current reporting standards must be improved. Future studies should support the initiative of open-sourcing their models/systems when possible and provide sufficient details about the test datasets, which are essential for others to replicate and validate existing innovations across different contexts, preventing the potential pitfall of another replication crisis [32]. Finally, adopting a human-centred approach when developing and evaluating LLMs-based innovations is essential for ensuring these innovations remain ethical in practice, especially as ethical principles may not guarantee ethical AI due to their top-down manners (e.g., developed by regulatory bodies) [35]. Future studies need to consider the ethical issues that may arise from their specific application scenarios and actively involve stakeholders to identify and address such issues.
### Limitations
The current findings should be interpreted with several limitations in mind. First, although we assessed the practicality and ethicality of LLMs-based innovations with seven different items, there could be other aspects of these multi-dimensional concepts that we omitted. Nevertheless, these assessment items were chosen directly from the corresponding definitions and related to the pressing issues in the literature [1, 63]. Second, we only included English publications, which could have biased our findings regarding the availability of LLMs-based innovations among different countries. Third, as we strictly followed the PRISMA protocol and only included peer-reviewed publications, we may have omitted the emerging works published in different open-access archives. These studies may contain interesting findings regarding the latest LLMs (e.g., ChatGPT). Additionally, this review focused on the potential of LLMs-based innovations in automating educational tasks, and thus, other pressing issues, such as the potential threat to academic integrity, were outside of the scope of this systematic literature review. Finally, the transparency index that we adopted for RQ3 did not consider the transparency to students, which could be an important direction for future human-centred AI studies.
## 6 Conclusion
In this study, we systematically reviewed the current state of educational research on LLMs and identified several practical and ethical challenges that need to be addressed in order for LLMs-based innovations to become beneficial and impactful. Based on the findings, we proposed three recommendations for future studies, including updating existing innovations with state-of-the-art models, embracing the initiative of open-sourcing models/systems, and adopting a human-centred approach throughout the developmental process. These recommendations could potentially support future studies to develop practical and ethical innovations that can be implemented in authentic contexts to automate a wide range of educational tasks.
|
2307.00194
|
A Requirements-Driven Platform for Validating Field Operations of Small
Uncrewed Aerial Vehicles
|
Flight-time failures of small Uncrewed Aerial Systems (sUAS) can have a
severe impact on people or the environment. Therefore, sUAS applications must
be thoroughly evaluated and tested to ensure their adherence to specified
requirements, and safe behavior under real-world conditions, such as poor
weather, wireless interference, and satellite failure. However, current
simulation environments for autonomous vehicles, including sUAS, provide
limited support for validating their behavior in diverse environmental contexts
and moreover, lack a test harness to facilitate structured testing based on
system-level requirements. We address these shortcomings by eliciting and
specifying requirements for an sUAS testing and simulation platform, and
developing and deploying it. The constructed platform, DroneReqValidator (DRV),
allows sUAS developers to define the operating context, configure multi-sUAS
mission requirements, specify safety properties, and deploy their own custom
sUAS applications in a high-fidelity 3D environment. The DRV Monitoring system
collects runtime data from sUAS and the environment, analyzes compliance with
safety properties, and captures violations. We report on two case studies in
which we used our platform prior to real-world sUAS deployments, in order to
evaluate sUAS mission behavior in various environmental contexts. Furthermore,
we conducted a study with developers and found that DRV simplifies the process
of specifying requirements-driven test scenarios and analyzing acceptance test
results.
|
Ankit Agrawal, Bohan Zhang, Yashaswini Shivalingaiah, Michael Vierhauser, Jane Cleland-Huang
|
2023-07-01T02:03:49Z
|
http://arxiv.org/abs/2307.00194v1
|
# A Requirements-Driven Platform for Validating Field Operations of Small Uncrewed Aerial Vehicles
###### Abstract
Flight-time failures of small Uncrewed Aerial Systems (sUAS) can have a severe impact on people or the environment. Therefore, sUAS applications must be thoroughly evaluated and tested to ensure their adherence to specified requirements, and safe behavior under real-world conditions, such as poor weather, wireless interference, and satellite failure. However, current simulation environments for autonomous vehicles, including sUAS, provide limited support for validating their behavior in diverse environmental contexts and moreover, lack a test harness to facilitate structured testing based on system-level requirements. We address these shortcomings by eliciting and specifying requirements for an sUAS testing and simulation platform, and developing and deploying it. The constructed platform, DroneReqValidator (DRV), allows sUAS developers to define the operating context, configure multi-sUAS mission requirements, specify safety properties, and deploy their own custom sUAS applications in a high-fidelity 3D environment. The DRV Monitoring system collects runtime data from sUAS and the environment, analyzes compliance with safety properties, and captures violations. We report on two case studies in which we used our platform prior to real-world sUAS deployments, in order to evaluate sUAS mission behavior in various environmental contexts. Furthermore, we conducted a study with developers and found that DRV simplifies the process of specifying requirements-driven test scenarios and analyzing acceptance test results.
Safety Assurance, Requirements Specification, Small Uncrewed Aerial Systems, Digital Shadow, Cyber-Physical Systems
## I Introduction
With the rise of artificial intelligence, small Uncrewed Aerial Systems (sUAS) are imbued with increasingly complex decision-making capabilities, in order to perform missions autonomously in diverse environmental conditions [1]. As failures during operation can lead to severe accidents that are harmful to people, physical structures, or the environment, it is essential to specify safety requirements, design effective solutions, and establish a robust testing process, infrastructure, and corresponding monitoring tools for validating that the system satisfies its requirements prior to deployment [2, 3, 4, 5]. Environmental conditions, and their diverse combinations, especially those at the boundaries of an sUAS' operating capacity, can impact the behavior of an sUAS in unpredictable ways, and therefore, many accounts of sUAS flight failures due to problems such as radio interference [6], or high winds [7, 8] have occurred. This, in turn, means that functional tests must be executed under diverse conditions. For example, the requirement that _"An sUAS shall complete a flight composed of multiple waypoints in wind gusts of 23mb without colliding with stationary objects, the terrain, or other aircraft"_ needs to be operationalized within diverse test scenarios that specify the specific flight details, as well as additional environmental attributes such as wind direction, temperature, precipitation, visibility, and geographical information.
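To illustrate how such a requirement might be operationalized, the sketch below encodes one possible test scenario as a plain data structure; the field names and values are hypothetical and do not reflect DRV's actual configuration schema.

```python
# Hypothetical sketch: one test scenario operationalising the wind-gust
# requirement quoted above. All field names are illustrative assumptions.
scenario = {
    "requirement": "complete multi-waypoint flight in 23mb wind gusts "
                   "without colliding with objects, terrain, or aircraft",
    "environment": {
        "wind": {"gust_strength_mb": 23, "direction_deg": 270},
        "temperature_c": 12,
        "precipitation": "light_rain",
        "visibility_m": 800,
    },
    "mission": {
        # Each waypoint is a hypothetical (latitude, longitude, altitude_m).
        "waypoints": [(41.70, -86.24, 30), (41.71, -86.23, 30)],
        "num_suas": 1,
    },
    "safety_properties": [
        {"type": "min_separation_m", "value": 5},
        {"type": "geofence_radius_m", "value": 500},
    ],
}
```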
Performing rigorous software verification and validation (V&V) on Cyber-Physical Systems (CPS) in general, and sUAS in particular, is a time-consuming process that typically involves a combination of simulations and real-world testing to validate the correctness of system behavior under a range of conditions [9, 10, 11]. Furthermore, many tests cannot easily be conducted on physical sUAS, especially those that target, or even exceed operational boundaries, such as flying in extreme weather conditions or in (too) close proximity to objects or humans. However, critical differences between the simulation and the real-world environment can result in substantial back-and-forth testing between physical testing sites and developers, extending project development times and increasing costs. This problem is primarily attributable to (a) lack of tool support for developing realistic scenario simulations, (b) difficulties in identifying and/or modeling edge-case scenarios in the real-world environment, (c) isolated simulation environments that fail to consider interactions with sensors and physical devices used by humans to interact with the system, and (d) the lack of a structured process and platform for specifying, executing, analyzing, and testing diverse system requirements.
In practice, for the domain of sUAS, developers currently rely on simulations using 2D maps [12, 13, 14] or 3D simulation environments, such as Gazebo [15] or AirSim [16]. Gazebo [17], for example, facilitates sUAS simulations with limited automated support for incorporating realistic landscapes [18] and weather conditions, while AirSim provides high-fidelity weather simulations, but it lacks realistic flight conditions such as simulating real-world airspace restrictions, and mission-specific environmental elements, such as sim
|
2303.12067
|
Simple Two-wheel Self-Balancing Robot Implementation
|
Cyber-physical systems, also known as CPS, is an emerging field of technology
that combines the physical and digital worlds by allowing for seamless
interaction and communication between the two. One of the key characteristics
of a CPS is its ability to take input from its environment and use that
information to produce an output through actuators in the physical world. A
balancing robot is a prime example of a CPS, as it uses input from its sensors
to continually monitor its orientation and take action to prevent falling over
by generating thrust through its wheels or manipulating its inertia. In this
specific project, a two-wheel self-balancing robot was developed, utilizing the
concept of a reverse pendulum. A reverse pendulum by default is inherently
unstable and requires an external force to maintain its balance. In this case,
the balancing robot produces this external force through the use of wheels and
motors. To achieve precise balancing, stepper motors were utilized in the
design of the robot. Additionally, the robot has the capability to move in four
basic directions and the movement is controlled through an app connected to the
robot via Bluetooth. This allows for remote control and monitoring of the
robot's movements and actions. Overall, the development of this two-wheel
self-balancing robot serves as a demonstration of the potential and
capabilities of cyber-physical systems technology.
|
Sayed Erfan Arefin
|
2023-01-22T00:49:37Z
|
http://arxiv.org/abs/2303.12067v2
|
# Simple Two-wheel Self-Balancing Robot Implementation
###### Abstract.
Cyber physical systems, also known as CPS, is an emerging field of technology that combines the physical and digital worlds by allowing for seamless interaction and communication between the two. One of the key characteristics of a CPS is its ability to take input from its environment and use that information to produce an output through actuators in the physical world.
A balancing robot is a prime example of a CPS, as it uses input from its sensors to continually monitor its orientation and take action to prevent falling over by generating thrust through its wheels or manipulating its inertia. In this specific project, a two-wheel self-balancing robot was developed, utilizing the concept of a reverse pendulum. A reverse pendulum by default is inherently unstable and requires an external force to maintain its balance. In this case, the balancing robot produces this external force through the use of wheels and motors.
To achieve precise balancing, stepper motors were utilized in the design of the robot. Additionally, the robot has the capability to move in four basic directions and the movement is controlled through an app connected to the robot via Bluetooth. This allows for remote control and monitoring of the robot's movements and actions. Overall, the development of this two-wheel self-balancing robot serves as a demonstration of the potential and capabilities of cyber physical systems technology.
## 1. Introduction
Cyber-physical systems (CPS) are no longer just theoretical concepts but are becoming a reality in various industries. One of the most prominent applications of CPS is in the field of robotics, where the integration of physical and digital systems allows for the creation of advanced and autonomous machines. A self-balancing robot is a prime example of a CPS, as it is a simple yet sophisticated machine that utilizes input from its environment to maintain its balance and stability (Bradner, 2015).
There are various types of self-balancing robots, each with their own unique features and capabilities. These robots can be divided into categories based on their structure and functionality. Some examples include two-wheeled self-balancing robots, which balance on two wheels, and three-wheeled self-balancing robots, which use additional wheels for stability. Other types include self-balancing robots that use different types of sensors, such as inertial measurement units (IMUs), to monitor and control their balance and self-balancing robots that use advanced control algorithms, such as feedback control, to maintain their stability.
Regardless of their specific design, all self-balancing robots share the common goal of maintaining their balance and stability, utilizing advanced technology to achieve this goal. This makes them an excellent example of the capabilities and potential of cyber-physical systems technology.
* Two wheel self balancing robot
* One wheel self balancing robot
* Balancing cube. etc.
A two-wheel self-balancing robot is a type of robot that utilizes two wheels that are placed vertically on the same axis. The primary goal of the robot is to maintain its balance and prevent falling over. It achieves this by constantly monitoring its orientation and making adjustments to its movement, such as moving forward or backward, in order to maintain a stable position.
Additionally, this type of robot typically employs sensors, such as inertial measurement units (IMUs), to gather data on its orientation and movement. This data is then processed by a control algorithm, which can be implemented on an onboard microcontroller such as an Arduino, to determine the necessary adjustments to be made to the robot's movement in order to maintain balance.
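A minimal sketch of such a control loop is shown below, written in Python for readability even though the actual robot runs on an Arduino; the sensor and motor functions are hypothetical stand-ins, and the PID gains are illustrative, not the values used in this project.

```python
# Minimal sketch of the sensing-control loop: read the tilt angle from the
# IMU, run a PID controller, and command the stepper motors accordingly.
import time

def read_imu_pitch():
    """Hypothetical stand-in for the IMU read; returns pitch in degrees."""
    return 0.0  # replace with the fused accelerometer/gyroscope estimate

def set_motor_speed(speed):
    """Hypothetical stand-in for the stepper-motor driver command."""
    pass

KP, KI, KD = 25.0, 1.5, 0.8   # illustrative PID gains, tuned per robot
TARGET_PITCH = 0.0            # upright position, in degrees

integral, prev_error = 0.0, 0.0
for _ in range(1000):         # the real robot loops forever
    error = TARGET_PITCH - read_imu_pitch()
    integral += error
    derivative = error - prev_error
    # Drive the wheels toward the direction of the fall, producing the
    # external force that keeps the inverse pendulum upright.
    output = KP * error + KI * integral + KD * derivative
    set_motor_speed(output)
    prev_error = error
    time.sleep(0.005)         # ~200 Hz control loop
```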
Furthermore, the robot can also be controlled remotely through an app connected via Bluetooth, which allows for real-time monitoring and control of the robot's movement and actions. The precise balance of the robot is achieved through the use of precise motors, such as stepper motors.
Overall, the two-wheel self-balancing robot is a prime example of the integration of advanced technology, including sensors, control algorithms, and precise motors, to achieve a specific task, in this case maintaining balance (Bradner, 2015). It is also a perfect example of a cyber-physical system, which seamlessly integrates the physical and digital worlds.

A one-wheel self-balancing robot balances itself on a single wheel. It can do so by moving forward and backward and also by maintaining its inertia. The inertia can be maintained by spinning a heavy disc very fast. The concept of preserving inertia with a spinning disk is also used in a balancing cube. A balancing cube balances itself on one of its corners. There are three disks placed along the three axes of the robot. By spinning the disks at different rotational speeds, the robot preserves its inertia.
In this project, a two-wheel self-balancing robot was developed.
The paper is organized into the following sections. First, the model is discussed. Second, the methods used to develop this robot are discussed. Afterwards, the results are presented. Following this, the discussion and conclusion can be found.
## 2. Model
At first, UML diagrams were created to model the robot. The use case diagram of the robot is shown in Figure 1. The class diagram
|
2308.10502
|
GradientCoin: A Peer-to-Peer Decentralized Large Language Models
|
Since 2008, after the proposal of a Bitcoin electronic cash system, Bitcoin
has fundamentally changed the economic system over the last decade. Since 2022,
large language models (LLMs) such as GPT have outperformed humans in many
real-life tasks. However, these large language models have several practical
issues. For example, the model is centralized and controlled by a specific
unit. One weakness is that if that unit decides to shut down the model, it
cannot be used anymore. The second weakness is the lack of guaranteed
discrepancy behind this model, as certain dishonest units may design their own
models and feed them unhealthy training data.
In this work, we propose a purely theoretical design of a decentralized LLM
that operates similarly to a Bitcoin cash system. However, implementing such a
system might encounter various practical difficulties. Furthermore, this new
system is unlikely to perform better than the standard Bitcoin system in
economics. Therefore, the motivation for designing such a system is limited. It
is likely that only two types of people would be interested in setting up a
practical system for it:
$\bullet$ Those who prefer to use a decentralized ChatGPT-like software.
$\bullet$ Those who believe that the purpose of carbon-based life is to
create silicon-based life, such as Optimus Prime in Transformers.
The reason the second type of people may be interested is that it is possible
that one day an AI system like this will awaken and become the next level of
intelligence on this planet.
|
Yeqi Gao, Zhao Song, Junze Yin
|
2023-08-21T06:42:42Z
|
http://arxiv.org/abs/2308.10502v1
|
# GradientCoin: A Peer-to-Peer Decentralized Large Language Models
###### Abstract
Since 2008, after the proposal of a Bitcoin electronic cash system, Bitcoin has fundamentally changed the economic system over the last decade. Since 2022, large language models (LLMs) such as GPT have outperformed humans in many real-life tasks. However, these large language models have several practical issues. For example, the model is centralized and controlled by a specific unit. One weakness is that if that unit decides to shut down the model, it cannot be used anymore. The second weakness is the lack of guaranteed discrepancy behind this model, as certain dishonest units may design their own models and feed them unhealthy training data.
In this work, we propose a purely theoretical design of a decentralized LLM that operates similarly to a Bitcoin cash system. However, implementing such a system might encounter various practical difficulties. Furthermore, this new system is unlikely to perform better than the standard Bitcoin system in economics. Therefore, the motivation for designing such a system is limited. It is likely that only two types of people would be interested in setting up a practical system for it:
* Those who prefer to use a decentralized ChatGPT-like software.
* Those who believe that the purpose of carbon-based life is to create silicon-based life, such as Optimus Prime in Transformers.
The reason the second type of people may be interested is that it is possible that one day an AI system like this will awaken and become the next level of intelligence on this planet.
## 1 Introduction
**Large language models.** Language models serve as a fundamental building block of natural language processing (NLP) [11]. The origins of language models can be traced back to 1948, when Claude Shannon introduced the concept of Markov chains to model letter sequences in English text [20]. Because of the rapid increase in the availability of data and in the computational capabilities of Graphics Processing Units (GPUs), which provide very large datasets and the means to train on them, large language models (LLMs) now have remarkable capabilities of not only interpreting instructions from humans but also performing various tasks based on these instructions, like summarizing or paraphrasing a piece of text, answering simple questions based on the patterns and data they have learned during training, and using Chain of Thought (CoT) to deduce and answer complex questions, all of which can significantly enhance people's work efficiency.
The use of LLMs in various applications is expanding rapidly. The growth of LLMs has attracted a large amount of interest and investment in the industry, which has led to a significant rise in research publications. As an example given in [11], searching for "language models" in Google Scholar for the last five years generates around 50,000 publications, one-third of the approximately 150,000 papers published in the past 25 years. Moreover, closed-source LLMs are now being rapidly integrated into various applications. As Andrej Karpathy, a founding member of the AI research group of OpenAI, mentioned in Microsoft Build [14], we went from a world that is retrieval only, like the search engines Google and Bing, to memory only, like LLMs. After integrating these, people intend to get a framework which takes one particular document and the user's instruction or question as the inputs and outputs a response that is based only on the information provided by the input document. In the months following the release of ChatGPT, we have observed the emergence of several such integrated frameworks, like Bing Chat [15], Microsoft Copilots [21], and ChatGPT plugins [16]. These integrations are continuously expanding, with frequent announcements of new developments.
The open-sourced LLMs have the same applications but can be used for different purposes. Those who want to utilize LLMs to help analyze their data, but do not want to share their private data with the closed-source LLMs, instead use the open-sourced LLMs, like [22]. There are two strategies to choose a proper LLM. One is to evaluate the LLM from different aspects: language generation and interpretation [17, 18], knowledge utilization [23, 24, 25, 26, 27], and complex reasoning [28, 29, 30, 31].
However, the disadvantages of centralized exchanges include
* containing the service fee,
* being controlled by a centralized entity, which might shut down in the future, and
* being vulnerable to being attacked.
Decentralized exchanges, on the other hand, do not contain such a platform. Individuals may directly engage in transactions with each other. In these exchanges, transactions are facilitated by self-executing agreements known as smart contracts, which are written in code. The advantages of decentralized exchanges include
* being completely anonymous and private,
* no need to transfer the currency to the third party, and
* no fees.
However, the disadvantages of decentralized exchanges include
* engaging in transactions using the government-issued currency is prohibited and
* the liquidity level is low compared to centralized exchanges, which makes it more difficult to execute larger orders effectively.
**Carbon-based life vs silicon-based life.** When we search for life outside the Earth, we usually look for the same style of life as on Earth: carbon-based life. However, many works of science fiction [1] suggest the possibility of silicon-based life. Since the proposal of silicon-based life, an interesting question has remained:
Is silicon-based life able to produce itself, or does silicon-based life have to be created by carbon-based life?
Due to the success of large language models and ChatGPT, it might be possible that silicon-based life will be created by humans one day. Currently, the number of parameters in a ChatGPT model is still significantly less than the number of neurons in even a single human's brain. Imagine, one day, if the technology permitted, we could embed a super large number (bigger than the human brain size) of parameters into a super tiny disk.
**The Force Wakeup.** Nowadays, with the development of AI models, two prominent viewpoints have emerged. These viewpoints offer contrasting perspectives on the future trajectory of artificial intelligence. The first viewpoint posits that humans will retain control over AI systems, as in [12, 13, 14], utilizing them as tools to benefit human society, like curing our diseases and rectifying our mortal bodies to extend our lifespan.
Contrarily, the second viewpoint presents a more radical perspective, suggesting that human society will eventually be replaced by machines, as in [15, 16]. The rapid growth of AI models and their potential for self-improvement will ultimately lead to machines surpassing human intelligence. This viewpoint raises concerns about the possibility of machines becoming autonomous and self-aware entities that could eventually supersede human dominance; that is, silicon-based life is created by, but ultimately replaces, carbon-based life.
**AI-Safety.** Although the success of LLMs in various downstream tasks [16, 17, 18] has shown very impressive capabilities of AI models, which can greatly promote the progress of people's acquisition of knowledge and the development of different industries, many researchers, AI experts, and technology company founders and CEOs think that we should suspend the current AI research and rethink the safety issues of generative AI [1]. The motivation stems from public safety concerns: with such strong computational ability and knowledge storage, will AI models, one day, use their intelligence against the development of human society, provide suggestions to people with unethical purposes, or even replace humans? Therefore, we need to treat this issue carefully and construct a safe environment for the use of AI models to prevent these potential problems from happening.
Moreover, besides general safety concerns, researchers also design unique requirements for different industries in which AI models can be applied. Diverse categories of artificial intelligence models are developed to meet the unique requirements of individuals and organizations, but in each of these categories, different equality, property right, and safety issues may appear. For example, [13] consider the impact of the development of generative AI models on art. These AI models can generate images, but how can we determine their authorship, and how can we know where the images are sourced? Therefore, [13] suggest that four aspects should be considered, namely aesthetics and societal values, legal inquiries regarding ownership and credit, the long-term development of the creative work, and the effects on the present-day media environment. [14] consider the influence of AI models on autonomous vehicles and propose that autonomous drone trajectories should be restricted in crowded regions.
**Our Motivations & Contributions.** In this paper, we propose a theoretical design of a decentralized LLM that can operate within the decentralized transaction system. Our motivation is to introduce the concept of decentralized LLMs to the public, enabling clients to utilize LLMs for their work without concerns about centralized LLM companies taking down their products.
Second, our decentralized LLM prevents sensitive information from being transmitted to a third party. The centralized LLM server is administered by a third party, giving them access to the data we intend to process using LLM. Conversely, utilizing the decentralized LLM can help circumvent the need to transmit sensitive information to a third party, thereby ensuring the privacy of users' data.
Third, our decentralized LLM does not provide biased answers. Centralized parties could potentially train their LLM in a biased manner by providing a skewed training dataset to the LLM for their own gain. In the short term, when the training dataset is relatively small, our decentralized LLM might be significantly impacted by biased information if some individuals intentionally use it to train the model. However, over the long term, we firmly believe that our decentralized LLM will remain unaffected by this biased information due to the vast scale of the dataset, rendering the influence of biased information negligible on the overall performance of the model.
Fourth, while open-sourced LLMs may provide some level of data privacy protection, it also leads to another problem: it is highly costly to train these models. Most local users aim to use LLMs to assist with their work rather than investing time and energy in training machine learning models. On a broader societal scale, having different local users training separate open-sourced LLMs leads to inefficient utilization of human resources. It is similar to millions of people independently working on the same project. Decentralized LLM, on the other hand, is the combination of users' efforts: people train it collaboratively and use it collaboratively.
Our proposed decentralized LLM may efficiently solve these problems:
* local users can use LLMs without worrying about potential takedowns of centralized models;
* local users may safely use the LLM to help with their tasks without worrying about data leakage;
* decentralized LLM avoid the biased training dataset, provided by the central authority, influencing the LLM;
* decentralized LLM eliminates the need for redundant model training, which optimizes the overall resource allocation within human society.
**Notations.** We define \([n]:=\{1,\dots,n\}\). We use \(\mathbb{R}\), \(\mathbb{R}^{d}\), and \(\mathbb{R}^{n\times d}\) to denote the set of all real numbers, the set of \(d\)-dimensional vectors with real entries, and the set of \(n\times d\) matrices with real entries. For \(A\in\mathbb{R}^{n\times d}\), \(A_{i,j}\) represents the entry of \(A\) in the \(i\)-th row and \(j\)-th column. For \(w\in\mathbb{R}^{n}\), we use \(w_{i}\) to denote the \(i\)-th entry of \(w\), use \(\|w\|_{2}:=(\sum_{i\in[n]}|w_{i}|^{2})^{1/2}\) to denote the \(\ell_{2}\) norm, and use \(\operatorname{diag}(w)\in\mathbb{R}^{n\times n}\) to denote the diagonal matrix which satisfies \(\operatorname{diag}(w)_{i,i}=w_{i}\) for all \(i\in[n]\). For \(X\in\mathbb{R}^{d\times d}\), we have \(\operatorname{vec}(X)\in\mathbb{R}^{d^{2}}\), satisfying \(X_{i,j}=\operatorname{vec}(X)_{(i-1)\times d+j}\). We use \(I_{d}\) to denote the \(d\times d\) identity matrix. \(\mathsf{A}_{[j]},\mathsf{A}_{[j],*}\in\mathbb{R}^{n\times d^{2}}\) both denote the matrix whose rows are from \((j-1)\cdot n+1\) to \(j\cdot n\) of \(\mathsf{A}\in\mathbb{R}^{n^{2}\times d^{2}}\). \(\mathbb{E}[\cdot]\) represents the expectation. \(\sigma_{\min}(B)\) denotes the minimum singular value of a matrix \(B\). \(\langle\cdot,\cdot\rangle\) denotes the inner product of two vectors. \(x_{t}\) is the \(t\)-th iteration. \(\Delta x_{t}\) denotes the change of \(x_{t}\). \(\eta\) is the learning rate of the algorithm. \(\nabla f\) represents the gradient of the function \(f\). For symmetric matrices \(B\) and \(C\), \(B\succeq C\) if for all \(x\), \(x^{\top}Bx\geq x^{\top}Cx\).
**Roadmap.** In Section 2, we present the related research papers. In Section 3, we present the fundamental features of our decentralized LLM, the gradient coin system. In Section 4, we introduce the security setup of the gradient coin system. In Section 5, we show the convergence of the gradient coin system. In Section 6, we discuss the strengths and weaknesses of the gradient coin compared to the centralized LLM system.
## 2 Related Work
In this section, we provide the related work of our paper. Our theoretical decentralized LLM framework is a combination of multiple research areas and can address the weaknesses of centralized systems. Thus, we first present the weaknesses of centralized large-scale LLM training from recent research works. Next, we present the related theoretical LLM research. Following that, we introduce the research of the Bitcoin system, a decentralized transaction system that inspired us to propose the concept of the decentralized LLM. Finally, we introduce research works about federated learning.
**Large Scale LLMs Training.** In recent years, LLMs have been growing rapidly: many models have been proposed, like GPT-3 [1], PaLM [10], and LaMDA [14]. These LLMs have shown impressive ability in language generation, question answering, and other natural language tasks.
There has been a significant shift in the use of Large Language Models (LLMs) with self-supervised pre-training in Natural Language Processing (NLP) due to studies such as BERT [1] and the Transformer architecture [17]. Various masked language models have consistently increased in size, including T5 [16] and MegatronLM [18]. For example, consider the Auto-regressive language models: the model size has shown substantial growth, starting from 117 million parameters [19] and expanding to over 500 billion parameters
[CND\({}^{+}\)22, SPN\({}^{+}\)22] as demonstrated by [10]. While numerous large models are being developed [14, CND\({}^{+}\)22, LSLS21, SPN\({}^{+}\)22, TDFH\({}^{+}\)22], all of them are accessible only internally or through paid API services. There have been limited efforts toward creating large open-source LLMs as the cost of training such a large model is very high.
Moreover, an increase in the model size does not necessarily lead to the improvement of the functionality of LLMs: training is also an important factor that may influence it. A growing body of work has aimed to elucidate the inner workings of LLMs. [2] argues that the versatility of LLMs emerges from pre-training at scale on broad data. As the model becomes more expressive and the training distribution becomes narrower, the potential for exploiting inaccurate correlations in the training dataset significantly increases. This poses a challenge for the fine-tuning and pre-training paradigm. During pre-training, models are designed to acquire a substantial amount of information; however, during fine-tuning, these models can become limited to very narrow task distributions. For instance, [11] observes that larger models might not necessarily exhibit better generalization beyond their training data. Evidence suggests that under this paradigm, large models tend to lack generalization beyond their training distribution, leading to poor generalization [13, 14]. Consequently, actual performance is likely to be overemphasized on specific tasks, even when the large model is nominally considered to be at a human level [1, 10].
Decentralized LLMs do not have the problems shown above. They are trained by all the users, so the training dataset is vast and diverse.
**Theoretical LLMs.** Several theoretical works have focused on analyzing the representations learned by LLMs. [13] found that semantic relationships between words emerge in LLMs' vector spaces as a byproduct of the pre-training objective. [12] studied how syntactic knowledge is captured in LLMs, finding an explicit difference in syntactic information between layers.
From an optimization perspective, [10] proposed the neural scaling hypothesis, which holds that increases in model size lead to qualitatively different generalization properties by altering the loss landscape. This offers insights into the benefits of scaling up LLMs.
Numerous research papers delve into the knowledge and skills of LLMs. [13] study distinct 'skill' neurons, which are identified as strong indicators of downstream tasks during the process of soft prompt-tuning, as described by [10], for language models. [15] analyze knowledge neurons in BERT and discover a positive correlation between the activation of knowledge neurons in BERT and the expression of their corresponding facts. Meanwhile, [11] extract latent knowledge from the internal activations of a language model using a completely unsupervised approach. Furthermore, research by [14, 1] reveals that language models localize knowledge within the feed-forward layers of pre-trained models. [12] investigate the feasibility of selecting a specific subset of layers to modify and determine the optimal location for integrating the classifier. This endeavor aims to reduce the computational cost of transfer learning techniques like adapter-tuning and fine-tuning, all while preserving performance. Lastly, [11] demonstrate that feedforward activations exhibit sparsity in large trained transformers.
Finally, there are research works analyzing the multi-task training of LLMs. [16] propose a principled approach to designing compact multi-task deep learning architectures. [17] learn Multilinear Relationship Networks (MRN) that discover task relationships to enhance performance. [18] introduce a novel sharing unit known as 'cross-stitch' units, which combine activations from multiple networks and can be trained end-to-end. In the field of NLP, Multi-task training has also been explored in previous works [19, 10, 18, 15], all of which involve training additional task-specific parameters. Furthermore, [13, 14, 15] conduct mathematical analysis of finetuning, prompt-tuning, and head-tuning of language models
for few-shot downstream tasks. The attention unit is a fundamental scheme in LLMs, and a number of recent works study it from a computational perspective [23, 24, 25, 26, 27, 28].
**Bitcoin.** After its introduction in 2008 [29], Bitcoin garnered significant attention from researchers. Numerous research studies have analyzed various aspects of the Bitcoin system. Early investigations focused on scrutinizing the privacy guarantees and vulnerabilities of Bitcoin.
In [13], the analysis delved into Bloom filters leaking information for simplified clients. The transaction propagation protocol was examined in [15], while the CoinShuffle decentralized mixing technique, both utilized to enhance Bitcoin system anonymity, was assessed in [14]. Zerocash was examined by [16], who introduced zero-knowledge proofs to enable private transactions on a public blockchain. On the performance front, Bitcoin-NG [17] segregated mining into roles to enhance throughput. Other research efforts have concentrated on security properties [18, 19, 20, 21], game-theoretic analyses [22, 23, 24, 25], and network measurements [15, 26, 27, 28] within the Bitcoin system. These works provide crucial background for new research in this field.
**Federated Learning.** Within distributed deep learning, federated learning (FL) is a novel and emerging concept with many applications, including autonomous vehicles [14], the financial area [23], mobile edge computing [29, 20, 27, 28], and healthcare [13, 15, 16, 17, 18, 19]. There are two approaches to FL: 1) empowering multiple clients to collaboratively train a model without the necessity of sharing their data [15, 16, 22, 24], or 2) using encryption techniques to enable secure communication among different parties [20]. Our work is related to the first approach. In this learning framework, individual local clients perform the majority of computations, while a central server updates the model parameters by aggregating these updates and subsequently distributing the updated parameters to the local models [1, 22, 23]. Consequently, this approach upholds data confidentiality across all parties.
In contrast to the conventional parallel setup, FL encounters three distinct challenges [11]: communication expenses [25, 26, 27, 28, 29, 30, 31], variations in data [1], and client resilience [20]. The study in [22] focuses on the first two challenges. The training data are widely scattered across an extensive array of devices, and the connection between the central server and each device tends to be sluggish. This leads to slow communication, motivating the development of communication-efficient FL algorithms, as demonstrated in [22]. The Federated Average (FedAvg) algorithm [27] initially addressed the communication efficiency issue by introducing a global model to combine updates from local stochastic gradient descent. Subsequently, numerous adaptations and variations have emerged. These encompass a diverse range of techniques, such as improving optimization algorithms [11, 22, 29, 30], adapting models for diverse clients under specific assumptions [11, 20, 31, 32], and utilizing concise and randomized data structures [26]. The study conducted by [11] presents a provably guaranteed FL algorithm designed for training adversarial deep neural networks.
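A minimal sketch of the FedAvg idea follows, using plain NumPy and synthetic data so it is self-contained; it is an illustration of the parameter-averaging scheme, not the algorithm as specified in [27].

```python
# Minimal FedAvg sketch: each client runs local full-batch gradient steps on
# its own private data, and the server averages the resulting parameters.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_update(w, n=50, epochs=5, lr=0.05):
    """One client's local gradient steps on private synthetic data."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / n   # gradient of mean squared error
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(10):                        # communication rounds
    client_weights = [local_update(w_global.copy()) for _ in range(5)]
    w_global = np.mean(client_weights, axis=0)   # server-side averaging
print(w_global)   # approaches true_w without sharing any raw data
```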
The research presented in [11] explores the convergence of one-layer neural networks. Additionally, the work in [12] provides convergence guarantees for FL applied to neural networks. However, this approach has a limitation, as it assumes that each client completes a single local update epoch. An alternative set of approaches, exemplified by [11, 20, 30],
does not directly apply to neural networks. Instead, they rely on assumptions about the smoothness and convexity of objective functions, which are impractical when dealing with nonlinear neural networks.
## 3 Fundamental Features of Gradient Coin
In Section 3.1, we give the formal definition of gradient coin. In Section 3.2, we introduce the training procedure of the gradient coin system. In Section 3.3, we introduce the transaction mechanism of our gradient coin system.
### Incentive Mechanism of the Gradient Coin System
Our gradient coin system consists of two important components: gradient coin and gradient block. The gradient block is used for training the decentralized LLM, and the gradient coin serves as the currency used in our gradient coin system. It is also an incentive for the people who train the model.
**Definition 1** (Digital Signature).: _A digital signature is a mathematical scheme used to verify the authenticity, integrity, and non-repudiation of digital data. It involves the use of a cryptographic algorithm that combines a private key with the data being signed to produce a digital signature. The signature can be verified using the corresponding public key._
**Definition 2** (Gradient Coin).: _The gradient coin is a chain of digital signatures._
Each owner digitally signs a hash of the previous transaction, along with the public key of the next owner, and appends these signatures to the coin. When the payee receives the coin, they can verify the signatures to ensure the integrity and authenticity of the ownership chain.
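As an illustration of Definition 2 and the ownership chain just described, here is a minimal Python sketch. The `toy_sign`/`toy_verify` pair is a keyed-hash stand-in for a real public-key signature scheme such as ECDSA (a real verifier would use only the public key), and all names are hypothetical.

```python
import hashlib

def sha256(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Toy key pairs; a real system would use an asymmetric scheme such as ECDSA.
KEYS = {"alice_pub": "alice_priv", "bob_pub": "bob_priv", "carol_pub": "carol_priv"}

def toy_sign(priv: str, msg: str) -> str:
    return sha256(priv + msg)  # keyed hash standing in for a signature

def toy_verify(pub: str, msg: str, sig: str) -> bool:
    return toy_sign(KEYS[pub], msg) == sig  # demo-only: looks up the private key

def transfer(coin: list, next_owner_pub: str, current_owner_priv: str) -> None:
    """Append one link: sign hash(previous transaction || next owner's public key)."""
    prev_tx_hash = sha256(repr(coin[-1])) if coin else sha256("genesis")
    msg = prev_tx_hash + next_owner_pub
    coin.append({"msg": msg, "sig": toy_sign(current_owner_priv, msg),
                 "owner_pub": next_owner_pub})

def payee_verifies(coin: list, signer_pubs: list) -> bool:
    """The payee checks every signature along the ownership chain."""
    return all(toy_verify(p, link["msg"], link["sig"])
               for p, link in zip(signer_pubs, coin))

coin: list = []
transfer(coin, "bob_pub", "alice_priv")    # Alice pays Bob
transfer(coin, "carol_pub", "bob_priv")    # Bob pays Carol
print(payee_verifies(coin, ["alice_pub", "bob_pub"]))  # True
```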
### Training Procedure
In this section, we introduce the training procedure.
**Definition 3** (User).: _We define a user (see Algorithm 1) as an individual who contributes computational resources and provides transactions to the system._
Each user can be seen as a computing unit in the federated system. In our gradient coin system, these users are the ones who train and use the LLM.
**Definition 4** (Gradient Block).: _We define a gradient block (see Algorithm 6) which contains the following values:_

* _Prev Hash (used to link the chain): prevhash_
* _List of transactions: \(\{\text{transaction}^{(i)}\}_{i=1}^{k}\)_
* _Gradient: \(\Delta x_{t}\)_
**Definition 5** (Chain Of Gradient Block).: _We define a chain of gradient blocks (see Algorithm 7) where each gradient block is linked to the previous one through the **Prev Hash** attribute (see Definition 4)._
Drawing inspiration from the proof-of-work chain (a computational puzzle, which we formally define in Section 4.1) in Bitcoin as discussed in [20], we introduce the concept of a gradient block that incorporates transaction information. Each of these gradient blocks forms a linked chain through a previous-hash attribute. This makes our gradient coin system different from the Bitcoin system: computing the gradient is the key step in training the decentralized LLM. Within each gradient block, the gradient of the corresponding step is stored. As a new gradient block becomes part of the chain, it becomes visible to all users. The transactions contained within the block are also made public, indicating their acceptance by the user community. Once a specific user has computed the gradient and solved the proof-of-work, that user can attach the corresponding gradient block to the gradient blockchain.
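The linking structure of Definitions 4 and 5 can be sketched in a few lines of Python; the field names and the use of a plain list for \(\Delta x_{t}\) are illustrative assumptions, not the system's actual encoding.

```python
import hashlib, json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_gradient_block(prev_hash, transactions, gradient):
    """A gradient block as in Definition 4: prev hash, transactions, gradient."""
    return {"prev_hash": prev_hash, "transactions": transactions,
            "gradient": gradient}

def append_block(chain: list, transactions, gradient) -> None:
    """Link a new block via the Prev Hash attribute (Definition 5)."""
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    chain.append(make_gradient_block(prev_hash, transactions, gradient))

def chain_is_valid(chain: list) -> bool:
    """Any tampering with an earlier block breaks every later prev_hash."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev_hash"] != block_hash(prev):
            return False
    return True

chain: list = []
append_block(chain, ["A pays B"], [0.1, -0.2])  # stored gradient for step 1
append_block(chain, ["B pays C"], [0.05, 0.3])  # stored gradient for step 2
print(chain_is_valid(chain))       # True
chain[0]["gradient"][0] = 9.9      # tamper with a published gradient
print(chain_is_valid(chain))       # False
```

Because each block commits to the hash of its predecessor, altering any stored gradient or transaction invalidates every later link, which makes the published gradients tamper-evident.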
### Transaction System
Built upon the foundation of the gradient blockchain and the user concept, we now outline how our system functions and why it operates in a peer-to-peer manner. When a transaction is broadcast across the network, all users collect the transaction and integrate it into their local block. Subsequently, they engage in gradient computation based on their individual data. The user who completes the computation first adds the gradient block to the chain and shares this update with others. As other users continue their work after the block is added, all transactions within that block gain acceptance from users. Throughout this entire process, there is no reliance on a trusted third party. The training procedure is shared among all users without being controlled by any specific entity. This ensures that our system and AI remain immune to manipulation by any single participant.
Furthermore, transactions and the training procedure operate in tandem. Each user consistently contributes their computational resources to the training procedure, facilitating a collaborative effort. As in a common transaction system, there are certain basic operations such as adding new users, querying the remaining balance, and user login authentication. However, to present our contribution more clearly, in this paper we focus only on the following procedures, which are directly related to gradients and transactions.
* Creating a transaction.
* Adding a block to the gradient chain (see Algorithm 5).
* Obtaining gradient coins.
**Theorem 6**.: _We have a transaction creating algorithm (see Algorithm 8) such that_
* _The overall training procedure converges with the gradient blocks (see Theorem_ 15_)._
* _Transactions are conducted peer-to-peer without the involvement of any third party._
* _The system remains secure when the computational abilities of malicious users are inferior to those of regular users._
```
1: data structure GradientCoinSystem
2: member
3: List of Users: \(\{\text{user}^{(i)}\}_{i=1}^{m}\)
4: GradBlockChain gradchain \(\triangleright\) see Definition 5.
5: Training Step: \(t\in\mathbb{R}\)
6: The initialized weight: \(x_{0}\)
7: end member
8: procedure TransactionCreating(\(\{\text{trans}^{(j)}\}_{j=1}^{k}\))
9: for \(i\in[m]\) do
10: \(\{\text{trans}^{(j)}\}_{j=1}^{k}\) are broadcast to \(\text{user}^{(i)}\).
11: GradientBlock block\({}^{(i)}\)
12: block\({}^{(i)}\).AddTrans(\(\text{user}^{(i)},\{\text{trans}^{(j)}\}_{j=1}^{k}\))
13: user\({}^{(i)}\) works on the computation of proof-of-work.
14: end for
15: When a \(\text{user}^{(f)}\) finishes the computation of proof-of-work, it broadcasts the block to \(\{\text{user}^{(i)}\}_{i\neq f}\).
16: block\({}^{(f)}\).\(\Delta x_{t}\leftarrow\text{user}^{(f)}\).Grad(gradchain, \(x_{0}\))
17: \(x\leftarrow\text{block}^{(f)}\).\(\Delta x_{t}\)
18: gradchain.Add(\(x\), \(t\))
19: if \(\{\text{trans}^{(j)}\}_{j=1}^{k}\) are valid and not already spent then
20: for \(i\in[m]\) do
21: user\({}^{(i)}\) shows acceptance by participating in extending the gradchain based on block\({}^{(f)}\).
22: end for
23: end if
24: end procedure
25: end data structure
```
**Algorithm 2** Data Structure for Gradient Coin System
## 4 Security Setup of Gradient Coin
Our gradient coin system employs security methods similar to those of the Bitcoin system [14]. In Section 4.1, we introduce the proof-of-work. In Section 4.2, we introduce the timestamp server. In Section 4.3, we formally define what a safe system is and show that our decentralized LLM is safe.
### Proof-of-Work
In this section, we introduce the basic setup of the proof-of-work.
Proof-of-work is a computational puzzle that miners (participants who validate and add transactions to the blockchain) need to solve in order to add new blocks to the blockchain and earn rewards. Each block contains a nonce, which acts as a random value that requires users' computational efforts to find a specific number with corresponding zero bits. This process is computationally intensive. This mechanism ensures the fair distribution of incentives.
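A minimal sketch of this nonce search is given below; the eight-byte nonce encoding, the field layout of `block_data`, and the difficulty value are assumptions made for illustration.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

def proof_of_work(block_data: bytes, difficulty: int) -> int:
    """Increment a nonce until the block hash has `difficulty` leading zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

data = b"prev_hash|transactions|gradient"   # placeholder block contents
nonce = proof_of_work(data, difficulty=12)  # roughly 2^12 expected attempts
print(nonce)
```

Raising the difficulty by one bit doubles the expected number of attempts, which is how the cost of rewriting history scales.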
**Definition 7** (Chain of Proof-of-Work).: _We define a chain in which each node represents a proof-of-work. Blocks contain the following objects:_
* _Prev Hash_: _Users incorporate the previous Hash as a component of the new proof-of-work to signify their acceptance of the current transactions._
* _Nonce_: _By incrementing a nonce in the block, users implement the proof-of-work until they find a value that results in the block's hash having the required number of leading zero bits._
* _Lists of Transactions_: _We use it to indicate the current transaction records._
### Timestamp Server
The primary purpose of the timestamp is to provide evidence that the data must have existed at the specified time since it is integrated into the hash. Additionally, each timestamp includes the previous timestamp in its hash, forming a chain where each subsequent timestamp reinforces the validity of the preceding ones. Our block design and gradient block (see Definition 4) are both identified by the hash, with the timestamp ensuring their temporal integrity. Given this condition, using the "prev hash" (see Definition 7), we can access the previous hash. By utilizing this system, users can obtain real-time updates of the block.
**Definition 8** (Timestamp Server).: _A timestamp server is a component that operates by_
* _Taking a hash of a block of items to be timestamped._
* _Widely publishing the resulting hash._
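A compact sketch of this chaining follows; reducing "widely publishing" to returning a hex digest, and the JSON payload layout, are simplifying assumptions.

```python
import hashlib, json, time

def timestamp(items: list, prev_stamp: str) -> str:
    """Hash a block of items together with the previous stamp (Definition 8)."""
    payload = json.dumps({"items": items, "prev": prev_stamp, "t": time.time()},
                         sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

s0 = timestamp(["block 0"], prev_stamp="genesis")
s1 = timestamp(["block 1"], prev_stamp=s0)  # s1 commits to s0 and everything before it
print(s0, s1)
```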
### System Safety
Only the longest chain is committed within the system, and only users with the highest computational resources can maintain it. If a person tries to rewrite the transaction record, they must maintain the longest chain in the system, which requires the most computational resources.
**Lemma 9** (Safe System).: _When the computational capacity of regular users exceeds the resources available to malicious users, the system is secure. Our gradient coin system is safe._
## 5 Convergence of Gradient Coin System
To establish the validity of our Gradient Coin system, we demonstrate the convergence of our training mechanism. At a conceptual level, we showcase the \(\mu\)-strong convexity and \(M\)-Lipschitz properties of our loss function (for more information, refer to Section C). Furthermore, leveraging the concept of \(K\)-steps local gradient computation, we establish through induction the expectation of the disparity between optimal weights and current weights. By combining this insight with the aforementioned property, we also control the upper bound of loss, resulting in the successful achievement of convergence within our distributed learning system.
In Section 5.1, we introduce the basic definitions of convex and smooth. In Section 5.2, we present the softmax loss of the LLM. In Section 5.3, we present the key property of the gradient coin system.
### Convex and Smooth
In the proof of convergence, we need to establish a bridge between the loss and weights, relying on the following property:
**Definition 10** (\(\mu\)-Strongly Convex).: _We say a function \(L:\mathbb{R}^{d}\to\mathbb{R}\) is \(\mu\)-strongly convex if \(\nabla^{2}L(x)\succeq\mu\cdot I_{d},\) where \(\mu>0.\)_
**Definition 11** (\(l\)-Smooth).: _Let \(x\) and \(y\) be in \(\mathbb{R}^{d}\). Let \(l>0\) be a real number. We say a function \(L:\mathbb{R}^{d}\to\mathbb{R}\) is \(l\)-smooth if_
\[\|\nabla L(x)-\nabla L(y)\|_{2}\leq l\cdot\|x-y\|_{2}.\]
_(It is equivalent to saying the gradient of \(L\) is \(l\)-Lipschitz)_
**Definition 12** (\(M\)-Lipschitz).: _Let \(x\) and \(y\) be in \(\mathbb{R}^{d}\). Let \(M>0\) be a real number. We say a function \(L:\mathbb{R}^{d}\to\mathbb{R}\) is \(M\)-Lipschitz if_
\[|L(x)-L(y)|\leq M\cdot\|x-y\|_{2}.\]
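Definitions 10-12 can be checked numerically on a toy quadratic loss \(L(x)=\frac{1}{2}x^{\top}Hx\), whose extreme Hessian eigenvalues give valid choices of \(\mu\) and \(l\); the data in this sketch are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
H = A.T @ A + 0.5 * np.eye(4)   # Hessian of L(x) = 0.5 * x^T H x, positive definite

def grad(x):
    return H @ x                # gradient of the quadratic loss

eigs = np.linalg.eigvalsh(H)
mu, l = eigs.min(), eigs.max()  # H >= mu*I (Definition 10); l bounds the gradient's Lipschitz constant
print(mu, l)

x, y = rng.normal(size=4), rng.normal(size=4)
lhs = np.linalg.norm(grad(x) - grad(y))
print(lhs <= l * np.linalg.norm(x - y) + 1e-12)  # True: l-smoothness (Definition 11)
```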
### Softmax Loss of LLMs
The detailed definition and proof of our loss function are deferred to Section C. Here, we present our main lemma to demonstrate the convex and smooth properties. The construction of the following loss is based on attention computation, which is a conventional mechanism in LLMs.
**Definition 13**.: _For each \(j_{1}\in[n]\), we define \(L_{j_{1}}(x):=L_{\exp,j_{1}}(x)+L_{\mathrm{reg},j_{1}}(x)\) and \(L(x):=\sum_{j_{1}=1}^{n}L_{j_{1}}(x).\)_
Fortunately, our loss function adheres to the following criteria. Here, the matrix \(\mathsf{A}\in\mathbb{R}^{n^{2}\times d^{2}}\) represents the attention matrix, and \(x\) signifies the trained weights.
**Lemma 14** (Strongly Convex and Lipschitz).: _Let \(L_{j_{1}}\) and \(L\) be defined as in Definition 13. Let \(W=\operatorname{diag}(w)\in\mathbb{R}^{n\times n}\) and \(\mathsf{A}_{[j]}\in\mathbb{R}^{n\times d^{2}}\). Let \(\min_{i\in[n]}w_{i}^{2}\geq 4+\mu/(\sigma_{\min}^{2}(\mathsf{A}_{[j]})n)\) for all \(j\in[n]\). Then, we have_
* \(L\) _is_ \(\mu\)_-strongly convex._
* \(L\) _is_ \(l\)_-smooth._
### Distributed Learning
Building upon the methods for adding blocks and computing gradients introduced above, we now integrate them with the federated learning algorithm to demonstrate how our approach ensures the convergence of training LLMs.
**Theorem 15** (Convergence).: \(L\) _is defined in Definition 13. Let \(K\) be the amount of the local steps. Let \(\eta\leq\frac{1}{8(1+\alpha)LK}\) (see Theorem 60). Let \(x_{0}\), \(x_{T+1}\) be defined as in Algorithm 5. Let \(\sigma^{2}=\frac{1}{N}\sum_{c=1}^{N}\|\nabla f_{c}(x^{*})\|^{2}\). Then, we have_
\[\mathbb{E}[f(x_{T+1})-f(x^{*})]\leq\frac{L}{2}\,\mathbb{E}[\|x_{0}-x^{*}\|_{2} ^{2}]e^{-\mu\eta T}\]
_where \(x^{*}\) is the optimal weight in the procedure of training._
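To read the bound concretely: for a target accuracy \(\varepsilon>0\), the right-hand side drops below \(\varepsilon\) once

\[T\geq\frac{1}{\mu\eta}\ln\Big(\frac{L\,\mathbb{E}[\|x_{0}-x^{*}\|_{2}^{2}]}{2\varepsilon}\Big),\]

so the number of rounds grows only logarithmically in \(1/\varepsilon\); the geometric factor \(e^{-\mu\eta T}\) is inherited from the \(\mu\)-strong convexity established in Lemma 14.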
## 6 Discussion and Conclusion
We have presented a theoretical framework for integrating a decentralized LLM into a transaction system using Gradient Coin. In comparison to centralized systems, our decentralized LLM benefits from a substantial and diverse pool of training data. The evaluation criteria for centralized LLMs, as outlined in [10], include robustness, ethics, bias, and trustworthiness. Due to the diverse and large-scale training dataset, we posit that our decentralized LLM model exhibits greater robustness and trustworthiness than its centralized counterparts. Furthermore, users need not be concerned about the centralized party taking down their LLM and accessing their private data.
Nonetheless, in the short run, the absence of a centralized organization overseeing the ethical and biased aspects of the training data raises the possibility of such issues manifesting within the decentralized LLM. However, we believe that over time, with an increasing volume of data used to train this model, the influence of biased and unethical information will become negligible. Thus, these factors will not significantly impact the overall performance of the decentralized LLM. Furthermore, in the long run, without intentional control of the training dataset by a central party, we believe that the decentralized LLM will exhibit greater unbiasedness.
A limitation of the decentralized LLM is that shutting it down is very difficult [11]. This problem is crucial in the context of machine learning models due to their strong computational ability and knowledge storage capacity. If such models were one day to act against human interests, as in the scenes depicted in [1, 2], how could we effectively stop them? This question requires careful consideration before implementing the decentralized LLM model.
In summary, our training procedure for the LLM remains independent of any specific company or organization, making it an ideal model for future AI frameworks. Simultaneously, this mechanism can encourage user contributions to enhance the AI system's execution, ensuring its efficiency.
## Appendix
Roadmap.In Section A, we introduce the notations and the basic mathematical facts. In Section B, we introduce the structure of the Bitcoin system. In Section C, we define a list of the functions and compute the gradient. In Section D, based on the previous gradient, we compute the second-order derivative, namely the hessian. In Section E, we present the sketching. In Section F, we introduce distributed/federated learning. In Section G, we provide more analysis of the gradient coin.
## Appendix A Preliminary
We first introduce the notations in this section. Then, in Section A.1, we present the basic mathematical facts. In Section A.2, we introduce the basic definitions related to the sketching matrix.
Notations.First, we define sets. We use \(\mathbb{Z}\) to denote the set containing all the integers and use \(\mathbb{Z}_{+}\) to denote the set containing all the positive integers. \(\mathbb{R}\) represents the set containing all the real numbers. For all \(r\in\mathbb{R}\), we use \(|r|\) to denote the absolute value of \(r\). Let \(n,d\) be two arbitrary elements in \(\mathbb{Z}_{+}\). We define \([n]:=\{z\mid z\in\mathbb{Z}_{+}\,\text{ and }z\leq n\}\). We use \(\mathbb{R}^{n}\) to denote the set containing all the \(n\)-dimensional vectors whose entries are the elements in \(\mathbb{R}\) and use \(\mathbb{R}^{n\times d}\) to denote the set containing all the \(n\times d\) matrices whose entries are the elements in \(\mathbb{R}\). The Cartesian product of two sets \(A\) and \(B\), denoted \(A\times B\), is the set of all ordered pairs \((a,b)\), where \(a\in A\) and \(b\in B\). \(\mathcal{P}(A)=\{x\mid x\subseteq A\}\) is the power set of the set \(A\).
Then, we define the notations related to the vectors. Let \(x\) be an arbitrary element in \(\mathbb{R}^{n}\). Let \(i\in[n]\). We use \(x_{i}\) to denote the \(i\)-th entry of \(x\). For all \(p\in\{1,2,\infty\}\), we use \(\|x\|_{p}\) to denote the \(\ell_{p}\) norm of the vector \(x\), namely \(\|x\|_{p}:=(\sum_{i\in[n]}|x_{i}|^{p})^{1/p}\). \(\mathbf{1}_{n}\) represents the \(n\)-dimensional vector whose entries are \(1\), and \(\mathbf{0}_{n}\) represents the \(n\)-dimensional vector whose entries are \(0\).
Now, we introduce the notations related to the matrices. Let \(A\) be an arbitrary element in \(\mathbb{R}^{n\times d}\). Let \(i\in[n]\) and \(j\in[d]\). We use \(A_{i,j}\) to denote the entry of \(A\) located at the \(i\)-th row and \(j\)-th column. \(A_{i,*}\) represents a vector in \(\mathbb{R}^{d}\) satisfying \((A_{i,*})_{j}=A_{i,j}\). Similarly, \(A_{*,j}\) represents a vector in \(\mathbb{R}^{n}\) satisfying \((A_{*,j})_{i}=A_{i,j}\). \(A^{\top}\in\mathbb{R}^{d\times n}\) represents the transpose of \(A\). \(\|A\|\) and \(\|A\|_{F}\) represent the spectral norm and the Frobenius norm of \(A\), respectively, where \(\|A\|=\max_{x\in\mathbb{R}^{d}}\|Ax\|_{2}/\|x\|_{2}\) and \(\|A\|_{F}:=\sqrt{\sum_{i\in[n]}\sum_{j\in[d]}|A_{i,j}|^{2}}\). We define the Kronecker product, denoted by \(\otimes\), to be a binary operation between two matrices. For matrix \(A\in\mathbb{R}^{n_{1}\times d_{1}}\) and a matrix \(B\in\mathbb{R}^{n_{2}\times d_{2}}\), we use \(A\otimes B\in\mathbb{R}^{n_{1}n_{2}\times d_{1}d_{2}}\) to denote a new matrix that \((i_{1}-1)n_{2}+i_{2}\), \((j_{1}-1)d_{2}+j_{2}\)-th entry is \(A_{i_{1},j_{1}}B_{i_{2},j_{2}}\), where \(i_{1}\in[n_{1}],j_{1}\in[d_{1}],i_{2}\in[n_{2}],j_{2}\in[d_{2}]\).
After that, we introduce the notations related to both vectors and matrices. For \(x\in\mathbb{R}^{d^{2}}\), we use \(X=\operatorname{mat}(x)\in\mathbb{R}^{d\times d}\) to denote the matrix version of \(x\), where \(X_{i,j}=x_{(i-1)\times d+j}\). Note that this relation is one-to-one and onto, so every entry of \(X\) has and only has one correspondence in \(x\). Therefore, we use \(\operatorname{vec}(X)=x\) to denote the vector version of \(X\) which also satisfies \(X_{i,j}=x_{(i-1)\times d+j}\). \(x_{[j_{1}]}\) is a length-\(n\) vector, which represents \(j_{1}\)-th block of it. For \(x\in\mathbb{R}^{n}\), we use \(\operatorname{diag}(x)\in\mathbb{R}^{n\times n}\) to denote the diagonal matrix which satisfies \(\operatorname{diag}(x)_{i,i}=x_{i}\), for all \(i\in[n]\). Hadamard product is a binary operation, denoted by \(\circ\), of two vectors \(x,y\in\mathbb{R}^{n}\) or two matrices \(A,B\in\mathbb{R}^{n\times d}\) of the same dimension, namely \((A\circ B)_{i,j}=A_{i,j}\cdot B_{i,j}\) and \((x\circ y)_{i}=x_{i}\cdot y_{i}\), for all \(i\in[n]\) and \(j\in[d]\). We use \(x^{2}\) to represent \(x\circ x\).
Finally, we introduce the notations about functions, derivatives, and probability. For all \(n,d\in\mathbb{Z}_{+}\), we define \(\exp:\mathbb{R}\cup\mathbb{R}^{d}\cup\mathbb{R}^{n\times d}\to\mathbb{R}\cup \mathbb{R}^{d}\cup\mathbb{R}^{n\times d}\) to be the piecewise function: if \(x\in\mathbb{R}\), then \(\exp(x)=e^{x}\in\mathbb{R}\); if \(x\in\mathbb{R}^{d}\), then \(\exp(x)\in\mathbb{R}^{d}\) satisfying \(\exp(x)_{i}=\exp(x_{i})\), for all \(i\in[d]\); if
\(x\in\mathbb{R}^{n\times d}\), then \(\exp(x)\in\mathbb{R}^{n\times d}\) satisfying \(\exp(x)_{i,j}=\exp(x_{i,j})\), for all \(i\in[n]\) and \(j\in[d]\). In this paper, all the functions we use are differentiable. For \(x\in\mathbb{R}^{d}\), \(\frac{\mathrm{d}x}{\mathrm{d}t}\in\mathbb{R}^{d}\) denotes the derivative of \(x\) with respect to \(t\), which satisfies for all \(i\in[d]\), \((\frac{\mathrm{d}x}{\mathrm{d}t})_{i}=\frac{\mathrm{d}x_{i}}{\mathrm{d}t}\). Let \((\Omega,\mathcal{E},\Pr)\) be a probability space, where \(\Omega\) is the set called sample space, \(\mathcal{E}\subseteq\mathcal{P}(\Omega)\) is the set called event space, and \(\Pr:\mathcal{E}\rightarrow[0,1]\) is the probability function. Let \(X\) be the discrete random variable. We use \(\mathbb{E}[X]\) to denote the expectation value of \(X\), i.e. \(\mathbb{E}[X]=\sum_{x}x\cdot\Pr[X=x]\). The conditional expectation of \(X\) given an event \(B\in\mathcal{E}\), denoted as \(\mathbb{E}[X\ |\ B]\), is defined as \(\mathbb{E}[X\ |\ B]=\sum_{x}x\cdot\Pr[X=x\ |\ B]=\sum_{x}x\cdot\Pr[X=x\cap B]/\Pr[B]\).
### Basic Facts
Here, we introduce the basic mathematical properties.
**Fact 16** (Basic vector properties).: _If the following conditions hold_
* _Let_ \(d\in\mathbb{Z}_{+}\)_._
* _Let_ \(x,y,z\in\mathbb{R}^{d}\)_._
* _Let_ \(a,b\in\mathbb{R}\)_._
_Then, we have_
* _Part 1._ \(\langle x,y\rangle=\langle x\circ y,\mathbf{1}_{d}\rangle\)_._
* _Part 2._ \(a\langle x,z\rangle+b\langle y,z\rangle=\langle ax+by,z\rangle=\langle z,ax+by \rangle=a\langle z,x\rangle+b\langle z,y\rangle\)_._
* _Part 3._ \(\langle x\circ z,y\rangle=\langle x,y\circ z\rangle\)_._
**Fact 17** (Basic derivative rules).: _If the following conditions hold_
* _Let_ \(n,d\in\mathbb{Z}_{+}\) _and_ \(k\in\mathbb{Z}\)_._
* _Let_ \(x\in\mathbb{R}^{d}\) _be a vector._
* _Let_ \(t\in\mathbb{R}\) _be a scalar._
* _Let_ \(c\) _be independent of_ \(t\)_._
* _Let_ \(f:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n}\)_._
* _Let_ \(h:\mathbb{R}^{d}\rightarrow\mathbb{R}^{n}\)_._
* _Let_ \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}\)_._
_Then, we have_
* _Part 1._ \(\frac{\mathrm{d}(c\cdot f(x))}{\mathrm{d}t}=c\cdot\frac{\mathrm{d}f(x)}{ \mathrm{d}t}\) _(constant multiple rule)._
* _Part 2._ \(\frac{\mathrm{d}(g(x)^{k})}{\mathrm{d}t}=k\cdot g(x)^{k-1}\cdot\frac{\mathrm{d}g (x)}{\mathrm{d}t}\) _(power rule)._
* _Part 3._ \(\frac{\mathrm{d}(h(x)+f(x))}{\mathrm{d}t}=\frac{\mathrm{d}h(x)}{\mathrm{d}t}+ \frac{\mathrm{d}f(x)}{\mathrm{d}t}\) _(sum rule)._
* _Part 4._ \(\frac{\mathrm{d}(h(x)\circ f(x))}{\mathrm{d}t}=\frac{\mathrm{d}h(x)}{\mathrm{d}t }\circ f(x)+h(x)\circ\frac{\mathrm{d}f(x)}{\mathrm{d}t}\) _(product rule for Hadamard product)._
* _Part 5._ \(\frac{\mathrm{d}(g(x)f(x))}{\mathrm{d}t}=\frac{\mathrm{d}g(x)}{\mathrm{d}t}f(x) +g(x)\frac{\mathrm{d}f(x)}{\mathrm{d}t}\) _(product rule)_
### Sketching Matrices
In this section, we introduce the basic definitions related to the sketching matrix.
**Definition 18** (Random Gaussian matrix).: _Let \(R\in\mathbb{R}^{b\times n}\) be a matrix._
_If all entries of \(R\) are sampled from the Gaussian distribution \(\mathcal{N}(0,1/b)\) independently, then we call \(R\) the random Gaussian matrix._
The subsampled randomized Hadamard/Fourier transform matrix is defined as follows:
**Definition 19** (Subsampled randomized Hadamard/Fourier transform matrix [11]).: _Let \(S\in\mathbb{R}^{b\times n}\) be a matrix, which satisfies that all row vectors \(r\in\mathbb{R}^{n}\) of \(S\) are \(b\) uniform samples from the standard basis of \(\mathbb{R}^{n}\), without replacement._
_Let \(H\in\mathbb{R}^{n\times n}\) be a Walsh-Hadamard matrix, which is normalized._
_Let \(D\in\mathbb{R}^{n\times n}\) be a diagonal matrix, which satisfies that all diagonal entries of \(D\) are i.i.d. Rademacher random variables._
_Then, we call \(R\in\mathbb{R}^{b\times n}\) a subsampled randomized Hadamard transform matrix if it can be expressed in the form_
\[R=\sqrt{n/b}SHD.\]
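For intuition, the matrix of Definition 19 can be assembled directly. The sketch below assumes \(n\) is a power of two (required by the Walsh-Hadamard construction), uses scipy's `hadamard` helper, and substitutes a pseudorandom generator for true randomness.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(b: int, n: int, rng) -> np.ndarray:
    """Subsampled randomized Hadamard transform R = sqrt(n/b) * S * H * D."""
    H = hadamard(n) / np.sqrt(n)                  # normalized Walsh-Hadamard matrix
    D = np.diag(rng.choice([-1.0, 1.0], size=n))  # i.i.d. Rademacher diagonal
    rows = rng.choice(n, size=b, replace=False)   # b uniform samples, no replacement
    S = np.eye(n)[rows]                           # selected standard-basis rows
    return np.sqrt(n / b) * S @ H @ D

rng = np.random.default_rng(0)
R = srht(b=8, n=64, rng=rng)
x = rng.normal(size=64)
print(np.linalg.norm(R @ x), np.linalg.norm(x))   # norms agree in expectation
```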
Now, we introduce the formal definition of the AMS sketch matrix.
**Definition 20** (AMS sketch matrix [1]).: _Let \(h_{1},h_{2},\ldots,h_{b}\) represent \(b\) random hash functions chosen from a hash family \(\mathcal{H}\) that exhibits 4-wise independence. The hash family \(\mathcal{H}\) is defined as a collection of functions \(h\) that map elements from the set \([n]\) to values in the set \(\{-\frac{1}{\sqrt{b}},+\frac{1}{\sqrt{b}}\}\)._
_Let \(R\in\mathbb{R}^{b\times n}\)._
\(R\) _is called an AMS sketch matrix when we assign its entries as follows_
\[R_{i,j}=h_{i}(j).\]
The formal definition of the count-sketch matrix is presented below:
**Definition 21** (Count-sketch matrix [10]).: _Consider a random hash function \(h\) that maps elements from the set \([n]\) to values within the range \([b]\), which is 2-wise independent._
_Let \(\sigma\) be a random hash function that maps the element from the set \([n]\) to either \(1\) or \(-1\), which is \(4\)-wise independent._
_Let \(R\in\mathbb{R}^{b\times n}\) be a matrix._
\(R\) _is called the count-sketch matrix if_
\[R_{h(i),i}=\begin{cases}\sigma(i)&\text{if }i\in[n]\\ 0&\text{otherwise.}\end{cases}\]
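A count-sketch matrix can be instantiated as follows; the PRNG-based `h` and `sigma` are stand-ins for the 2-wise and 4-wise independent hash families that Definition 21 requires.

```python
import numpy as np

def count_sketch(b: int, n: int, rng) -> np.ndarray:
    """Count-sketch matrix: exactly one +-1 entry per column, at a hashed row."""
    R = np.zeros((b, n))
    h = rng.integers(0, b, size=n)            # stand-in for the 2-wise independent h
    sigma = rng.choice([-1.0, 1.0], size=n)   # stand-in for the 4-wise independent sigma
    R[h, np.arange(n)] = sigma
    return R

rng = np.random.default_rng(0)
R = count_sketch(b=32, n=256, rng=rng)
x = rng.normal(size=256)
print(np.linalg.norm(R @ x) ** 2, np.linalg.norm(x) ** 2)  # equal in expectation
```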
There are two definitions of the sparse embedding matrix. We display both of them. The first definition is as follows:
**Definition 22** (Sparse embedding matrix I [14]).: _Let \(R\in\mathbb{R}^{b\times n}\) be a matrix._
_Suppose each column of \(R\) contains exactly \(s\) non-zero elements, which are randomly selected from the set \(\{-1/\sqrt{s},+1/\sqrt{s}\}\). The positions of these non-zero elements within each column are chosen uniformly and independently at random, and the selection process is conducted without replacement._
_Then, \(R\) is called a sparse embedding matrix characterized by a parameter \(s\)._
Now, we present the second definition of the sparse embedding matrix.
**Definition 23** (Sparse embedding matrix II [13]).: _Consider a random hash function \(h\) that maps elements from the set \([n]\times[s]\) to values within the range \([b/s]\), which is 2-wise independent._
_Let \(\sigma\) be a random hash function that maps the element from the set \([n]\times[s]\) to either \(1\) or \(-1\), which is \(4\)-wise independent._
_Let \(R\in\mathbb{R}^{b\times n}\) be a matrix._
\(R\) _is called the sparse embedding matrix II and \(s\) is the parameter of \(R\) if_
\[R_{(j-1)b/s+h(i,j),i}=\begin{cases}\sigma(i,j)/\sqrt{s}&\text{if }(i,j)\in[n] \times[s]\\ 0&\text{otherwise.}\end{cases}\]
### Federated Learning
**Definition 24**.: _Let \((t,k)\in\{1,\cdots,T+1\}\times\{-1,0,1,\cdots,K-1\}\), we define the following terms for iteration \((t,k)\):_
\[\widetilde{u}_{r}^{t,k}:=\frac{1}{N}\sum_{c=1}^{N}u_{c}^{t,k}\]
_where user\({}^{(r)}\) denotes the user who is the first to complete the computation of the proof-of-work._
_We also have_
\[\widetilde{g}_{r}^{t,k}:=\frac{1}{N}\sum_{c=1}^{N}\nabla f_{c}(u_{c}^{t,k})\]
_while \(\widetilde{u}_{r}^{t,k}\) denotes the sampled one._
**Claim 25**.: \(\widetilde{u}^{t,k}\) _and \(\widetilde{g}^{t,k}\) can be seen as a weight and a gradient sampled uniformly from the \(N\) users. Therefore, we have_
\[\operatorname*{\mathbb{E}}_{r\sim[N]}[\widetilde{g}_{r}^{t,k}]=\widetilde{g}_ {r}^{t,k}\]
_and_
\[\operatorname*{\mathbb{E}}_{r\sim[N]}[\widetilde{u}_{r}^{t,k}]=\widetilde{u}_ {r}^{t,k}\]
## Appendix B Bitcoin Setup
To present our design more clearly, we introduce some fundamental concepts from previous works [13]. The statements in this section are based on the descriptions in [13]. In Section B.1, we introduce the basic definitions related to the setup of the Bitcoin system. In Section B.2, we introduce the timestamp server. In Section B.3, we introduce the incentive mechanism of the Bitcoin system. In Section B.4, we present the key property of the Bitcoin system together with a peer-to-peer electronic cash system algorithm. In Section B.5, we introduce the safety of the Bitcoin system. In Section B.6, we introduce the transaction-creating procedure.
### Proof-of-Work
In this section, we introduce the basic concepts of the Bitcoin system.
**Definition 26** (Digital Signature).: _A digital signature is a mathematical scheme used to verify the authenticity, integrity, and non-repudiation of digital data. It involves the use of a cryptographic algorithm that combines a private key with the data being signed to produce a digital signature. The signature can be verified using the corresponding public key._
**Definition 27** (Electronic Coin).: _An electronic coin is represented as a chain of digital signatures. It is a sequence of digital signatures (see Definition 26) created by each owner to transfer ownership of the coin to the next owner. Each digital signature is produced by digitally signing a hash of the previous transaction and the public key of the next owner._
**Definition 28** (Chain of Proof-of-Work).: _We define a chain in which each node represents a proof of work (See Algorithm 3)._
_Blocks contain the following objects:_
* _Prev Hash: Users incorporate the previous Hash as a component of the new proof of work to signify their acceptance of the current transactions._
* _Nonce: By incrementing a nonce in the block, users implement the proof-of-work until they find a value that results in the block's hash having the required number of leading zero bits._
* _Lists of Transactions (See Definition 29): We use it to indicate the current transaction records._
```
1:Members:
2:- Previous Hash: PrevHash
3:- Nonce: Nonce
4:- List of Transactions: Transactions \(\triangleright\) See Definition 29
```
**Algorithm 3** Proof of Work Block Structure
**Definition 29** (Transaction).: _We define a transaction for combining and splitting values that satisfies the following requirements._
* _It has at most two outputs: one for the payment, and one returning the change._
* _There will be either a single input from a larger previous transaction or multiple inputs combining smaller amounts._
To demonstrate the safety aspect of this system, we provide the following definition:
**Definition 30** (Safe System).: _We say that a system is safe if this system is controlled by nodes that can be trusted._
### Timestamp Server
The primary purpose of the timestamp is to provide evidence that the data must have existed at the specified time since it is integrated into the hash. Additionally, each timestamp includes the previous timestamp in its hash, forming a chain where each subsequent timestamp reinforces the validity of the preceding ones.
Our block design and gradient block are both identified by the hash, with the timestamp ensuring their temporal integrity. Given this condition, using the "prev hash" (as defined in Definition 28), we can access the previous hash. By utilizing this system, users can obtain real-time updates of the block.
**Definition 31** (Timestamp Server).: _A timestamp server is a component that operates by_
* _taking a hash of a block of items to be timestamped._
* _widely publishing the resulting hash._
### Bitcoin Incentive Mechanism
In [20], Bitcoin is used as an incentive for users who dedicate their computational resources and wish to participate in the peer-to-peer transaction system.
**Definition 32** (Bitcoin).: _We define a bitcoin that can be used for transactions. Bitcoin is a chain of digital signatures._
_The transfer of ownership of the coin from one owner to the next occurs through digital signatures._
* _Each owner digitally signs a hash of the previous transaction, along with the public key of the next owner, and appends these signatures to the coin._
* _When the payee receives the coin, they can verify the signatures to ensure the integrity and authenticity of the ownership chain._
**Lemma 33**.: _Users can obtain coins when they add a new block to the chain; the coins can be acquired through the following methods:_
* _The coins are initially distributed into circulation through a specific method when a block is created._
* _The coins are obtained from transaction fees._
### Bitcoin System
**Theorem 34**.: _If the following conditions hold_
* _The system is controlled by nodes that can be trusted._
_Then, there exists a Peer-to-Peer Electronic Cash System in Algorithm 4 (proposed in [20]) that supports the following operations:_
* _Add a new user._
* _Maintain a chain (See Definition_ 28_)._
* _Create a new transaction by an existing user (see Lemma_ 37_)._
* _Simplified Payment Verification_
```
1: Members:
2: - List of Users: Users
3: - Chain of blocks: Chain
4: procedure TransactionCreating(NewTransactions)
5: for User \(\in\) Users do
6: NewTransactions are broadcast to User.
7: User collects NewTransactions into a Block.
8: User works on finding a difficult proof-of-work for its Block.
9: end for
10: When a User finds a proof-of-work, it broadcasts the block to Users.
11: UpdateChain(NewBlock, User)
12: if all transactions in it are valid and not already spent then
13: for User \(\in\) Users do
14: User expresses its acceptance of the block (by working on creating the next block in the chain, using the hash of the accepted block as the previous hash).
15: end for
16: end if
17: end procedure
```
**Algorithm 4** Peer-to-Peer Electronic Cash System
### System Safety
**Lemma 35** (Safe System).: _When the computational capacity of regular users exceeds the resources available to malicious users, the system is secure. Our gradient coin system is safe._
Proof.: Only the longest chain is committed within the system, and only users with the highest computational resources can maintain it.
As an additional firewall, a new key pair should be used for each transaction to keep them from being linked to a common owner. Privacy can be maintained by keeping public keys anonymous.
### Bitcoin Transaction Creating
**Lemma 36**.: _If the following conditions hold_
* _The assumption that all nodes have equal computational capabilities holds true._
* _Let_ \(N\) _be the total number of nodes._
* _Let_ \(N_{1}\) _be the number of safe nodes and_ \(N_{2}\) _be the number of bad nodes. ('Good nodes' refer to nodes that willingly participate in using the system and adhere to its rules, ensuring the safety and integrity of transactions. Conversely, 'bad nodes' are nodes that aim to compromise the safety of transactions and may attempt to disrupt the system's proper functioning.)_
* \(2\cdot N_{1}>N\) _and_ \(N_{1}+N_{2}=N\)_._
* _Let the system be as defined in Theorem_ 34_._

_then the Peer-to-Peer Electronic Cash System (see Algorithm 4) satisfies that_

* _the system is safe (see Definition_ 30_)._
**Lemma 37**.: _Given a Bitcoin system, there exists a transaction-creating procedure (see Algorithm 4) that satisfies the following requirements:_

* _If the number of safe nodes is larger than half of the total number of nodes, the system is safe._
* _The nodes accept a block by using the hash of the accepted block as the previous hash._
## Appendix C Gradient
In Section C.1, we give the formal definition of Kronecker product, gradient descent, and functions. In Section C.2, we introduce a basic equivalence. In Section C.3, we compute the first-order derivatives of the functions defined earlier.
### Preliminary
In this section, we first define Kronecker product.
**Definition 38**.: _Given \(A_{1}\in\mathbb{R}^{n\times d}\), \(A_{2}\in\mathbb{R}^{n\times d}\), we define \(\mathsf{A}\in\mathbb{R}^{n^{2}\times d^{2}}\) to be the matrix \(A_{1}\otimes A_{2}\), where the \((i_{1}-1)\cdot n+i_{2}\)-th row is_
\[\underbrace{\mathsf{A}_{(i_{1}-1)n+i_{2},*}}_{1\times d^{2}}:=\underbrace{A_{ 1,i_{1},*}}_{1\times d}\otimes\underbrace{A_{2,i_{2},*}}_{1\times d}\]
_for all \(i_{1}\in[n]\) and \(i_{2}\in[n]\). Here \(A_{1,i_{1},*}\) is the \(i_{1}\)-th row of matrix \(A_{1}\)._
**Definition 39**.: _Given \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\) and \(X\in\mathbb{R}^{d\times d}\), we define \(D(X)\in\mathbb{R}^{n\times n}\) as follows_
\[D(X):=\operatorname{diag}(\exp(A_{1}XA_{2}^{\top})\mathbf{1}_{n}).\]
_Note that \(X\in\mathbb{R}^{d\times d}\) is matrix version of vector \(x\in\mathbb{R}^{d^{2}\times 1}\), i.e., \(X=\operatorname{mat}(x)\)._
**Definition 40**.: _Given matrices \(A_{1}\in\mathbb{R}^{n\times d}\), \(A_{2}\in\mathbb{R}^{n\times d}\) and \(x\in\mathbb{R}^{d^{2}\times 1}\). We define diagonal matrix \(D(x)\in\mathbb{R}^{n^{2}\times n^{2}}\) as follows_
\[D(x)_{(i_{1}-1)n+i_{2},(i_{1}-1)\cdot n+i_{2}}:=\exp(A_{1,i_{1},*}XA_{2}^{\top})\mathbf{1}_{n}\]
_In other words, \(D(x)=D(X)\otimes I_{n}\), where \(D(X)\in\mathbb{R}^{n\times n}\) is defined as in Definition 39. Here \(x\) is the vectorization of matrix \(X\), i.e., \(x=\operatorname{vec}(X)\)._
**Definition 41**.: _We also define \(\alpha(x)\in\mathbb{R}^{n}\)_
\[\alpha(x)_{j_{1}}:=\langle\exp(\mathsf{A}_{[j_{1}],*}x),\mathbf{1}_{n}\rangle,\quad\forall j_{1}\in[n]\]
_Here \(\mathsf{A}_{[j_{1}],*}\in\mathbb{R}^{n\times d^{2}}\) denotes the rows from index \((j_{1}-1)\cdot n+1\) to index \(j_{1}\cdot n\) (see Definition 38)._
**Definition 42**.: _For each \(j_{1}\in[n]\), we define \(u(x)_{j_{1}}\in\mathbb{R}^{n}\) as follows_
\[u(x)_{j_{1}}:=\exp(\mathsf{A}_{[j_{1}],*}x)\]
**Definition 43**.: _For each \(j_{1}\in[n]\), we define \(f(x)_{j_{1}}\in\mathbb{R}^{n}\) as follows_
\[f(x)_{j_{1}}:=\alpha(x)_{j_{1}}^{-1}u(x)_{j_{1}}\]
**Definition 44**.: _For each \(j_{1}\in[n]\), we define \(c(x)_{j_{1}}\in\mathbb{R}^{n}\) as follows_
\[c(x)_{j_{1}}:=f(x)_{j_{1}}-b_{[j_{1}]}\]
**Definition 45**.: _Let \(j_{1}\in[n]\). We define \(L_{\exp,j_{1}}\) as follows_
\[L_{\exp,j_{1}}(x):=0.5\|c(x)_{j_{1}}\|_{2}^{2}\]
_We define_
\[L_{\exp}(x):=\sum_{j_{1}=1}^{n}L_{\exp,j_{1}}(x)\]
**Definition 46**.: _We define_
\[L_{\mathrm{reg},j_{1}}(x):=0.5\|\operatorname{diag}(w)\mathsf{A}_{[j_{1}],*}x \|_{2}^{2}\]
_We define_
\[L_{\mathrm{reg}}(x):=\sum_{j_{1}=1}^{n}L_{\mathrm{reg},j_{1}}(x)\]
**Definition 47**.: _For each \(j_{1}\in[n]\), we define_
\[L_{j_{1}}(x):=L_{\exp,j_{1}}(x)+L_{\mathrm{reg},j_{1}}(x)\]
_We define_
\[L(x):=\sum_{j_{1}=1}^{n}L_{j_{1}}(x)\]
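To make Definitions 38-47 concrete, the following numpy sketch evaluates \(u\), \(\alpha\), \(f\), \(c\), and the combined loss \(L\) on random placeholder data; blocks are 0-indexed here, whereas the text indexes \(j_{1}\in[n]\) from 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3
A1, A2 = rng.normal(size=(n, d)), rng.normal(size=(n, d))
A = np.kron(A1, A2)          # A in R^{n^2 x d^2} (Definition 38)
b = rng.normal(size=(n, n))  # target blocks b_{[j1]} in R^n
w = rng.normal(size=n)       # regularization weights
x = rng.normal(size=d * d)   # x = vec(X)

def block(j1):
    """Rows of A corresponding to block j1 (0-indexed)."""
    return A[j1 * n:(j1 + 1) * n, :]

def loss(x):
    total = 0.0
    for j1 in range(n):
        u = np.exp(block(j1) @ x)   # u(x)_{j1}, Definition 42
        alpha = u.sum()             # alpha(x)_{j1}, Definition 41
        f = u / alpha               # f(x)_{j1}, Definition 43
        c = f - b[j1]               # c(x)_{j1}, Definition 44
        total += 0.5 * c @ c                               # L_{exp,j1}, Definition 45
        total += 0.5 * np.sum((w * (block(j1) @ x)) ** 2)  # L_{reg,j1}, Definition 46
    return total                    # L(x), Definition 47

print(loss(x))
```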
The goal of gradient descent and stochastic gradient descent is, starting from \(x_{0}\), to run an iterative method for \(T\) iterations and find an \(x_{T}\) such that \(L(x_{T})\) is close to \(L(x_{*})\) in a certain sense, where \(x_{*}=\arg\min_{x}L(x)\).
**Definition 48** (Gradient descent).: _For each iteration \(t\), we update_
\[x_{t+1}=x_{t}-\eta\cdot(\nabla L(x))|_{x=x_{t}}\]
_where \(\eta>0\) is the learning rate and \(\nabla L(x)=\sum_{j_{1}=1}^{n}\nabla L_{j_{1}}(x)\) is the gradient of Loss function \(L\)._
**Definition 49** (Stochastic gradient descent).: _For each iteration \(t\), we sample a set \(B_{t}\subset[n]\), we update_
\[x_{t+1}=x_{t}-\eta\cdot\sum_{j_{1}\in B_{t}}(\nabla L_{j_{1}}(x))|_{x=x_{t}}\]
_where \(\eta\) is the learning rate._
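Here is a minimal numpy sketch of Definitions 48 and 49 on a least-squares toy objective \(L(x)=\sum_{j}0.5(a_{j}^{\top}x-b_{j})^{2}\); the data, step size, and batch size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.normal(size=(n, d))
b = rng.normal(size=n)

def grad_j(x, j):
    """Gradient of the single term L_j(x) = 0.5 * (a_j^T x - b_j)^2."""
    return (A[j] @ x - b[j]) * A[j]

eta = 0.01
x = np.zeros(d)
for _ in range(500):      # gradient descent, Definition 48
    x -= eta * sum(grad_j(x, j) for j in range(n))

x_sgd = np.zeros(d)
for _ in range(5000):     # stochastic gradient descent, Definition 49
    B_t = rng.choice(n, size=5, replace=False)  # sample B_t, a subset of [n]
    x_sgd -= eta * sum(grad_j(x_sgd, j) for j in B_t)

x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # the minimizer x_*
print(np.linalg.norm(x - x_star), np.linalg.norm(x_sgd - x_star))
```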
### Basic Equivalence
Now, we introduce a basic equivalence from previous work [14].
**Claim 50** ([14]).: _If we have_
* _Let_ \(B\) _be an arbitrary matrix in_ \(\mathbb{R}^{n\times n}\)_._
* _Let_ \(b=\operatorname{vec}(B)\in\mathbb{R}^{n^{2}}\)_._
* _Let_ \(A_{1}\) _and_ \(A_{2}\) _be arbitrary matrices in_ \(\mathbb{R}^{n\times d}\)_._
* _Let_ \(X\) _be an arbitrary matrix in_ \(\mathbb{R}^{d\times d}\)_._
* _Let_ \(x=\operatorname{vec}(X)\in\mathbb{R}^{d^{2}}\)_._
_Then, we can get the following four equations:_
1. \[\operatorname{vec}(\underbrace{A_{1}}_{n\times d}\underbrace{X}_{d \times d}\underbrace{A_{2}^{\top}}_{d\times n})=\underbrace{(A_{1}\otimes A_ {2})}_{n^{2}\times d^{2}}\underbrace{\operatorname{vec}(X)}_{d^{2}\times 1},\]
2. \[\min_{X\in\mathbb{R}^{d\times d}}\|A_{1}XA_{2}^{\top}-B\|_{F}^{2}=\min_{x\in \mathbb{R}^{d^{2}}}\|(A_{1}\otimes A_{2})x-b\|_{2}^{2},\]
3. \[\min_{X\in\mathbb{R}^{d\times d}}\|\exp(A_{1}XA_{2}^{\top})-B\|_{F}^{2}=\min_{ x\in\mathbb{R}^{d^{2}}}\|\exp((A_{1}\otimes A_{2})x)-b\|_{2}^{2},\]
4. \[\min_{X\in\mathbb{R}^{d\times d}}\|D(X)^{-1}\exp(A_{1}XA_{2}^{\top})-B\|_{F}^{2 }=\min_{x\in\mathbb{R}^{d^{2}}}\|D(x)^{-1}\exp((A_{1}\otimes A_{2})x)-b\|_{2} ^{2}.\]
_For simplicity, we define_
\[D(X):=\operatorname{diag}(\exp(A_{1}XA_{2}^{\top})\mathbf{1}_{n})\in\mathbb{ R}^{n\times n},\]
_so_
\[D(x)=D(X)\otimes I_{n}\in\mathbb{R}^{n^{2}\times n^{2}}.\]
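The first equation of Claim 50 is easy to verify numerically; note that \(\operatorname{vec}(\cdot)\) in this paper is row-major (\(x_{(i-1)d+j}=X_{i,j}\)), which matches numpy's default `flatten` order.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
A1, A2 = rng.normal(size=(n, d)), rng.normal(size=(n, d))
X = rng.normal(size=(d, d))

lhs = (A1 @ X @ A2.T).flatten()      # vec(A1 X A2^T), row-major
rhs = np.kron(A1, A2) @ X.flatten()  # (A1 kron A2) vec(X)
print(np.allclose(lhs, rhs))         # True
```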
### Basic Derivatives
Now, we compute the first-order derivatives. Similar calculations can be found in [13, 14].
**Lemma 51**.: _If we have that_
* \(A_{1}\) _and_ \(A_{2}\) _are two arbitrary matrices in_ \(\mathbb{R}^{n\times d}\)_._
* \(\mathsf{A}=A_{1}\otimes A_{2}\) _(recall Definition_ 38_)._
* \(X\) _is an arbitrary matrix in_ \(\mathbb{R}^{d\times d}\)_._
* \(D(x)\) _is defined in Definition_ 40_._
* \(x\) _is an arbitrary vector in_ \(\mathbb{R}^{d^{2}}\)_, satisfying_ \(x=\operatorname{vec}(X)\)_._

Figure 1: Left-hand side of the first equation in Claim 50. Given \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\) and \(X\in\mathbb{R}^{d\times d}\). We turn \(A_{1}XA_{2}^{\top}\in\mathbb{R}^{n\times n}\) into a length-\(n^{2}\) vector. Green matrices represent the terms without any operations; purple vector represents the term after one operation.
_Then, we can show_
* _Part 1. For each_ \(i\in[d^{2}]\)_,_
\[\frac{\mathrm{d}\mathsf{A}x}{\mathrm{d}x_{i}}=\mathsf{A}_{*,i}\]
* _Part 2. For each_ \(i\in[d^{2}]\)_,_
\[\frac{\mathrm{d}\exp(\mathsf{A}x)}{\mathrm{d}x_{i}}=\exp(\mathsf{A}x)\circ \mathsf{A}_{*,i}\]
* _Part 3. For each_ \(j_{1}\in[n]\)_, for each_ \(i\in[d^{2}]\)_,_
\[\frac{\mathrm{d}\mathsf{A}_{[j_{1}],*}x}{\mathrm{d}x_{i}}=\mathsf{A}_{[j_{1}],i}\]
* _Part 4. For each_ \(j_{1}\in[n]\)_, for each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}u(x)_{j_{1}}}{\mathrm{d}x_{i}}=u(x)_{j_{1}}\circ\mathsf{A}_{[j _{1}],i}\]
Figure 3: Left-hand side of the second equation in Claim 50. Given \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\), \(X\in\mathbb{R}^{d\times d}\), and \(B\in\mathbb{R}^{n\times n}\). We first find \(A_{1}XA_{2}^{\top}\in\mathbb{R}^{n\times n}\). Then, we subtract \(B\in\mathbb{R}^{n\times n}\) from \(A_{1}XA_{2}^{\top}\). Finally, we compute the minimum of the Frobenius norm of \(A_{1}XA_{2}^{\top}-B\). Green matrices represent the terms without any operations; purple matrix represents the term after one operation; red matrix represents the term after two operations; grey scalar represents the term after three operations.
* _Part 5. For each_ \(j_{1}\in[n]\)_, for each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}\alpha(x)_{j_{1}}}{\mathrm{d}x_{i}}=\langle u(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\]
* _Part 6. For each_ \(j_{1}\in[n]\)_, for each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}\alpha(x)_{j_{1}}^{-1}}{\mathrm{d}x_{i}}=-\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\]
* _Part 7. For each_ \(j_{1}\in[n]\)_, for each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{i}}=f(x)_{j_{1}}\circ\mathsf{A}_{[j _{1}],i}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle.\]
* _Part 8. For each_ \(j_{1}\in[n]\)_, for each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}c(x)_{j_{1}}}{\mathrm{d}x_{i}}=\frac{\mathrm{d}f(x)_{j_{1}}}{ \mathrm{d}x_{i}}\]
Figure 4: Right-hand side of the second equation in Claim 50. Given \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\), \(X\in\mathbb{R}^{d\times d}\), and \(B\in\mathbb{R}^{n\times n}\). We first turn \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\) into a \(n^{2}\times d^{2}\) matrix by Kronecker product, turn \(X\in\mathbb{R}^{d\times d}\) into a \(d^{2}\) dimensional vector by \(\operatorname{vec}(\cdot)\), and turn \(B\in\mathbb{R}^{n\times n}\) into a \(n^{2}\) dimensional vector by \(\operatorname{vec}(\cdot)\), namely \(b=\operatorname{vec}(B)\) and \(x=\operatorname{vec}(X)\). Then, we multiply \(A_{1}\otimes A_{2}\) with \(\operatorname{vec}(X)\) to get an \(n^{2}\) dimensional vector. After that, we subtract \(\operatorname{vec}(B)\) from \((A_{1}\otimes A_{2})\operatorname{vec}(X)\). Finally, we compute the minimum of the \(\ell_{2}\) norm of \((A_{1}\otimes A_{2})\operatorname{vec}(X)\). Green matrices represent the terms without any operations; purple vectors/matrix represent the term after one operation; red vector represents the term after two operations; gray vector represents the term after three operations; blue scalar represents the term after four operations.
* _Part 9. For each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}L_{\mathrm{exp}}(x)}{\mathrm{d}x_{i}}\] \[= \sum_{j_{1}=1}^{n}(\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A} _{[j_{1}],i}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_ {j_{1}},\mathsf{A}_{[j_{1}],i}\rangle).\]
## Appendix D Hessian
In this section, our attention is directed towards the Hessian property inherent in our loss function. This investigation serves as a preparatory step for substantiating the convergence proof of our training procedure. While [10] outlines a singular version of a similar problem, we aim to showcase that our computations extend this scenario by a factor of \(n\). Drawing upon the Hessian property expounded upon in [10], it becomes evident that our loss function similarly exhibits this particular property.
In Section D.1, we compute the second order derivative of \(u(x)_{j_{1}}\). In Section D.2, we compute the second order derivative of \(\alpha(x)_{j_{1}}\). In Section D.3, we compute the second order derivative of \(\alpha(x)_{j_{1}}^{-1}\). In Section D.4, we compute the second order derivative of \(f(x)_{j_{1}}\). In Section D.5, we compute the second order derivative of \(L_{\mathrm{exp}}\). In Section D.6, we compute the hessian of a single loss. In Section D.7, we simplify the result that we get.
### Second Order Derivatives of \(u(x)_{j_{1}}\)
In this section, we start to compute the second-order derivative of \(u(x)_{j_{1}}\).
**Lemma 52**.: _If the following conditions hold_
* _Let_ \(u\) _be defined as in Definition_ 42_._
* _Let_ \(x\in\mathbb{R}^{d^{2}}\)_, satisfying_ \(x=\operatorname{vec}(X)\)_._
* _Let_ \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\)_._
* _Let_ \(\mathsf{A}=A_{1}\otimes A_{2}\)_._
Figure 6: Right-hand side of the third equation in Claim 50. Given \((A_{1}\otimes A_{2})x\in\mathbb{R}^{n^{2}}\) and \(b\in\mathbb{R}^{n^{2}}\). We first find \(\exp((A_{1}\otimes A_{2})x)\in\mathbb{R}^{n^{2}}\). Then, we subtract \(b\in\mathbb{R}^{n^{2}}\) from \(\exp((A_{1}\otimes A_{2})x)\). Finally, we compute the minimum of the \(\ell_{2}\) norm of \(\exp((A_{1}\otimes A_{2})x)-b\). Green vectors represent the terms without any operations; purple vector represents the term after one operation; red vector represents the term after two operations; grey scalar represents the term after three operations.
_Then, we have_
* _For each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}^{2}u(x)_{j_{1}}}{\mathrm{d}x_{i}^{2}}=u(x)_{j_{1}}\circ\mathsf{ A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],i}.\]
* _For each_ \(i\in[d^{2}],l\in[d^{2}]\)__ \[\frac{\mathrm{d}^{2}u(x)_{j_{1}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}=u(x)_{j_{1}} \circ\mathsf{A}_{[j_{1}],l}\circ\mathsf{A}_{[j_{1}],i}.\]
Proof.: We have
\[\frac{\mathrm{d}^{2}u(x)_{j_{1}}}{\mathrm{d}x_{i}^{2}} =\frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}u(x)_{j_{1}}} {\mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}(u(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i})}{ \mathrm{d}x_{i}}\] \[=\frac{\mathrm{d}(u(x)_{j_{1}})}{\mathrm{d}x_{i}}\circ\mathsf{A}_ {[j_{1}],i}+u(x)_{j_{1}}\circ\frac{\mathrm{d}(\mathsf{A}_{[j_{1}],i})}{ \mathrm{d}x_{i}}\] \[=u(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],i}\]
where the first step follows from simple algebra, the second step follows from **Part 4** of Lemma 51, the third step follows from Fact 17, and the last step follows from **Part 4** of Lemma 51 and \(\frac{\mathrm{d}(\mathsf{A}_{[j_{1}],i})}{\mathrm{d}x_{i}}=0\).
Also, we can get
\[\frac{\mathrm{d}^{2}u(x)_{j_{1}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}=\frac{ \mathrm{d}}{\mathrm{d}x_{l}}(\frac{\mathrm{d}u(x)_{j_{1}}}{\mathrm{d}x_{i}})\]
Figure 7: Left-hand side of the fourth equation in Claim 50. Given \(\exp(A_{1}XA_{2}^{\top}),B,D(X)^{-1}\in\mathbb{R}^{n\times n}\). We first find \(D(X)^{-1}\exp(A_{1}XA_{2}^{\top})-B\in\mathbb{R}^{n\times n}\). Then, we compute the minimum of the Frobenius norm of \(D(X)^{-1}\exp(A_{1}XA_{2}^{\top})-B\). Green matrices represent the terms without any operations; purple matrix represents the term after one operation; red scalar represents the term after two operations.
\[= \frac{\mathrm{d}(u(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i})}{\mathrm{d}x _{l}}\] \[= \frac{\mathrm{d}(u(x)_{j_{1}})}{\mathrm{d}x_{l}}\circ\mathsf{A}_{[j _{1}],i}+u(x)_{j_{1}}\circ\frac{\mathrm{d}(\mathsf{A}_{[j_{1}],i})}{\mathrm{d}x _{l}}\] \[= u(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}\circ\mathsf{A}_{[j_{1}],i}\]
where the first step follows from simple algebra, the second step follows from **Part 4** of Lemma 51, the third step follows from Fact 17, and the last step follows from **Part 4** of Lemma 51 and \(\frac{\mathrm{d}(\mathsf{A}_{[j_{1}],i})}{\mathrm{d}x_{l}}=0\).
### Second Order Derivatives of \(\alpha(x)_{j_{1}}\)
In this section, we start to compute the second-order derivative of \(\alpha(x)_{j_{1}}\).
**Lemma 53**.: _If the following conditions hold_
* _Let_ \(\alpha\) _be defined as in Definition_ 41_._
* _Let_ \(x\in\mathbb{R}^{d^{2}}\)_, satisfying_ \(x=\operatorname{vec}(X)\)_._
* _Let_ \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\)_._
* _Let_ \(\mathsf{A}=A_{1}\otimes A_{2}\)_._

Figure 8: Right-hand side of the fourth equation in Claim 50. Given \(\exp((A_{1}\otimes A_{2})x),b\in\mathbb{R}^{n^{2}}\) and \(D(x)^{-1}\in\mathbb{R}^{n^{2}\times n^{2}}\). We first find \(D(x)^{-1}\exp((A_{1}\otimes A_{2})x)-b\in\mathbb{R}^{n^{2}}\). Then, we compute the minimum of the \(\ell_{2}\) norm of \(D(x)^{-1}\exp((A_{1}\otimes A_{2})x)-b\). Green matrix/vectors represent the terms without any operations; purple vector represents the term after one operation; red scalar represents the term after two operations.
_Then, we have_
* _For each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}}{\mathrm{d}x_{i}^{2}}=\langle u(x)_{j_ {1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle.\]
* _For each_ \(i,l\in[d^{2}]\)_,_ \[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}= \langle u(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle.\]
Proof.: We have
\[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}}{\mathrm{d}x_{i}^{2}} =\frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}\alpha(x)_{j _{1}}}{\mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}(\langle u(x)_{j_{1}},\mathsf{A}_{[j_{1}],i} \rangle)}{\mathrm{d}x_{i}}\] \[=\langle\frac{\mathrm{d}u(x)_{j_{1}}}{\mathrm{d}x_{i}},\mathsf{A }_{[j_{1}],i}\rangle+\langle u(x)_{j_{1}},\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{i}}\rangle\] \[=\langle u(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i},\mathsf{A}_{[j_{ 1}],i}\rangle\] \[=\langle u(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle,\]
where the first step follows from simple algebra, the second step follows from **Part 5** of Lemma 51, the third step follows from the definition of the inner product, the fourth step follows from \(\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{i}}=0\), and the last step follows from Fact 16.
Also, we can get
\[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}}{\mathrm{d}x_{i}\mathrm{d}x _{l}} =\frac{\mathrm{d}}{\mathrm{d}x_{l}}(\frac{\mathrm{d}\alpha(x)_{j_ {1}}}{\mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}(\langle u(x)_{j_{1}},\mathsf{A}_{[j_{1}],i} \rangle)}{\mathrm{d}x_{l}}\] \[=\langle\frac{\mathrm{d}u(x)_{j_{1}}}{\mathrm{d}x_{l}},\mathsf{A }_{[j_{1}],i}\rangle+\langle u(x)_{j_{1}},\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{l}}\rangle\] \[=\langle u(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l},\mathsf{A}_{[j_{ 1}],i}\rangle\] \[=\langle u(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{ 1}],l}\rangle,\]
where the first step follows from simple algebra, the second step follows from **Part 5** of Lemma 51, the third step follows from the definition of the inner product, the fourth step follows from \(\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{l}}=0\), and the last step follows from Fact 16.
### Second Order Derivatives of \(\alpha(x)_{j_{1}}^{-1}\)
In this section, we start to compute the second-order derivative of \(\alpha(x)_{j_{1}}^{-1}\).
**Lemma 54**.: _If the following conditions hold_
* _Let_ \(\alpha\) _be defined as in Definition_ 41_._
* _Let_ \(f\) _be defined as in Definition_ 43_._
* _Let_ \(x\in\mathbb{R}^{d^{2}}\)_, satisfying_ \(x=\operatorname{vec}(X)\)_._
* _Let_ \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\)_._
* _Let_ \(\mathsf{A}=A_{1}\otimes A_{2}\)_._
_Then, we have_
* _For each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}^{-1}}{\mathrm{d}x_{i}^{2}}=2\alpha(x)_{j _{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2}-\alpha( x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle.\]
* _For each_ \(i,l\in[d^{2}]\)_,_ \[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}^{-1}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}=2 \alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle \cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle-\alpha(x)_{j_{1}}^{-1 }\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle.\]
Proof.: We have
\[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}^{-1}}{\mathrm{d}x_{i}^{2}} =\frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}\alpha(x)_{j _{1}}^{-1}}{\mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}(-\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1 }},\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{i}}\] \[=\frac{\mathrm{d}(-\alpha(x)_{j_{1}}^{-1})}{\mathrm{d}x_{i}} \cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle+(-\alpha(x)_{j_{1}}^{ -1})\cdot\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{ \mathrm{d}x_{i}}\] \[=\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{ 1}],i}\rangle^{2}+(-\alpha(x)_{j_{1}}^{-1})\cdot\frac{\mathrm{d}\langle f(x)_{ j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}}, \tag{1}\]
where the first step follows from simple algebra, the second step follows from **Part 6** of Lemma 51, the third step follows from Fact 17, and the last step follows from **Part 6** of Lemma 51.
To compute the second term of Eq. (1), we have
\[\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{ \mathrm{d}x_{i}} =\langle\frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{i}},\mathsf{A}_{[j_{1} ],i}\rangle\] \[=\langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}-f(x)_{j_{1}} \cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle,\mathsf{A}_{[j_{1}],i}\rangle\] \[=\langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i},\mathsf{A}_{[j_ {1}],i}\rangle-\langle f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1} ],i}\rangle,\mathsf{A}_{[j_{1}],i}\rangle\] \[=\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle-\langle f (x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2}, \tag{2}\]
where the first step follows from the definition of the inner product, the second step follows from **Part 7** of Lemma 51 and \(\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{i}}=0\), the third step follows from Fact 16, and the last step follows from Fact 16.
Combining Eq. (1) and Eq. (2), we have
\[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}^{-1}}{\mathrm{d}x_{i}^{2}} =\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_ {1}],i}\rangle^{2}-\alpha(x)_{j_{1}}^{-1}\cdot(\langle f(x)_{j_{1}},\mathsf{A }_{[j_{1}],i}^{2}\rangle-\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2})\] \[=\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_ {1}],i}\rangle^{2}-\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A }_{[j_{1}],i}^{2}\rangle+\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}}, \mathsf{A}_{[j_{1}],i}\rangle^{2}\] \[=2\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j _{1}],i}\rangle^{2}-\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A }_{[j_{1}],i}^{2}\rangle,\]
where the second and the third step both follow from simple algebra.
Then, to compute \(\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}^{-1}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}\), we have
\[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}^{-1}}{\mathrm{d}x_{i} \mathrm{d}x_{l}} =\frac{\mathrm{d}}{\mathrm{d}x_{l}}(\frac{\mathrm{d}\alpha(x)_{j_ {1}}^{-1}}{\mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}(-\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1} },\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{l}}\] \[=\frac{\mathrm{d}(-\alpha(x)_{j_{1}}^{-1})}{\mathrm{d}x_{l}} \cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle+(-\alpha(x)_{j_{1}}^{ -1})\cdot\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{ \mathrm{d}x_{l}}\] \[=\alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_ {1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle-\alpha(x )_{j_{1}}^{-1}\cdot\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i }\rangle}{\mathrm{d}x_{l}}, \tag{3}\]
where the first step follows from simple algebra, the second step follows from **Part 6** of Lemma 51, the third step follows from Fact 17, and the last step follows from **Part 6** of Lemma 51.
To compute the second term of Eq. (3), we have
\[\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{l}} =\langle\frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{l}},\mathsf{A}_{[j_{1}],i}\rangle+\langle f(x)_{j_{1}},\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{l}}\rangle\] \[=\langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle,\mathsf{A}_{[j_{1}],i}\rangle\] \[=\langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l},\mathsf{A}_{[j_{1}],i}\rangle-\langle f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle,\mathsf{A}_{[j_{1}],i}\rangle\] \[=\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle-\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle, \tag{4}\]
where the first step follows from the definition of the inner product, the second step follows from **Part 7** of Lemma 51 and \(\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{l}}=0\), the third step follows from Fact 16, and the last step follows from Fact 16.
By simple algebra, we can combine Eq. (3) and Eq. (4) as:
\[\frac{\mathrm{d}^{2}\alpha(x)_{j_{1}}^{-1}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}=2 \alpha(x)_{j_{1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l} \rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle-\alpha(x)_{j_ {1}}^{-1}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_ {1}],l}\rangle.\]
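The closed form above is easy to sanity-check numerically. Below is a minimal sketch (our own illustration, not part of the original development), assuming \(f(x)_{j_{1}}=\exp(\mathsf{A}_{[j_{1}]}x)/\langle\exp(\mathsf{A}_{[j_{1}]}x),\mathbf{1}_{n}\rangle\) and \(\alpha(x)_{j_{1}}=\langle\exp(\mathsf{A}_{[j_{1}]}x),\mathbf{1}_{n}\rangle\), which is the standard reading of Definition 43; the matrix `A` stands in for a single block \(\mathsf{A}_{[j_{1}]}\), and all sizes and indices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d2 = 4, 9                                    # n rows per block, d2 = d^2 variables
A = rng.normal(scale=0.3, size=(n, d2))         # stand-in for the block A_[j1]
x = rng.normal(scale=0.3, size=d2)

alpha_inv = lambda v: 1.0 / np.exp(A @ v).sum()           # alpha(x)_{j1}^{-1}
f = lambda v: np.exp(A @ v) / np.exp(A @ v).sum()         # f(x)_{j1}

i, l, h = 2, 5, 1e-4
ei, el = np.eye(d2)[i], np.eye(d2)[l]
# central finite difference for d^2 alpha^{-1} / dx_i dx_l
fd = (alpha_inv(x + h*ei + h*el) - alpha_inv(x + h*ei - h*el)
      - alpha_inv(x - h*ei + h*el) + alpha_inv(x - h*ei - h*el)) / (4 * h**2)
# closed form from Lemma 54
fx = f(x)
closed = (2 * alpha_inv(x) * (fx @ A[:, l]) * (fx @ A[:, i])
          - alpha_inv(x) * (fx @ (A[:, i] * A[:, l])))
assert abs(fd - closed) < 1e-6, (fd, closed)
```

The central second difference agrees with the closed form up to the expected \(O(h^{2})\) truncation error.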
### Second Order Derivatives of \(f(x)_{j_{1}}\)
In this section, we start to compute the second-order derivative of \(f(x)_{j_{1}}\).
**Lemma 55**.: _If the following conditions hold_
* _Let_ \(f\) _be defined as in Definition_ 43_._
* _Let_ \(x\in\mathbb{R}^{d^{2}}\)_, satisfying_ \(x=\operatorname{vec}(X)\)_._
* _Let_ \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\)_._
* _Let_ \(\mathsf{A}=A_{1}\otimes A_{2}\)_._
_Then, we have_
* _For each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}^{2}f(x)_{j_{1}}}{\mathrm{d}x_{i}^{2}}= f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}^{2}-2f(x)_{j_{1}}\circ \mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2} \rangle+2f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^ {2}.\]
* _For each_ \(i,l\in[d^{2}]\)_,_ \[\frac{\mathrm{d}^{2}f(x)_{j_{1}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}= f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}-f(x)_{j_ {1}}\circ\mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\] \[-f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle+2f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{ A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\] \[-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i} \circ\mathsf{A}_{[j_{1}],l}\rangle.\]
Proof.: We first consider \(\frac{\mathrm{d}^{2}f(x)_{j_{1}}}{\mathrm{d}x_{i}^{2}}\).
We have
\[\frac{\mathrm{d}^{2}f(x)_{j_{1}}}{\mathrm{d}x_{i}^{2}}= \ \frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}f(x)_{j_{1}}}{ \mathrm{d}x_{i}})\] \[= \ \frac{\mathrm{d}(f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}-f(x)_{j_ {1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{i}}\] \[= \ \frac{\mathrm{d}f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}}{ \mathrm{d}x_{i}}-\frac{\mathrm{d}f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}}, \mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}}, \tag{5}\]
where the first step follows from simple algebra, the second step follows from **Part 7** of Lemma 51, and the third step follows from Fact 17.
To compute the first term of Eq. (5), we have
\[\frac{\mathrm{d}f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}}{\mathrm{ d}x_{i}}= f(x)_{j_{1}}\circ\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{i}}+ \frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{i}}\circ\mathsf{A}_{[j_{1}],i}\] \[= \ (f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}-f(x)_{j_{1}}\cdot \langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)\circ\mathsf{A}_{[j_{1}],i}\] \[= f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}^{2}-f(x)_{j_{1}}\circ \mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle, \tag{6}\]
where the first step follows from Fact 17, the second step follows from \(\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{i}}=0\) and **Part 7** of Lemma 51, and the last step follows from the property of Hadamard product.
To compute the second term of Eq. (5), we have
\[\frac{\mathrm{d}f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}}= \ \frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{i}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle+f(x)_{j_{1}}\cdot\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}}\] \[= \ (f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle+f(x)_{j_{1}}\cdot\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}}\] \[= \ (f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle+f(x)_{j_{1}}\cdot(\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle-\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2})\] \[= \ f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2}+f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2}\] \[= \ f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle+f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle-2f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2}, \tag{7}\]
where the first step follows from Fact 17, the second step follows from **Part 7** of Lemma 51, the third step follows from the proof of Lemma 54 (see Eq. (2)), and the fourth and the fifth steps follow from simple algebra.
Combining Eq. (5), Eq. (6), and Eq. (7), we have
\[\frac{\mathrm{d}^{2}f(x)_{j_{1}}}{\mathrm{d}x_{i}^{2}}= \ f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}^{2}-2f(x)_{j_{1}}\circ \mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2 }\rangle+2f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^ {2}.\]
Now, we consider \(\frac{\mathrm{d}^{2}f(x)_{j_{1}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}\).
We have
\[\frac{\mathrm{d}^{2}f(x)_{j_{1}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}} = \frac{\mathrm{d}}{\mathrm{d}x_{l}}(\frac{\mathrm{d}f(x)_{j_{1}}}{ \mathrm{d}x_{i}}) \tag{8}\] \[= \frac{\mathrm{d}(f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}-f(x)_{j_{ 1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{l}}\] \[= \frac{\mathrm{d}f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}}{\mathrm{ d}x_{l}}-\frac{\mathrm{d}f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i} \rangle}{\mathrm{d}x_{l}},\]
where the first step follows from simple algebra, the second step follows from **Part 7** of Lemma 51, and the third step follows from Fact 17.
To compute the first term of Eq. (8), we have
\[\frac{\mathrm{d}f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}}{\mathrm{ d}x_{l}} = f(x)_{j_{1}}\circ\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{l}}+ \frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{l}}\circ\mathsf{A}_{[j_{1}],i} \tag{9}\] \[= (f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}-f(x)_{j_{1}}\cdot\langle f (x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle)\circ\mathsf{A}_{[j_{1}],i}\] \[= f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l }-f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[ j_{1}],l}\rangle,\]
where the first step follows from Fact 17, the second step follows from \(\frac{\mathrm{d}\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{l}}=0\) and **Part 7** of Lemma 51, the third step follows from the property of Hadamard product.
To compute the second term of Eq. (8), we have
\[\frac{\mathrm{d}f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{ [j_{1}],i}\rangle}{\mathrm{d}x_{l}} = \frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{l}}\cdot\langle f(x)_{j _{1}},\mathsf{A}_{[j_{1}],i}\rangle+f(x)_{j_{1}}\cdot\frac{\mathrm{d}\langle f (x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{l}}\] \[= (f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}-f(x)_{j_{1}}\cdot\langle f (x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle)\cdot\langle f(x)_{j_{1}},\mathsf{A} _{[j_{1}],i}\rangle\] \[+f(x)_{j_{1}}\cdot\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A} _{[j_{1}],i}\rangle}{\mathrm{d}x_{l}}\]
\[= (f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}-f(x)_{j_{1}}\cdot\langle f( x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle)\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad+f(x)_{j_{1}}\cdot(\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}], i}\circ\mathsf{A}_{[j_{1}],l}\rangle-\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l} \rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)\] \[= f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{ A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\] \[\quad+f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i }\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}], l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle, \tag{10}\]
where the first step follows from Fact 17, the second step follows from **Part 7** of Lemma 51, the third step follows from the proof of Lemma 54 (see Eq. (4)), and the fourth and the fifth steps follow from simple algebra.
Combining Eq. (8), Eq. (9), and Eq. (10), we have
\[\frac{\mathrm{d}^{2}f(x)_{j_{1}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}} = f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}-f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{ A}_{[j_{1}],l}\rangle-f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}\cdot\langle f(x)_{j_{1} },\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad+2f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle-f(x)_{j_{1}} \cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle.\]
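The same finite-difference test applies to the vector-valued second derivatives of Lemma 55. The following sketch (again our own check, under the same softmax-style reading of \(f\) as above) compares the mixed-derivative formula against a central second difference, component by component.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d2 = 4, 9
A = rng.normal(scale=0.3, size=(n, d2))         # stand-in for the block A_[j1]
x = rng.normal(scale=0.3, size=d2)
f = lambda v: np.exp(A @ v) / np.exp(A @ v).sum()

i, l, h = 0, 3, 1e-4
ei, el = np.eye(d2)[i], np.eye(d2)[l]
fd = (f(x + h*ei + h*el) - f(x + h*ei - h*el)
      - f(x - h*ei + h*el) + f(x - h*ei - h*el)) / (4 * h**2)

fx, Ai, Al = f(x), A[:, i], A[:, l]
closed = (fx * Ai * Al                           # f o A_i o A_l
          - fx * Ai * (fx @ Al)                  # - f o A_i * <f, A_l>
          - fx * Al * (fx @ Ai)                  # - f o A_l * <f, A_i>
          + 2 * fx * (fx @ Ai) * (fx @ Al)       # + 2 f * <f, A_i> * <f, A_l>
          - fx * (fx @ (Ai * Al)))               # - f * <f, A_i o A_l>
assert np.allclose(fd, closed, atol=1e-6)
```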
### Second Order Derivatives of \(L_{\mathrm{exp}}\)
In this section, we start to compute the second-order derivative of \(L_{\mathrm{exp}}\).
**Lemma 56**.: _If the following conditions hold_
* _Let_ \(L_{\mathrm{exp}}\) _be defined as in Definition_ 45_._
* _Let_ \(f\) _be defined as in Definition_ 43_._
* _Let_ \(c\) _be defined as in Definition_ 44_._
* _Let_ \(x\in\mathbb{R}^{d^{2}}\)_, satisfying_ \(x=\mathrm{vec}(X)\)_._
* _Let_ \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\)_._
* _Let_ \(\mathsf{A}=A_{1}\otimes A_{2}\)_._
_Then, we have_
* _For each_ \(i\in[d^{2}]\)_,_ \[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}^{2}}= \sum_{j_{1}=1}^{n}(\|f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\|_{2}^{2}-\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}}^{2},\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad+\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}^{2}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)\] \[\quad-\sum_{j_{1}=1}^{n}(\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad-\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2})\] \[\quad-\sum_{j_{1}=1}^{n}(\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle\] \[\quad-\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2}),\]
* _For each_ \(i,l\in[d^{2}]\)_,_ \[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}= \sum_{j_{1}=1}^{n}(\langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle-\langle f(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\] \[+\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle)\] \[- \sum_{j_{1}=1}^{n}(\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[- \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)\] \[- \sum_{j_{1}=1}^{n}(\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle\] \[- \langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle).\]
Proof.: We have
\[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}^{2}}= \frac{\mathrm{d}}{\mathrm{d}x_{i}}(\frac{\mathrm{d}L_{\mathrm{exp}}}{\mathrm{d}x_{i}})\] \[= \frac{\mathrm{d}(\sum_{j_{1}=1}^{n}(\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle))}{\mathrm{d}x_{i}}\] \[= \frac{\mathrm{d}(\sum_{j_{1}=1}^{n}\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{i}}-\frac{\mathrm{d}(\sum_{j_{1}=1}^{n}\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{i}}\] \[= \sum_{j_{1}=1}^{n}\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}}-\sum_{j_{1}=1}^{n}\frac{\mathrm{d}(\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{i}}\] \[= \sum_{j_{1}=1}^{n}\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}}\] \[- \sum_{j_{1}=1}^{n}\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle}{\mathrm{d}x_{i}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[- \sum_{j_{1}=1}^{n}\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}}, \tag{11}\]
where the first step follows from simple algebra, the second step follows from **Part 9** of Lemma 51, the third step follows from the property of the summation, the fourth step follows from Fact 17, and the last step follows from Fact 17.
First, we compute the first term of Eq. (11):
\[\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}}\] \[= \langle\frac{\mathrm{d}c(x)_{j_{1}}}{\mathrm{d}x_{i}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle+\langle c(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{i}}\rangle\] \[= \langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle,f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle+\langle c(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{i}}\rangle\] \[= \langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle,f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad+\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}^{2}-f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\rangle\] \[= \|f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\|_{2}^{2}-\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}}^{2},\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad+\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}^{2}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle, \tag{12}\]
where the first step follows from the definition of the inner product, the second step follows from combining **Part 7** and **Part 8** of Lemma 51, the third step follows from the proof of Lemma 55 (see Eq. (6)), and the last step follows from Fact 16.
Then, we compute the second term of Eq. (11).
Note that
\[\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle}{\mathrm{d}x_{i}} = \langle\frac{\mathrm{d}c(x)_{j_{1}}}{\mathrm{d}x_{i}},f(x)_{j_{1}}\rangle+\langle c(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{i}}\rangle\] \[= \langle f(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{i}}\rangle+\langle c(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{i}}\rangle\] \[= \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\rangle\] \[= \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle-\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\rangle\] \[= \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle-\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle,\]
where the first step follows from the definition of the inner product, the second step follows from **Part 8** of Lemma 51, the third step follows from **Part 7** of Lemma 51, and the fourth and the fifth steps follow from Fact 16.
Therefore, the second term of Eq. (11) is:
\[\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle}{ \mathrm{d}x_{i}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[= \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j _{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle- \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2}. \tag{13}\]
By applying the proof of Lemma 54 (see Eq. (2)), we can compute the third term of Eq. (11)
\[\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{i}} = \langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot(\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle-\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2}) \tag{14}\] \[= \langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle\] \[-\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2},\]
where the second step follows from simple algebra.
Combining Eq. (11), Eq. (12), Eq. (13), Eq. (14), we have
\[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}^{2}}= \sum_{j_{1}=1}^{n}(\|f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\|_{2}^{2}-\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}}^{2},\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad+\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}^{2}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)\] \[-\sum_{j_{1}=1}^{n}(\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad-\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2})\] \[-\sum_{j_{1}=1}^{n}(\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}^{2}\rangle\] \[\quad-\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle^{2}).\]
Now, consider \(\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}\).
We have
\[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}} =\frac{\mathrm{d}}{\mathrm{d}x_{l}}(\frac{\mathrm{d}L_{\mathrm{ exp}}}{\mathrm{d}x_{i}})\] \[=\frac{\mathrm{d}(\sum_{j_{1}=1}^{n}(\langle c(x)_{j_{1}},f(x)_{j _{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}} \rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle))}{\mathrm{d}x_{ l}}\] \[=\frac{\mathrm{d}(\sum_{j_{1}=1}^{n}\langle c(x)_{j_{1}},f(x)_{j _{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{l}}-\frac{\mathrm{d}( \sum_{j_{1}=1}^{n}\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{ j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{l}}\] \[=\sum_{j_{1}=1}^{n}\frac{\mathrm{d}(\langle c(x)_{j_{1}},f(x)_{j _{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle)}{\mathrm{d}x_{l}}\] \[\quad-\sum_{j_{1}=1}^{n}\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x) _{j_{1}}\rangle}{\mathrm{d}x_{l}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1} ],i}\rangle\] \[\quad-\sum_{j_{1}=1}^{n}\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle \cdot\frac{\mathrm{d}\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{ \mathrm{d}x_{l}}, \tag{15}\]
where the first step follows from simple algebra, the second step follows from **Part 9** of Lemma 51, the third step follows from the property of the summation, the fourth step follows from Fact 17, and the last step follows from Fact 17.
First, we compute the first term of Eq. (15):
\[\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{l}}\] \[=\langle\frac{\mathrm{d}c(x)_{j_{1}}}{\mathrm{d}x_{l}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle+\langle c(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{l}}\rangle\] \[=\langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle,f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle+\langle c(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}}{\mathrm{d}x_{l}}\rangle\] \[=\langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle,f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\] \[\quad+\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}-f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\rangle\] \[=\langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle-\langle f(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\] \[\quad+\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle, \tag{16}\]
where the first step follows from the definition of the inner product, the second step follows from combining **Part 7** and **Part 8** of Lemma 51, the third step follows from the proof of Lemma 55 (see Eq. (9)), and the last step follows from Fact 16.
Then, we compute the second term of Eq. (15).
Note that
\[\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle}{\mathrm{d}x_{l}}=\langle\frac{\mathrm{d}c(x)_{j_{1}}}{\mathrm{d}x_{l}},f(x)_{j_{1}}\rangle+\langle c(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{l}}\rangle\]
\[= \langle f(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{l}} \rangle+\langle c(x)_{j_{1}},\frac{\mathrm{d}f(x)_{j_{1}}}{\mathrm{d}x_{l}}\rangle\] \[= \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_ {1}],l}-f(x)_{j_{1}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\rangle\] \[= \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_ {1}],l}\rangle-\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\cdot\langle f(x)_ {j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\rangle\] \[= \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j _{1}],l}\rangle-\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot \langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle,\]
where the first step follows from the definition of the inner product, the second step follows from **Part 8** of Lemma 51, the third step follows from **Part 7** of Lemma 51, and the fourth and the fifth steps follow from Fact 16.
Therefore, the second term of Eq. (15) is:
\[\frac{\mathrm{d}\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle}{\mathrm{d}x_{l}}\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[= \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[- \langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle. \tag{17}\]
By applying the proof of Lemma 54 (see Eq. (4)), we can compute the third term of Eq. (15)
\[\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\frac{\mathrm{d} \langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle}{\mathrm{d}x_{l}}\] \[= \langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle(f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle-\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i }\rangle)\] \[= \langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle \cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle, \tag{18}\]
where the second step follows from simple algebra.
Combining Eq. (15), Eq. (16), Eq. (17), Eq. (18), we have
\[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}= \sum_{j_{1}=1}^{n}(\langle f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle-\langle f(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\] \[+\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle-\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle)\] \[- \sum_{j_{1}=1}^{n}(\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle\] \[-\langle f(x)_{j_{1}}+c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle)\] \[- \sum_{j_{1}=1}^{n}(\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\circ\mathsf{A}_{[j_{1}],l}\rangle\] \[-\langle c(x)_{j_{1}},f(x)_{j_{1}}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],i}\rangle).\]
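As a sanity check on the signs above (in particular the \(-\langle c(x)_{j_{1}},f(x)_{j_{1}}\circ\mathsf{A}_{[j_{1}],i}\rangle\cdot\langle f(x)_{j_{1}},\mathsf{A}_{[j_{1}],l}\rangle\) term, which is forced by consistency with the \(i=l\) case), the sketch below compares the formula with a finite difference for a single summand \(j_{1}\), assuming \(L_{\mathrm{exp}}\) specializes to \(\frac{1}{2}\|f(x)_{j_{1}}-b_{j_{1}}\|_{2}^{2}\) with \(c(x)_{j_{1}}=f(x)_{j_{1}}-b_{j_{1}}\); this is our reading of Definitions 44 and 45.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d2 = 4, 9
A = rng.normal(scale=0.3, size=(n, d2))
b = rng.normal(scale=0.5, size=n)
x = rng.normal(scale=0.3, size=d2)

f = lambda v: np.exp(A @ v) / np.exp(A @ v).sum()
L = lambda v: 0.5 * np.sum((f(v) - b) ** 2)     # single-summand L_exp (assumed form)

i, l, h = 1, 4, 1e-4
ei, el = np.eye(d2)[i], np.eye(d2)[l]
fd = (L(x + h*ei + h*el) - L(x + h*ei - h*el)
      - L(x - h*ei + h*el) + L(x - h*ei - h*el)) / (4 * h**2)

fx, Ai, Al = f(x), A[:, i], A[:, l]
c = fx - b
g1 = ((fx * Al) @ (fx * Ai) - (fx @ (fx * Ai)) * (fx @ Al)
      + c @ (fx * Ai * Al) - (c @ (fx * Ai)) * (fx @ Al))
g2 = ((fx + c) @ (fx * Al) * (fx @ Ai)
      - ((fx + c) @ fx) * (fx @ Al) * (fx @ Ai))
g3 = ((c @ fx) * (fx @ (Ai * Al))
      - (c @ fx) * (fx @ Al) * (fx @ Ai))
assert abs(fd - (g1 - g2 - g3)) < 1e-6
```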
### Hessian of A Single Loss
This section begins our Hessian computation for a single loss. The following result also appears in the earlier study [10]. Our setting is an \(n\)-fold extension of the same problem: our approach is a tensor-based version that applies the Hessian property iteratively across \(n\) instances.
**Lemma 57**.: _We have_
* **Part 1.** \[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}^{2}} = (-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})^{\top}(- \langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})\] \[+ c^{\top}(2\langle f(x),A_{*,i}\rangle^{2}\cdot f(x)-\langle f(x) \circ A_{*,i},A_{*,i}\rangle\cdot f(x)-2\langle f(x),A_{*,i}\rangle\cdot f(x) \circ A_{*,i})\] \[+ c^{\top}f(x)\circ A_{*,i}\circ A_{*,i}.\]
* **Part 2.** \[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}\mathrm{d}x_{ j}} = (-\langle f(x),A_{*,j}\rangle\cdot f(x)+f(x)\circ A_{*,j})^{\top} (-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})\] \[+ c^{\top}(2\langle f(x),A_{*,i}\rangle\cdot\langle f(x),A_{*,j} \rangle\cdot f(x)-\langle f(x)\circ A_{*,j},A_{*,i}\rangle\cdot f(x)\] \[- \langle f(x),A_{*,i}\rangle\cdot f(x)\circ A_{*,j}-\langle f(x),A_ {*,j}\rangle\cdot f(x)\circ A_{*,i}+f(x)\circ A_{*,j}\circ A_{*,i})\]
For completeness, we still provide a proof.
Proof.: **Proof of Part 1.**
Note that in [10],
\[\frac{\mathrm{d}L_{\mathrm{exp}}}{\mathrm{d}x_{i}}=(f(x)-b)^{ \top}(-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})\] \[\frac{\mathrm{d}(f(x)-b)}{\mathrm{d}x_{i}}=\frac{\mathrm{d}f(x)}{ \mathrm{d}x_{i}}=-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i}\]
Therefore, we have
\[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}^{2}} = \frac{\mathrm{d}}{\mathrm{d}x_{i}}((f(x)-b)^{\top}(-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i}))\] \[= (-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})^{\top}(-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})\] \[+ (f(x)-b)^{\top}\frac{\mathrm{d}}{\mathrm{d}x_{i}}(-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})\]
Analyzing the second term of the above equation, we have
\[\frac{\mathrm{d}}{\mathrm{d}x_{i}}(-\langle f(x),A_{*,i}\rangle \cdot f(x)) = -\frac{\mathrm{d}}{\mathrm{d}x_{i}}(\langle f(x),A_{*,i}\rangle )\cdot f(x)-\langle f(x),A_{*,i}\rangle\cdot(-\langle f(x),A_{*,i}\rangle \cdot f(x)+f(x)\circ A_{*,i})\] \[= -\langle-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i},A_{*,i}\rangle\cdot f(x)\] \[+ \langle f(x),A_{*,i}\rangle^{2}\cdot f(x)-\langle f(x),A_{*,i} \rangle\cdot f(x)\circ A_{*,i}\] \[= \langle f(x),A_{*,i}\rangle^{2}\cdot f(x)-\langle f(x)\circ A_{*, i},A_{*,i}\rangle\cdot f(x)\] \[+ \langle f(x),A_{*,i}\rangle^{2}\cdot f(x)-\langle f(x),A_{*,i} \rangle\cdot f(x)\circ A_{*,i}.\]
And, we have
\[\frac{\mathrm{d}}{\mathrm{d}x_{i}}(f(x)\circ A_{*,i})=(-\langle f(x),A_{*,i} \rangle\cdot f(x)+f(x)\circ A_{*,i})\circ A_{*,i}\]
\[= -\langle f(x),A_{*,i}\rangle\cdot f(x)\circ A_{*,i}+f(x)\circ A_{*,i} \circ A_{*,i}.\]
Combining everything together, we have
\[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}^{2}} = (-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})^{\top}(-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})\] \[+ c^{\top}(2\langle f(x),A_{*,i}\rangle^{2}\cdot f(x)-\langle f(x)\circ A_{*,i},A_{*,i}\rangle\cdot f(x)-2\langle f(x),A_{*,i}\rangle\cdot f(x)\circ A_{*,i}+f(x)\circ A_{*,i}\circ A_{*,i}),\]
which is the claim of **Part 1** with \(c=f(x)-b\).
**Proof of Part 2.** The same computation, differentiating with respect to \(x_{j}\) instead of \(x_{i}\) in the outer derivative, gives
\[\frac{\mathrm{d}^{2}L_{\mathrm{exp}}}{\mathrm{d}x_{i}\mathrm{d}x_{j}} = (-\langle f(x),A_{*,j}\rangle\cdot f(x)+f(x)\circ A_{*,j})^{\top}(-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i})\] \[+ c^{\top}(2\langle f(x),A_{*,i}\rangle\cdot\langle f(x),A_{*,j}\rangle\cdot f(x)-\langle f(x)\circ A_{*,j},A_{*,i}\rangle\cdot f(x)-\langle f(x),A_{*,i}\rangle\cdot f(x)\circ A_{*,j}-\langle f(x),A_{*,j}\rangle\cdot f(x)\circ A_{*,i}+f(x)\circ A_{*,j}\circ A_{*,i})\]
### Checking \(B_{1}\) and \(B_{2}\)
In this section, we introduce the notations \(B_{1}(x)\) and \(B_{2}(x)\) to simplify the Hessian, following [10].
**Lemma 58** ([10]).: _If the following conditions hold_
* _Let_ \(B_{1}(x)\in\mathbb{R}^{n\times n}\) _be a matrix satisfying_ \[A_{*,i}^{\top}B_{1}(x)A_{*,j}:=(-\langle f(x),A_{*,j}\rangle\cdot f(x)+f(x) \circ A_{*,j})^{\top}(-\langle f(x),A_{*,i}\rangle\cdot f(x)+f(x)\circ A_{*,i }).\]
* _Let_ \(B_{2}(x)\in\mathbb{R}^{n\times n}\) _be a matrix satisfying_ \[A_{*,i}^{\top}B_{2}(x)A_{*,j} :=\,c^{\top}(2\langle f(x),A_{*,i}\rangle\cdot\langle f(x),A_{*,j}\rangle\cdot f(x)-\langle f(x)\circ A_{*,j},A_{*,i}\rangle\cdot f(x)\] \[-\,\langle f(x),A_{*,i}\rangle\cdot f(x)\circ A_{*,j}-\langle f(x),A_{*,j}\rangle\cdot f(x)\circ A_{*,i}+f(x)\circ A_{*,j}\circ A_{*,i}).\]
_Then, we have_
* _Part 1._ \[\frac{\mathrm{d}^{2}L}{\mathrm{d}x_{i}^{2}}=A_{*,i}^{\top}B_{1}(x)A_{*,i}+A_{*,i}^{\top}B_{2}(x)A_{*,i}.\]
* _Part 2._ \[\frac{\mathrm{d}^{2}L}{\mathrm{d}x_{i}\mathrm{d}x_{j}}=A_{*,i}^{\top}B_{1}(x)A _{*,j}+A_{*,i}^{\top}B_{2}(x)A_{*,j}.\]
Proof.: This Lemma follows directly from Lemma 57.
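Lemma 58 only pins \(B_{1}(x)\) and \(B_{2}(x)\) down through their quadratic forms. One explicit choice consistent with Lemma 57 is \(B_{1}(x)=J^{\top}J\) with \(J=\operatorname{diag}(f(x))-f(x)f(x)^{\top}\), and \(B_{2}(x)=2\langle c,f(x)\rangle f(x)f(x)^{\top}-\langle c,f(x)\rangle\operatorname{diag}(f(x))-f(x)(c\circ f(x))^{\top}-(c\circ f(x))f(x)^{\top}+\operatorname{diag}(c\circ f(x))\). The sketch below is our own illustration (taking \(c=f(x)-b\), as in the proof of Lemma 57, with arbitrary sizes); it checks that \(A^{\top}(B_{1}+B_{2})A\) matches a finite-difference Hessian of \(L_{\mathrm{exp}}(x)=\frac{1}{2}\|f(x)-b\|_{2}^{2}\).

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 5, 3
A = rng.normal(scale=0.5, size=(n, d))
b = rng.normal(size=n)
x = rng.normal(scale=0.5, size=d)

f = lambda v: np.exp(A @ v) / np.exp(A @ v).sum()
L = lambda v: 0.5 * np.sum((f(v) - b) ** 2)

fx = f(x)
c = fx - b
J = np.diag(fx) - np.outer(fx, fx)              # df/dx = J A, so B1 = J^T J
B1 = J.T @ J
B2 = (2 * (c @ fx) * np.outer(fx, fx) - (c @ fx) * np.diag(fx)
      - np.outer(fx, c * fx) - np.outer(c * fx, fx) + np.diag(c * fx))
H_closed = A.T @ (B1 + B2) @ A

h = 1e-4
H_fd = np.zeros((d, d))                          # finite-difference Hessian
for i in range(d):
    for j in range(d):
        ei, ej = np.eye(d)[i], np.eye(d)[j]
        H_fd[i, j] = (L(x + h*ei + h*ej) - L(x + h*ei - h*ej)
                      - L(x - h*ei + h*ej) + L(x - h*ei - h*ej)) / (4 * h**2)
assert np.allclose(H_closed, H_fd, atol=1e-5)
```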
## Appendix E Sketching
In Section E.1, we introduce the iterative sketching-based federated learning algorithm. In Section E.2, we present the \(\mathsf{sk}/\mathsf{desk}\) via coordinate-wise embedding. In Section E.3, we introduce the related work of sketching. In Section A.3, we introduce the basic definition and property of sketching. In Section E.4, we prove the upper bound of \(\|\widetilde{g}_{r}^{t,k}\|_{2}^{2}\). In Section E.5, we prove the lower bound of \(\langle\widetilde{u}_{r}^{t,k}-x^{*},\widetilde{g}_{r}^{t,k}\rangle\). In Section E.6, we introduce the induction tools. In Section E.7, we give the formal proof to show the convergence of our gradient coin system.
### Iterative Sketching-based Federated Learning Algorithm
In this section, we introduce the iterative sketching-based federated learning algorithm proposed in [11] (see Algorithm 5). The algorithm leverages sketching matrices to address communication efficiency issues, ensuring that our gradient coin system operates efficiently.
### sk/desk
In this section, we introduce the \(\mathsf{sk}/\mathsf{desk}\) via coordinate-wise embedding [11, 12, 11, 13, 14, 15, 16, 17]. First, we give a formal definition of \(a\)-coordinate-wise embedding.
**Definition 59** (\(a\)-coordinate-wise embedding, Definition 4.1 in [11]).: _Let \(R\in\mathbb{R}^{b_{\mathrm{sketch}}\times d}\) be a randomized matrix._
_Let \(g,h\in\mathbb{R}^{d}\) be two arbitrary vectors._
\(R\) _satisfies \(a\)-coordinate-wise embedding if_
\[\operatorname*{\mathbb{E}}_{R\sim\Pi}[h^{\top}R^{\top}Rg]=h^{\top}g\]
_and_
\[\operatorname*{\mathbb{E}}_{R\sim\Pi}[(h^{\top}R^{\top}Rg)^{2}] \leq(h^{\top}g)^{2}+\frac{a}{b_{\mathrm{sketch}}}\|h\|_{2}^{2}\cdot\|g\|_{2}^ {2}\]
Definition 59 naturally connects the concept of coordinate-wise embedding with the \(\mathsf{sk}_{t}/\mathsf{desk}_{t}\) operators. It guarantees that the processed gradient \(\mathsf{desk}_{t}\circ\mathsf{sk}_{t}(g)\) stays "close" to the true gradient \(g\), which preserves the convergence of the algorithm. Common sketching matrices typically have a small constant value for their coordinate-wise embedding parameter \(a\). If \(h\) is a one-hot vector \(e_{i}\), then the conditions of \(a\)-coordinate-wise embedding listed in Definition 59 become
\[\mathop{\mathbb{E}}_{R\sim\Pi}[R^{\top}Rg]=g\]
and
\[\mathop{\mathbb{E}}_{R\sim\Pi}[\|R^{\top}Rg\|_{2}^{2}]\leq(1+a\cdot \frac{d}{b_{\text{sketch}}})\cdot\|g\|_{2}^{2}.\]
Therefore, if we let the sketching be
\[\mathsf{sk}_{t}=R_{t}\in\mathbb{R}^{b_{\text{sketch}}\times d} \tag{19}\]
and the de-sketching be
\[\mathsf{desk}_{t}=R_{t}^{\top}\in\mathbb{R}^{d\times b_{\text{sketch}}}, \tag{20}\]
then for every iteration \(t\geq 1\), with independent random matrices \(R_{t}\) of sketching dimension \(b_{\text{sketch}}\), we obtain an unbiased sketching/de-sketching scheme with bounded variance (see the following theorem).
**Theorem 60** (Theorem 4.2 in [14]).: _Let \(t\in\mathbb{Z}_{+}\)._
_Let \(R_{t}\) be arbitrary matrices in \(\mathbb{R}^{b_{\mathrm{sketch}}\times d}\) such that, for each \(t\), \(R_{t}\) satisfies the \(a\)-coordinate-wise embedding property (see Definition 59)._
_Let \(\mathsf{sk}_{t}\) and \(\mathsf{desk}_{t}\) be defined by Eq. (19) and Eq. (20)._
_Then, we have: 1) for each iteration \(t\), \((\mathsf{sk}_{t},\mathsf{desk}_{t})\) is independent; 2) \(\mathsf{desk}_{t}\) and \(\mathsf{sk}_{t}\) are both linear operators; 3)_
\[\mathbb{E}[\mathsf{desk}_{t}(\mathsf{sk}_{t}(h))]=h,\]
_for each \(h\in\mathbb{R}^{d}\), and 4)._
\[\mathbb{E}[\|\mathsf{desk}_{t}(\mathsf{sk}_{t}(h))\|_{2}^{2}]\leq(1+\alpha) \cdot\|h\|_{2}^{2},\]
_for each \(h\in\mathbb{R}^{d}\) and \(\alpha=a\cdot d/b_{\mathrm{sketch}}\)._
_Additionally, for \(\alpha>0\), Table 1 shows the typical sketching matrices._
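A quick Monte-Carlo experiment illustrates both guarantees of Theorem 60 for the random Gaussian sketch of Table 1. The sketch below is our own illustration; the normalization (entries drawn i.i.d. from \(\mathcal{N}(0,1/b_{\mathrm{sketch}})\), so that \(\mathbb{E}[R^{\top}R]=I_{d}\)) is an assumption matching the usual convention.

```python
import numpy as np

rng = np.random.default_rng(4)
d, b_sketch, trials = 32, 8, 20000
g = rng.normal(size=d)

est, second_moment = np.zeros(d), 0.0
for _ in range(trials):
    R = rng.normal(size=(b_sketch, d)) / np.sqrt(b_sketch)  # sk_t
    v = R.T @ (R @ g)                                       # desk_t(sk_t(g))
    est += v / trials
    second_moment += (v @ v) / trials

print(np.linalg.norm(est - g))                  # close to 0: E[desk(sk(g))] = g
print(second_moment / (g @ g))                  # empirical second-moment ratio
print(1 + 3 * d / b_sketch)                     # Table 1 bound with a = 3
```

The empirical second-moment ratio sits well below the \(1+3d/b_{\mathrm{sketch}}\) bound from the first row of Table 1.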
### Related Work
Sketching is a powerful tool that has been applied to numerous machine learning problems. Typically, there are two ways to apply sketching matrices. The first approach involves applying sketching once (or a constant number of times), known as "sketch-and-solve". The second approach entails applying sketching in each iteration of the optimization algorithm while simultaneously designing a robust analysis framework. This is referred to as "iterate-and-sketch". The present work falls into the second category.
Sketch-and-solve can be applied in various fields, including linear regression [13, 14], low-rank approximation with Frobenius norm [13, 14, 15], matrix CUR decomposition [16, 17, 18], weighted low-rank approximation [19], entrywise \(\ell_{1}\) norm low-rank approximation [14, 18], \(\ell_{p}\) norm low-rank approximation [13], \(\ell_{0}\) norm low-rank approximation [13], Schatten \(p\)-norm low rank approximation [14], \(\ell_{0}\)-norm low rank approximation [14], tensor regression [17, 18, 19, 20], tensor low-rank approximation [17], and general norm column subset selection [17].
Iterate-and-sketch has been applied to many fundamental problems, such as linear programming [1, 18, 19, 20, 21], empirical risk minimization [21, 22], support vector machines [23], semi-definite programming [20], John's Ellipsoid computation [16, 17], the Frank-Wolfe algorithm [21, 22], reinforcement learning [21], softmax-inspired regression [20, 23, 24, 25], federated learning [17], the discrepancy problem [18, 22], and non-convex optimization [17, 21, 22, 23, 24, 26, 27].
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Reference** & **Sketching matrix** & **Definition** & **Param \(\alpha\)** \\ \hline
folklore & Random Gaussian & Definition 18 & \(3d/b_{\mathrm{sketch}}\) \\ \hline
[13] & SRHT & Definition 19 & \(2d/b_{\mathrm{sketch}}\) \\ \hline
[1] & AMS sketch & Definition 20 & \(2d/b_{\mathrm{sketch}}\) \\ \hline
[16] & Count-sketch & Definition 21 & \(3d/b_{\mathrm{sketch}}\) \\ \hline
[14] & Sparse embedding & Definition 22, 23 & \(2d/b_{\mathrm{sketch}}\) \\ \hline
\end{tabular}
\end{table}
Table 1: The \(\alpha\) value (coordinate-wise embedding parameter) with the corresponding sketching matrix.
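The Count-sketch row can be checked the same way. In the sketch below, each column of \(R\) carries a single random sign in a uniformly random row; this is our reading of Definition 21, which is not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(8)
d, b_sketch, trials = 64, 16, 5000
g = rng.normal(size=d)

second_moment = 0.0
for _ in range(trials):
    rows = rng.integers(0, b_sketch, size=d)    # one random row per column
    signs = rng.choice([-1.0, 1.0], size=d)     # one random sign per column
    R = np.zeros((b_sketch, d))
    R[rows, np.arange(d)] = signs
    v = R.T @ (R @ g)
    second_moment += (v @ v) / trials

print(second_moment / (g @ g))                  # empirical second-moment ratio
print(1 + 3 * d / b_sketch)                     # Table 1 bound with a = 3
```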
### Upper Bounding \(\|\widetilde{g}_{r}^{t,k}\|_{2}^{2}\)
In this section, the upper bound of \(\|\widetilde{g}_{r}^{t,k}\|_{2}^{2}\) is established.
**Lemma 61**.: _Let \(r\in[N]\)._
_Let \(f_{r}:\mathbb{R}^{d}\rightarrow\mathbb{R}\), \(r\in[N]\), be functions such that each \(f_{r}\) is \(L\)-smooth (see Definition 66) and \(\mu\)-strongly convex (see Definition 65)._
_Then, we have_
\[\|\widetilde{g}_{r}^{t,k}\|_{2}^{2}\leq\ 4L(f(\widetilde{u}_{r}^{t,k})-f(x^{*}))\]
Proof.: We have
\[\|\widetilde{g}_{r}^{t,k}\|_{2}^{2} =\|\widetilde{g}_{r}^{t,k}-\nabla f(\widetilde{u}_{r}^{t,k})+ \nabla f(\widetilde{u}_{r}^{t,k})\|_{2}^{2}\] \[\leq 2\|\widetilde{g}_{r}^{t,k}-\nabla f(\widetilde{u}_{r}^{t,k})\|_ {2}^{2}+2\|\nabla f(\widetilde{u}_{r}^{t,k})\|_{2}^{2}, \tag{21}\]
where the first step is from simple algebra and the second step is by triangle inequality.
Note that
\[\|\widetilde{g}_{r}^{t,k}-\nabla f(\widetilde{u}_{r}^{t,k})\|_{2}^{2}=0. \tag{22}\]
Also, we have
\[\|\nabla f(\widetilde{u}_{r}^{t,k})\|_{2}^{2} =\|\nabla f(\widetilde{u}_{r}^{t,k})-\nabla f(x^{*})\|_{2}^{2}\] \[\leq 2L(f(\widetilde{u}_{r}^{t,k})-f(x^{*})). \tag{23}\]
Combining Eq. (21), Eq. (22), and Eq. (23), we get
\[\|\widetilde{g}_{r}^{t,k}\|_{2}^{2} \leq 2\|\nabla f(\widetilde{u}_{r}^{t,k})\|_{2}^{2}\] \[\leq 4L(f(\widetilde{u}_{r}^{t,k})-f(x^{*}))\]
### Lower Bounding \(\langle\widetilde{u}_{r}^{t,k}-x^{*},\widetilde{g}_{r}^{t,k}\rangle\)
In this section, we find the lower bound of \(\langle\widetilde{u}_{r}^{t,k}-x^{*},\widetilde{g}_{r}^{t,k}\rangle\).
**Lemma 62**.: _Suppose each \(f_{r}\) is \(\mu\)-strongly convex and \(L\)-smooth. Then_
\[\langle\widetilde{u}_{r}^{t,k}-x^{*},\widetilde{g}_{r}^{t,k}\rangle\geq f( \widetilde{u}_{r}^{t,k})-f(x^{*})+\frac{\mu}{2}\|\widetilde{u}_{r}^{t,k}-x^{* }\|_{2}^{2}\]
Proof.: The lower bound on this inner product can be established as follows
\[\langle\widetilde{u}_{r}^{t,k}-x^{*},\widetilde{g}_{r}^{t,k}\rangle=\langle\widetilde{u}_{r}^{t,k}-x^{*},\nabla f_{r}(\widetilde{u}_{r}^{t,k})\rangle\]
By the \(\mu\)-strong convexity of \(f_{r}\), we have
\[\langle\widetilde{u}_{r}^{t,k}-x^{*},\nabla f_{r}(\widetilde{u}_{r}^{t,k}) \rangle\geq f_{r}(\widetilde{u}_{r}^{t,k})-f_{r}(x^{*})+\frac{\mu}{2}\| \widetilde{u}_{r}^{t,k}-x^{*}\|_{2}^{2}\]
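On a concrete strongly convex function this inequality is immediate to verify; the snippet below is a toy check (our own example, taking \(f_{r}(u)=\frac{1}{2}u^{\top}Hu\) with \(H=\operatorname{diag}(1,2,3)\), so \(\mu=1\) and \(x^{*}=0\)) that exercises it on random points.

```python
import numpy as np

rng = np.random.default_rng(7)
H = np.diag([1.0, 2.0, 3.0])
mu = 1.0
x_star = np.zeros(3)
f = lambda u: 0.5 * u @ H @ u
for _ in range(1000):
    u = rng.normal(size=3)
    lhs = (u - x_star) @ (H @ u)                # <u - x*, grad f_r(u)>
    rhs = f(u) - f(x_star) + 0.5 * mu * np.sum((u - x_star) ** 2)
    assert lhs >= rhs - 1e-12
```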
### Induction Tools
We introduce our induction tool in this section.
**Lemma 63**.: _If the following conditions hold:_
* _Suppose each_ \(f_{c}\) _satisfies Assumption_ 71_._
* _Let Theorem_ 60 _hold_
* \(\eta\leq\frac{1}{8(1+\alpha)LK}\)_, where_ \(\alpha\) _is defined as in Theorem_ 60_._
* \(R\sim\Pi\) _is the distribution of sketching matrix._
_Then, for any \((t,k)\neq(1,0)\) and \(r\in[N]\), it follows that_
\[\mathop{\mathbb{E}}_{R\sim\Pi}[\|\widetilde{u}_{r}^{t,k}-x^{*}\|_ {2}^{2}] \leq(1-\mu\eta)\mathop{\mathbb{E}}_{R\sim\Pi}[\|\widetilde{u}_{r}^ {t,k-1}-x^{*}\|_{2}^{2}]-\eta\mathop{\mathbb{E}}_{R\sim\Pi}[f(\widetilde{u}_{r }^{t,k-1})-f(x^{*})]\] \[+\mathbf{1}_{\{k=0\}}\eta^{2}\alpha K(4L\sum_{i=0}^{K-1} \mathop{\mathbb{E}}_{R\sim\Pi}[f(\widetilde{u}_{r}^{t-1,i})-f(x^{*})])\]
Proof.: We have for any \((t,k)\neq(1,0)\),
\[\widetilde{u}_{r}^{t,k}=\widetilde{u}_{r}^{t,k-1}-\eta\cdot \widetilde{g}_{r}^{t,k-1}+1_{\{k=0\}}\cdot\eta\cdot(I_{d}-\mathsf{desk}_{t} \circ\mathsf{sk}_{t})(\sum_{i=0}^{K-1}\widetilde{g}_{r}^{t-1,i}).\]
Therefore, denoting
\[h^{t}:=(I_{d}-\mathsf{desk}_{t}\circ\mathsf{sk}_{t})(\sum_{i=0}^ {K-1}\widetilde{g}_{r}^{t-1,i}) \tag{24}\]
we have
\[\|\widetilde{u}_{r}^{t,k}-x^{*}\|_{2}^{2} =\|\widetilde{u}_{r}^{t,k-1}-x^{*}-\eta\cdot\widetilde{g}_{r}^{t, k-1}+1_{\{k=0\}}\eta\cdot h^{t}\|_{2}^{2}\] \[=\|\widetilde{u}_{r}^{t,k-1}-x^{*}\|_{2}^{2}+\eta^{2}\cdot\| \widetilde{g}_{r}^{t,k-1}\|_{2}^{2}-2\eta\langle\widetilde{u}_{r}^{t,k-1}-x^{ *},\widetilde{g}_{r}^{t,k-1}\rangle\] \[\quad+2\eta 1_{\{k=0\}}\langle\widetilde{u}_{r}^{t,k-1}-x^{*},h^{t} \rangle-2\eta^{2}1_{\{k=0\}}\langle\widetilde{g}_{r}^{t,k-1},h^{t}\rangle\] \[\quad+\eta^{2}1_{\{k=0\}}\cdot\|h^{t}\|_{2}^{2}, \tag{25}\]
where the first step follows from the definition of \(\widetilde{u}_{r}^{t,k}\) (see Algorithm 5), and the second step follows from expanding the squared norm.
For any vector \(h\), we have
\[\mathbb{E}[\mathsf{desk}_{t}(\mathsf{sk}_{t}(h))]=h,\qquad \mathbb{E}[\|\mathsf{desk}_{t}(\mathsf{sk}_{t}(h))\|_{2}^{2}]\leq(1+\alpha) \cdot\|h\|_{2}^{2}\]
Hence, we take expectation over Eq. (25),
\[\mathbb{E}[\|\widetilde{u}_{r}^{t,k}-x^{*}\|_{2}^{2}\ |\ \mathcal{F}_{t}]= \ \mathbb{E}[\|\widetilde{u}_{r}^{t,k-1}-x^{*}\|_{2}^{2}\ |\ \mathcal{F}_{t}]+\eta^{2}\cdot \mathbb{E}[\|\widetilde{g}_{r}^{t,k-1}\|_{2}^{2}\ |\ \mathcal{F}_{t}]\] \[\quad-2\eta\,\mathbb{E}[\langle\widetilde{u}_{r}^{t,k-1}-x^{*}, \widetilde{g}_{r}^{t,k-1}\rangle\ |\ \mathcal{F}_{t}]+1_{\{k=0\}}\cdot\eta^{2}\cdot \mathbb{E}[\|h^{t}\|_{2}^{2}\ |\ \mathcal{F}_{t}] \tag{26}\]
The two inner products involving \(h^{t}\) vanish because \(\mathbb{E}[h^{t}\ |\ \mathcal{F}_{t}]=0\).
Since
\[\mathbb{E}[\|h^{t}\|_{2}^{2}\ |\ \mathcal{F}_{t}] =\ \mathbb{E}[\|(I_{d}-\mathsf{desk}_{t}\circ\mathsf{sk}_{t})(\sum_{i=0}^{K -1}\widetilde{g}_{r}^{t-1,i})\|_{2}^{2}\ |\ \mathcal{F}_{t}]\] \[\leq\ \alpha\,\mathbb{E}[\|\sum_{i=0}^{K-1}\widetilde{g}_{r}^{t-1,i} \|_{2}^{2}\ |\ \mathcal{F}_{t}]\] \[\leq\ \alpha K\sum_{i=0}^{K-1}\mathbb{E}[\|\widetilde{g}_{r}^{t-1,i} \|_{2}^{2}\ |\ \mathcal{F}_{t}],\]
where the first step follows from the definition of \(h^{t}\) (see Eq. (24)), the second step follows from Theorem 60 (unbiasedness together with the variance bound gives \(\mathbb{E}[\|(I_{d}-\mathsf{desk}_{t}\circ\mathsf{sk}_{t})(v)\|_{2}^{2}\ |\ \mathcal{F}_{t}]\leq\alpha\|v\|_{2}^{2}\) for any fixed \(v\)), and the last step follows from \(\|\sum_{i=0}^{K-1}v_{i}\|_{2}^{2}\leq K\sum_{i=0}^{K-1}\|v_{i}\|_{2}^{2}\) and the linearity of expectation.
It follows that
\[\mathbb{E}[\|\widetilde{u}_{r}^{t,k}-x^{*}\|_{2}^{2}]\] \[\leq\ \mathbb{E}[\|\widetilde{u}_{r}^{t,k-1}-x^{*}\|_{2}^{2}]+\eta^{2}\cdot\mathbb{E}[\|\widetilde{g}_{r}^{t,k-1}\|_{2}^{2}]-2\eta\,\mathbb{E}[\langle\widetilde{u}_{r}^{t,k-1}-x^{*},\widetilde{g}_{r}^{t,k-1}\rangle]\] \[\quad+1_{\{k=0\}}\cdot\eta^{2}\cdot\alpha K\sum_{i=0}^{K-1}\mathbb{E}[\|\widetilde{g}_{r}^{t-1,i}\|_{2}^{2}]\] \[\leq\ \mathbb{E}[\|\widetilde{u}_{r}^{t,k-1}-x^{*}\|_{2}^{2}]+\eta^{2}\cdot\mathbb{E}[4L(f(\widetilde{u}_{r}^{t,k-1})-f(x^{*}))]\] \[\quad-2\eta\,\mathbb{E}[f(\widetilde{u}_{r}^{t,k-1})-f(x^{*})+\frac{\mu}{2}\|\widetilde{u}_{r}^{t,k-1}-x^{*}\|_{2}^{2}]\] \[\quad+1_{\{k=0\}}\cdot\eta^{2}\cdot\alpha K\sum_{i=0}^{K-1}\mathbb{E}[4L(f(\widetilde{u}_{r}^{t-1,i})-f(x^{*}))]\] \[\leq\ (1-\mu\eta)\,\mathbb{E}[\|\widetilde{u}_{r}^{t,k-1}-x^{*}\|_{2}^{2}]\] \[\quad-2\eta\cdot(1-2\eta L)\cdot\mathbb{E}[f(\widetilde{u}_{r}^{t,k-1})-f(x^{*})]\] \[\quad+1_{\{k=0\}}\cdot\eta^{2}\cdot\alpha K\cdot\Big{(}4L\sum_{i=0}^{K-1}\mathbb{E}[f(\widetilde{u}_{r}^{t-1,i})-f(x^{*})]\Big{)}\]
where the first step follows from Eq. (26), the second step follows from Lemma 61 and Lemma 62, and the last step follows from simple algebra.
Since \(\eta\leq\frac{1}{4L}\), we have
\[\mathbb{E}[\|\widetilde{u}_{r}^{t,k}-x^{*}\|_{2}^{2}] \leq\ (1-\mu\eta)\,\mathbb{E}[\|\widetilde{u}_{r}^{t,k-1}-x^{*}\|_{2}^{2}]- \eta\,\mathbb{E}[f(\widetilde{u}_{r}^{t,k-1})-f(x^{*})]\] \[\quad+1_{\{k=0\}}\eta^{2}\alpha K\Big{(}4L\sum_{i=0}^{K-1} \mathbb{E}[f(\widetilde{u}_{r}^{t-1,i})-f(x^{*})]\Big{)}\]
### Convergence
Under the foregoing assumptions, we can now establish the convergence of our gradient coin design.
**Lemma 64**.: _If the following conditions hold:_
* _Assumption 71 holds, where_ \(\mu\) _and_ \(L\) _are defined as in Assumption_ 71_._
* _Let_ \(K\) _be the amount of the local steps._
* _Let Theorem_ 60 _hold and_ \(\eta\leq\frac{1}{8(1+\alpha)LK}\)_, where_ \(\alpha\) _is defined as in Theorem_ 60_._
* _Let_ \(x_{0}\)_,_ \(x_{T+1}\) _be defined as in Algorithm_ 5_._
* _Let_ \(\sigma^{2}=\frac{1}{N}\sum_{c=1}^{N}\|\nabla f_{c}(x^{*})\|^{2}\)_._
* \(R\sim\Pi\) _is the distribution of sketching matrix._
_Then, we have_
\[\mathbb{E}[f(x_{T+1})-f(x^{*})]\leq\,\frac{L}{2}\,\mathbb{E}[\|x_{0}-x^{*}\|_{2 }^{2}]e^{-\mu\eta T}\]
_where \(x^{*}\) is a minimizer of Definition 70._
Proof.: By applying Lemma 63 for each \(k\) from \(0\) to \(K-1\), for any \(t\geq 1\), it follows that
\[(\mathop{\mathbb{E}}_{R\sim\Pi}[\|\widetilde{u}_{r}^{t+1,0}-x^{*} \|_{2}^{2}]+\sum_{k=1}^{K-1}\mathop{\mathbb{E}}_{R\sim\Pi}[\|\widetilde{u}_{r }^{t,k}-x^{*}\|_{2}^{2}])-(1-\mu\eta)\sum_{k=0}^{K-1}\mathop{\mathbb{E}}_{R \sim\Pi}[\|\widetilde{u}_{r}^{t,k}-x^{*}\|_{2}^{2}]\] \[\leq -\eta\sum_{k=0}^{K-1}\mathop{\mathbb{E}}_{R\sim\Pi}[f(\widetilde{ u}_{r}^{t,k})-f(x^{*})]+\sum_{k=0}^{K-1}\mathbf{1}_{k=0}\eta^{2}\alpha K(4L \sum_{i=0}^{K-1}\mathop{\mathbb{E}}_{R\sim\Pi}[f(\widetilde{u}_{r}^{t,i})-f(x^ {*})])\] \[= -\eta\sum_{k=0}^{K-1}\mathop{\mathbb{E}}_{R\sim\Pi}[f(\widetilde{ u}_{r}^{t,k})-f(x^{*})]+\eta^{2}\alpha K(4L\sum_{i=0}^{K-1}\mathop{\mathbb{E}}_{R \sim\Pi}[f(\widetilde{u}_{r}^{t,i})-f(x^{*})])\] \[= -\eta(1-4\eta\alpha LK)\sum_{k=0}^{K-1}\mathop{\mathbb{E}}_{R \sim\Pi}[f(\widetilde{u}_{r}^{t,k})-f(x^{*})]\] \[\leq -\frac{1}{2}\eta\sum_{k=0}^{K-1}\mathop{\mathbb{E}}_{R\sim\Pi}[ f(\widetilde{u}_{r}^{t,k})-f(x^{*})],\]
where the first step follows from Lemma 63, the second step follows from simple algebra, the third step follows from simple algebra, and the last step follows from \(\eta\leq\frac{1}{8LK}\).
Rearranging the terms, we obtain
\[\mathop{\mathbb{E}}_{R\sim\Pi}[\|\widetilde{u}_{r}^{t+1,0}-x^{*}\|_{2}^{2}] \leq(1-\mu\eta)\mathop{\mathbb{E}}_{R\sim\Pi}[\|\widetilde{u}_{r}^{t,0}-x^{* }\|_{2}^{2}]\]
Taking the expectation over \(r\sim[N]\) as well, we obtain
\[\mathop{\mathbb{E}}_{r\sim[N],R\sim\Pi}[\|\widetilde{u}_{r}^{t+1,0}-x^{*}\|_{2}^{2}]\leq(1-\mu\eta)\mathop{\mathbb{E}}_{r\sim[N],R\sim\Pi}[\|\widetilde{u}_{r}^{t,0}-x^{*}\|_{2}^{2}]. \tag{27}\]
Therefore, we have
\[\mathop{\mathbb{E}}_{r\sim[N],R\sim\Pi}[\|x^{T+1}-x^{*}\|_{2}^{2}]\leq(1-\mu \eta)^{T}(\mathop{\mathbb{E}}_{r\sim[N],R\sim\Pi}[\|x^{0}-x^{*}\|_{2}^{2}])\]
\[\leq \underset{r\sim[N],R\sim\Pi}{\mathbb{E}}[\|x^{0}-x^{*}\|_{2}^{2}]e^{- \mu\eta T}, \tag{28}\]
where the first step follows from iterating Eq. (27) \(T\) times, and the second step follows from \((1-\mu\eta)^{T}\leq e^{-\mu\eta T},\ \forall T>0\).
Finally, by \(L\)-smoothness of function \(f\), we obtain
\[\underset{r\sim[N],R\sim\Pi}{\mathbb{E}}[f(x^{T+1})-f(x^{*})] \leq \ \frac{L}{2}\underset{r\sim[N],R\sim\Pi}{\mathbb{E}}[\|x^{T+1}-x^{*}\|_{2} ^{2}]\] \[\leq \ \frac{L}{2}\underset{r\sim[N],R\sim\Pi}{\mathbb{E}}[\|x^{0}-x^{ *}\|_{2}^{2}]e^{-\mu\eta T},\]
where the first step follows from the definition of \(L\)-smoothness (see Definition 66) and the second step follows from Eq. (28).
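The statement can be observed on a toy instance. The sketch below is a heavily simplified, single-client simulation in the spirit of the iterative sketching scheme (Algorithm 5 itself is not reproduced in this excerpt, so the accumulation of local gradients and the step-size choice are our own assumptions): a well-conditioned quadratic with \(\mu=1\) and \(L=4\), a fresh Gaussian sketch per round, and the optimality gap printed at the first and last rounds.

```python
import numpy as np

rng = np.random.default_rng(5)
d, b_sketch, K, T, eta = 20, 10, 4, 1000, 1e-3
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
M = Q @ np.diag(np.linspace(1.0, 2.0, d))      # singular values in [1, 2]: mu = 1, L = 4
y = rng.normal(size=d)
w_star = np.linalg.solve(M, y)
f = lambda w: 0.5 * np.sum((M @ w - y) ** 2)
grad = lambda w: M.T @ (M @ w - y)

x, gaps = np.zeros(d), []
for _ in range(T):
    u, acc = x.copy(), np.zeros(d)
    for _ in range(K):                          # K local gradient steps
        g = grad(u)
        u -= eta * g
        acc += g
    R = rng.normal(size=(b_sketch, d)) / np.sqrt(b_sketch)  # fresh Gaussian sketch
    x = x - eta * (R.T @ (R @ acc))             # apply desk(sk(sum of local grads))
    gaps.append(f(x) - f(w_star))
print(gaps[0], gaps[-1])                        # the gap shrinks roughly geometrically
```

With \(\eta\) chosen below \(1/(8(1+\alpha)LK)\), the printed gap shrinks by several orders of magnitude, consistent with the \(e^{-\mu\eta T}\) rate in the lemma.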
## Appendix F Distributed/Federated Learning
In Section F.1, we introduce the definitions of \(\mu\)-strongly convex and \(M\)-Lipschitz functions. In Section F.2, we adapt the strong convexity property and combine it with the results developed earlier in this paper. In Section F.3, we do the same for the Lipschitz (smoothness) property. In Section F.4, we introduce some properties from previous work.
### Definitions
**Definition 65** (\(\mu\)-Strongly Convex).: _We say a function \(L:\mathbb{R}^{d}\to\mathbb{R}\) is a \(\mu\)-strongly convex if_
\[\nabla^{2}L(x)\succeq\mu\cdot I_{d},\]
_where \(\mu\in\mathbb{R}\)._
**Definition 66** (\(l\)-Smooth).: _Let \(x\) and \(y\) be two arbitrary elements in \(\mathbb{R}^{d}\)._
_Let \(l>0\) be a real number._
_We say a function \(L:\mathbb{R}^{d}\to\mathbb{R}\) is \(l\)-smooth if_
\[\|\nabla L(x)-\nabla L(y)\|_{2}\leq l\cdot\|x-y\|_{2}\]
_(It is equivalent to saying the gradient of \(L\) is \(l\)-Lipschitz)_
**Definition 67** (\(M\)-Lipschitz).: _Let \(x\) and \(y\) be two arbitrary elements in \(\mathbb{R}^{d}\)._
_Let \(M>0\) be a real number._
_We say a function \(L:\mathbb{R}^{d}\to\mathbb{R}\) is \(M\)-Lipschitz if_
\[|L(x)-L(y)|\leq M\cdot\|x-y\|_{2}\]
Comparing the Hessian result in Lemma 57 of our work with the Hessian result in Lemma 58 of [10] shows that each individual instance of our Hessian has the same structure as the Hessian in [10]. (Our Hessian result can be viewed as a summation of \(n\) instances of the one discussed in [10].)
Furthermore, [10] establishes Lipschitz continuity and strong convexity for a single instance. Building on this foundation, we extend these results to a sum of \(n\) instances, which yields the following outcomes.
### Strongly Convex
**Lemma 68** (Strongly Convex).: _If the following conditions hold_
* _Let_ \(L_{j_{1}}:\mathbb{R}^{d^{2}}\to\mathbb{R}\) _be defined as Definition_ 47_._
* _Let_ \(L:\mathbb{R}^{d^{2}}\to\mathbb{R}\) _be defined as Definition_ 47_._
* _Let_ \(W=\operatorname{diag}(w)\in\mathbb{R}^{n\times n}\)_._
* _Let_ \(\mathsf{A}\in\mathbb{R}^{n^{2}\times d^{2}}\)_._
* _Let_ \(\mathsf{A}_{[j]}\in\mathbb{R}^{n\times d^{2}}\) _denote the_ \(j\)_-th block of_ \(\mathsf{A}\in\mathbb{R}^{n^{2}\times d^{2}}\)_._
* _Let_ \(W^{2}\in\mathbb{R}^{n\times n}\) _denote the matrix that_ \(i\)_-th diagonal entry is_ \(w_{i}^{2}\)_._
* _Let_ \(\sigma_{\min}(\mathsf{A}_{[j]})\) _denote the minimum singular value of_ \(\mathsf{A}_{[j]}\) _for matrix_ \(\mathsf{A}_{[j]}\in\mathbb{R}^{n\times d^{2}}\) _for all_ \(j\in[n]\)_._
* _Let_ \(\min_{i\in[n]}w_{i}^{2}\geq 4+\mu/(\sigma_{\min}^{2}(\mathsf{A}_{[j]})n)\) _for all_ \(j\in[n]\)__
_Then, we have_
* \(L_{j}\) _is strongly convex with parameter_ \(\mu/n\) _for all_ \(j\in[n]\)_._
* \(L\) _is strongly convex with parameter_ \(\mu\)_._
Proof.: **Proof of Part 1.** Based on Lemma 6.3 on page 30 of [10], we have
\[\frac{\mathrm{d}^{2}L_{j}}{\mathrm{d}x_{i}^{2}}=A_{*,i}^{\top}B_{1}(x)A_{*,i}+A_{*,i}^{\top}B_{2}(x)A_{*,i}\]
and
\[\frac{\mathrm{d}^{2}L_{j}}{\mathrm{d}x_{i}\mathrm{d}x_{l}}=A_{*,i}^{\top}B_{1}(x)A_{*,l}+A_{*,i}^{\top}B_{2}(x)A_{*,l},\]
and, under the assumption on \(\min_{i\in[n]}w_{i}^{2}\), the resulting Hessian satisfies \(\nabla^{2}L_{j}(x)\succeq\mu/n\cdot I\). Hence \(L_{j}\) is strongly convex with parameter \(\mu/n\). We now turn to the second part of the proof.
**Proof of Part 2.** Summing over the \(n\) instances, we obtain
\[\nabla^{2}L(x)=\sum_{j=1}^{n}\nabla^{2}L_{j}(x).\]
and therefore
\[\nabla^{2}L(x)\succeq\mu\cdot I.\]
Thus the loss function \(L\) is \(\mu\)-strongly convex, completing the proof.
### Lipschitz
**Lemma 69** (Lipschitz).: _If the following conditions hold_
* _Let_ \(L_{j_{1}}\) _be defined as Definition_ 47_._
* _Let_ \(L\) _be defined as Definition_ 47_._
* _Let_ \(R>4\)_._
* _Let_ \(\mathsf{A}\in\mathbb{R}^{n^{2}\times d^{2}}\)_._
* _Let_ \(l=\exp(O(R^{2}+\log(nd)))\)_._
_Then, we have_
* \(L_{j_{1}}\) _is_ \(l/n\)_-smooth for all_ \(j_{1}\in[n]\)_._
* \(L\) _is_ \(l\)_-smooth_
Proof.: **Proof of Part 1.**
Using Part 1 of Corollary 2.3 in [10], we have
\[\|\nabla L_{j_{1}}(x)-\nabla L_{j_{1}}(y)\|_{2}\leq l/n\cdot\|x-y\|_{2} \tag{29}\]
Now \(L_{j_{1}}\) is \(l/n\)-smooth.
**Proof of Part 2.** We now focus on the smoothness of the full loss.
We have
\[\|\nabla L(x)-\nabla L(y)\|_{2} =\|\sum_{j_{1}=1}^{n}\nabla L_{j_{1}}(x)-\sum_{j_{1}=1}^{n}\nabla L_{j_{1}}(y)\|_{2}\] \[\leq\ \sum_{j_{1}=1}^{n}\|\nabla L_{j_{1}}(x)-\nabla L_{j_{1}}(y)\|_{2}\] \[\leq l\cdot\|x-y\|_{2}\]
where the first step follows from the definition of \(L\) (see Definition 47), the second step follows from the triangle inequality, and the third step follows from Eq. (29).
Hence \(L\) is \(l\)-smooth, and the proof is complete.
### Tools from previous work
**Definition 70**.: _Consider a federated learning scenario with \(N\) clients and corresponding local losses \(f_{c}:\mathbb{R}^{d}\to\mathbb{R}\), our goal is to find_
\[\min_{w\in\mathbb{R}^{d}}f(w):=\frac{1}{N}\sum_{c=1}^{N}f_{c}(w). \tag{30}\]
**Assumption 71** (Assumption 3.1 in [11]).: _Each \(f_{c}\) is \(\mu\)-strongly convex for \(\mu\geq 0\) and \(L\)-smooth. That is, for all \(x,y\in\mathbb{R}^{d}\),_
\[f_{c}(y)-f_{c}(x)-\langle y-x,\nabla f_{c}(x)\rangle \geq\,\frac{\mu}{2}\|y-x\|_{2}^{2}\] \[f_{c}(y)-f_{c}(x)-\langle y-x,\nabla f_{c}(x)\rangle \leq\,\frac{L}{2}\|y-x\|_{2}^{2}.\]
_(Note that by definition of strongly convex and convex, \(\mu>0\) denotes strongly convex, and \(\mu=0\) denotes convex.)_
## Appendix G Gradient Coin Analysis
In this section, we use the induction method to demonstrate the correctness of gradient computation [2].
**Definition 72**.: _We define the following as the block gradient_
\[\Delta x_{1},\Delta x_{2},\cdots,\Delta x_{t}\]
**Lemma 73** (Induction of Gradient Computation).: _Given the block gradients \(\Delta x_{1},\Delta x_{2},\cdots,\Delta x_{t-1}\) and \(x_{0}\), we have the following fact:_
* _We can compute_ \(\Delta x_{t}\) _as the current step's gradient._
Proof.: We can obtain the current weight by
\[x_{t}=x_{0}+\eta_{\mathrm{local}}\sum_{i=1}^{t-1}\Delta x_{i}\]
Then, \(user^{(c)}\) can run \(K\) local gradient steps via \(u_{c}^{t,k}\leftarrow u_{c}^{t,k-1}-\eta_{\mathrm{local}}\cdot\nabla f_{c}(u_{c}^{t,k-1})\).
Finally, we obtain \(\Delta x_{t}\), as replayed in the sketch below.
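A compact way to see the induction is to replay it in code. The following is a toy version (the local loss \(f_{c}\) and the exact definition of \(\Delta x_{t}\) as the scaled local displacement are our own stand-ins; the excerpt does not fix them):

```python
import numpy as np

rng = np.random.default_rng(6)
d, eta_local, K = 8, 0.1, 3
x0 = rng.normal(size=d)
blocks = [rng.normal(size=d) for _ in range(5)]   # stored Delta x_1, ..., Delta x_5

# Step 1: recover the current weight from x_0 and the chain of block gradients.
x_t = x0 + eta_local * sum(blocks)

# Step 2: run K local gradient steps (assumed local loss f_c(u) = ||u||^2 / 2).
grad_fc = lambda u: u
u = x_t.copy()
for _ in range(K):
    u = u - eta_local * grad_fc(u)

# Step 3: emit the next block; here Delta x_t is taken to be the scaled local
# displacement, so that x_{t+1} = x_t + eta_local * Delta x_t holds by construction.
delta_x_t = (u - x_t) / eta_local
```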
```
1:datastructure GradientBlock \(\triangleright\) See Definition 4
2:members
3:GradientBlock prevhash \(\triangleright\) Used to link the prior block
4:\(\Delta x_{t}\in\mathbb{R}^{d\times d}\)
5:\(t\in\mathbb{N}\) \(\triangleright\) The index number of the gradient block
6:\(\{\text{transaction}^{(i)}\}_{i=1}^{k}\) \(\triangleright\) List of transactions
7:end members
8:procedure Initialize(\(t\), \(x\), GradientBlock prev)
9:prevhash \(\leftarrow\) prev
10:\(t\gets t\)
11:\(\Delta x_{t}\gets x\)
12:end procedure
13:procedure AddTrans(User user, \(\{\text{transaction}^{(j)}\}_{j=1}^{k}\))
14:user collects \(\{\text{transaction}^{(j)}\}_{j=1}^{k}\) into this block
15:end procedure
16:end datastructure
```
**Algorithm 6** Gradient Block
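For concreteness, here is a minimal Python rendering of the data structure (field names and types are assumptions matching the pseudocode; a real system would store a hash of the previous block rather than a direct reference):

```python
from dataclasses import dataclass, field
from typing import Any, List, Optional

import numpy as np

@dataclass
class GradientBlock:
    """Sketch of Algorithm 6's data structure; names and types are assumed."""
    prevhash: Optional["GradientBlock"]         # link to the prior block
    t: int                                      # index of the gradient block
    delta_x: np.ndarray                         # the block gradient Delta x_t
    transactions: List[Any] = field(default_factory=list)

    def add_trans(self, new_transactions: List[Any]) -> None:
        # AddTrans: the user collects the given transactions into this block
        self.transactions.extend(new_transactions)

# Linking two blocks, as Initialize does with its prevhash argument:
genesis = GradientBlock(prevhash=None, t=0, delta_x=np.zeros(4))
block1 = GradientBlock(prevhash=genesis, t=1, delta_x=np.ones(4))
block1.add_trans(["transaction-1", "transaction-2"])
```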
|
2306.06814
|
HiddenSinger: High-Quality Singing Voice Synthesis via Neural Audio
Codec and Latent Diffusion Models
|
Recently, denoising diffusion models have demonstrated remarkable performance
among generative models in various domains. However, in the speech domain, the
application of diffusion models for synthesizing time-varying audio faces
limitations in terms of complexity and controllability, as speech synthesis
requires very high-dimensional samples with long-term acoustic features. To
alleviate the challenges posed by model complexity in singing voice synthesis,
we propose HiddenSinger, a high-quality singing voice synthesis system using a
neural audio codec and latent diffusion models. To ensure high-fidelity audio,
we introduce an audio autoencoder that can encode audio into an audio codec as
a compressed representation and reconstruct the high-fidelity audio from the
low-dimensional compressed latent vector. Subsequently, we use the latent
diffusion models to sample a latent representation from a musical score. In
addition, our proposed model is extended to an unsupervised singing voice
learning framework, HiddenSinger-U, to train the model using an unlabeled
singing voice dataset. Experimental results demonstrate that our model
outperforms previous models in terms of audio quality. Furthermore, the
HiddenSinger-U can synthesize high-quality singing voices of speakers trained
solely on unlabeled data.
|
Ji-Sang Hwang, Sang-Hoon Lee, Seong-Whan Lee
|
2023-06-12T01:21:41Z
|
http://arxiv.org/abs/2306.06814v1
|
HiddenSinger: High-Quality Singing Voice Synthesis via Neural Audio Codec and Latent Diffusion Models
###### Abstract
Recently, denoising diffusion models have demonstrated remarkable performance among generative models in various domains. However, in the speech domain, the application of diffusion models for synthesizing time-varying audio faces limitations in terms of complexity and controllability, as speech synthesis requires very high-dimensional samples with long-term acoustic features. To alleviate the challenges posed by model complexity in singing voice synthesis, we propose HiddenSinger, a high-quality singing voice synthesis system using a neural audio codec and latent diffusion models. To ensure high-fidelity audio, we introduce an audio autoencoder that can encode audio into an audio codec as a compressed representation and reconstruct the high-fidelity audio from the low-dimensional compressed latent vector. Subsequently, we use the latent diffusion models to sample a latent representation from a musical score. In addition, our proposed model is extended to an unsupervised singing voice learning framework, HiddenSinger-U, to train the model using an unlabeled singing voice dataset. Experimental results demonstrate that our model outperforms previous models in terms of audio quality. Furthermore, the HiddenSinger-U can synthesize high-quality singing voices of speakers trained solely on unlabeled data.
singing voice synthesis, latent diffusion model, unsupervised learning
## I Introduction
Singing voice synthesis (SVS) systems aim to generate high-quality expressive singing voices from musical scores. Recent advancements in generative models [1, 2, 3] have led to rapid development in deep-learning-based SVS systems, resulting in high performance. Most SVS systems first synthesize an intermediate acoustic representation, such as a Mel-spectrogram, from a musical score using an acoustic model [4, 5, 6, 7, 8, 9]. Subsequently, separately trained vocoders [10, 11, 12] convert the generated representation into audio, as shown in Fig. 1(a).
However, conventional two-stage SVS systems face certain limitations. These systems depend on a pre-defined intermediate representation, making it difficult to apply latent learning to improve audio generation. Moreover, a training-inference mismatch problem occurs because the predicted intermediate representation differs from the ground-truth intermediate representation. To resolve these issues, an end-to-end SVS system, VISinger [13], directly synthesizes audio by employing variational inference.
Although existing systems can improve the audio quality, several challenges remain: 1) SVS systems require high-dimensional audio or a linear-spectrogram to synthesize high-fidelity audio, resulting in high computational costs in high-dimensional space. 2) The training-inference mismatch problem persists in end-to-end systems. A gap between the posterior distribution from the audio and the prior distribution from the musical score exists, which results in inaccurate pitch and mispronunciations in the generated singing voice. Moreover, systems based on Normalizing Flows [2] are trained in a backward direction but perform inference in a forward direction [14]. 3) SVS systems require audio-musical score corpora for training, wherein it is time-consuming to obtain high-quality paired datasets.
To address the aforementioned problems, we propose HiddenSinger, an advanced high-quality SVS system utilizing a neural audio codec and latent diffusion models. Our approach involves multiple components to enhance the synthesis process: First, we introduce an audio autoencoder that can efficiently encode audio into a compressed latent representation, resulting in a lower-dimensional representational space. We also adopt residual vector quantization in the audio autoencoder to regularize the arbitrarily high-variance latent space. Subsequently, we employ the powerful generative ability of latent diffusion models to generate a latent representation conditioned on a musical score, which is converted into audio through the audio autoencoder. Moreover, we propose an unsupervised singing voice learning framework that leverages
Fig. 1: Comparison of SVS system architectures: (a) two-stage pipeline SVS system with pre-defined intermediate representation; (b) end-to-end SVS system; (c) proposed SVS system that uses an audio codec from the pre-trained audio autoencoder. The dashed outlines indicate that the parameters of the models are not updated during the generation of the audio codec from a musical score and the synthesis of audio from the audio codec.
unpaired singing voice data containing only audio. The experimental results demonstrate that HiddenSinger outperforms previous SVS models in terms of audio quality. Furthermore, our model can synthesize high-quality singing voices, even for speakers who are represented in unpaired data, by using the proposed unsupervised singing voice learning framework (HiddenSinger-U).
Our study makes the following contributions:
* We introduce HiddenSinger, which utilizes a neural audio codec and latent diffusion models to synthesize high-quality singing voices. The latent generator generates a latent representation conditioned on a musical score. Subsequently, the audio autoencoder synthesizes high-quality singing voice audio from the generated latent representation.
* We extend our proposed model to HiddenSinger-U, an unsupervised singing voice learning framework that performs training with both paired and unpaired datasets using acoustic features from audio. HiddenSinger-U can synthesize a high-quality singing voice of a speaker without a musical score during training.
* The proposed model is demonstrated to outperform previous SVS models. Audio samples are available at [https://jisang93.github.io/hiddensinger-demo/](https://jisang93.github.io/hiddensinger-demo/)
## II Related Studies
### _Singing Voice Synthesis_
Singing voice synthesis (SVS) systems are designed to generate a singing voice based on a musical score. Since singing voices exhibit significant pitch variability and extended vowel durations, SVS systems require additional input data, such as note pitch, note duration, and lyrics. Conventional SVS systems follow a two-stage approach comprising an acoustic model [5, 6, 7, 8, 9] and a vocoder [10, 11] to synthesize a realistic singing voice. Although previous SVS systems improved the singing voice quality, the two-stage pipeline has inherent limitations that prevent it from surpassing the upper bound of the vocoder performance.
To address these limitations, researchers have proposed end-to-end SVS systems [13] that use a well-learned latent representation to enhance the quality of singing voices and simplify the training procedure. However, the end-to-end method still faces problems, including a training-inference mismatch problem. Specifically, the gap between the posterior and prior distributions leads to degraded audio reconstruction performance. In this study, we leverage a well-learned latent representation, which is converted into an audio codec, to improve the quality of the reconstructed audio.
### _Neural Audio Synthesis_
To generate natural audio, neural vocoders [15, 16, 17] are generally used to convert signal processing components, such as the Mel-spectrogram, into raw waveform audio. For high-quality audio generation, a generative adversarial network (GAN)-based neural vocoder adopts a multi-scale discriminator [18] and a multi-period discriminator [19] to capture the specific characteristics of the waveform audio. Although a diffusion-based neural vocoder [20] has been presented, several limitations persist in the audio quality and inference speed in tasks concerning waveform audio generation.
In recent developments, neural audio codecs have emerged in conjunction with neural vocoders. These audio codecs efficiently compress the audio in an autoencoder architecture. For improved compression, approaches such as SoundStream [21] introduce residual vector quantization, leading to enhanced coding efficiency. Encodec [22] also represents the audio as discrete units with the residual vector quantization and incorporates a multi-scale short-time Fourier transform (STFT)-based discriminator to reduce artifacts in the reconstructed audio. Drawing inspiration from these studies, we adopt neural audio codecs to achieve high-fidelity audio generation and computational efficiency.
### _Diffusion Probabilistic Model_
Diffusion probabilistic models (also known as diffusion models) [23] are a class of generative models that have achieved remarkable results in various domains, such as image [24, 25, 26], audio [27, 28] and video [29] generation. Particularly, in the audio domain, previous studies have mainly used diffusion models to generate acoustic features.
For the acoustic feature generation, Grad-TTS [30], Diff-Singer [9], and DDDM-VC [31] utilize the diffusion-based decoder to generate a high-quality Mel-spectrogram. Each model uses a conditional-diffusion decoder to condition text distribution for a text-to-speech system [30], the musical score for a SVS system, and speaker information for a voice conversion system. To improve the generation efficiency by compressing the Mel-spectrogram into discrete latent space, DiffSound [27] introduces a discrete diffusion-based token decoder in a non-autoregressive manner. Make-an-audio [28] adopts the latent diffusion models to generate a continuous latent representation that converts into Mel-spectrogram. For waveform generation, Diffwave [32] and WaveGrad [33] generate high-fidelity speech waveform from the Mel-spectrogram. In contrast to the above approaches, WaveGrad 2 [34] and FastDiff [20] adopt an end-to-end manner that generates the audio without any intermediate features (e.g., Mel-spectrogram). Inspired by the success of diffusion-based generation, we adopt latent diffusion models to generate a latent representation conditioned on a musical score.
## III Preliminary
Diffusion models comprise two processes: a forward process (diffusion process) and a reverse process (denoising process). In the forward process, the data \(X_{0}\) are gradually corrupted with small Gaussian noise through a \(T\)-step Markov chain. The reverse process, which follows the reverse trajectory of the forward process, aims to generate the data \(X_{0}\) from the Gaussian noise \(X_{T}\).
In [35, 36], a stochastic differential equation (SDE) was used to approximate the trajectory between \(X_{0}\) and \(X_{T}\). In the speech domain, Grad-TTS [30] and Guided-TTS [37] applied an SDE to the text-to-speech task. Following [35], the forward process that perturbs the data \(X_{0}\) into the noise \(X_{T}\) is defined with a pre-defined noise schedule \(\beta_{t}=\beta_{0}+(\beta_{T}-\beta_{0})t\):
\[dX_{t}=-\frac{1}{2}X_{t}\beta_{t}dt+\sqrt{\beta_{t}}dW_{t}, \tag{1}\]
where \(W_{t}\) represents the standard Brownian motion and \(t\) denotes a continuous timestep \(t\in[0,T]\).
The reverse process is defined as a reverse-time SDE that formulates the trajectory from Gaussian noise \(X_{T}\) to the data \(X_{0}\) as follows:
\[dX_{t}=\left(-\frac{1}{2}X_{t}-\nabla_{X_{t}}\log p_{t}(X_{t})\right)\beta_{t} dt+\sqrt{\beta_{t}}d\tilde{W}_{t}, \tag{2}\]
where \(\tilde{W}_{t}\) is the reverse Brownian motion and \(\nabla_{X_{t}}\log p_{t}(X_{t})\) represents a score of the probability density function of data \(p_{t}(X_{t})\).
A neural network \(s_{\theta}\) learns to estimate the score, which is parameterized by \(\theta\), to model the data distribution \(p_{t}(X_{t})\). By solving Eq. 2, \(X_{0}\sim p_{0}(X)\) can be obtained by starting from the noisy data \(X_{T}\) and iteratively removing the noise using the score estimation networks \(s_{\theta}\).
## IV HiddenSinger
In this paper, we propose a SVS system using neural audio codecs and latent diffusion for high-quality singing voice audio. We introduce an audio autoencoder using residual vector quantization to achieve high-fidelity audio generation and computational efficiency. Additionally, we adopt latent diffusion models in a latent generator to generate a latent representation conditioned on a musical score, which is converted into audio by the audio autoencoder. Furthermore, we extend HiddenSinger to HiddenSinger-U, which can train the model without musical scores. In the following subsection, we describe the details of HiddenSinger and an unsupervised singing voice learning framework (HiddenSinger-U).
### _Audio Autoencoder_
For efficient coding and high-quality audio generation, we introduce the audio autoencoder to compress the audio into an audio codec, which provides a low-dimensional representation. The audio autoencoder comprises three modules: an encoder, residual vector quantization (RVQ) blocks, and a decoder, as illustrated in Fig. 2 (a).
#### Iv-A1 Encoder
The encoder takes a high-dimensional linear-spectrogram as the input and extracts a low-dimensional continuous latent representation \(z_{0}\) from the audio \(y\). Inspired by [26], the latent space is regularized through vector quantization (VQ) [38] to avoid an arbitrarily high variance of the latent space. A previous study on sampling with latent diffusion models [26] demonstrated that a model trained on a VQ-regularized latent space achieved better quality than one trained on a Kullback-Leibler (KL)-regularized latent space. In our preliminary experiments, we observed that the KL-regularized latent space achieved sub-optimal performance when the diffusion models restored the latent representation. However, conventional VQ is insufficient for high-fidelity audio reconstruction because a quantized vector should represent multiple features of a raw waveform. Therefore, we apply RVQ [21] to the continuous latent representation \(z_{0}\) for efficient audio compression.
#### Iv-A2 Residual Vector Quantization Blocks
As indicated in Fig. 2 (b), the first vector quantizer discretizes the continuous latent representation \(z_{0}\) into the closest entry in a codebook. Subsequently, the residual is computed. The next quantizer is
Fig. 2: (a) Architecture of HiddenSinger. We train the audio autoencoder and latent generator separately. During the inference, the latent generator gradually denoises a noisy sample from the data-driven priors. Then, the audio autoencoder converts the sampled latent representation into audio. (b) The RVQ blocks discretize the continuous latent representation into an audio codec. (c) To guide the latent generator, the condition encoder extracts a condition representation \(h_{cond}\) and estimates \(\hat{\mu}\) from a musical score. The dashed arrows indicate that the operations are only used during training.
used with the second codebook, with this process repeated as many times as the number of quantizers \(C\). The number of quantizers is related to the trade-off between the computational cost and coding efficiency. We follow the training procedure described in [21] to train the codebook for each quantizer. Furthermore, we apply the commitment loss [38] to stabilize the codebook training. We found that a low-weighted commitment loss helps the RVQ blocks converge during training:
\[\mathcal{L}_{emb}=\sum_{c=1}^{C}||z_{0,c}-q_{c}(z_{0,c})||_{2}^{2}, \tag{3}\]
where \(z_{0,c}\) represents the residual vector of the \(c\)-th quantizer and \(q_{c}(z_{0,c})\) denotes the closest entry in the \(c\)-th codebook.
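To make the residual structure concrete, the following is a minimal PyTorch sketch of an RVQ forward pass, using the configuration reported later in Section V-B1 (30 quantizers, 1,024 entries, 128 dimensions). The nearest-neighbor search, the straight-through gradient, and the absence of codebook updates (which [21] handles with exponential moving averages) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualVQ(nn.Module):
    """Minimal residual vector quantizer sketch (illustration only)."""

    def __init__(self, num_quantizers=30, codebook_size=1024, dim=128):
        super().__init__()
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, dim) for _ in range(num_quantizers)
        )

    def forward(self, z0):  # z0: (batch, frames, dim)
        residual, z_q, commit_loss = z0, torch.zeros_like(z0), 0.0
        for codebook in self.codebooks:
            # Squared L2 distance to every codebook entry: (B, T, K).
            dist = (residual.unsqueeze(-2) - codebook.weight).pow(2).sum(-1)
            quantized = codebook(dist.argmin(dim=-1))
            # Commitment term of Eq. (3): pull each residual toward its entry.
            commit_loss = commit_loss + ((residual - quantized.detach()) ** 2).mean()
            z_q = z_q + quantized
            residual = residual - quantized.detach()
        # Straight-through estimator so gradients reach the encoder; the
        # codebooks themselves would be updated separately, e.g. via EMA [21].
        z_q = z0 + (z_q - z0).detach()
        return z_q, commit_loss
```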
#### Ii-A3 Decoder
The decoder generates a raw waveform from the audio codec \(\hat{y}=G(z_{q})\). We calculate a reconstruction loss \(\mathcal{L}_{recon}\) between the generated \(\hat{x}_{mel}\) and ground-truth Mel spectrograms \(x_{mel}\) to improve the training efficiency of the decoder. The reconstruction loss is defined as
\[\mathcal{L}_{recon}=||x_{mel}-\hat{x}_{mel}||_{1}. \tag{4}\]
Moreover, we adopt adversarial learning to improve the quality of the generated audio. We use a multi-scale STFT-based (MS-STFT) discriminator [22], which expands a multi-resolution spectrogram discriminator [39]. The MS-STFT discriminator operates on a multi-scale complex-valued STFT that contains both real and imaginary parts. Similar to the work of [22], we observed that the MS-STFT discriminator trains the decoder efficiently and facilitates the synthesis of audio with better quality than the combination of a multi-period discriminator [19] and multi-scale discriminator [18]. Furthermore, we adopt the feature matching loss \(\mathcal{L}_{fm}\)[40], which is a perceptual loss for GAN training:
\[\mathcal{L}_{adv}\left(D\right) =\mathbb{E}\Big{[}\big{(}D(y)-1\big{)}^{2}+D(G(z_{q}))^{2}\Big{]}, \tag{5}\] \[\mathcal{L}_{adv}\left(G\right) =\mathbb{E}\left[\big{(}D(G(z_{q}))-1\big{)}^{2}\right],\] (6) \[\mathcal{L}_{fm}\left(G\right) =\mathbb{E}\left[\sum_{l=1}^{L}\frac{1}{N_{l}}||D_{l}(y)-D_{l} \left(G(z_{q})\right)||_{1}\right], \tag{7}\]
where \(z_{q}\) denotes the quantized latent representation, \(L\) is the total number of layers in discriminator \(D\), \(N_{l}\) represents the number of features, and \(D_{l}\) extracts the feature map in the \(l\)-th layer of the discriminator.
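The adversarial and feature-matching terms in Eqs. (5)-(7) follow the common least-squares GAN formulation and can be sketched as below; the list-of-feature-maps interface is an assumption about how the MS-STFT discriminator would expose its intermediate activations, not the paper's code.

```python
import torch

def discriminator_loss(d_real, d_fake):
    # Eq. (5): least-squares objective; real outputs pushed to 1, fake to 0.
    return ((d_real - 1) ** 2).mean() + (d_fake ** 2).mean()

def generator_adv_loss(d_fake):
    # Eq. (6): the generator pushes D(G(z_q)) toward 1.
    return ((d_fake - 1) ** 2).mean()

def feature_matching_loss(feats_real, feats_fake):
    # Eq. (7): per-layer L1 distance between discriminator feature maps
    # D_l(y) and D_l(G(z_q)); .mean() supplies the 1/N_l normalization.
    return sum((fr.detach() - ff).abs().mean()
               for fr, ff in zip(feats_real, feats_fake))
```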
#### Ii-A4 Auxiliary Multi-task Learning
We introduce auxiliary tasks based on a lyrics predictor and a note-pitch predictor to strengthen the linguistic and acoustic information captured in the audio codec. Each predictor takes the compressed latent representation \(z_{q}\) to predict a frame-level target feature. We calculate the connectionist temporal classification (CTC) loss [41] between the predicted and target features. We apply the CTC loss only to paired datasets that contain a musical score.
#### Ii-A5 Final Loss
The final loss term for the audio autoencoder is defined as:
\[\mathcal{L}_{gen} =\mathcal{L}_{adv}(G)+\lambda_{recon}\mathcal{L}_{recon}+\lambda _{emb}\mathcal{L}_{emb}\] \[\quad+\lambda_{fm}\mathcal{L}_{fm}\left(G\right)+\lambda_{lyrics }\mathcal{L}_{lyrics}+\lambda_{note}\mathcal{L}_{note}, \tag{8}\]
where \(\lambda_{*}\) is the loss weight, \(\mathcal{L}_{lyrics}\) represents the CTC loss between the predicted and ground-truth lyrics, and \(\mathcal{L}_{note}\) denotes the CTC loss between the predicted and ground-truth pitch IDs according to the musical instrument digital interface (MIDI) standard.
### _Condition Encoder_
We present a condition encoder to guide the diffusion models. The condition encoder comprises a lyrics encoder, a melody encoder, an enhanced condition encoder, and a prior estimator.
#### Ii-B1 Lyrics Encoder
The lyrics encoder takes a phoneme-level lyrics sequence with positional embedding as the input, and then extracts a lyrics representation. We use a grapheme-to-phoneme tool to convert the lyrics sequence into a phoneme-level lyrics sequence before feeding it into the lyrics encoder.
#### Ii-B2 Melody Encoder
We introduce the melody encoder to generate a singing voice with an adequate melody from a musical score. Before using the musical score, we divide the notes into a phoneme-level note sequence. A Korean syllable generally comprises an onset, nucleus, and coda. Following the previous Korean SVS systems [8, 42], we assign onset and coda to a maximum of three frames with the remainder considered as the nucleus.
Subsequently, the melody encoder extracts a melody representation from the concatenation of the note pitch, note duration, and note tempo embedding sequences with positional embedding. The note pitch sequence is transformed into the note pitch embedding. The note duration is represented by a fixed set of duration tokens, whose resolution corresponds to a specific note duration (e.g., the 64th note). The note tempo is calculated in beats per minute and encoded into the tempo embedding.
#### Ii-B3 Enhanced Condition Encoder
The enhanced condition encoder encodes the summation of the outputs of the lyrics and melody encoders to provide a more informative condition representation \(h_{cond}\). Before summing the two representations, they are expanded into the frame-level based on the note duration. In our preliminary experiments, we observed that the enhanced condition encoder effectively stabilized the pronunciation of synthesized singing voices, similar to the result in [13, 43].
### _Latent Generator_
We adopt the latent diffusion models [26] in the latent generator to generate the latent representation of the audio autoencoder. The latent representation \(\hat{z}_{0}\) is sampled using the latent diffusion models, following which the generated latent representation \(\hat{z}_{0}\) is converted into the audio codec in the audio autoencoder. Furthermore, the latent representation is normalized to ease the sampling.
#### Ii-B1 Data-Driven Priors
We use data-driven priors in the latent diffusion models to improve their generation abilities. Previous studies [30, 44] have demonstrated that the use of data-driven priors helps approximate the trajectories between the complex data and known priors. Following [30], we design the diffusion models to start denoising from noise close to the target \(z_{0}^{\prime}\), which is easier than denoising from standard Gaussian noise. We predict \(\hat{\mu}\) from the condition representation \(h_{cond}\) using the prior estimator of the condition encoder. We apply the negative log-likelihood loss \(\mathcal{L}_{prior}\) between the normalized latent \(z_{0}^{\prime}\) and the predicted \(\hat{\mu}\) to consider \(\hat{\mu}\) as a mean-shifted Gaussian distribution \(\mathcal{N}(\hat{\mu},I)\).
#### Iii-C2 Latent Diffusion Models
The diffusion process is defined using a forward stochastic differential equation (SDE) with the data-driven priors given a time horizon \(t\in[0,1]\). The forward SDE converts the normalized latent representations \(z_{0}^{\prime}\) into Gaussian noise:
\[dz_{t}^{\prime}=\frac{1}{2}(\hat{\mu}-z_{t}^{\prime})\beta_{t}dt+\sqrt{\beta_ {t}}dW_{t}, \tag{9}\]
where \(W_{t}\) is the standard Brownian motion and \(\beta_{t}\) is the non-negative pre-defined noise schedule. Its solution is expressed as:
\[z_{t}^{\prime}=\left(I-e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}ds}\right)\hat{\mu}+e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}ds}z_{0}^{\prime}+\int_{0}^{t}\sqrt{\beta_{s}}e^{-\frac{1}{2}\int_{s}^{t}\beta_{u}du}dW_{s}. \tag{10}\]
According to the properties of Ito's integral, the transition density \(p_{t}(z_{t}^{\prime}|z_{0}^{\prime})\) is the Gaussian distribution \(p_{t}(z_{t}^{\prime}|z_{0}^{\prime})\sim\mathcal{N}(z_{t}^{\prime};\rho_{t}, \lambda_{t})\), as follows:
\[\rho_{t}= \left(I-e^{-\frac{1}{2}\int_{0}^{t}\beta_{s}ds}\right)\hat{\mu}+e ^{-\frac{1}{2}\int_{0}^{t}\beta_{s}ds}z_{0}^{\prime}, \tag{11}\] \[\lambda_{t}= I-e^{-\int_{0}^{t}\beta_{s}ds}. \tag{12}\]
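In practice, Eqs. (11)-(12) allow \(z_{t}^{\prime}\) to be sampled in closed form during training instead of simulating the SDE. Below is a minimal sketch using the linear schedule with the \(\beta_{0}=0.05\), \(\beta_{1}=20\) values reported later in Section V-B3; the tensor shapes and the Grad-TTS-style noise convention are assumptions for illustration.

```python
import torch

BETA_0, BETA_1 = 0.05, 20.0  # linear schedule endpoints (Section V-B3)

def beta_integral(t):
    # \int_0^t beta_s ds for beta_t = beta_0 + (beta_1 - beta_0) * t.
    return BETA_0 * t + 0.5 * (BETA_1 - BETA_0) * t ** 2

def perturb(z0, mu, t):
    """Draw z_t' ~ N(rho_t, lambda_t I) via Eqs. (11)-(12).

    z0, mu: (batch, frames, dim); t: (batch, 1, 1) with values in [0, 1].
    Returns the noisy latent and the score target -lambda_t^{-1} eps_t.
    """
    decay = torch.exp(-0.5 * beta_integral(t))
    rho = (1.0 - decay) * mu + decay * z0        # Eq. (11)
    lam = 1.0 - torch.exp(-beta_integral(t))     # Eq. (12)
    eps = lam.sqrt() * torch.randn_like(z0)      # eps_t with covariance lam * I
    return rho + eps, -eps / lam
```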
We define the reverse process as an SDE solver to obtain the normalized latent representations \(z_{0}^{\prime}\sim p_{0}(z^{\prime})\). We use a score estimation network \(s_{\theta}\) to approximate the intractable score:
\[\begin{split} dz_{t}^{\prime}&=\left[\frac{1}{2}( \hat{\mu}-z_{t}^{\prime})-s_{\theta}(z_{t}^{\prime},\hat{\mu},h_{cond},t) \right]\beta_{t}dt\\ &+\sqrt{\beta_{t}}d\tilde{W}_{t},\hskip 56.905512ptt\in[0,1] \,,\end{split} \tag{13}\]
where \(\tilde{W}_{t}\) is the reverse Brownian motion.
Following [35], we compute the expected value of the estimated gradients of the log-density of the noisy latent \(z_{t}^{\prime}\):
\[\mathcal{L}_{diff}=\mathbb{E}_{z_{0}^{\prime},z_{t}^{\prime}}\left[||s_{ \theta}(z_{t}^{\prime},\hat{\mu},h_{cond},t)-\nabla_{z_{t}^{\prime}}\log p_{ t}(z_{t}^{\prime}|z_{0}^{\prime})||_{2}^{2}\right], \tag{14}\]
where \(\nabla_{z_{t}^{\prime}}\log p_{t}(z_{t}^{\prime}|z_{0}^{\prime})=-\lambda_{t}^{-1}\epsilon_{t}\) and \(\epsilon_{t}\sim\mathcal{N}(0,\lambda_{t}I)\). Furthermore, we adopt a temperature parameter \(\tau\) for the data-driven prior distribution \(\mathcal{N}\big{(}\hat{\mu},\tau^{-1}I\big{)}\) during sampling, which helps the latent generator to maintain the quality when \(\tau>1\), similar to the approach in [30].
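For inference, the reverse SDE in Eq. (13) can be discretized with a simple Euler-Maruyama loop starting from the temperature-scaled prior \(\mathcal{N}(\hat{\mu},\tau^{-1}I)\). The step count and the noise-free final step are illustrative choices (the solver is not specified in this excerpt); the schedule constants are reused from the previous sketch.

```python
import torch

@torch.no_grad()
def sample_latent(score_net, mu, h_cond, n_steps=50, tau=1.5):
    """Euler-Maruyama sketch of the reverse SDE in Eq. (13)."""
    z = mu + torch.randn_like(mu) / tau ** 0.5   # prior N(mu, tau^{-1} I)
    h = 1.0 / n_steps
    for i in range(n_steps):
        t = 1.0 - i * h
        beta_t = BETA_0 + (BETA_1 - BETA_0) * t
        t_batch = torch.full((z.size(0),), t, device=z.device)
        score = score_net(z, mu, h_cond, t_batch)
        # One backward-in-time step of Eq. (13).
        z = z - beta_t * h * (0.5 * (mu - z) - score)
        if i < n_steps - 1:  # keep the final step noise-free
            z = z + (beta_t * h) ** 0.5 * torch.randn_like(z)
    return z
```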
We jointly optimize the latent generator and condition encoder based on the following objective:
\[\mathcal{L}_{lg}=\mathcal{L}_{diff}+\lambda_{prior}\mathcal{L}_{prior}, \tag{15}\]
where \(\lambda_{prior}\) is the loss weight for the prior loss \(\mathcal{L}_{prior}\).
### _Unsupervised Singing Voice Learning Framework_
Conventional SVS models require paired data (audio-musical score corpora) for training. Furthermore, these models cannot synthesize the singing voice of an untrained speaker without special techniques such as zero-shot adaptation. We extended our proposed model to HiddenSinger-U, an unsupervised singing voice learning framework, to mitigate the difficulty of collecting paired datasets. This framework enables the model to use unlabeled data during training. We introduce two additional encoders into the condition encoder to model the unsupervised lyrics and melody representation, as shown in Fig. 2 (c): an unsupervised lyrics encoder (lyrics-U encoder) and an unsupervised melody encoder (melody-U encoder). Furthermore, we employ contrastive learning in the proposed framework.
#### Iii-D1 Lyrics-U Encoder
We use a self-supervised speech representation method for the linguistic information. Previous works [45, 46] have demonstrated that the speech representation from the middle layer of a self-supervised model contains phonetic information. Therefore, the phonetic information can be leveraged by extracting the self-supervised representation from the target audio. We perform information perturbation before extracting the self-supervised representation to mitigate speaker information in the target audio. The information perturbation causes the self-supervised model to focus on extracting only phonetic information. Subsequently, the lyrics-U encoder encodes the self-supervised representation into a frame-level unsupervised lyrics representation.
#### Iii-D2 Melody-U Encoder
SVS models still require melody information of the target audio to synthesize singing voices. We first extract the fundamental frequency (\(F0\)) from the audio to extract melody information. Thereafter, we quantize the \(F0\) and encode it into a pitch embedding to obscure speaker information in the target audio. Subsequently, the melody-U encoder takes the pitch embedding to extract a frame-level unsupervised melody representation.
#### Iii-D3 Contrastive Learning
We observed that it is insufficient to only use the objective \(\mathcal{L}_{lg}\) to optimize HiddenSinger-U owing to the gap between the paired representations (e.g., the lyrics and unsupervised lyrics representation). To maximize the agreement and penalize the dissimilarity between the paired representations, we introduce the contrastive loss [47, 48] for the paired data as follows:
\[\begin{split}\mathcal{L}_{cont.}&=-\sum_{t=1}^{T}\log\frac{e^{\cos\left(h_{*}^{(t)},\tilde{h}_{*}^{(t)}\right)/\tau_{cont}}}{\sum_{k\in\xi_{[k\neq t]}}e^{\cos\left(h_{*}^{(t)},\tilde{h}_{*}^{(k)}\right)/\tau_{cont}}}\\&\quad-\sum_{t=1}^{T}\log\frac{e^{\cos\left(\tilde{h}_{*}^{(t)},h_{*}^{(t)}\right)/\tau_{cont}}}{\sum_{k\in\xi_{[k\neq t]}}e^{\cos\left(\tilde{h}_{*}^{(t)},h_{*}^{(k)}\right)/\tau_{cont}}},\end{split} \tag{16}\]
where \(\cos(\cdot,\cdot)\) calculates the cosine similarity between the pairs, \(\tau_{cont}\) denotes the temperature, \(\tilde{h}_{*}\) denotes the unsupervised counterpart of \(h_{*}\), and \(\xi_{[k\neq t]}\) represents a set of random time indices used as negative samples. Following [48], we randomly select several unmatched frames within each paired representation as negative samples. We apply the contrastive loss to each type of representation \(h_{*}\in\{h_{lyrics},h_{melody}\}\). The gap between the paired representations can be reduced by adding the contrastive terms \(\mathcal{L}_{cont.}\) to the objective \(\mathcal{L}_{lg}\).
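A minimal sketch of the symmetric contrastive term for one representation type is given below; the temperature value, the number of negatives, and the exact positive/negative bookkeeping are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(h, h_u, tau_cont=0.1, n_neg=10):
    """Symmetric frame-level contrastive loss in the spirit of Eq. (16).

    h, h_u: (T, dim) paired supervised / unsupervised representations.
    Negatives are random unmatched frames, following [48].
    """
    T = h.size(0)
    # Pairwise cosine similarities scaled by the temperature: (T, T).
    sim = F.cosine_similarity(h.unsqueeze(1), h_u.unsqueeze(0), dim=-1) / tau_cont
    loss = 0.0
    for t in range(T):
        neg = torch.randperm(T)[:n_neg]
        neg = neg[neg != t]
        # h -> h_u direction: the positive is the matching frame.
        logits = torch.cat([sim[t, t].view(1), sim[t, neg]])
        loss = loss - F.log_softmax(logits, dim=0)[0]
        # h_u -> h direction.
        logits = torch.cat([sim[t, t].view(1), sim[neg, t]])
        loss = loss - F.log_softmax(logits, dim=0)[0]
    return loss / T
```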
## V Experiment and Results
### _Experimental Setup_
#### V-A1 Datasets
We trained HiddenSinger on the Guide vocal dataset1 to synthesize the singing voice. The Guide vocal dataset contains approximately \(157.39\) hours of audio for \(4,000\) paired Korean songs. We divided the audio into two-bar segments to facilitate the model training, resulting in \(93,127\) samples. Subsequently, we divided our dataset into three subsets: \(89,186\) samples for training, \(1,975\) samples for validation, and \(1,966\) samples for testing.
Footnote 1: [https://bit.ly/3GUEMIX](https://bit.ly/3GUEMIX)
We trained HiddenSinger-U using the Guide vocal dataset and an internal singing voice dataset containing approximately \(3.30\) hours of audio for \(316\) Korean songs that do not have musical scores to evaluate the unsupervised singing voice learning framework. The internal dataset was divided into three subsets: \(1,130\) samples for training, \(99\) samples for validation, and \(97\) samples for testing. Moreover, we considered specific speakers in the Guide vocal dataset as unlabeled data during training. To train the audio autoencoder, we used the aforementioned dataset, a multi-speaker singing dataset2, and a children's singing dataset [49], which together contain a total of \(285.1\) hours of audio for \(8,781\) K-pop songs.
Footnote 2: [https://bit.ly/3Q9rOkn](https://bit.ly/3Q9rOkn)
#### V-A2 Pre-processing
We downsampled the audio to 24,000 Hz for training. We transformed the audio into a linear-spectrogram with 1,025 bins to train the audio autoencoder. For the reconstruction loss, we used the Mel-spectrogram with 128 bins. We grouped words into phrases and separated the phrases with the 16th rest in a text sequence for the lyrics encoder input. Subsequently, we converted the text sequence into a phoneme sequence using the grapheme-to-phoneme tool3. We used a 64th note resolution for the note duration tokens. We used the range \([16,256]\) for the tempo values of the tempo tokens. We extracted the self-supervised representation from a middle layer of XLS-R [50], a wav2vec 2.0 [51] model pre-trained on a 128-language dataset including Korean, as input for the lyrics-U encoder. Prior to the extraction, we resampled the audio to 16,000 Hz and perturbed it. We then interpolated the extracted representation back to match the 24,000 Hz sampling rate.
Footnote 3: [https://github.com/Kyubyong/g2p](https://github.com/Kyubyong/g2p)
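The spectrogram settings above translate into torchaudio roughly as follows; the hop length is an assumption (it is not stated here), while n_fft=2048 is implied by the 1,025 linear-frequency bins.

```python
import torchaudio
import torchaudio.transforms as T

wav, sr = torchaudio.load("segment.wav")                 # hypothetical input
wav = torchaudio.functional.resample(wav, sr, 24_000)    # downsample to 24 kHz

linear_spec = T.Spectrogram(n_fft=2048, hop_length=256, power=1.0)(wav)
mel_spec = T.MelSpectrogram(sample_rate=24_000, n_fft=2048,
                            hop_length=256, n_mels=128)(wav)
print(linear_spec.shape, mel_spec.shape)  # (1, 1025, frames), (1, 128, frames)
```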
#### V-A3 Training
We trained the audio autoencoder using the AdamW optimizer [52] with a learning rate of \(2\times 10^{-4}\), \(\beta_{1}=0.8\), \(\beta_{2}=0.99\), and a weight decay of \(\lambda=0.01\). We adopted a windowed generator training [53, 54, 55] for efficiency. We randomly extracted segments of the raw waveform with a window size of 128 frames as the input for the encoder to capture the linguistic features. Furthermore, the decoder took a randomly sliced segment of the quantized latent representation \(z_{q}\) with a window size of 32 frames. We used the corresponding audio segment from the ground-truth audio as the training target. Four NVIDIA RTX A6000 GPUs were used for the training. The batch size was set to 32 per GPU and the model was trained for up to 1M steps.
We jointly trained the condition encoder and latent generator using the AdamW optimizer with a learning rate of \(5\times 10^{-5}\), \(\beta_{1}=0.8\), \(\beta_{2}=0.99\), and a weight decay of \(\lambda=0.01\). We randomly extracted segments of the latent representations \(z_{0}\) with a window size of 128 frames for efficient training. We used two NVIDIA RTX A6000 GPUs for training and set the batch size to 32 per GPU. The model was trained for up to 2M steps.
### _Implementation Details_
#### V-B1 Audio Autoencoder
The encoder comprises non-causal WaveNet residual blocks, as proposed by [56]. The decoder uses a HiFi-GAN V1 generator [19]. We implemented 30 quantizers with codebook sizes of 1,024 entries and 128 dimensions for the residual vector quantizer blocks.
#### V-B2 Condition Encoder
The lyrics, melody, and enhanced condition encoders comprise four feed-forward Transformer (FFT) blocks [54] with relative-position encoding [57] following Glow-TTS [58]. In each FFT block, we set the number of attention heads to 2, the hidden size to 192, and kernel size to 9. The prior estimator is a single linear layer.
#### V-B3 Latent Generator
As illustrated in Fig. 3, a non-causal WaveNet-based denoiser architecture is used for the score estimation network \(s_{\theta}\), similar to the architecture in [9, 32]. We set the number of dilated convolution layers to 20, the residual channels to 256, and kernel size to 3 for the score estimation network. We set the dilation to 1 in each layer. We set \(\beta_{0}=0.05\), \(\beta_{1}=20\) and \(T=1\) to train the latent generator and \(\tau=1.5\) to sample the latent representation during inference.
#### V-B4 Unsupervised Learning Module
The lyrics-U and melody-U encoders have the same architecture as the lyrics and melody encoders, respectively, which consist of four FFT blocks with relative-position encoding. We used the 12th layer of the pre-trained XLS-R to extract the self-supervised representation. We quantized \(F0\) into 128 intervals to mitigate the speaker information.
Fig. 3: Architecture of the score estimation network in the latent generator
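As an illustration of the F0 quantization described in the unsupervised learning module above, the sketch below maps F0 values to 128 coarse pitch IDs; the log-scale binning, the F0 range, and the reserved unvoiced ID are assumptions for illustration.

```python
import math
import torch

def quantize_f0(f0, n_bins=128, f_min=65.0, f_max=2093.0):
    """Map F0 (Hz) to n_bins coarse pitch IDs to obscure speaker identity."""
    voiced = f0 > 0
    log_f0 = torch.log(f0.clamp(min=f_min, max=f_max))
    edges = torch.linspace(math.log(f_min), math.log(f_max), n_bins - 1)
    ids = torch.bucketize(log_f0, edges) + 1      # IDs 1..n_bins for voiced frames
    return torch.where(voiced, ids, torch.zeros_like(ids))  # 0 = unvoiced
```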
### _Subjective Metrics_
We conducted a five-scale naturalness mean opinion score (nMOS) listening test on the test dataset to evaluate the naturalness of the audio. Each audio sample was evaluated by 15 native Korean speakers. The subjective metrics are reported with 95% confidence intervals in this paper.
### _Objective Metrics_
We calculated the objective metrics to evaluate various types of distance between the ground-truth and synthesized audio. We considered four metrics to evaluate the SVS quality: 1) spectrogram mean absolute error (MAE); 2) pitch error; 3) periodicity error; and 4) F1 score of voiced/unvoiced classification (V/UV F1). We used the implementation of CARGAN [59] to evaluate the pitch, periodicity, and V/UV F1. Moreover, we provided additional objective metrics for the reconstruction quality, namely the perceptual evaluation of speech quality (PESQ) [60], in Subsection V-F.
#### V-D1 Spectrogram mean absolute error (MAE)
\[MAE=\frac{1}{T}\sum_{i=1}^{T}|s_{i}-s_{i}^{\prime}|, \tag{17}\]
where \(s_{i}\) and \(s_{i}^{\prime}\) denote the \(i\)-th spectrogram frame from the ground-truth and synthesized waveform, respectively. \(T\) represents the frame lengths of the spectrogram.
#### V-D2 Pitch error
\[Pitch=\sqrt{\frac{1}{T}\sum_{i=1}^{T}{(1200\times(\log_{2}{p_{i}}-\log_{2}{p_{i }^{\prime}}))^{2}}}, \tag{18}\]
where \(p_{i}\) and \(p_{i}^{\prime}\) represent the \(i\)-th pitch values extracted from the ground-truth and synthesized waveforms using torchcrepe4, respectively. Following CARGAN, we measure the pitch error only on the voiced parts of a waveform.
Footnote 4: [https://github.com/maxmorrison/torchcrepe](https://github.com/maxmorrison/torchcrepe)
#### V-D3 Periodicity error
\[Periodicity=\sqrt{\frac{1}{T}\sum_{i=1}^{T}{(\phi_{i}-\phi_{i}^{\prime})^{2}}}, \tag{19}\]
where \(\phi_{i}\) and \(\phi_{i}^{\prime}\) are the \(i\)-th extracted phase features from the ground-truth and synthesized waveform by using torchcrepe, respectively.
Note that the lengths of the synthesized and target singing voices are the same, because the musical score determines the duration of each note. Therefore, we do not need time alignment, such as dynamic time warping [61], to calculate the objective evaluations.
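Given two F0 tracks (e.g., extracted with torchcrepe) and a voicing mask, Eqs. (17)-(18) reduce to a few lines; this sketch assumes pre-aligned, equal-length arrays, which holds here since no time warping is needed.

```python
import numpy as np

def pitch_error_cents(p_true, p_pred, voiced):
    """RMSE in cents over voiced frames, per Eq. (18)."""
    diff = 1200.0 * (np.log2(p_true[voiced]) - np.log2(p_pred[voiced]))
    return float(np.sqrt(np.mean(diff ** 2)))

def spectrogram_mae(s_true, s_pred):
    """Frame-averaged spectrogram MAE, per Eq. (17)."""
    return float(np.mean(np.abs(s_true - s_pred)))
```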
### _Singing Voice Synthesis_
We compared the audio generated by our proposed models, HiddenSinger and HiddenSinger-U, to the outputs of the following systems: 1) GT, Ground-truth audio; 2) HiFi-GAN [19], in which we reconstructed the audio from the ground-truth Mel-spectrogram using HiFi-GAN; 3) FastSpeech 2 [54] + HiFi-GAN, in which we added a melody encoder for SVS; 4) DiffSinger [9] + HiFi-GAN; and 5) VISinger [13], which is an end-to-end SVS system. We trained HiddenSinger-U on the same SVS dataset, of which 10% was defined as unlabeled data. Moreover, for fair comparisons, we trained the HiFi-GAN using the same datasets and training steps that were used to train the audio autoencoder.
As indicated in Table I, according to the subjective audio evaluation, HiddenSinger and HiddenSinger-U outperformed the other SVS models in terms of naturalness. Moreover, our proposed models reduced the pitch
Fig. 4: Visualization of generated F0 contours: (a) F0 contour variations of synthesized singing voice for five inferences with the same musical score; (b) F0 contour variations of synthesized singing voice for five speakers with the same musical score.
error without requiring auxiliary predictions, such as pitch or energy prediction. These results indicate that HiddenSinger can learn accurate pitch information.
However, VISinger achieved better performance in terms of the MAE, periodicity error, and V/UV F1 score. As our proposed models generate the latent representation through stochastic iterations, the stochasticity of the models may increase the distance between the ground-truth and synthesized audio. We computed the \(F0\) contour from the synthesized audio of HiddenSinger using Parselmouth5 to demonstrate the stochasticity of the models. As indicated in Fig. 4 (a), we performed inference five times for a speaker with the same musical score. It can be observed that HiddenSinger synthesized singing voices that contained appropriate tunes based on the musical score and variations such as intonation. As indicated in Fig. 4 (b), we synthesized singing voices using five different speakers and the same musical score. It can be observed that HiddenSinger generated various styles of singing voices from different speakers.
Footnote 5: [https://github.com/YannickJadoul/Parselmouth](https://github.com/YannickJadoul/Parselmouth)
Furthermore, we visualized the Mel-spectrograms of the synthesized audio to compare the models. Although the shapes of the harmonics that were synthesized by HiddenSinger differed slightly from those of the ground-truth Mel-spectrogram, the harmonics in the high-frequency band of HiddenSinger were more fine-grained than those of the other systems, as illustrated in Fig. 5. These results demonstrate that HiddenSinger generates high-fidelity and natural singing voices using the denoising process that can inject several variations.
### _Audio Autoencoder_
To demonstrate the performance of the audio autoencoder, we evaluated the quality of the reconstructed audio. We reconstructed the singing voice dataset used to train the VISinger for a fair comparison. As each decoder of our audio autoencoders leverages the HiFi-GAN V1 generator [19], they achieved similar performance to HiFi-GAN in terms of the objective evaluation metrics in Table II. However, in terms of naturalness, our audio autoencoders achieved slightly better performance than HiFi-GAN. Moreover, the reconstruction results of the VISinger exhibited the worst performance in terms of the subjective and objective evaluation measures. These observations suggest that the end-to-end training may reduce the quality of the reconstructed audio, resulting in the upper bound of the audio generation being degraded.
We evaluated the effectiveness of the different combinations of our audio autoencoder and the latent generator. We trained the latent generator separately using different regularized latent spaces. As indicated in Table III, the latent generator with the RVQ-regularized autoencoder outperformed the other combinations. Furthermore, it was difficult for the latent space without regularization to generate the latent representation with the latent diffusion models. These results indicate that the RVQ-regularized latent space is more suitable for sampling targets than the KL-regularized latent space in our setting, similar to the results reported in [26].
### _Unsupervised Singing Voice Learning Framework_
We compared the changes in the evaluation metrics according to the ratio of unlabeled data in the training dataset to verify the effectiveness of the unsupervised singing voice learning
Fig. 5: Visualization of generated samples with varying systems: (a) GT, (b) HiFi-GAN, (c) FastSpeech 2, (d) DiffSinger, (e) VISinger, and (f) HiddenSinger.
framework. We pre-defined certain speakers as unlabeled data that consisted of only audio for verification. We conducted the nMOS test to evaluate the naturalness of the audio. Moreover, we conducted a four-scale similarity MOS (sMOS) test to evaluate the voice similarity between the ground-truth and generated audio. We evaluated samples of pre-defined speakers that were considered unlabeled data in every setting, except for the 0% and 2% ratio settings in both MOS tests. The 0% ratio setting represents HiddenSinger, which has been trained without the unsupervised singing voice learning framework.
It can be observed from Table IV that the differences in the nMOS results were statistically insignificant across most of the settings. This suggests that the unsupervised singing voice learning framework helps the model learn to synthesize a natural singing voice, regardless of changes in the unlabeled ratio. Moreover, the objective evaluations demonstrate that the proposed framework can be trained stably in every setting.
However, as shown in Table IV, the similarity of the synthesized singing voice decreased as the unlabeled ratio increased. Because the note pitch of a musical score differs from the \(F0\) of a human singer's voice, the models learned slightly different speaker identities from the unlabeled data. Consequently, human listeners could distinguish between the ground-truth and synthesized audio. Although this difference degrades the similarity as the unlabeled ratio increases, the proposed framework is effective in synthesizing a natural singing voice with proper linguistic information and a perceptually similar speaker identity. Moreover, the contrastive terms \(\mathcal{L}_{cont.}\) and information perturbation help stabilize the training. In particular, it is difficult to synthesize an appropriate singing voice when training is performed without the contrastive terms.
### _Ablation Study_
We conducted an ablation study to verify the effectiveness of each module in the proposed system. The results are presented in Table V. It can be observed that the subjective and objective evaluations significantly degraded with the removal of the enhanced condition encoder. Furthermore, the pronunciation of the synthesized audio was highly inaccurate without the enhanced condition encoder. Therefore, the enhanced condition encoder is necessary for the appropriate functioning of the proposed model.
We performed training on the latent generator with the audio codec \(z_{q}\) as the target of the latent diffusion models. Table V indicates that the generation of \(z_{0}\) could provide more natural audio than the generation of \(z_{q}\) in the latent generator. As the RVQ blocks may refine the sampled latent representation \(\hat{z}_{0}\) with residual operations, the generation of \(z_{0}\) is superior in terms of naturalness.
Furthermore, we considered a standard Gaussian as the prior, following the original denoising diffusion probabilistic models [23]. However, the data-driven priors outperformed the standard Gaussian priors. This indicates that the trajectory between the data space and the data-driven priors can be approximated more stably than the trajectory between the data space and a standard Gaussian.
## VI Conclusions
We have introduced HiddenSinger, a novel approach that enables the synthesis of high-quality and high-diversity singing voice audio through the integration of a neural audio codec and latent diffusion models. Our study demonstrated the efficacy of the audio autoencoder in reconstructing high-fidelity audio using low-dimensional audio codecs. Furthermore, we successfully generated latent representations conditioned on a musical score using latent diffusion models. The audio autoencoder then reconstructed the audio from the generated latent representation. We extended our model to an unsupervised singing voice learning framework that can be trained without lyrics and note information using self-supervised representation. Our latent diffusion models could be used in any speech domain, including text-to-speech and voice conversion systems. However, our model is still limited in adapting to novel singing styles (as opposed to novel voices). In future work, we will attempt to implement zero-shot singing style transfer by adopting style-generalized generative models.
## VII Discussion
### _Broader Impact_
Recently, neural audio codecs have been used in various tasks [62, 63]. Following concurrent works [62], our proposed model can be extended to a text-to-speech system. Moreover, we can address the data scarcity problem by applying our unsupervised learning framework to low-resource languages.
### _Social Negative Impact_
Although HiddenSinger may have practical applications such as podcasts or music generation, there is an increased risk of potential misuse of such technologies. In particular, unauthorized usage of data from web crawlers in SVS can give rise to concerns related to copyright infringement and voice spoofing. We want to emphasize that we strongly discourage the utilization of our work for any illicit or unethical purposes.
### _Limitation_
Although we adopt the latent diffusion models for efficient latent generation, diffusion models require many iterative steps to generate the representations. In the future, we will introduce consistency models [64] to distill the teacher diffusion models for single-step generation.
|
2309.00438
|
A shape-based heuristic for the detection of urban block artifacts in
street networks
|
Street networks are ubiquitous components of cities, guiding their
development and enabling movement from place to place; street networks are also
the critical components of many urban analytical methods. However, their graph
representation is often designed primarily for transportation purposes. This
representation is less suitable for other use cases where transportation
networks need to be simplified as a mandatory pre-processing step, e.g., in the
case of morphological analysis, visual navigation, or drone flight routing.
While the urgent demand for automated pre-processing methods comes from various
fields, it is still an unsolved challenge. In this article, we tackle this
challenge by proposing a cheap computational heuristic for the identification
of "face artifacts", i.e., geometries that are enclosed by transportation edges
but do not represent urban blocks. The heuristic is based on combining the
frequency distributions of shape compactness metrics and area measurements of
street network face polygons. We test our method on 131 globally sampled large
cities and show that it successfully identifies face artifacts in 89\% of
analyzed cities. Our heuristic of detecting artifacts caused by data being
collected for another purpose is the first step towards an automated street
network simplification workflow. Moreover, the proposed face artifact index
uncovers differences in structural rules guiding the development of cities in
different world regions.
|
Martin Fleischmann, Anastassia Vybornova
|
2023-09-01T13:11:35Z
|
http://arxiv.org/abs/2309.00438v2
|
# A shape-based heuristic for the detection of urban block artifacts in street networks
###### Abstract
Street networks are ubiquitous components of cities, guiding their development and enabling movement from place to place; street networks are also the critical components of many urban analytical methods. However, their graph representation is often designed primarily for transportation purposes. This representation is less suitable for other use cases where transportation networks need to be simplified as a mandatory pre-processing step, e.g., in the case of morphological analysis, visual navigation, or drone flight routing. While the urgent demand for automated pre-processing methods comes from various fields, it is still an unsolved challenge. In this article, we tackle this challenge by proposing a cheap computational heuristic for the identification of "face artifacts", i.e., geometries that are enclosed by transportation edges but do not represent urban blocks. The heuristic is based on combining the frequency distributions of shape compactness metrics and area measurements of street network face polygons. We test our method on 131 globally sampled large cities and show that it successfully identifies face artifacts in 89% of analyzed cities. Our heuristic of detecting artifacts caused by data being collected for another purpose is the first step towards an automated street network simplification workflow. Moreover, the proposed face artifact index uncovers differences in structural rules guiding the development of cities in different world regions.
**Keywords:** street networks, network simplification, blocks, urban form, urban morphology, urban morphometrics, shape analysis, routing, OpenStreetMap
## 1 Introduction
Cities have been the object of scientific inquiry for thousands of years [1], particularly for their growth dynamics, population development, and spatial structure. Within the last 50 years, powered by the emergence of Big Data and computational power, data-driven approaches to the study of cities have gained unprecedented importance. For studies on city structure and place connectivity, urban street network data has proven to be a particularly useful point of departure. Street networks are line-based abstractions of the street space, containing information on the connections between intersections and street segments. They are popular objects of urban analyses due to their explanatory power, direct relationship to an extensive field of graph theory, and simplicity (digitizing a street network is much easier than digitizing buildings). The feasibility of studies that take street networks as a point of departure has greatly increased with open source geospatial data becoming available on platforms like OpenStreetMap (OSM) [2], allowing a wide range of applications [3], from transportation network design [4], assessment of urban sprawl [5] or evolution and distribution of street patterns [6, 7] to the classification of urban form [8, 9].
Street network datasets vary significantly in their level of detail and data quality [10, 11]. Whether a street network data set is fit for purpose depends to a large extent on the specific application. A frequent data processing challenge, common to many street network applications, is to reduce the level of detail from one use case (e.g. traffic routing) to the other (e.g. morphological analysis) without losing relevant information or introducing new imprecisions. For example, traffic routing applications require adequately represented directionality of edges (street segments) [12], while studies on urban morphology [13, 14] or on urban air space [15, 16] aim to reflect the space in between buildings and its perceptional configuration and thus do not require the edges to be directed, and do not want transportation geometries like roundabouts included.
In many cases, deciding which information to keep and which to aggregate is easy for a human, but challenging for an algorithm (see Figure 1). This is true both for studies that are concerned with the street network itself and for studies that look at the polygons _enclosed by_ the street network, which is a common way of conceptualizing urban blocks in morphology. Moreover, the level of detail of the network data set might vary greatly depending on the data source, and a simplification algorithm usually needs to be tailored to the data source, adding further complexity.
Working with street network data entails not only valuable insights, but also several much-lamented yet unresolved methodological challenges. In this article, we aim to tackle one of these challenges related to network simplification and network transformation (from transportation pathways to urban morphology), namely the automated detection of _street network face artifacts_, or short: _face artifacts_. We define face artifacts as polygons enclosed by transportation network edges but _not_ representing urban blocks (see the section below for more details).
The rest of the paper is organized as follows: next, we provide a precise problem description, followed by a review of previous work and terminology. We then introduce our methodology, proposing a cheap computational heuristic that allows the automated identification of face artifacts based on the geometric shape of areas enclosed by street edges. We apply the heuristic to 131 cities across the globe, and present and evaluate the results. We conclude by discussing our contribution and outlining potential further steps.
### Problem description
Each geospatially encoded street network comes with a certain level of detail and a focus on specific elements of street space. While all attempt to capture primarily connectivity, which is the primary purpose of streets, the resulting geometries and their graph representations can vastly differ. On one hand, we have networks where every traffic lane and every intersection have been meticulously digitized with their respective attributes of directionality, hierarchy, and speed limits; this type of graph representation is needed for routing applications [12, 17, 18]. On the other hand, we have networks that attempt to capture human perception of space and navigation in a city, where detailed information on intersection geometries usually makes little sense and is omitted [8, 19]. While each of these graph representations has its own use cases, the former is more frequently available; therefore, researchers often need to start from a detailed network and derive a simplified one before starting an analysis. The difference between a transportation-focused and a perception-focused network can be better understood by examining the areas enclosed by network edges, which is a commonly used proxy for urban blocks [20, 21, 22, 23]. In graph theory, the polygons enclosed by the network edges in a planar space are called _(graph) faces_[24]. A detailed look at these face polygons for a given street network often reveals artifacts of transport-focused geometry - polygons that do not capture urban blocks, as illustrated in Figure 1.
The face polygons colored in red are not urban blocks enclosed by streets; rather, they appear in the network due to the representation of dual carriageways as separate network edges. This way of network representation is suitable for traffic
Figure 1: a) Bridge, Amsterdam; b) Roundabout, Abidjan; c) Intersection, Kabul; d) Motorway, Vienna. Polygons classified as face artifacts are shown in red, and the OSM street network (without service roads) is shown in black. Map data ©OpenStreetMap contributors ©CARTO
routing but can pose problems for other applications. When the goal is to generate polygons that are representative of urban blocks, and at the same time to ensure that the graph represents the morphological network rather than the transportation network, we suggest calling such face polygons "face artifacts", as they occur only as a result of a data preparation model not suited for the purpose.
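As a concrete illustration of graph faces, the snippet below extracts face polygons from an OSM street network and computes their areas together with a simple compactness measure (the isoperimetric quotient). The osmnx/shapely tooling and the specific metric are illustrative assumptions; the paper's actual heuristic, which combines frequency distributions of compactness and area, is introduced in its Methodology section.

```python
import math

import geopandas as gpd
import osmnx as ox
from shapely.ops import polygonize

# Drivable street network for one of the sampled cities.
G = ox.graph_from_place("Vienna, Austria", network_type="drive")
edges = ox.graph_to_gdfs(G, nodes=False)

# Node the edges and enclose the planar faces.
faces = gpd.GeoSeries(list(polygonize(edges.geometry.unary_union)), crs=edges.crs)
faces = faces.to_crs(faces.estimate_utm_crs())  # project for metric area/length

# Isoperimetric quotient: 4*pi*A / P^2 (1 for a circle, near 0 for slivers).
compactness = 4 * math.pi * faces.area / faces.length ** 2
print(faces.area.describe(), compactness.describe())
```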
Face artifacts pose a twofold problem. First, in studies concerned with urban form, they introduce a false signal into the distribution of urban shapes and distort the actual shape of their neighboring polygons. Second, in studies concerned with the properties and patterns of the street network, face artifacts introduce superfluous network edges, thus distorting all network metrics based on node degree and/or shortest path computations. A further aggravating factor is that the extent to which face artifacts distort results depends on the analysis conducted and cannot be quantified without prior identification of such polygons or superfluous edges. Thus, no matter whether one is interested in the topology of the urban street network or the morphology of urban shapes enclosed by the network, face artifacts might need to be removed as part of data preprocessing and replaced by single network edges. Manual identification of face artifacts would be unambiguous but prohibitively costly, not scalable, and not entirely reproducible. Although many authors have already pointed out this issue in a wide range of contexts (see section below), a fully automated approach to identifying and potentially removing face artifacts is, to our knowledge, still non-existent. We therefore pose the following research question, focused on the first necessary step to tackle this challenge:
_How can face artifacts in an urban street network be computationally identified?_
In this article, we propose a method to answer this question, and test the proposed method's universality on case studies from across the globe.
### Previous work
While face artifacts are a commonly known problem in the research community, there is a lack of coherent terminology for the phenomenon. Previous studies have referred to the same issue in widely varying terms, making it substantially more challenging to conduct a comprehensive literature review.
Li et al. [25] point out the difficulties of extracting multilane roads from OSM that arise from each lane being represented as a separate linestring (a linestring is a geometry object representing a linear element such as a street edge following the Simple Features specification [26]). The authors propose a method to identify and merge face artifacts, which they call "multilane polygons", i.e., adjacent polygons covering a single street area resulting from mapping multiple street lanes, through a support vector machine (SVM) machine learning algorithm that uses five shape parameters as input. While this method does succeed in identifying face artifacts at multilane roads, it is only reproducible by users with advanced machine learning skills; furthermore, the method requires an input of manually classified training data, which adds substantial effort.
Fan et al. [27] identify face artifacts, which they call "non-urban block polygons", as a data preprocessing issue in their study on feature matching between OSM and reference data. The authors use the SVM approach developed by Li et al. [25] mentioned above to identify face artifacts; they point out that the approach fails for smaller face artifacts at traffic junctions.
In a similar use case but from a different field, Sanzana et al. [28] elaborate on the process of deriving hydrological response units for drainage network flow modeling and find that error correction is needed for "bad-shaped polygons". The authors classify face artifacts, formed by roads and footpaths, as "sliver polygons", a subcategory of "bad-shaped polygons". The authors further propose a method of polygon decomposition into smaller "well-shaped polygons", which, however, cannot be conceptually transferred from hydrological to street networks.
Grippa et al. [21] develop a workflow for land use classification on an urban block level. The urban blocks are derived through polygonization of the OSM street network; the authors point out that some of the polygons do not represent actual urban blocks, and distinguish between "urban blocks" and "sliver polygons". Sliver polygons are detected based on shape and size criteria, which are user-defined and not further specified or analyzed by the authors. To remove the sliver polygons from the data set, a semi-automated workflow, partially in PostGIS, is implemented.
Ludwig et al. [29] take up the approach by Grippa et al. [21], described above, within the context of urban green space identification from satellite imagery. So-called "city blocks", needed as units for the analysis, are derived through polygonization of the combined OSM networks of streets, railways, and waterways. To identify "sliver polygons", the authors add a threshold criterion of minimum area to the shape and size criteria of Grippa et al. [21].
The study by Vybornova et al. [30] aims at identifying gaps in a network of bicycle infrastructure, which is defined as a subgraph of the OSM street network. For estimating network flow, the authors apply a shortest-path algorithm and point out that results are partially distorted by "parallel edges", i.e., the network edges at the boundaries of face artifacts. The study presents a network shortest path-based approach for the identification of parallel edges but no solution to effectively remove these from the network.
Lastly, a recent study by Shpuza [23] on the shape and size statistics of urban blocks describes elongated urban blocks that are delimited either by a street or another type of obstacle (e.g., a waterbody) and that contain no buildings, as "edge blocks". The author points out that edge blocks can be identified as outliers in a so-called "shape matrix" based on two geometrical parameters: relative distance and directional fragmentation. However, in this definition, edge blocks represent actual urban blocks rather than scattered parts of the street space.
The studies cited above demonstrate that face artifacts present a real data
preprocessing challenge across a wide range of disciplines that work with street network data - from hydrological flow modeling and satellite imagery-based land use classification to transportation planning and urban morphology. However, no previous study has so far tackled this issue systematically, which is also seen in the lack of a coherent terminology for this common problem. In addition, the solutions found in the literature for face artifact identification are either not reproducible, not transferable to other use cases, or not automated.
### A side note on terminology
Some recent studies refer to face artifacts as "sliver polygons" [21, 28, 29]. However, these are two conceptually different terms, since face artifacts arise as a consequence of a context-dependent redundancy of mapped line features, while sliver polygons stem from mismatching boundaries in vector overlays of polygon features [31, 32, 33]. The terms "multilane polygons" [25] or "parallel edges" [30] do not reflect other transportation geometries causing the issue (e.g., complex intersections), while "bad-shaped polygons" [28] is a vague term which does not refer back to the actual issue. In line with our problem definition, confined to the context of street networks, we therefore suggest "face artifact" as a more precise term derived from graph theory, one that encodes both the origin (i.e., network polygonization) and the erroneous nature (i.e., an artifact in the context of street network polygonization) of the polygon.
## 2 Method
The method proposed in this article is a simple, computationally cheap procedure designed to capture both the artifacts resulting from dual carriageways and the artifacts resulting from complex intersections, including roundabouts. First, the street network is polygonized to retrieve face polygons; then, the polygons are characterized according to their compactness and their area. Guided by the intuition that the distribution of urban block shape metrics has some universal properties, we combine both of these polygon characteristics to derive a _face artifact index_, which allows us to distinguish between face artifacts and actual urban blocks. We validate our method for face artifact identification both visually and computationally. Lastly, we evaluate the outcomes for each compactness metric to determine which compactness metric the face artifact index should be based on. Each of these steps is described in detail in the subsections below.
### Street network retrieval and generation of face polygons
The main goal of the method is to understand which components of a street network representation designed for other than morphological purposes are forming
face artifacts and, hence, which edges need to be pre-processed prior to any analysis assuming a morphological representation. Therefore, we need to retrieve data digitized with transportation in mind, like those available in OSM, which aims, among other goals, to include the level of detail required for GPS navigation. At the same time, the method shall be independent of the geographical context. For these reasons, in this study we use OSM street network data for 131 cities sampled from six geographical areas (Africa, Asia, Europe, North America, Oceania, South America), covering all urbanized continents. To avoid ambiguity in the definition of a "city", we use the definition of functional urban areas (FUAs) released as part of the Global Human Settlement Layer (GHSL) [34, 35]. Given that smaller FUAs may contain only a low number of street edges, we further limit the selection to FUAs with at least 1 million inhabitants (as per GHSL data from 2015 released as part of the FUA layer). Out of this subset, we randomly select 25 FUAs per continent. For Oceania, which contains only six FUAs with more than 1 million inhabitants, all six are included in our sample. The geographical distribution of the thus selected FUAs is illustrated in Figure 2, with a complete list available in the Supplementary Material.
For each of the selected FUAs, we retrieve the street network data intersecting its boundary from OSM. We use only geometries with a highway tag and a filter based on the OSM highway hierarchy (see Supplementary Material for details on the custom filter used), ensuring we do not include sidewalks or service roads. Retrieved street geometries are further processed using OSMnx [36] to derive a topologically correct undirected network, which is then polygonized to retrieve face polygons, enclosed on all sides by planar network edges and representing either urban blocks or face artifacts.
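This retrieval and polygonization step can be reproduced with standard open source tooling. The following is a minimal sketch using OSMnx and shapely; the `fua_boundary` polygon and the built-in "drive" network type (standing in for the custom highway filter specified in the Supplementary Material) are assumptions of the sketch, not the study's exact pipeline.

```python
import geopandas as gpd
import osmnx as ox
from shapely.ops import polygonize

# Assumed input: `fua_boundary`, a shapely Polygon of the FUA limits
# in EPSG:4326. "drive" is a stand-in for the study's custom filter;
# it likewise excludes sidewalks and service roads.
G = ox.graph_from_polygon(fua_boundary, network_type="drive")

# Edge geometries of the street network. (The study first derives an
# undirected network; polygonize tolerates duplicated reverse edges
# in this sketch.)
edges = ox.graph_to_gdfs(G, nodes=False)

# Face polygons: areas enclosed on all sides by network edges,
# i.e., urban blocks plus face artifacts.
faces = gpd.GeoSeries(polygonize(edges.geometry), crs=edges.crs)
```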
Figure 2: Spatial distribution of the FUAs selected for this study, color-coded according to the (sub-)continent they lie on.
### Compactness metrics
The conceptual logic behind the face artifact identification is simple. Since street networks are relatively predictable geometries with characteristic patterns and a limited number of feasible configurations, we know that face artifacts are typically either large, highly elongated polygons (resulting from dual carriageways) or small polygons of rather compact shapes, often circular (e.g., roundabouts) or triangular (e.g., turning lanes at intersections); in other words, face artifacts have either a large area and low compactness, or a small area irrespective of compactness. Therefore, as will be shown below, a shape index that captures the relationship between area and compactness of face artifacts should have similar values for both face artifact types.
Compactness metrics, i.e., shape metrics that distinguish between compact and elongated polygons, are well known in the literature, both within urban morphology [22] and generic shape statistics [37]. Even though a large number of metrics is in use, they are often only minor variations of the same formula leading to a linear relationship between them, as in the case of the _isoperimetric quotient_ in Altman [37] and the _form factor_ in Sanzana et al. [28]. Some studies use the same formula under a different name, e.g., the _radii index_ in [38] and _Schumm's shape index_ in [39]. For the purposes of our study, we select a subset of five compactness metrics (see Table 1) from the extensive set of those available in the esda Python package [40] belonging to the Python Spatial Analysis Library (PySAL) family [41]. The five compactness metrics differ in their formulae, but all five are dimensionless ratios based on a comparison of an ideally compact shape (e.g., the equi-areal circle) and the actual shape of a polygon, and are independent of the polygon's area and orientation in space.
Some of the metrics used in the related literature were excluded for conceptual reasons, such as the _convexity index_ used by Sanzana et al. [28], since long and narrow geometries can be as convex as short and wide ones, or _fractal dimension_ in Grippa et al. [21] since the difference between urban blocks and face artifacts does not reside in their fractal dimensions. Shpuza's indices based on space syntax theory [42] have a level of complexity that does not fit our intention to propose a simple and computationally cheap heuristic. Grippa et al. [21] do not report formulae, so we can only assume how compactness metrics relative to a square and to a circle were measured. A compactness metric presented in Louf and Barthelemy [43] is equal to _circular compactness_ (also known as _minimum bounding circle ratio_), first used by Reock [44] and later by Frolov [45], and likely also to the circular compactness as used by Grippa et al. [21]. Further, a metric presented as _elongation_ in Gil et al. [46] is equal to the _diameter ratio_ presented earlier by Flaherty and Crumplin [38].
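For illustration, two of the metrics in Table 1 can be computed directly from shapely 2 primitives. This is a minimal sketch, not the esda implementation used in the study, and assumes `geoms` is an array of shapely polygons.

```python
import numpy as np
import shapely

def circular_compactness(geoms):
    """C_cc = a_g / a_mbc (Reock measure) for an array of polygons."""
    r_mbc = shapely.minimum_bounding_radius(geoms)
    return shapely.area(geoms) / (np.pi * r_mbc**2)

def radii_ratio(geoms):
    """C_rr = sqrt(a_g / pi) / r_mbc for an array of polygons."""
    r_mbc = shapely.minimum_bounding_radius(geoms)
    return np.sqrt(shapely.area(geoms) / np.pi) / r_mbc
```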
### Proposed heuristic: Face artifact index
For each face polygon \(p\) in every FUA, we compute its area and the compactness metric \(C_{i,p}\) (where \(i\) denotes one of the five selected compactness metrics: circular compactness, isoperimetric quotient, isoareal quotient, radii ratio, and diameter ratio).

\begin{table}
\begin{tabular}{l c c c} \hline Compactness metric \(C_{i}\) & Formula & Reference & Face artifact index \(F_{i}\) \\ \hline Circular compactness \(C_{cc}\) & \(\dfrac{a_{g}}{a_{mbc}}\) & [44] & \(F_{cc}\) \\ Isoperimetric quotient \(C_{ipq}\) & \(\dfrac{4\pi a_{g}}{p_{g}^{2}}\) & [37] & \(F_{ipq}\) \\ Isoareal quotient \(C_{iaq}\) & \(\dfrac{2\pi\sqrt{\frac{a_{g}}{\pi}}}{p_{g}}\) & [37] & \(F_{iaq}\) \\ Radii ratio \(C_{rr}\) & \(\dfrac{\sqrt{\frac{a_{g}}{\pi}}}{r_{mbc}}\) & [38] & \(F_{rr}\) \\ Diameter ratio \(C_{dr}\) & \(\dfrac{w_{mrr}}{l_{mrr}}\) & [38] & \(F_{dr}\) \\ \hline \end{tabular}
\end{table}
Table 1: A selection of compactness metrics \(C_{i}\) tested within this study, where \(a_{g}\) = area of a polygon; \(a_{mbc}\) = area of the minimum bounding circle of a polygon; \(p_{g}\) = perimeter of a polygon; \(r_{mbc}\) = radius of the minimum bounding circle of a polygon; \(w_{mrr}\) = width of the minimum rotated rectangle of a polygon, where width is equal to the smaller dimension of the rectangle; \(l_{mrr}\) = length of the minimum rotated rectangle of a polygon, where length is equal to the larger dimension of the rectangle.

Then, for each compactness metric we derive a corresponding _face artifact index_ \(F_{i,p}\), which captures both a polygon's compactness and area:
\[F_{i,p}=\log(C_{i,p}\cdot a_{p}) \tag{1}\]
The multiplication captures the relationship between compactness and area, while the logarithmic scaling smooths out the heavy-tailed distribution of the index caused by outliers (mostly geometries with particularly large areas). Next, we observe the frequency distributions \(\phi_{i}=\phi(F_{i})\) of the five face artifact indices for each FUA. For the majority of the analyzed FUAs and compactness metrics, the \(\phi_{i}\) reveal a common feature of showing at least two well-pronounced peaks, i.e., two prominent local maxima, as will be shown in Figure 4.
Through visual analysis, we estimate that these peaks represent two different types of polygons: most polygons within the first (leftmost) peak can be attributed to face artifacts in the street network, whereas most polygons within the second (rightmost) peak represent actual urban blocks. Therefore, for FUAs that show a pronounced two-peak pattern in their \(\phi(F_{i})\) distribution, we define the _face artifact index threshold_ \(T_{i}\) as the value of \(F_{i}\) corresponding to the valley, i.e., the local minimum _between_ the two peaks in the distribution \(\phi_{i}\). To find the value of \(T_{i}\) for a given FUA, we approximate \(\phi_{i}\) with a Gaussian kernel density estimation (see Supplementary Material for specifications). Then, in order to account for different urban forms that result in varying shapes of \(\phi\) for different FUAs (some FUAs' \(\phi_{i}\) distributions have more than two peaks), we formulate two conditions: (1) the face artifact index distribution has _at least_ two peaks; (2) the face artifact index distribution has _at least_ one valley. If both these conditions are fulfilled for a given FUA and a face artifact index \(F_{i}\), we define the face artifact index threshold \(T_{i}\) as the value of \(F_{i}\) at the location of the first valley that lies between two peaks, one of which is the highest one:
\[\begin{aligned} T_{i}=F_{i}\;|\;\;&\phi^{\prime}(F_{i})=0\,\land\,\phi^{\prime\prime}(F_{i})>0\,\land\\ &\exists\,G_{i}<F_{i}\;|\;\big(\phi^{\prime}(G_{i})=0\land\phi^{\prime\prime}(G_{i})<0\big)\,\land\\ &\exists\,H_{i}>F_{i}\;|\;\big(\phi^{\prime}(H_{i})=0\land\phi^{\prime\prime}(H_{i})<0\big)\,\land\\ &\arg\max(\phi)\in[G_{i},H_{i}] \end{aligned}\tag{2}\]
We postulate that once we have computed, for a given FUA and a selected compactness metric, both \(F_{i,p}\) and \(T_{i}\), the face polygons can be easily classified into artifacts and urban blocks. Polygons with a face artifact index below the threshold (\(F_{i,p}<T_{i}\)) will most likely be face artifacts; polygons with a face artifact index at or above the threshold (\(F_{i,p}\geq T_{i}\)) will most likely be urban blocks.
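The index computation and threshold detection can be implemented in a few lines. The sketch below uses scipy's default KDE bandwidth and an evenly spaced evaluation grid, both assumptions of this sketch (the exact estimator used in the study is specified in the Supplementary Material); it returns `None` when the two-peak condition fails.

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.stats import gaussian_kde

def face_artifact_threshold(compactness, area, grid_size=1000):
    """Face artifact index threshold T_i per Eq. (2), or None if the
    distribution lacks the required two-peak pattern."""
    F = np.log(compactness * area)          # face artifact index, Eq. (1)
    kde = gaussian_kde(F)                   # bandwidth: scipy's default
    grid = np.linspace(F.min(), F.max(), grid_size)
    phi = kde(grid)
    peaks, _ = find_peaks(phi)
    valleys, _ = find_peaks(-phi)
    if len(peaks) < 2 or len(valleys) < 1:
        return None                         # heuristic not applicable
    highest = peaks[np.argmax(phi[peaks])]
    for v in valleys:                       # valleys in ascending order
        left = peaks[peaks < v]
        right = peaks[peaks > v]
        # first valley bracketed by two peaks, one of which is the highest
        if len(left) and len(right) and highest in (left[-1], right[0]):
            return grid[v]
    return None
```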
In the next steps, we describe the methods used to check the validity of the proposed heuristic and to evaluate which compactness metric is best used for face artifact detection.
### Validation: Visual assessment and overlay with OSM building data
To validate our proposed heuristic and to assess whether the face artifact identification actually yields satisfactory results, we conduct two validation steps. First, we validate our method visually by generating plots of all face artifacts for each FUA where a face artifact index threshold could be identified. All plots can be found in the project repository archive available from [https://doi.org/10.5281/zenodo.8300730](https://doi.org/10.5281/zenodo.8300730); Figure 5 shows four examples.
Next, we validate our method computationally using OSM building data. By our own definition of face artifacts, these are part of the street network surface of a city and, therefore, should not contain any buildings - with minor exceptions, for example, when a bus stop in the middle of a road is mapped out as a separate building, or when a road leads underneath a building. Thus, from the perspective of face artifact identification, we can use OSM building data to identify "false positives" - polygons that have been classified as face artifacts by our method but contain (or overlap with) building footprints. Note that this is the only corner of the confusion matrix for our face artifact identification heuristic that we are able to estimate computationally in the absence of ground truth. Whether "negatives" (polygons classified as urban blocks) are true (actual urban blocks) or false (wrongly classified face artifacts) cannot be evaluated through building footprints, since urban blocks may or may not contain buildings. The same goes for "true positives" (correctly identified face artifacts): we cannot estimate their number computationally since the absence of building footprints is a necessary but not a sufficient condition for a face polygon to be classified as face artifact.
Since the completeness of OSM building data is an ongoing object of study [47, 48] and can vary greatly depending on the region, we first download and visually inspect OSM building data sets for all FUAs. We discard FUAs with low coverage of mapped buildings and keep only FUAs with sufficiently complete OSM building data. Then, for each FUA that has both a detected face artifact index threshold and sufficient building data, we compute the overlap of face polygons and building footprint polygons. We then compute the percentage of false positives, i.e., the percentage of face artifacts that contain up to \(X\) square meters of building footprints, letting \(X\) vary between 0 and 100 \(m^{2}\) to account for building overlap exceptions described above (see Figure 6).
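The overlap computation can be sketched with geopandas as follows. The `artifact_id` column and the two input GeoDataFrames are assumptions of this sketch, and both layers must share a projected CRS so that areas come out in square meters.

```python
import geopandas as gpd

def false_positive_share(artifacts, buildings, max_overlap=10.0):
    """Share of detected face artifacts overlapping more than
    `max_overlap` square meters of OSM building footprints."""
    inter = gpd.overlay(
        artifacts[["artifact_id", "geometry"]],
        buildings[["geometry"]],
        how="intersection",
    )
    # total building footprint area overlapping each artifact
    overlap = inter.geometry.area.groupby(inter["artifact_id"]).sum()
    return (overlap > max_overlap).sum() / len(artifacts)
```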
### Evaluation: Performance of compactness metrics
Next, we aim to select one out of the five analyzed compactness metrics, based on which the face artifact index threshold shall be computed. To this end, we compare the performance of compactness metrics \(C_{i}\) listed in table 1 based on their ability to detect face artifacts and on their computational efficiency. An overview of all evaluation steps is shown in Figure 8.
The ability to detect a threshold \(T_{i}\) that divides face polygons into artifacts and urban blocks is the most critical aspect of the proposed heuristic. The first
evaluation step therefore captures the percentage of successfully detected \(T_{i}\) for each compactness metric \(C_{i}\). It is assumed that the variation in distributions among metrics will result in differences reflecting the quality of each metric.
The second evaluation computes the peak prominence of the face artifact index threshold, i.e., the vertical distance between the valley (corresponding to \(T_{i}\)) and the peak to its left in the distribution \(\phi_{i}\). This metric captures how easy it is to derive the threshold from the distribution itself, depending on which compactness metric \(C_{i}\) the computation is based on.
Next, we evaluate the computational efficiency, as we believe that the lower the usage barriers of a method, the higher its value for the scientific community. Computational efficiency is one potential usage barrier, excluding researchers with limited access to high-performance computing infrastructure. We therefore benchmark the single-threaded performance of each metric on a random sample of 10000 polygons taken from the pool of all FUAs. The implementation of all compactness metrics in the esda package relies on the vectorized geometry engine of shapely 2 [49], which wraps the GEOS library [50], ensuring minimal processing overhead. The benchmark is run on an Intel Xeon W-2245 3.9 GHz CPU, repeating the same measurement 100 times.
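As an illustration of the benchmark setup (not the exact esda code paths used in the study), one can time a vectorized shapely 2 implementation of one of the metrics; `sample` is an assumed array of 10000 face polygons.

```python
import timeit
import numpy as np
import shapely

def isoperimetric_quotient(geoms):
    """C_ipq = 4 * pi * a_g / p_g^2, vectorized over polygons."""
    return 4 * np.pi * shapely.area(geoms) / shapely.length(geoms) ** 2

# `sample` is assumed to be a NumPy array of 10000 face polygons.
times = timeit.repeat(lambda: isoperimetric_quotient(sample),
                      repeat=100, number=1)
print(f"median runtime: {np.median(times) * 1000:.1f} ms")
```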
The fourth and last evaluation step is based on the validation with OSM building data and looks at the percentage of "false positives", i.e., the percentage of polygons that have been classified as face artifacts but contain buildings. The false positives have been computed only for those FUAs where OSM building data are sufficiently complete; this evaluation step therefore has fewer data points.
## 3 Results
Below, we summarize the results for each of the methodological steps, described in detail in Section 2. Detailed results can be found in the project repository archive (doi.org/10.5281/zenodo.8300730).
Using the sample of 131 FUAs, we were able to retrieve a total of 3440838 face polygons, with 25% in North America, 23% in South America, 16% in Europe, 14% in Africa, and 13% and 7% in Asia and Oceania, respectively, and a mean number of over 26000 polygons per FUA. This leaves us with a large amount of data, ensuring a certain level of robustness and reducing sampling bias.
### Compactness metrics
As expected, there is a strong relationship between the individual compactness metrics, as shown in Figure 3. The Pearson correlation coefficient ranges from 0.735 between the _isoareal quotient_ and the _diameter ratio_ to 0.987 between the _isoperimetric quotient_ and the _isoareal quotient_. The latter is due to a non-linear relationship between the two metrics, evident both in Figure 3 and in Spearman's rank correlation coefficient of 1.0. Other pairs of metrics clearly
capture similar shape characteristics but do not have a direct (linear or non-linear) relationship. We can therefore assume that all five compactness metrics should be able to detect face artifacts in a similar manner, though likely with different performance.
This assumption is confirmed once we take a look at the distributions \(\phi_{i}\), as shown in Figure 4 for five different cities: all face artifact indices \(F_{i}\) appear to yield similar two-peak patterns of \(\phi_{i}\), though the peak prominence and exact curve shape vary depending on the compactness metric used.
Figure 3: Pairwise scatter plots and histograms of five shape metrics for a combined sample of all 131 FUAs. The diagonal plots show the distribution of each metric, while the off-diagonal plots show the correlation between the two metrics. Each scatter plot contains a value of Pearson correlation coefficient (\(\rho\)) and of Spearman’s rank correlation coefficient (\(r_{s}\)) for a specific pair.
Figure 4: Face artifact index distributions for different compactness metrics and cities. In the columns, from left to right: circular compactness; isoperimetric quotient; isoareal quotient; radii ratio; diameter ratio. In the rows, from top to bottom: a) Cochabamba (Bolivia); b) Douala (Cameroon); c) Sydney (Australia); d) Tbilisi (Georgia); e) Montreal (Canada). Peaks are highlighted with black dots; valleys with red dots. The dashed red vertical line shows the position of the identified face artifact index threshold \(T_{i}\) for the given compactness metric and city.
### Validation
The visual validation step provides a first insight into the success of our method. We generate separate plots of face artifacts for each FUA where a face artifact threshold could be identified (see doi.org/10.5281/zenodo.8300730) and note that almost all of them clearly align with street network patterns, as can be seen in Figure 5. The only clear outlier is Mogadishu (Somalia), where entire areas of the city have been wrongly marked as face artifacts by our method - notably the districts of Shibis and Karan, which are characterized by a dense rectangular street pattern with a typical grid cell length of around \(30~{}m\). The corresponding face artifact index distribution suggests that the threshold should be set to the first valley (and not the one to the left of the highest peak) in order to obtain better results.
The computational validation step also shows satisfactory results. As seen in panel b) of Figure 6, for the threshold based on circular compactness and with an area threshold of \(10~{}m^{2}\), for most FUAs the rate of false positives lies between \(2.5\) and \(4.5\%\). In other words, for most FUAs, only \(2.5-4.5\%\) of detected face artifacts overlap with OSM building data, while the overwhelming majority of face artifacts (\(95.5-97.5\%\)) overlap with no building footprints or with a negligible extent (\(\leq 10~{}m^{2}\)) of building footprints. Figure 6 further shows that we get comparable results for face artifact index thresholds based on other compactness metrics.
Figure 7 shows the relationship between polygon compactness and area for all face polygons within the FUA of Raleigh (USA). The plot distinguishes between identified urban blocks (blue), identified face artifacts with no building overlap (yellow) and identified face artifacts with building overlap, i.e., false positives (pink). The distribution visually indicates two large clusters, one representing face artifacts and the other one capturing urban blocks. However, it is to be noted that while the area where the threshold lies is sparsely occupied, there is no clear-cut boundary between the polygon groups, at least not visually.
Figure 5: Bird's-eye plots of detected face artifacts (red polygons) within each FUA border (black lines), clockwise from top left: Khartoum (Sudan), Moscow (Russia), Perth (Australia), and Tijuana (Mexico).
### Evaluation
After confirming that our heuristic is indeed able to identify face artifacts, we need to decide which of the five compactness metrics \(C_{i}\) to settle on. To evaluate the compactness metrics, we look at four aspects: percentage of FUAs with an identified threshold; computational efficiency; peak prominence; and percentage of false positives (see Section 2 for details).
As illustrated in panel a) of Figure 6, the face artifact index based on circular compactness could identify the greatest number of thresholds, namely in 117 out of 131 FUAs (89%). The least number of thresholds was identified by the face artifact indices based on isoareal quotient and radii ratio, with 77 (58%) and 83 (63%) identified FUA thresholds, respectively. The face artifact indices based on isoperimetric quotient and diameter ratio both detected a threshold in 109 cases (83%), though not the same set. It is worth noting that even the face artifact index based on circular compactness does not capture all FUA thresholds captured by the other indices. For example, the threshold for Ibadan (Nigeria), not found by the former, is indeed identified by three other face artifact indices (based on isoperimetric quotient, radii ratio, and isoareal quotient). Furthermore, the face artifact index based on diameter ratio allowed the identification of thresholds in three additional cases: Dhaka (Bangladesh), Jombang (Indonesia), and Recife (Brazil). Neither one of the compactness metrics, therefore, seems to be universally superior to the others. However, for the particular sample of 131 FUAs used in this study, circular compactness wins in absolute numbers of identified thresholds.
Analyzing the peak prominence for each of the shape metrics tells a similar story to the evaluation of detected thresholds above, but with a few differences. The metric with the highest peak prominence is the face artifact index based on the isoperimetric quotient, closely followed by the one based on circular compactness, both with a mean of 0.32. The face artifact index based on the isoareal quotient shows the least prominent peaks, with a mean value of 0.28. However, the overall differences in peak prominence do not appear to be large, with all of the distributions following roughly the same shape, leading to similar distances between their peaks and valleys; no one metric can therefore be singled out in contrast to the others.

Figure 6: Validating the heuristic with OSM building data.

Figure 7: A scatter plot of individual polygons based on their area and circular compactness, with individual groups highlighted based on the result of the face artifact detection method and its validation. Panel (a) shows a zoomed-in version of the bottom left corner of the full figure in panel (b).
Results from the computational performance benchmark, tested on 10000 randomly sampled polygons, are plotted in the third panel of Figure 8. Our heuristic is indeed computationally cheap (as intended), since even the slowest metric takes only an average of 147 \(ms\). This translates into a nearly negligible approximate processing time of 400 \(ms\) per average-sized FUA. When comparing the face artifact indices against each other, there are notable differences in computational speed: \(F_{iaq}\) and \(F_{ipq}\), the two fastest metrics, are more than 10 times faster than \(F_{cc}\) and \(F_{rr}\), probably due to the fact that the latter two require computation of minimum bounding circles. From the perspective of computational performance, \(F_{cc}\) seems to be the worst available option in comparison to the other face artifact indices; however, given how computationally cheap the heuristic is even for \(F_{cc}\), this evaluation step seems to be relevant only for much larger data sets than the one used in this study. When using the GEOS-based implementation either directly in C++ or through efficient bindings (shapely in Python, sf or terra in R, or PostGIS in PostgreSQL) for a data set of the order of the one used in this study, it should therefore not matter which shape metric is used from the perspective of computational performance.
We compute the percentage of false positives for four different area threshold values \(X\in\{0,10,50,100\}\ m^{2}\) and compare the results for all five face artifact indices \(F_{i}\). The overall percentage of false positives decreases with increasing \(X\) (as was to be expected), but is reasonably low even for the strict requirement of \(X=0\), i.e., for the case in which _any_ amount of overlap with the OSM building data set leads to an identified face polygon being classified as a "false positive". Similarly to the previous evaluation steps, comparing the percentage of false positives by face artifact index \(F_{i}\) does not indicate any clear winner.
## 4 Discussion
Unlike previous attempts based on complex machine learning algorithms [25, 27], the method proposed in this study is a simple heuristic based on properties of individual geometries. Our method is computationally cheap, transferable, and easy to reproduce and replicate. By leveraging the characteristic patterns in urban street networks and their geometric representation, the face artifact index proposed in this study manages to capture both types of face artifacts, i.e., the elongated polygons between dual carriageways as well as small polygons of various shapes resulting from complex intersections, with a single metric and a single threshold.
There is no clear indication as to which compactness metric the face artifact
index should be best based on - rather, this will depend on the use case and on the data set available. The performance of \(F_{iaq}\) and \(F_{rr}\), which identified the fewest thresholds, suggests that they should not be used, but the decision on the remaining metrics is not that clear. For the data set we worked with in this study, where the primary goal was to identify face artifact index thresholds in as many of the 131 cities as possible, the best choice is \(F_{cc}\), i.e., the face artifact index based on circular compactness. However, for several cities in our data set, \(F_{dr}\) would have been the better choice. Thus, we maintain the formula for \(F_{i,p}\) introduced in Equation 1 in general terms, where the compactness metric \(C_{i}\) has to be chosen at the analyst's discretion.
However, there are also some shortcomings of our method. In 11% of cases, the \(\phi_{i}\) distribution does not follow a two-peak pattern and the threshold heuristic is not applicable. While that may mean that there are no face artifacts in the whole FUA, that is not very likely. We have observed varying percentages of artifacts within FUAs depending on their geographical location, reflecting differences in cultural background and the way it is translated into the urban structure. However, there are likely at least some polygons that can be considered artifacts. Cases with a low percentage of artifacts will remain without a detected threshold. It is possible that a threshold derived at a geographically close location will be applicable in these cases, but we recommend proceeding with caution when applying an externally computed threshold to a study area.
A low number of artifacts can be caused not only by the lack of physical objects of transportation origin, but also by the quality of the data. OSM, which was used to retrieve all the street network geometries, has a very high worldwide coverage when it comes to geometries with a "highway" tag (consisting not only of highways but also of other drivable or walkable linear elements like roads and paths). Unfortunately, the same cannot be said about the quality of the data. When there is a complex intersection, roundabout, or dual carriageway on the ground, its representation in OSM data may range from a detailed drawing in one case to only a single node in another. Naturally, the latter case does not produce any face artifacts, while the former can create several face artifacts in one single location.

Figure 8: Evaluating compactness metrics.
The nature of the distribution with two local maxima required for the detection of \(T_{i}\) comes with another limitation. In the ideal situation, we can conceptually split the distribution into two Gaussian distributions, one capturing the face artifact index of artifact polygons and the other capturing the face artifact index of actual urban block polygons, as illustrated in Figure 9. Their two tails overlap, leading to both false positive and false negative classifications of polygons as artifacts (also illustrated by Figure 7). We have seen in Figure 8 that the issue of false positives is relatively minor, with the median value ranging from 2% to 4% depending on the selected compactness metric, but there is no straightforward way of quantifying false negatives. However, if we accept the conceptualization based on two normal distributions, we can assume that the number of false negatives is even lower than the number of false positives.
Due to the design of urban road systems, contiguous artifacts often capture roads of a higher hierarchy, and their presence (or a lack thereof) reflects differences between design principles applied to cities in different parts of the world. These differences are clearly visible in the location of thresholds and the overall distributions of the face artifact index, as illustrated in Figure 10. While many distributions across all (sub)continents show a similar bimodal pattern, there are notable differences in the peak prominence and in the locations of the peaks and valleys. The most prominent thresholds are found in North American and Australian cities, most likely due to the prevalent car-oriented urban design that generates a large number of face artifacts as a byproduct. Asian cities seem to be divided into two subgroups, where the first one tends to have the first peak more prominent than the second, contrary to all the other cases. The lowest prominence is visible in African cities, likely due to a combination of less developed motorized transportation infrastructure compared to North America and less precise mapping of such elements in OpenStreetMap.

Figure 9: Conceptual illustration of the composition of \(\phi\) consisting of two overlapping Gaussian distributions and the resulting identification of false positives and false negatives when relying on \(T\).
We call for future work to focus on the second part of the pursuit towards the automated simplification and transformation of street networks. Now that face artifacts can be computationally identified, the next step will be to develop an algorithm that can eliminate and appropriately replace face artifacts.
In addition, the face artifact detection results are worth exploring on their own, as they uncover structural differences between the selected FUAs. The value of \(T\) differs across locations; the shape of \(\phi\) does not always resemble a bimodal distribution as shown in Figure 9, and even when it does, the locations of peaks and the spread of distributions differ. A more profound exploration of regularities and irregularities in these results, in connection with geographical location, may uncover higher-level principles guiding the development of global urban networks.
Every geospatial dataset is collected with a specific purpose in mind, which affects what is being captured, in which detail, where, and how. Nevertheless, the original purpose may not be the only one for which the dataset is potentially useful. Other use cases may arise, and with them, a need to adapt the data to fit the new purpose. In those cases, we need methods and processes allowing us to detect what needs to be changed and (ideally) to automatically adapt the input to the desired output. In the case of street networks captured with transportation in mind, the transformation process cannot be fully automated yet, but we believe that the method presented in this article provides an important step toward this goal.

Figure 10: KDEs of the face artifact index in individual FUAs grouped by the (sub)continent they lie on (a-f); a combined plot of all the distributions (g) and a box plot showcasing the differences in the value of \(T_{cc}\) across geographical locations (h). Each distribution plot also includes a rug plot of the locations of individual \(T_{cc}\) values in red.
### Data and code availability
The whole method is encapsulated in a series of Jupyter notebooks executed in a containerized environment darribas/gds_py:9.0 [51], ensuring full reproducibility. All components of the work rely on open source software and open data, with the resulting code and data being openly available at [https://github.com/martinfleis/urban-block-artifacts](https://github.com/martinfleis/urban-block-artifacts) and archived at [https://doi.org/10.5281/zenodo.8300730](https://doi.org/10.5281/zenodo.8300730). The face artifact functionality has been contributed to the open source package momepy [52], focusing on the analysis of urban morphology.
## Acknowledgments
The authors declare no conflicts of interest. M.F. kindly acknowledges funding by the UK's Economic and Social Research Council through the project "Learning an urban grammar from satellite data through AI", project reference ES/T005238/1 covering the initial exploration of the issue.
|
2310.15630
|
Compressive quantum waveform estimation
|
Quantum waveform estimation, in which quantum sensors sample entire time
series, promises to revolutionize the sensing of weak and stochastic signals,
such as the biomagnetic impulses emitted by firing neurons. For long duration
signals with rapid transients, regular quantum sampling becomes prohibitively
resource intensive as it demands many measurements with distinct control and
readout. In this Manuscript, we demonstrate how careful choice of quantum
measurements, along with the modern mathematics of compressive sensing,
achieves quantum waveform estimation of sparse signals in a number of
measurements far below the Nyquist requirement. We sense synthesized
neural-like magnetic signals with radiofrequency-dressed ultracold atoms,
retrieving successful waveform estimates with as few measurements as
compressive theoretical bounds guarantee.
|
Alex Tritt, Joshua Morris, Christopher C. Bounds, Hamish A. M. Taylor, James Saunderson, L. D. Turner
|
2023-10-24T08:53:49Z
|
http://arxiv.org/abs/2310.15630v2
|
# Compressive quantum waveform estimation
###### Abstract
Applying quantum sensors to sample entire signals (quantum waveform estimation) promises to revolutionize the sensing of small signals, such as the monitoring of electrical pulses generated by neurons for medical research. However, intensive use of quantum resources (_e.g.,_ long sensing times and/or many destructive measurements) makes current implementations impractical for real-world use. In this Letter, we experimentally demonstrate quantum waveform estimation of a synthesized neural-like signal, taking many fewer cold-atom measurements than would naively be necessary.
Quantum waveform estimation extends the applicability of precise quantum sensors to the realm of sampling entire signals [1], at the cost of requiring many quantum measurements. One proposed application is the non-invasive sensing of electrical communication in a network of neurons by recording proximal magnetic fields [2; 3; 4; 5; 6; 7; 8; 9]. Such measurements would aid in understanding the brain on the microscale, in health and in disease [10]. Neural waveforms, composed of bipolar pulses separated by large gaps [11], clearly contain scant information: sampling one uniformly at its Nyquist rate [12, Sec. 7.1] will measure mostly silence. If information is so sparse, then could one save on resources by taking far fewer quantum measurements? Naively reducing a uniform sample rate, however, would ultimately lose pulses to aliasing. In this Letter, we use compressive sensing to demonstrate quantum waveform estimation of a sparse, neural-like magnetic signal using many fewer quantum measurements than the Nyquist-Shannon sampling theorem would deem necessary.
The modern mathematics of compressive sensing [13; 14; 15] (also _compressed sensing_ or _compressive sampling_) describes how, under certain conditions, sparse signals can be fully recovered from an _incomplete_ set of measurements. Compressive sensors thrive where there is benefit to taking few measurements, _e.g.,_ cameras for wavelengths with expensive photodetectors [16], and computed tomography scans which irradiate patients [17]. Compressive sensing in quantum science has assisted quantum process and state tomography [18], quantum communication [19; 20], quantum computation [21; 22; 23], quantum annealing [24; 25; 26; 27], ghost imaging [28; 29; 16; 30], and quantum sensor readout [31; 32]. In quantum waveform estimation, compressive sensing has been used to denoise signals [33; 34] or disambiguate frequencies in complete sets of quantum measurements [35]. Magesan _et al._ [36] proposed literal _compressive quantum sensing_, simulating the reconstruction of a neural-like signal from an incomplete set of Walsh basis measurements using sparse recovery. However, the theoretical sensor, based on nitrogen vacancy (NV) centers in diamond, would decohere before a typical neural firing would complete, and no such sensor has been constructed.
Compressive quantum sensors that significantly reduce the number of required quantum measurements would make quantum waveform estimation much more practical. In particular, it would bring into view quantum sensor arrays capturing entire waveforms in a single shot. To capitalize on this promise, we need: (1) quantum sensors that stay coherent for the duration of the signal, (2) quantum sensors that measure in a basis that collects information uniformly, and (3) a post-processing algorithm that can robustly recover the signal from these measurements. Compressive sensing satisfies (3); here we demonstrate that sensors made of trapped cold atoms can satisfy (1) and (2).
Sensing electrical signals from a live _network_ of _in-vitro_ neurons would provide information about how nerve cells communicate with each other, and how this relates to diseases [10]. Conventionally, pulses are measured using invasive patch clamps galvanically connected to individual neurons [11]. The process perturbs neural connections and can damage cell membranes, while not scaling beyond microscopic regions of tissue. This motivates ongoing work to develop non-invasive methods which infer neuronal current waveforms from their corresponding magnetic fields. With neural magnetic fields being on the order of nanotesla at sensing range [3; 11], quantum sensors are natural candidates for magnetometers that could sense such signals. Quantum magnetometry of living neurons to date [2; 3; 4; 5; 6] has been achieved with continuous-time ensemble sensing, averaged over many nominally-identical stimulated neural responses. In contrast, we propose a projective measurement protocol, which if executed on a quantum sensing array, would plausibly sense a unique neural waveform, containing multiple firings, without averaging.
Neural waveforms are an example of _sparse_ signals: those with only a small proportion of non-zero values [see Fig. 1(a)]. Consequently, information in neural waveforms is concentrated in small durations of time, and sampling in time is inherently inefficient [37]. Any uniform quantum measurement of such sparse signals will
waste quantum resources (_e.g.,_ sensing time of NVs, or the limited number of projective measurements of cold atoms) by making measurements that are not very informative.
We instead reduce the number of measurements by taking non-uniform samples in a discrete sine transform (DST) [38, Eq. (37)] basis. Time sparse signals have information spread across their broad frequency spectra. Hence, one can learn all information about sparse signals using only an incomplete subset of frequency measurements [39], seemingly violating the Nyquist-Shannon sampling theorem [12, Sec. 7.1]. This is the central idea of _compressive sensing_[14, 15]. While many possible signals are consistent with a randomly chosen incomplete set of frequency measurements of a sparse signal, it is very unlikely any will be as sparse as the true signal [14, Th. 6]. One then recovers the true signal from incomplete measurements using an optimization algorithm to find the sparsest signal consistent with the measurements and a linear model of the sampling process. A hurdle for applying this to _quantum_ sensing is that, to measure a frequency coefficient, the sensor would need to be coherent for the duration of the experiment. With coherence times of several seconds, trapped cold atoms are a promising candidate for such a task.
Here, we theoretically and experimentally show that cold atoms can emulate a DST-based measurement model. We then describe a recovery process that retrieves neural-like signals from an incomplete set of such measurements. Finally, we demonstrate experimental signal estimations derived from incomplete data taken from cold atoms exposed to synthesized neural-like signals.
We model our cold \({}^{87}\)Rb atom clouds as ensembles of non-interacting spin-one systems that couple to magnetic field \(\operatorname{B}(t)\) via \(\hat{H}(t)=-\gamma\operatorname{B}(t)\cdot\hat{\operatorname{F}}\). Here \(\hat{F}_{X,Y,Z}\) are spin-one hyperfine operators and \(\gamma\) is the appropriate gyromagnetic ratio.
We aim to measure the Fourier sine coefficient \(y(f)\) of a signal \(x^{\natural}(t)\), _i.e.,_
\[y(f)=\frac{1}{T}\int_{0}^{T}\sin(2\pi f t)\,x^{\natural}(t)\,\mathrm{d}t, \tag{1}\]
for different frequencies \(f\). To do this, we apply both a bias \(\omega_{L}\) aligned with the signal and resonant dressing,
\[\hat{H}(t)=\omega_{L}\,\hat{F}_{Z}+2\Omega\,\cos(\omega_{\mathrm{rf}}t)\,\hat {F}_{X}+2\pi\,x^{\natural}(t)\,\hat{F}_{Z}, \tag{2}\]
before readout. The system approximately measures the Fourier sine coefficient of \(x^{\natural}(t)\) at frequency \(f=\Omega/(2\pi)\), because the eigenvalues of its dressed-frame Hamiltonian (in the frame rotating at resonant \(\omega_{\mathrm{rf}}\) around \(Z\)) are split by \(\hbar\,\Omega\):
\[\hat{H}^{\mathbb{R}}(t)=\Omega\,\hat{F}^{\mathbb{R}}_{X}+2\pi\,x^{\natural}(t )\,\hat{F}_{Z}\xrightarrow{\text{no signal}}\Omega\,\hat{F}^{\mathbb{R}}_{X}. \tag{3}\]
(See Supplemental Material for a more technical treatment [40].) The sine component is resonant with this splitting, resulting in transitions between dressed states at a rate proportional to \(y(f)\), _i.e.,_ a shift of the dressed-frame plane of Rabi-flopping shown in Fig. 1(b). Dressed state populations are mapped to lab frame \(\hat{F}_{Z}\) populations \(N_{+,0,-}\) using a \(\pi/2\) pulse [RF sequence in Fig. 1(b)], which are then measured using Stern-Gerlach absorption imaging. Starting with a superposition of dressed states, we can be linearly sensitive to small \(y(f)\) via \(y(f)=-\langle\hat{F}^{\mathbb{R}}_{X}(T)\rangle/(2\pi\,T\hbar)=-\langle\hat{F} _{Z}\rangle_{\text{readout}}/(2\pi\,T\hbar)=(N_{-}-N_{+})/[2\pi T(N_{-}+N_{0}+ N_{+})]\). As a result, we can measure different sine coefficients by repeating experiments under this Hamiltonian while varying RF amplitude \(\Omega\).
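In post-processing, each shot thus reduces to one arithmetic step. As a minimal sketch, with the populations from the absorption image as inputs:

```python
import numpy as np

def sine_coefficient(n_plus, n_zero, n_minus, T):
    """Estimate y(f) from Stern-Gerlach populations after the pi/2
    readout: y(f) = (N_- - N_+) / [2 pi T (N_- + N_0 + N_+)]."""
    return (n_minus - n_plus) / (2 * np.pi * T * (n_minus + n_zero + n_plus))
```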
We discretize our model for the measurement protocol in Eq. (1) as
\[y_{j}= \frac{1}{T}\sum_{k=1}^{K-1}\sin[2\pi(j\,\Delta f)(k\,\Delta t)]\,x ^{\natural}_{k}\,\Delta t, \tag{4}\]
for \(j\in\{1,2,\ldots,K-1\}\). Here the signal time series \(x^{\natural}_{k}\) starts at \(k=1\) and has length \(K-1\). By choosing bandwidth as \(W\stackrel{{\text{def}}}{{=}}K\,\Delta f=1/(2\,\Delta t)\) and sensing duration \(T\stackrel{{\text{def}}}{{=}}K\,\Delta t=1/(2\,\Delta f)\), Eq. (4) becomes the DST [38, Eq. (37)]; _i.e.,_ the matrix equation \(y=\Delta x^{\natural}\), with \(\Delta_{jk}=\sin(\pi jk/K)/K\). Compressive sensing theory roughly says that, since \(x^{\natural}\) is sparse and the DST is sensitive to all sparse signals, we can subsample our measurements and still be able to recover the signal [14, Th. 6]: _i.e.,_ measure only \(M<K\) components \(y_{j}\) of \(y\), so that now \(j\in\Lambda\subset\{1,2,\ldots,K-1\}\). Equation (4) still holds, but only for \(j\in\Lambda\); written \(y_{\Lambda}=\Delta_{\Lambda}x^{\natural}\), and illustrated as the matrix of sine waves of a few randomly chosen frequencies in Fig. 1(b). Therefore, we know that the signal must be one of the many solutions \(x\) to the underdetermined system \(y_{\Lambda}=\Delta_{\Lambda}x\).
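Concretely, the subsampled system can be set up in a few lines. The values \(K=100\) and \(M=60\) below match those used later in the experiment, while the random seed is an arbitrary choice of this sketch:

```python
import numpy as np

K, M = 100, 60
rng = np.random.default_rng(0)

# Complete DST sensing matrix, Delta[j, k] = sin(pi * j * k / K) / K,
# for j, k in {1, ..., K-1}.
j = np.arange(1, K)[:, None]
k = np.arange(1, K)[None, :]
Delta = np.sin(np.pi * j * k / K) / K

# Random, structureless subset of M frequencies to actually measure
# (rows correspond to frequencies j = Lam + 1).
Lam = np.sort(rng.choice(K - 1, size=M, replace=False))
Delta_sub = Delta[Lam]      # the M x (K-1) matrix Delta_Lambda
```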
In general, such a system is not uniquely solvable, but because \(x^{\natural}\) is sparse, the correct recovery, \(x^{*}\), will, with high probability, be the sparsest such \(x\) [14, Th. 6]. Naively this is an NP-hard problem [15, Th. 2.17], but assuming that \(\Delta_{\Lambda}\) satisfies criteria involving the restricted isometry property (RIP), the sparsest solution of \(y_{\Lambda}=\Delta_{\Lambda}x\) will also be the solution which minimizes the \(\ell_{1}\) norm, \(\|x\|_{1}\stackrel{{\text{def}}}{{=}}\sum_{k}|x_{k}|\) [14, Th. 9]. These criteria translate to requirements of measuring a system-dependent minimum number of sine coefficients [15, Eq. (9.24)] and of the set of subsampled frequencies having no structure (which is likely if it is chosen randomly [15, Th. 11.23]). In practice, our linear model [Eq. (4)] will only be approximately satisfied due to measurement noise. Hence, we relax to a problem called the _least absolute shrinkage and selection operator (LASSO)_ [42, Eq. 3.14], where we find the \(x\in\mathbb{R}^{K-1}\) that minimizes \(\|\Delta_{\Lambda}x-y_{\Lambda}\|_{2}^{2}+\lambda\|x\|_{1}\), illustrated in Fig. 1(c). Here, the _regularization parameter_ \(\lambda>0\) [14, Sec. 8.2.2] is chosen [40] to produce the smallest error on training data simulated with our open-source solver [43], and \(\|y_{\Lambda}\|_{2}^{2}\stackrel{{\text{def}}}{{=}}\sum_{j\in\Lambda}|y_{j}|^{2}\) is the squared \(\ell_{2}\) norm. To solve the LASSO, we implemented the _fast iterative shrinkage-thresholding algorithm (FISTA)_ [44]. We chose the step-size of FISTA using the singular values of \(\Delta_{\Lambda}\) and an estimate of the amplitude of \(x(t)\).
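A minimal FISTA sketch for this LASSO is given below; the fixed iteration count and the use of the spectral norm for the step size are assumptions of the sketch (the study's open-source solver [43] is more general).

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, y, lam, n_iter=500):
    """FISTA [44] for min_x ||A x - y||_2^2 + lam * ||x||_1."""
    L = 2 * np.linalg.norm(A, 2) ** 2     # Lipschitz constant from the
    x = np.zeros(A.shape[1])              # largest singular value of A
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = 2 * A.T @ (A @ z - y)      # gradient of the smooth term
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x
```

With `Delta_sub` from the earlier sketch and `y_sub` holding the corresponding measured sine coefficients, the recovery is then simply `x_hat = fista(Delta_sub, y_sub, lam)`.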
To experimentally verify this sensing method, we applied the Hamiltonian in Eq. (2) to a dipole-trapped cloud of approximately \(6\times 10^{5}\) laser-cooled \({}^{87}\)Rb atoms at a temperature of order \(100\) nK. Here the signal \(x^{\natural}(t)\) was synthesized by a coil driver, modeled as a \(143\) nT peak, \(200\)\(\mu\)s duration single-cycle pulse of a sine wave. This model is higher in amplitude and faster than a real neural pulse (\(2\) nT peak, \(2\) ms duration), so as to avoid both the sampling of AC electrical line interference and detuning error from slow magnetic drift. It nevertheless allows us to demonstrate a proof of concept and is consistent with models from other groups [5, 36]. Each term in Eq. (2) was produced by separate magnetic coils, with the bias \(\omega_{L}\) and signal \(x^{\natural}(t)\) coils being concentric, and the RF \(\Omega\) coil being perpendicular. State populations \(N_{+,0,-}\) were recorded using Stern-Gerlach absorption imaging of the ensemble after a readout \(\pi/2\) pulse. The procedure was repeated, changing the value of \(\Omega\) for each shot to record each frequency component from the incomplete set \(\Lambda\). Our bias was \(861\) mG, giving a Larmor and radio frequency of \(\omega_{L}\approx\omega_{\text{rf}}=2\pi\times 603\) kHz. Our modeled neural pulse had negligible sine components above \(W=10\) kHz (consistent with Webb _et al._ [3]), which fixed our highest meaningful time resolution to be \(\Delta t=50\)\(\mu\)s (pulses \(4\) samples long). We chose an experiment duration of \(T=5\) ms (\(K=100\)) in order to sense on the peak of a \(20\) ms-period electrical line-cycle, fixing the frequency resolution of the complete DST to \(\Delta f=100\) Hz.
We wished to compare measurements using our compressive protocol with complete measurements of both time and frequency samples. For the former, we used a separate Ramsey sequence to sample the amplitude of the signal at all of the \(99\) points on the time grid. Here the second \(\pi/2\) pulse was in quadrature to the first in order for the populations to be linearly sensitive to the signal (see the coherent signal case in Mouradian _et al._ [37]). The Ramsey sequences approximate capturing the field at a fixed time by sensing over a rectangular window of length \(60\)\(\mu\)s centered on the time-grid points. For the latter, we measured a complete set of \(99\) sine coefficients via the Hamiltonian in Eq. (2), and recovered using a complete inverse DST. Consequently, we obtained the incomplete data \(y_{\Lambda}\) for the compressive protocol by taking subsamples of the full sets of \(99\) sine coefficients \(y\). Note that a _genuine_ compressive sensing protocol would instead take an incomplete dataset in the first place. We provide such measurements in Supplemental Material [40].
Figure 2 compares waveform measurements of repeated one- and two-pulse signals using the Ramsey protocol, an inverse DST of the complete set of sine coefficients, and the compressive protocol using a random set of \(60\) of the \(99\) sine coefficients. The Ramsey measurements in Figs. 2(a) and (b) are unable to resolve the waveform. The recovery is overwhelmed by shot-to-shot drift of the bias magnetic field, which we later discovered was caused by suburban railway lines \(2\) km away. Our choice to randomize shot order made these slow drifts appear as white noise. Another source of deviation is interference from the AC electrical lines. Furthermore, the square sampling window of the Ramsey measurements means that waveform estimation is susceptible to high frequency noise from surrounding instrumentation, _e.g.,_ switching power supplies. Counterintuitively, despite only measuring short pulses, sampling for only a short duration resulted in a compromised retrieval. In contrast, recoveries from sine coefficients were able to resolve the synthesized pulses, as the DST inherently cuts out unwanted DC and high frequency coefficients. The inverse DST protocol in Figs. 2(c) and (d) is able to resolve the neural pulses more clearly than the Ramsey protocol using the same number of sensors. Additionally, despite the compressive protocol in Figs. 2(e) and (f) only taking into account 60 of the full 99 sine coefficients, it produces an even clearer signal. This is expected from the de-noising property of the LASSO [42, Fig. 6(a)].

Figure 1: An overview of the protocol. (a) Trapped atoms exposed to a (synthesized) neural magnetic signal \(x^{\natural}\) (neuron illustrated by phreed [41] on the right). Signal drawn as a vector underneath the plot. (b) Three-level system (Bloch sphere on right) has Rabi frequency proportional to the continuous dressing amplitude (top left). The path of Rabi flopping shifts proportionally to the sine coefficient \(y_{j}\) of \(x^{\natural}\) at the Rabi flopping frequency \(\Omega_{j}\). Dynamics illustrated on the Bloch sphere for two possible RF amplitudes \(\Omega_{i}\) and \(\Omega_{j}\) (top right). The protocol is modeled as an (exaggerated here) underdetermined matrix equation (bottom). Equality is approximate due to measurement noise and simplification of the model. The experiment is repeated for different RF amplitudes \(\Omega_{j}\), measuring different sine coefficients \(y_{j}\). (c) FISTA searches the set of signals that fit the measurements (left) for the sparsest such signal, which it returns (right).
Compressive sensing protocols have lower bounds on how few samples can be used to recover a signal [15, Eq. (9.24)]. We wanted to experimentally determine this limit for our protocol and dataset. Here we compare our recovered signals to a trace of the waveform we commanded the magnetic coils with. While the standard root-mean-square error is a useful metric in other contexts, we found that it had difficulty distinguishing between noisy but otherwise "successful" recoveries and completely failed recoveries of our sparse signals. The canonical analysis of detecting signals in noise involves plotting a curve called the receiver operating characteristic (ROC); the area under the curve (AUC) [45, Sec. 7] is an appropriate metric for determining signal recovery success [40]. The AUC is 100% if a threshold can be drawn to distinguish between recovered pulses and noise, and decreases as noise and distortion drown out the signal. In Fig. 3, the compressive protocol has an AUC of over 99% when over 36 samples are used in the recovery for a signal with one pulse, and over 52 for two. These echo
Figure 3: Finding the smallest number of samples that can be used by the compressive protocol before it fails to recover the signal. AUC (see Supplemental Material [40]) of 100% means perfect pulse detection, 50% means random categorization. AUCs from 200 random subsets of the complete datasets were averaged to form each data point; shading shows one standard deviation on either side. One-pulse traces (light orange) draw from sine coefficients used in Fig. 2(c), two-pulse traces (dark red) from Fig. 2(d). Left and right insets show ROC curves for the two-pulse signal when 20 and 60 measurements are used in the recovery, respectively. ROCs are parameterized by the categorization threshold. If a threshold perfectly distinguishes between pulses and noise, the ROC reaches the top left and thus encloses an area (AUC, plotted in the main figure) of 100%.
Figure 2: Neural magnetic pulses recorded using different methods. Solid signal is recovered, dashed signal is commanded to coils. (a, b) Ramsey sampling of a (a) one- and (b) two-pulse signal. Signal is not able to be recovered due to bias drift over the course of data taking. (c, d) Inverse DST using the complete set of sine coefficients recorded by the dressed atoms. (e, f) Compressive recovery using the FISTA. Here only 60 of the 99 recorded sine coefficients are used in the recovery.
the known theoretical bounds of 34 and 57 for a simplified system [15, Eq. (9.24)]. We thus predict the protocol to work if 60 coefficients are used, which informed our use of that number for the recoveries in Figs. 2(e) and (f). While a reduction in quantum resources by almost half is already useful, the theoretical model predicts even larger relative gains when measuring longer, sparser signals.
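The AUC-based success metric can be sketched directly with standard tools (the paper's precise construction is in their Supplemental Material [40]; scoring bins by recovered |amplitude| is our assumption):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 99
pulse_mask = np.zeros(n, dtype=bool)
pulse_mask[[20, 55]] = True                    # bins where pulses were commanded

def recovery_auc(x_hat, pulse_mask):
    """AUC of separating pulse bins from noise bins by |amplitude|.
    1.0: some threshold perfectly detects the pulses; 0.5: chance."""
    return roc_auc_score(pulse_mask.astype(int), np.abs(x_hat))

# Illustration with a synthetic recovery: clean pulses plus noise
x_hat = pulse_mask * 1.0 + 0.1 * rng.standard_normal(n)
print(recovery_auc(x_hat, pulse_mask))         # close to 1.0 for low noise
```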
The compressive quantum sensor was able to recover neural signals in a situation where Ramsey measurements could not, while using an incomplete set of measurements. The choice of signal basis can drastically change both the fidelity and, when undersampling, the efficiency of quantum waveform estimation. This proof-of-concept demonstrates the viability of a potential quantum sensor array, where all frequency measurements are sampled in parallel by spatially separate atom clouds.
AT, CCB, and HT acknowledge support through Australian Government Research Training Program (RTP) scholarships. JS is the recipient of an Australian Research Council Discovery Early Career Researcher Award (project number DE210101056) funded by the Australian Government. LDT acknowledges funding from the Australian Research Council Linkage Project (project number LP200100082).
|
2308.00711
|
Interacting Random-field Dipole Defect Model for Heating in
Semiconductor-based Qubit Devices
|
Semiconductor qubit devices suffer from the drift of important device
parameters as they are operated. The most important example is a shift in qubit
operating frequencies. This effect appears to be directly related to the
heating of the system as gate operations are applied. We show that the main
features of this phenomenon can be explained by the two-level systems that can
also produce charge noise, if these systems are considered to form an
interacting random-field glass. The most striking feature of the theory is that
the frequency shift can be non-monotonic in temperature. The success of the
theory narrows considerably the possible models for the two-level systems.
|
Yujun Choi, Robert Joynt
|
2023-07-29T18:29:35Z
|
http://arxiv.org/abs/2308.00711v1
|
# Interacting Random-field Dipole Defect Model for Heating in Semiconductor-based Qubit Devices
###### Abstract
Semiconductor qubit devices suffer from the drift of important device parameters as they are operated. The most important example is a shift in qubit operating frequencies. This effect appears to be directly related to the heating of the system as gate operations are applied. We show that the main features of this phenomenon can be explained by the two-level systems that can also produce charge noise, if these systems are considered to form an interacting random-field glass. The most striking feature of the theory is that the frequency shift can be non-monotonic in temperature. The success of the theory narrows considerably the possible models for the two-level systems.
## I Introduction
There has been considerable progress in semiconductor quantum computing, with significant strides in scaling up and in gate fidelities [1; 2]. The chief difficulty is decoherence, with charge noise as the main culprit [3; 4; 5; 6; 7; 8; 9; 10; 11]. The precise nature of the two-level systems (TLS) that produce the noise remains elusive. There have been extensive characterizations of the noise spectrum [12; 13], and the spatial correlations in the noise [14; 15] in different devices. These observations can help in the elucidation of the nature of the TLS, but the data are not currently sufficient to pin things down precisely.
Another problem, at first sight quite separate, that interferes with qubit operation is the pulse-induced resonance shift (PIRS). This is a shift in the operating frequency of the qubits as a computation proceeds. This is highly problematic, since continual recalibration of the system is not practical. Quadrature control [16] and prepulsing [17] can mitigate but not eliminate this issue. PIRS has recently received an intensive experimental study [18].
Here we propose that the source of PIRS is also a group of TLS, perhaps the same group that gives charge noise. Hence detailed observations of PIRS may provide additional insight into the microscopic origin of the noise. We proceed in the time-honored fashion of proposing a phenomenological model that explains the data, and then seeing what constraints the model places on the underlying physics of the system.
Let us take a concrete situation in which the system is in a resting state at low temperature \(T\) for times \(t<0\) and the operations, which involve microwave pulses that feed energy into the system, begin at \(t=0\) and end at some later time \(t_{f}\). PIRS is a time-dependent shift \(\Delta f(t)\) with \(\Delta f(t=0)=0\) by definition. \(\Delta f\) is a function of time that ultimately reverts to the base state some time after the operations have ceased. PIRS appears to be rather ubiquitous in semiconductor qubits, but there is considerable variability in how it manifests itself. Early observations found positive shifts (\(\Delta f\geq 0\)) of order a few MHz [17]. The magnitude of the shift was an increasing function of the energy injected by the pulses. It also depended on the details of the electron wavefunctions in the dots, for example on the dot occupations. The MHz magnitude of the shifts is fairly typical for quantum dot qubits. Importantly, \(\Delta f\) can also be negative [16; 19]. The decay time after \(t_{f}\) varies in the dot systems, with values from 0.5 ms [1] to 38 \(\mu\)s [19] having been observed. PIRS also occurs in donor-based qubit systems [20], though \(|\Delta f|\) is much smaller, of order 10s of kHz. Effects of a similar magnitude are seen in flip-flop qubits [21]. In this work we concentrate on experiments in dots, but we expect the theory to apply more broadly.
Our interpretation is based on the fact that dot and donor systems share the feature that the qubit operating frequency depends on the spatial position of the qubit. The arbitrary sign of \(\Delta f\) then suggests that a change in the electric field on the qubit is the origin of PIRS. Experimentally, it now appears to be clear that PIRS is essentially a thermal heating effect rather than a mechanical effect [1; 18]. This is also supported by the characteristic return to a base state, most naturally interpreted as a return to thermal equilibrium. The most striking feature of the results is that the magnitude of \(\Delta f\) is typically not monotonic in temperature (\(T\)), instead rising to a maximum at about 200-300 mK, then decreasing [18].
## II Model
To explain these observations, we introduce a model based on the charged TLS that are known to exist in these devices, and that in fact are also responsible, at least in part, for the decoherence of the qubits. These charged defects or traps are modeled as a collection of \(N\) fluctuating electric dipoles. The \(j\)th dipole fluctuates between states \(s_{j}\mathbf{p}_{j}\), where \(s_{j}=\pm 1\) and \(\mathbf{p}_{j}\) is a fixed vector for each \(j\). For simplicity we assume that the dipoles all have the same magnitude: \(|\mathbf{p}_{j}|=p_{0}\). This is reasonable if all the TLS have the same physical origin. The dipoles can have a non-zero equilibrium moment which is random in direction and they interact via the long
range Coulomb interaction. We call this the Interacting Random-field Glass Model (IRGM). Somewhat similar models have been introduced to understand charge noise (rather than equilibrium fields) in dot systems [22] and also in the context of superconducting qubit systems to explain fluctuations in the relaxation time \(T_{1}\)[23; 24].
The electric field \(\langle\mathbf{F}_{q}\rangle\) on a qubit at the origin of coordinates is
\[\langle\mathbf{F}_{q}\rangle=\frac{1}{4\pi\varepsilon}\sum_{j=1}^{N}\langle s _{j}\rangle\frac{3(\mathbf{p}_{j}\cdot\mathbf{r}_{j})\,\mathbf{r}_{j}-\mathbf{ p}_{j}|\mathbf{r}_{j}|^{2}}{|\mathbf{r}_{j}|^{5}}\equiv\sum_{j=1}^{N}\langle s _{j}\rangle\mathbf{F}_{j}. \tag{1}\]
The angle brackets indicate a thermal average. In the devices in question, the qubit operation frequency depends linearly on the electric field at its position. The frequency is a quasi-equilibrium quantity, so \(\langle\mathbf{F}_{q}\rangle\) is the object of interest for our purposes. The relation between field and frequency is platform-dependent. In the set-up of Refs. [1; 16; 17], \(\langle\mathbf{F}_{q}\rangle\) causes the displacement of the spin qubit in a magnetic field gradient, while in the flip-flop qubit architecture motion of the qubit caused by \(\langle\mathbf{F}_{q}\rangle\) would change the hyperfine coupling or the g-factor [25]. In all cases the displacement changes the qubit operating frequency. The qubit frequency is \(f(T)=f_{0}+\mathbf{c}_{q}\cdot\langle\mathbf{F}_{q}\rangle(T=0)+\Delta f(T)\), where \(f_{0}\) is the \(T\)-independent part from the applied magnetic field, \(\langle\mathbf{F}_{q}\rangle(T=0)\neq 0\) is a constant that comes from the ground state configuration of the TLS, and \(\Delta f(T)\) is the PIRS effect, which carries all of the \(T\) dependence of \(f\). Thus \(\Delta f(T)=\mathbf{c}_{q}\cdot[\langle\mathbf{F}_{q}\rangle(T)-\langle\mathbf{F}_{q}\rangle(T=0)]\) is the quantity of interest. \(\mathbf{c}_{q}\) depends on the particular type of qubit and the position of the qubit in the device. Its direction is determined by the condition that the effective magnetic field produced by \(\langle\mathbf{F}_{q}\rangle\) should be parallel to the applied field.
The Hamiltonian of the TLS in our model contains a random-field term \(H_{r}\) and an interaction term \(H_{int}\):
\[H=H_{r}+H_{int}=-p_{0}\sum_{j=1}^{N}s_{j}\mathbf{E}_{j}\cdot\hat{p}_{j}-\frac {p_{0}^{2}}{8\pi\varepsilon}\sum_{j\neq k=1}^{N}s_{j}s_{k}V_{jk}.\]
Here
\[V_{jk}=\frac{3\hat{p}_{j}\cdot\left(\mathbf{r}_{j}-\mathbf{r}_{k}\right)\hat {p}_{k}\cdot\left(\mathbf{r}_{j}-\mathbf{r}_{k}\right)-\left(\hat{p}_{j}\cdot \hat{p}_{k}\right)|\mathbf{r}_{j}-\mathbf{r}_{k}|^{2}}{|\mathbf{r}_{j}- \mathbf{r}_{k}|^{5}}.\]
The random effective electric fields \(\mathbf{E}_{j}\), if interpreted in a double-well picture of the TLS, are related to the energy asymmetry ('detuning') of the two wells. However, the physical origin of the \(\mathbf{E}_{j}\) may not be the same in all cases. For example, they could be actual external electric fields coming from the gate electrodes, strain fields, asymmetric microscopic defects, _etc._ For our purposes they are considered to be phenomenological parameters that must be fit, since they are very difficult to estimate in the absence of a real microscopic model. We expect \(N\) to be a number in the range of perhaps 10 to 100 and to be sample-dependent [12]. The dipoles may well be the same TLS that give rise to the noise in the system, but here we are interested in their equilibrium behavior, not their fluctuations. This assumes that measurement of PIRS takes place over a time interval longer than the characteristic switching times of the TLS. However, the set of TLS that causes qubit dephasing and the set that causes PIRS need not coincide completely.
## III Non-monotonicity of PIRS
The \(T\) dependence of \(\langle\mathbf{F}_{q}\rangle\) in a non-interacting model with \(H_{int}=0\) is already interesting, so we discuss it in detail. In this case the problem is exactly solvable, once the positions of the dipoles are specified: \(\langle s_{j}\rangle=\operatorname{sgn}(\hat{p}_{j}\cdot\mathbf{E}_{j})\tanh(p_{0}\,|\hat{p}_{j}\cdot\mathbf{E}_{j}|/k_{B}T)\), which equals \(\tanh(p_{0}\,\hat{p}_{j}\cdot\mathbf{E}_{j}/k_{B}T)\). \(\langle s_{j}\rangle\) has a definite sign at \(T=0\) but eventually \(\langle s_{j}\rangle\to 0\) as \(T\rightarrow\infty\). We can identify a turn-off temperature \(T_{j}=p_{0}|\mathbf{E}_{j}|/k_{B}\) for each TLS. Substituting into Eq. (1) and performing the sum gives the equilibrium electric field \(\langle\mathbf{F}_{q}\rangle(T)\) at the qubit.
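For concreteness, the exact non-interacting solution is easy to evaluate numerically. The sketch below (ours, not the authors' code) implements Eq. (1) with the thermal average above; the Si-like permittivity is an assumption:

```python
import numpy as np

kB = 1.380649e-23          # J/K
p0 = 1.602e-28             # dipole magnitude, 48 Debye ~ 1 e*nm, in C*m
eps = 11.0 * 8.854e-12     # absolute permittivity; epsilon_r ~ 11 is our assumption

def field_at_qubit(T, r, p_hat, E):
    """Exact non-interacting <F_q>(T) of Eq. (1); qubit at the origin.

    r: (N, 3) TLS positions in m; p_hat: (N, 3) unit dipole directions;
    E: (N, 3) random local fields in V/m."""
    proj = np.einsum('ij,ij->i', p_hat, E)          # p_hat_j . E_j
    s = np.tanh(p0 * proj / (kB * T))               # equals sgn(.) * tanh(p0 |.| / kB T)
    rn = np.linalg.norm(r, axis=1)
    pr = np.einsum('ij,ij->i', p_hat, r)
    Fj = (3 * pr[:, None] * r - p_hat * rn[:, None] ** 2) / rn[:, None] ** 5
    return (p0 / (4 * np.pi * eps)) * (s[:, None] * Fj).sum(axis=0)
```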
To understand the qualitative \(T\) dependence of \(\langle\mathbf{F}_{q}\rangle\) in the IRGM we begin with \(T=0\). We divide the TLS into two groups. In the set \(S^{+}\) we have the indices \(j\) for which \(\langle\mathbf{F}_{q}\rangle(T=0)\cdot\mathbf{F}_{j}\langle s_{j}\rangle(T=0)>0\) while in group \(S^{-}\) we have the indices \(j\) for which \(\langle\mathbf{F}_{q}\rangle(T=0)\cdot\mathbf{F}_{j}\langle s_{j}\rangle(T=0)<0\). That is, the dipoles in \(S^{+}\) are aligned with the ground state resultant field, while those in \(S^{-}\) are anti-aligned. The electric field at the qubit is the result of a random walk of the vectors \(\langle s_{j}\rangle\,\mathbf{F}_{j}\) with a resultant vector
\[\langle\mathbf{F}_{q}\rangle=\sum_{j\in S^{+}}\langle s_{j}\rangle\mathbf{F}_ {j}+\sum_{j\in S^{-}}\langle s_{j}\rangle\mathbf{F}_{j} \tag{2}\]
The vectors in group \(S^{+}\) are in the direction of the final result of the walk, while those in group \(S^{-}\) are in the opposite direction. Due to the randomness in the asymmetry, the various components of the walk turn off at different temperatures, and there will be some average turn-off temperature \(T^{+}\) for group \(S^{+}\) and a different average turn-off temperature \(T^{-}\) for group \(S^{-}\). We now increase \(T\) from zero. If \(T^{+}>T^{-}\) then the total field strength \(|\langle\mathbf{F}_{q}\rangle|\) will first increase and eventually vanish when \(T\gg T^{+}\). In this case we have a non-monotonic \(T\)-dependence of the field strength \(|\langle\mathbf{F}_{q}\rangle|\), a rather surprising result. \(T^{+}\) and \(T^{-}\) are not expected to be very different if the dipoles have the same physical structure, but even in this case the relatively small value of \(N\) implies that random fluctuations will make \(T^{+}\neq T^{-}\). If \(T^{+}<T^{-}\) then the total field strength \(|\langle\mathbf{F}_{q}\rangle|\) will decrease as the dominant dipoles turn off and eventually vanish when \(T\gg T^{-}\). If there is a gross mismatch between \(T^{+}\) and \(T^{-}\), components of \(\langle\mathbf{F}_{q}\rangle\) could even reverse sign, but overall we would expect a monotonic decrease. The relative magnitudes of \(|\sum_{j\in S^{+}}\langle s_{j}\rangle\mathbf{F}_{j}|\) and \(|\sum_{j\in S^{-}}\langle s_{j}\rangle\mathbf{F}_{j}|\) are also important. If \(|\sum_{j\in S^{+}}\langle s_{j}\rangle\mathbf{F}_{j}|\) dominates, then there is little cancellation in the sum and the non-monotonic behavior will be suppressed. If
the fields from \(S^{+}\) and \(S^{-}\) are comparable, then non-monotonicity is more likely.
Now we turn to the effects of interactions. Overall, dipolar interactions are always antiferromagnetic. This favors depolarization - a smaller net moment \(|\langle\mathbf{P}\rangle|=|\sum_{j}\langle s_{j}\rangle\mathbf{p}_{j}|\). If the TLS are located to one side of the qubit, then the correlation between \(|\langle\mathbf{P}\rangle|\) and \(|\langle\mathbf{F}_{q}\rangle|\) will be strong, but even if the qubit is surrounded symmetrically by the TLS, fluctuations will still give some correlation in a given sample. Small \(|\langle\mathbf{P}\rangle|\) comes from cancellation in the directions of the individual moments.
There is an additional temperature scale associated with the interactions, which is the average change in the interaction energy from flipping one spin: \(T_{int}=|\langle H_{int}\rangle|/Nk_{B}\). If \(T_{int}<T^{+},T^{-}\) then the interactions will turn off before the random field effects as \(T\) is increased, and \(|\langle\mathbf{P}\rangle|\) increases. If \(T_{int}\gg T^{+},T^{-}\) then the system is frozen by the interactions and we expect little change in \(|\langle\mathbf{P}\rangle|\) until \(T\gg T_{int}\).
Overall, the effect of interactions is to make cancellation more likely due to their antiferromagnetic character. Unless the interactions are extremely strong, they make the non-monotonic \(T\) dependence of \(|\langle\mathbf{F}_{q}\rangle|\) and therefore of \(\Delta f(T)\) more likely.
## IV Numerical results
This analytic analysis of the IRGM is semi-quantitative. To make the arguments more firm we perform numerical simulations. We do this for three different physical pictures of the TLS. In all simulations there is a single qubit at the origin.
In the first picture the TLS are charge traps near the surface of a two-dimensional electron gas. The trap is positively charged when empty and, relative to that, negatively charged when full. We include the image charge. This can be described as a fluctuating dipole in the \(z\) direction, \(z\) being the growth direction. These dipoles are uniformly distributed in a thin layer at positions \(\mathbf{r}_{j}=(x_{j},y_{j},z_{j})\) with \(-150\,\mathrm{nm}<x_{j},y_{j}<150\,\mathrm{nm}\) and \(z_{j}=50\,\mathrm{nm}\). This is the trap picture.
The second picture conceives of the TLS as point defect dipoles in the oxide with orientations uniformly distributed in direction. Their positions are uniformly distributed in a layer with coordinates \(\mathbf{r}_{j}=(x_{j},y_{j},z_{j})\) satisfying \(-150\,\mathrm{nm}<x_{j},y_{j}<150\,\mathrm{nm}\) and \(30\,\mathrm{nm}<z_{j}<50\,\mathrm{nm}\). This is the random dipole picture.
In the third picture the TLS are distributed in the neighborhood of the qubit. For the simulations, we take the TLS to be uniformly distributed in a spherical shell with the qubit at the center. \(\mathbf{r}_{j}=(r_{j},\theta_{j},\phi_{j})\) satisfy \(60\,\mathrm{nm}<r_{j}<80\,\mathrm{nm}\), \(0\leq\theta_{j}<\pi\), and \(0\leq\phi_{j}<2\pi\). The radii are chosen to make \(T_{int}\sim 1\) K. This is the spherical shell picture.
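For reference, the three geometries can be sampled as follows (a sketch under the stated coordinate ranges; drawing the shell radius uniformly is a simplification, and positions are in nm, so multiply by \(10^{-9}\) before feeding the field routine above):

```python
import numpy as np

def sample_positions(picture, N, rng):
    """TLS positions in nm for the three pictures; the qubit sits at the origin."""
    if picture == "trap":              # thin layer at z = 50 nm
        xy = rng.uniform(-150, 150, size=(N, 2))
        return np.column_stack([xy, np.full(N, 50.0)])
    if picture == "random_dipole":     # oxide layer, 30 nm < z < 50 nm
        xy = rng.uniform(-150, 150, size=(N, 2))
        return np.column_stack([xy, rng.uniform(30, 50, size=N)])
    if picture == "shell":             # 60 nm < r < 80 nm, isotropic directions
        v = rng.standard_normal((N, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        return v * rng.uniform(60, 80, size=(N, 1))
    raise ValueError(picture)
```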
We define a random field temperature scale \(T_{r}=|\langle H_{r}\rangle|/Nk_{B}\) in addition to \(T_{int}\). We use the parameter values \(p_{0}=48\) Debye \(\approx 1|e|\)-nm and \(N=30\), which are chosen because they give \(T_{2}\) of order \(10^{-6}\) s in the correct experimental range [26]. The TLS density from these values is also consistent with those computed from the magnitude of measured power spectra [27; 28]. The random field strength is taken as \(\Delta E_{0}=10^{4}\) V/m, which denotes the standard deviation of a distribution centered on zero. With these values, \(T_{r}\sim 0.1\) K and \(T_{int}\sim 1\) K in most samples, but while \(T_{r}\) is almost independent of the disorder because its strength is roughly fixed, \(T_{int}\) can vary from 0.1 K to 10 K because it depends sensitively on the separations and orientations of the TLS. Overall, these values are chosen to be representative of experiments on semiconductor qubit systems, in the sense that most analyses give something like our value for \(N\), which together with the value of \(p_{0}\) that we use gives a reasonable dephasing time. A key observation here is that \(T_{int}\) comes out approximately correct when the distances of the TLS from each other and from the qubit, and the dipole magnitude, are chosen to fit \(T_{2}\) and other noise experiments. Some further details are given in Appendix A. The IRGM passes the self-consistency check that \(T_{int}\) does indeed match the temperature scale on which \(\Delta f\) varies.
In Fig. 1 we plot one example of \(\Delta f(T)\) in the trap picture for both the non-interacting case \(H_{int}=0\), which is exactly solvable, and the fully interacting case, which is computed by a standard Monte Carlo (MC) algorithm. We use arbitrary units for \(\Delta f\) since the conversion factor \(\mathbf{c}_{q}\) is platform-dependent. For the interacting case, a moving average over 11 neighboring temperatures is applied to obtain stable results, and a smaller number of neighboring temperatures is used for the moving average at the ends of the curves.
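The text specifies only a "standard Monte Carlo (MC) algorithm"; one minimal realization is a single-spin-flip Metropolis sweep over the Ising-like form of the Hamiltonian, \(H=-\sum_{j}h_{j}s_{j}-\tfrac{1}{2}\sum_{j\neq k}J_{jk}s_{j}s_{k}\), with \(h_{j}=p_{0}\,\hat{p}_{j}\cdot\mathbf{E}_{j}\) and \(J_{jk}=p_{0}^{2}V_{jk}/4\pi\varepsilon\) (our sketch; the authors' implementation details are not given):

```python
import numpy as np

kB = 1.380649e-23  # J/K

def metropolis_sweep(s, h, J, T, rng):
    """One single-spin-flip sweep for H = -sum_j h_j s_j - 0.5 sum_{j,k} J_jk s_j s_k.

    s: array of +/-1; J symmetric with zero diagonal (assumed). Thermal averages
    of s over many sweeps give <s_j>, which then enters Eq. (1)."""
    for j in rng.permutation(len(s)):
        dE = 2.0 * s[j] * (h[j] + J[j] @ s)     # energy cost of flipping s_j
        if dE <= 0 or rng.random() < np.exp(-dE / (kB * T)):
            s[j] = -s[j]
    return s
```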
We stress that \(\langle\mathbf{F}_{q}\rangle(T)\) is sample-dependent for all three pictures in that changing the parameters in natural ranges can alter \(\Delta f(T)\) qualitatively. In particular, non-monotonic behavior of \(\langle\mathbf{F}_{q}\rangle\) is by no means universal. It is even possible for a single sample that one component of the field is non-monotonic and another is monotonic. Some idea of the variety of possible behaviors is given in Appendix B.
Fig. 1 demonstrates that non-monotonicity can arise even when the TLS do not interact. This comes simply from the fact that there can be cancellation of the random fields at \(T=0\) that is lessened as \(T\) increases in certain circumstances, as explained above. The interactions tend to enhance non-monotonicity, as in Fig. 1(a), though this effect is not universal. In fact, as we will see below, only a minority of samples for the trap picture show non-monotonicity. Interactions can create non-monotonicity when the non-interacting picture shows monotonicity, as seen in Fig. 1(b). Again, this is consistent with the idea of cancellation as the active ingredient in non-monotonicity. Additional examples of the different types of behavior that can occur for \(\Delta f(T)\) are given in Appendix B.
Once universality of the non-monotonicity is ruled out, the question becomes whether it is likely or not. To answer this, we did simulations over 10,000 samples for each of the three pictures and determined whether a monotonic or non-monotonic behavior was observed for each component of the electric field. The precise criterion for monotonicity or its absence is given in Appendix C.
The results in Table 1 support the physical picture explained above. In the trap picture, the cancellation of fields from different dipoles is relatively small, since the vectors leading from the qubit to the TLS, while not collinear, generally do not make large angles with each other. Similar statements apply to the random dipole picture, but there is some additional cancellation due to the different orientations. The most interesting result is for the spherical shell picture. Here we see the appearance of non-monotonic behavior for non-interacting TLS in about one quarter of the cases, as would be suggested by the above arguments. With dipoles on all sides of the qubit, the cancellation effect is quite strong.
Interactions do promote non-monotonicity in all cases, as expected, especially when the interactions are strong: \(T_{int}>T_{r}\). Overall, non-monotonicity increases as we proceed from the trap to the random dipole to the spherical shell pictures. The interaction enhancement reinforces this pattern of non-monotonicity.
We turn now to a comparison of theory and experiment. In nearly all measurements \(\Delta f\) is measured as a function of time, not temperature. Comparing to data of this type would require detailed modeling of the heat flow in the system, which is outside the scope of this paper. We therefore compare only to the temperature data for \(\Delta f(T)\) for 6 qubits in a single device reported in Ref. [18]. We plot these data with a theoretical fit in Fig. 2. The fit was done as follows. We first chose a set of TLS positions and random fields so that the parameters were in a range where non-monotonicity could be expected and the peak in \(\Delta f\) would be around 250 mK. Then, since the curves have a fairly similar shape but differ in vertical scale, we varied \(\mathbf{c}_{q}\) for each of the curves. Finally, the positions of the TLS were adjusted to fit each curve individually. The main feature that needed to be accounted for in this final step was the sharper peak and steeper falloff at high \(T\) that is seen in qubits 3-6. Some further details are given in Appendix D.
The fits are quite good quantitatively, but not too
Figure 1: Temperature dependence of qubit frequency shifts in the trap picture of the TLS in some representative samples. The shifts are proportional to one component of the equilibrium electric field \(\langle\mathbf{F}_{q}\rangle(T)\) at the qubit. Without Int. and With Int. mean the non-interacting and fully interacting case, respectively. **(a)** Both non-interacting and interacting cases show non-monotonic shifts. **(b)** Example in which the interaction causes non-monotonic behavior.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Trap & \(\langle F_{q,x}\rangle\) & \(\langle F_{q,y}\rangle\) & \(\langle F_{q,z}\rangle\) \\ \hline \multicolumn{4}{|c|}{\(T_{r}<T_{int}\)} \\ \hline Without Int. & 15.9\% & 16.4\% & 12.0\% \\ With Int. & 48.0\% & 48.6\% & 40.1\% \\ \hline \multicolumn{4}{|c|}{\(T_{r}\sim T_{int}\)} \\ \hline Without Int. & 11.0\% & 10.9\% & 7.4\% \\ With Int. & 30.5\% & 30.2\% & 24.2\% \\ \hline \hline Random dipole & \(\langle F_{q,x}\rangle\) & \(\langle F_{q,y}\rangle\) & \(\langle F_{q,z}\rangle\) \\ \hline \multicolumn{4}{|c|}{\(T_{r}<T_{int}\)} \\ \hline Without Int. & 14.7\% & 15.9\% & 12.4\% \\ With Int. & 61.9\% & 61.9\% & 57.2\% \\ \hline \multicolumn{4}{|c|}{\(T_{r}\sim T_{int}\)} \\ \hline Without Int. & 10.0\% & 9.4\% & 7.8\% \\ With Int. & 42.5\% & 42.3\% & 36.9\% \\ \hline \hline Spherical shell & \(\langle F_{q,x}\rangle\) & \(\langle F_{q,y}\rangle\) & \(\langle F_{q,z}\rangle\) \\ \hline \multicolumn{4}{|c|}{\(T_{r}<T_{int}\)} \\ \hline Without Int. & 25.0\% & 25.3\% & 28.0\% \\ With Int. & 89.5\% & 89.4\% & 90.1\% \\ \hline \multicolumn{4}{|c|}{\(T_{r}\sim T_{int}\)} \\ \hline Without Int. & 21.0\% & 21.8\% & 25.8\% \\ With Int. & 79.4\% & 79.0\% & 81.4\% \\ \hline \end{tabular}
\end{table}
Table 1: The fractional number of samples showing non-monotonicity obtained from Monte Carlo simulation and exact solutions for three physical pictures. \(\langle F_{q,i}\rangle\) is the \(i\) component of the vector \(\langle\mathbf{F}_{q}\rangle(T)\). Without Int. and With Int. mean the non-interacting and fully interacting case, respectively. \(T_{r}<T_{int}\) indicates \(T_{r}\sim\) 0.1 K and \(T_{int}\sim\) 1 K, while \(T_{r}\sim T_{int}\) indicates \(T_{r}\sim\) 0.1 K and \(T_{int}\sim\) 0.1K.
much should be made of this, since the number of parameters far exceeds the number of qualitative features to be fit for each curve. However, even given this, the fit does provide evidence for the correctness of the IRGM. Not only does the non-monotonicity arise naturally, but so do the linear behavior at small \(T\) (which is caused by the uniform distribution of the \({\bf E}_{j}\) near zero) and the \(1/T\) behavior at large \(T\).
## V Conclusion
We conclude by summarizing the strong and weak points of the IRGM as a model for PIRS. There are three evaluation categories; qualitative phenomenological understanding, semi-quantitative self-consistency, and quantitative fit of theory and experiment.
In qualitative terms, the surprising non-monotonic \(T\) dependence and some of the other features of the \(T\)-dependence of the qubit frequencies find a natural explanation in the IRGM. The important theoretical ingredients are the cancellations due to vector summations that involve only a relatively small number of variables, combined with the natural temperature dependence of the TLS fluctuations and the depolarizing effects of interactions. The explanation of the non-monotonicity is notable in that it defies the usual expectation that thermal effects in the absence of phase transitions tend to be monotonic. Overall, this is quite strong evidence for the IRGM. Aside from the non-monotonicity, there is the observation that \(\Delta f\) is sometimes negative [16; 19]. This is also somewhat surprising if one assumes that the heating affects the qubits directly in some fashion. In the IRGM, there is nothing that constrains the sign of the components of \({\bf c}_{q}\), so the sign of the effect is not determined. Similarly, the fact that the effect is not resonant with qubit frequencies suggests that an ancillary part of the device is driven by the heating - in the IRGM, the system of TLS is driven. Hysteresis does not seem to be a feature of PIRS. This might seem to argue against the IRGM, but with only a few tens of TLS involved, this aspect of glassiness is not expected to be observable. In contrast, no \(T\)-dependent electric field shows up in measurements at the charge sensor [18], which is not explained in the model as it stands.
There are two experimental scales that must be consistent with the theory: the overall magnitude of \(\Delta f\) (1 MHz in dot systems), and the temperature of the peak in \(\Delta f\) (about 0.2 to 0.4 K in dot systems). The first number is very consistent with the roughly known numbers for the size of the dipoles and their presumed positions. The second depends on the distribution of the random local fields \({\bf E}_{j}\) and indeed the distribution must be such that \(|p_{0}{\bf E}_{j}/k_{B}|\) is clustered near 0.3 K. There is no obvious reason why this should be so, so the IRGM does include at least one _ad hoc_ element.
The fit to the data in Fig. 2 is strikingly accurate, but it raises questions. If interactions are relatively weak for some reason (for example, because the TLS are particularly far apart), then why do six out of six qubits all show non-monotonicity? This is only consistent with the spherical shell picture, which in turn is not very consistent with the usual idea that the TLS are associated with the oxide layer. Furthermore, the non-monotonicity in all six qubits means that there must be correlations between the positions of the TLS and the strength and direction of their random fields. Specifically, the TLS closer to the qubits must have stronger random fields for each qubit. In addition, in order for the shift to be positive in all qubits, the random fields for the nearby TLS must all have the same orientation across all six qubits.
We conclude that the basic mechanism of PIRS is explained by the IRGM, but that the explanation is far from complete at this stage. Most likely the model needs to be supplemented by a better picture of the positions of the TLS and a better understanding of their physical nature. This would limit the model to a smaller region of its parameter space than we have investigated here.
###### Acknowledgements.
We acknowledge helpful discussions and correspondence with S.N. Coppersmith, M.A. Eriksson, M. Friesen, A. Laucht, A. Morello, A. Saraiva, L.M.K. Vandersypen, and H. Yang. This research was sponsored by the Army Research Office (ARO) under Awards No. W911NF-17-1-0274 and No. W911NF-22-1-0090. The views, conclusions, and recommendations contained in this document are those of the authors and are not necessarily endorsed nor should they be interpreted as representing the official policies, either expressed or implied, of the Army Research Office (ARO) or the U.S. Government. The U.S. Government is authorized to reproduce and
Figure 2: PIRS data: theory and experiment. The points are measured frequency shifts for six qubits Q1-Q6 from Ref. [18] and the dashed lines are theoretical fits. The qubits are situated in a one-dimensional array. The fitting procedure is described in detail in the text. The applied magnetic field is assumed to be in the \(y\)-direction.
distribute reprints for Government purposes notwithstanding any copyright notation herein. This research was performed using the computer resources and assistance of the UW-Madison Center for High Throughput Computing (CHTC) in the Department of Computer Sciences. The CHTC is supported by UW-Madison, the Advanced Computing Initiative, the Wisconsin Alumni Research Foundation, the Institutes for Discovery, and the National Science Foundation, and is an active member of the Open Science Grid, which is supported by the National Science Foundation and the U.S. Department of Energy's Office of Science.
## Appendix A Magnitude of frequency shift in quantum dot architecture
Here we give an order-of-magnitude estimate of the PIRS effect and some of the intermediate quantities involved in it for a Si/SiGe heterostructure quantum dot device with a micromagnet. In this case, \(\Delta f\) is due to the shift in position of the electron in the non-uniform magnetic field caused by the micromagnet. Let there be an electron spin qubit in a quantum dot at the origin of coordinates. The electron is at the bottom of a circularly symmetric two-dimensional harmonic potential \(k(x^{2}+y^{2})/2\). At this point there is a magnetic field gradient \(\partial B_{i}/\partial x_{j}\), where \(i\) and \(j\) are Cartesian indices.
The qubit frequency shift is given by \(\Delta f=\mathbf{c}_{q}\cdot[\langle\mathbf{F}_{q}\rangle(T)-\langle\mathbf{F}_{q}\rangle(T=0)]\), with [26]:
\[\mathbf{c}_{q}=\frac{g\mu_{B}}{h}\frac{q}{m_{t}\omega_{orb}^{2}}\ \left(\frac{ \partial B_{y}}{\partial x}\hat{x}+\frac{\partial B_{y}}{\partial y}\hat{y} \right). \tag{1}\]
We have assumed that the applied field is in the \(y\)-direction. A typical device of this kind that was particularly well-characterized was described in Ref. [27]. The magnetic field gradients for that device in units of mT(nm)\({}^{-1}\) were \(\partial B_{y}/\partial x=-0.05\) and \(\partial B_{y}/\partial y=0.18\). The transverse effective mass is \(m_{t}=0.19\,m_{e}=1.73\times 10^{-31}\,\)kg. We take the lowest orbital excitation frequency as \(\omega_{orb}\sim 2\,\mathrm{meV}/\hbar\), which is related to the spring constant by \(k=m_{t}\omega_{orb}^{2}\), and an average value of \(|\nabla B|\) as 0.1 mT (nm)\({}^{-1}\). These numerical values should be more or less typical of micromagnet-based Si/SiGe devices, but variations from device to device can certainly alter our estimate.
A single component of \(\langle\mathbf{F}_{q}\rangle\) at \(T=0\) is the result of a random-walk summation of the same component of the field exerted at the position of the qubit by the \(N\) TLS. It is therefore given by \(\sqrt{N}\) times the rms value of the individual contributions to one component in the sum in Eq. 1. There is an additional angular average over the directions of \(\mathbf{p}_{j}\), with the result that \(|\langle\mathbf{F}_{q}\rangle|(T=0)\sim\sqrt{2N/3}\,p_{0}/4\pi\varepsilon_{0}\varepsilon_{r}d^{3}\), where \(d\) is an average distance from the TLS to the qubit, and we use \(d\sim 50\) nm and \(\varepsilon_{r}\sim 11\). Combining this with Eq. 1 we find \(\Delta f(T=0)\sim 1.4\) MHz, not too far from what is observed for the maximum \(\Delta f\), which should be roughly comparable with the computed quantity. With these parameters, the qubit moves about 0.5 nm due to \(\langle\mathbf{F}_{q}\rangle\), a field of about 4500 V/m.
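The arithmetic of this estimate is easy to check; the following short script (ours) reproduces the quoted field, displacement, and MHz-scale shift from the stated device values:

```python
import numpy as np

# Constants and device values from the text (Si/SiGe dot with micromagnet)
g, muB, h = 2.0, 9.274e-24, 6.626e-34       # g-factor, J/T, J*s
q, m_t = 1.602e-19, 1.73e-31                # C, kg (0.19 m_e)
hbar, eV = 1.055e-34, 1.602e-19
w_orb = 2e-3 * eV / hbar                    # lowest orbital frequency, rad/s
gradB = 0.1e-3 / 1e-9                       # 0.1 mT/nm in T/m
N, p0, d = 30, 1.602e-28, 50e-9             # dipole ~1 e*nm, distance 50 nm
eps = 11.0 * 8.854e-12

F = np.sqrt(2 * N / 3) * p0 / (4 * np.pi * eps * d**3)   # ~4.7e3 V/m (~4500 quoted)
x = q * F / (m_t * w_orb**2)                             # ~0.5 nm displacement
df = (g * muB / h) * gradB * x                           # ~1.3 MHz (~1.4 quoted)
print(f"F = {F:.3g} V/m, x = {x:.3g} m, df = {df/1e6:.2f} MHz")
```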
## Appendix B Further examples of the temperature dependence of the electric field
In this section we give a few representative examples of the temperature dependence of the qubit frequency for single TLS configurations in the different pictures of TLS positions and directions. In each figure we show two configurations, each with the interaction turned off and on. In fact, we compute the \(y\) component of \(\langle\mathbf{F}_{q}\rangle\) and leave \(\mathbf{c}_{q,y}\) arbitrary. Then we enforce the condition that \(\Delta f(T=0)=0\). This means that the ground state configurations for the interacting and non-interacting cases and their resultant \(\langle\mathbf{F}_{q}\rangle\) may be quite different. The curves for the interacting case are smoothed by averaging over the 11 points centered at the plotted point, except at the ends of the curve. Note that \(\langle\mathbf{F}_{q}\rangle\to 0\) as \(T\rightarrow\infty\), but this asymptote is usually off the plotted region.
In Fig. 3 we plot results for the random dipole picture. Fig. 3(a) is an example where \(T^{+}\) considerably exceeds \(T^{-}\), leading in the non-interacting case to a peak at relatively low \(T\). Interactions mainly shift the peak but leave the non-monotonicity intact. In Fig. 3(b) the needed cancellation pattern does not occur for the non-interacting case, but there is non-monotonicity in the interacting case because of increased cancellation. Interactions have a strong influence on the ground state configuration for this particular sample, as indicated by the change in sign between the interacting and non-interacting cases.
In Fig. 4 we see \(\Delta f(T)\) for the spherical shell picture for two different TLS configurations. In this picture the interactions are more effective in producing non-monotonicity, and once more we see that the non-monotonicity can be present already in the non-interacting case or it can be induced by the interactions. \(\Delta f(T)\) can show somewhat surprising behavior in the interacting case, in this instance a change of sign as a function of \(T\). This has not been observed to date. If it were, it would be a sign that interactions are important.
In Fig. 5 we plot \(\Delta f(T)\) for two samples belonging to the trap and random dipole pictures. These plots are mainly included to dispel any impression that non-monotonicity is universal in the IRGM. Both the non-interacting and the interacting cases can show monotonic behavior as a function of temperature. This can happen as in Fig. 5(a), where the ground state configuration of the TLS changes drastically when interactions are turned on, or as in Fig. 5(b), where the two ground states are apparently rather similar.
## Appendix C Non-monotonicity criterion
To determine whether a component \(F(T)\) of \(\langle\mathbf{F_{q}}\rangle\) has non-monotonic T dependence, we set up a criterion as follows: from exact or numerical results, we define a set of differences
\[F_{diff}=\{F_{m+1}-F_{m}|\;m=1,2,3,...,M-1\} \tag{10}\]
where \(M=100\) is the number of temperature points for evaluation. The average slope magnitude is defined as
\[s=\frac{1}{M-1}\sum_{m=1}^{M-1}|F_{m+1}-F_{m}|. \tag{11}\]
To avoid false positives from small random fluctuations (particularly important for MC simulations), small slope elements are excluded from \(F_{diff}\) so that the smaller set is
\[F_{large}=\{F_{m+1}-F_{m}|\;\frac{s}{2}<|F_{m+1}-F_{m}|\}. \tag{12}\]
Defining signs of the differences as \(\sigma_{m}=\text{sgn}(F_{m+1}-F_{m})\), we have groups of positive and negative slopes:
\[F_{pos}=\{F_{m+1}-F_{m}|\;\sigma_{m}>0\text{ and }F_{m+1}-F_{m}\in F_{large}\}, \tag{13}\] \[F_{neg}=\{F_{m+1}-F_{m}|\;\sigma_{m}<0\text{ and }F_{m+1}-F_{m}\in F _{large}\}.\]
The final non-monotonicity criterion is
\[\frac{\min(|F_{pos}|,|F_{neg}|)}{|F_{pos}|+|F_{neg}|}>0.1\text{ and }s>5 \tag{14}\]
where the first inequality requires that \(\langle\mathbf{F}_{q}\rangle(T)\) has non-negligible positive and negative slope contributions, and the
Figure 4: Temperature dependence of qubit frequency shifts in the spherical shell picture of the TLS. The shifts are computed from the equilibrium electric field \(\langle\mathbf{F}_{q}\rangle(T)-\langle\mathbf{F}_{q}\rangle(T=0)\) at the position of the qubit for two configurations of the TLS. Without Int. and With Int. mean the non-interacting (\(H_{int}=0\)) and fully interacting case, respectively. **(a)** Both non-interacting and interacting cases show non-monotonic shifts. **(b)** Example in which interaction causes non-monotonic behavior. The applied magnetic field is assumed to be in the \(y\)-direction. Note the change in vertical scale from **(a)** to **(b)**.
Figure 3: Temperature dependence of qubit frequency shifts in the random dipole picture of the TLS. The shifts are computed from the equilibrium electric field \(\langle\mathbf{F}_{q}\rangle(T)-\langle\mathbf{F}_{q}\rangle(T=0)\) at the position of the qubit for two configurations of the TLS. Without Int. and With Int. mean the non-interacting (\(H_{int}=0\)) and fully interacting case, respectively. **(a)** Both non-interacting and interacting cases show non-monotonic shifts. **(b)** Example in which the interaction causes non-monotonic behavior. The applied magnetic field is assumed to be in the \(y\)-direction.
second demands that the overall frequency shift across the temperature range is not too small. The tolerance values, 0.1 and 5, are chosen empirically and can be adjusted for different systems.
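A direct translation of this criterion (our sketch; we read \(|F_{pos}|\) and \(|F_{neg}|\) as the summed magnitudes of the retained slope elements, which the set notation leaves implicit) is:

```python
import numpy as np

def is_nonmonotonic(F, tol_ratio=0.1, tol_slope=5.0):
    """Appendix C criterion for a component F(T) sampled at M temperatures."""
    diff = np.diff(F)                       # F_{m+1} - F_m
    s = np.mean(np.abs(diff))               # average slope magnitude
    large = diff[np.abs(diff) > s / 2]      # drop small, noisy slope elements
    pos = np.abs(large[large > 0]).sum()
    neg = np.abs(large[large < 0]).sum()
    if pos + neg == 0:
        return False
    return min(pos, neg) / (pos + neg) > tol_ratio and s > tol_slope
```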
## Appendix D Fitting procedure for Fig. 2 of main text
In Fig. 2 of the main text we give a comparison of theory and the PIRS experiment of Undseth _et al._ in which \(\Delta f\) was measured for each of six qubits in a row. We found that the best fit is obtained by taking the case that \(T_{r}\gg T_{int}\), which amounts to a non-interacting model. We used a trap picture, but the other two pictures could also have been used for the fit. The parameters are the same as those in the description of the trap picture in the main text except for the \(z\) coordinates of the TLS, which are now taken as \(z_{j}=36\) nm following Ref. [18; 1], and \(\Delta E_{0}=10^{5}\) V/m. The six samples are generated by varying the conversion factor \(\mathbf{c}_{q,y}\) and the positions of the TLS in the \(x-y\) plane, which are randomly assigned within circles whose centers are the TLS positions of a reference sample and whose radii are 5 nm. The applied magnetic field is assumed to be in the \(y\)-direction. As we showed in Appendix A, the order of magnitude of the effect is consistent between theory and experiment. The best-fit conversion factors for the qubits \(Q_{i}\) with \(i=1,\ldots,6\) were in the ratios 0.105 : 0.114 : 0.072 : 0.077 : 0.103 : 0.059. All fit parameters are available from the authors on request.
|
2304.09386
|
Towards Objective-Tailored Genetic Improvement Through Large Language
Models
|
While Genetic Improvement (GI) is a useful paradigm to improve functional and
nonfunctional aspects of software, existing techniques tended to use the same
set of mutation operators for differing objectives, due to the difficulty of
writing custom mutation operators. In this work, we suggest that Large Language
Models (LLMs) can be used to generate objective-tailored mutants, expanding the
possibilities of software optimizations that GI can perform. We further argue
that LLMs and the GI process can benefit from the strengths of one another, and
present a simple example demonstrating that LLMs can both improve the
effectiveness of the GI optimization process, while also benefiting from the
evaluation steps of GI. As a result, we believe that the combination of LLMs
and GI has the capability to significantly aid developers in optimizing their
software.
|
Sungmin Kang, Shin Yoo
|
2023-04-19T02:45:11Z
|
http://arxiv.org/abs/2304.09386v1
|
# Towards Objective-Tailored Genetic Improvement Through Large Language Models
###### Abstract
While Genetic Improvement (GI) is a useful paradigm to improve functional and nonfunctional aspects of software, existing techniques tended to use the same set of mutation operators for differing objectives, due to the difficulty of writing custom mutation operators. In this work, we suggest that Large Language Models (LLMs) can be used to generate objective-tailored mutants, expanding the possibilities of software optimizations that GI can perform. We further argue that LLMs and the GI process can benefit from the strengths of one another, and present a simple example demonstrating that LLMs can both improve the effectiveness of the GI optimization process, while also benefiting from the evaluation steps of GI. As a result, we believe that the combination of LLMs and GI has the capability to significantly aid developers in optimizing their software.
optimization, genetic algorithm
## I Introduction
Software is complex, and as a result it can take significant manual effort to optimize software to better meet requirements such as runtime or memory usage. To reduce the amount of developer time that must be spent on optimizing software, the field of Genetic Improvement (GI) [1] has strived to use the principles of stochastic optimization to automatically optimize software. Specifically, one defines a fitness function (e.g., the execution time of code) and genetic operators that change and combine source code in ways that are expected to help; at each 'generation' the best solutions according to the fitness function are selected and further modified. This process is repeated until some termination criterion is met, and the optimized results are presented to a developer. Such a paradigm has led to significant successes, such as automated program specialization [2, 3], energy consumption optimization [4] and continuous automated program repair [5].
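In skeletal form, the GI loop just described might look as follows (a generic sketch, not any specific tool; `mutate` and `fitness` are placeholders for the genetic operators and the measured objective):

```python
import random

def genetic_improvement(seed_program, mutate, fitness, pop_size=20, generations=50):
    """Bare-bones GI loop: mutate candidates, keep the fittest, repeat.

    mutate(program) -> a program variant, or None if it fails to compile;
    fitness(program) -> float, lower is better (e.g. measured execution time)."""
    population = [seed_program]
    for _ in range(generations):
        variants = [v for v in (mutate(random.choice(population))
                                for _ in range(pop_size)) if v is not None]
        # Select the best-scoring candidates for the next generation
        population = sorted(population + variants, key=fitness)[:pop_size]
    return population[0]
```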
Despite these successes, the traditional formulation has a major limitation in its use of genetic operators. In the traditional formulation of GI, the code under improvement is randomly modified and then compiled and evaluated for the given objective in an automated pipeline. Any change that does not compile blocks the pipeline, significantly reducing the overall efficiency. Consequently, much existing work [2, 3, 5] relies on the same genetic operators first investigated by GenProg [6], i.e., inserting code borrowed from somewhere else in the same codebase, deleting code, or swapping two existing code elements. While this approach improves the probability that the randomly modified code compiles successfully, it also has two limitations. First, the change is not customized for the given objective, leaving us simply to hope that some ingredients exist that can contribute to that objective. Second, this approach is known to generate _bloated_ results, requiring additional post-processing to minimize them [7].
Meanwhile, large language models (LLMs) are showing impressive performance in natural language processing and software engineering tasks [8, 9]; one can ask an LLM to change code in a specific manner using natural language (as we later show with an example), resulting in objective-specific genetic operators. Furthermore, an LLM trained on software defines a distribution over code, and as a result generates code with high naturalness [10], improving the probability of successful compilation as well as that of the results being accepted in practice.
This is not to say LLMs are a panacea - while LLMs generate plausible improvements, these improvements are not always correct, and can suffer from myriad mistakes [11]. Perhaps most critically, while GI-generated solutions that have a better fitness score than the original are verified solutions that yield better results on what the fitness score measures, LLM-generated solutions are not guaranteed to improve the software, at least not without a verification step.
This is why we believe LLMs and the GI process have significant synergy: working together, each component would augment the strengths and complement the weaknesses of the other. That is, LLM+GI would allow objective-specific modifications to be added to the GI candidate pool via the flexibility of LLMs; LLM+GI can ensure that the desired objective is met by the rigor of the evolutionary cycle.
In the next section, we provide an example of an LLM to improve inefficient Python implementations of Fibonacci number calculation, to demonstrate the significant potential of LLMs when incorporated in the GI process.
## II Demonstration
As a simple demonstration of LLMs, we choose two objectives from the GI Call for Papers: "improv[ing] efficiency" (which we understand as execution time) and "decreas[ing] memory consumption". We manually write implementations of Fibonacci-number calculation that are inefficient in each respective aspect, as shown in Figure 1. Figure 1(a) leads to an \(O(\phi^{n})\) computational complexity (where \(\phi=\frac{1+\sqrt{5}}{2}\)), whereas Fibonacci numbers can be calculated in \(O(n)\) time; Figure 1(b)
leads to an \(O(n)\) memory complexity, whereas Fibonacci numbers can be calculated using \(O(1)\) memory.
We prompt the LLM to make these implementations more efficient by specifying the objectives in natural language. Specifically, we add a comment above the original code saying 'original, [time/memory]-inefficient code', then add a comment below the code saying 'fixed, [time/memory]-efficient code', along with a function name indicating the property of the implementation (see Figure 2 for an example). This prompt is then submitted to the LLM to get modified code. We used the code-davinci-002 model from OpenAI, under the default parameter settings.
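A reconstruction of this prompt construction and the legacy completion call (our sketch: the exact comment wording of the paper's Figure 2, the `max_tokens` value, and the pre-1.0 OpenAI SDK usage are assumptions on our part):

```python
import openai  # pre-1.0 SDK with the legacy Completions endpoint (assumption)

SLOW_FIB = """def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)"""

def build_prompt(code: str, aspect: str) -> str:
    """Comment-sandwich prompt as described in the text; the paper's exact
    wording may differ (this is a reconstruction, not the authors' artifact)."""
    return (f"# original, {aspect}-inefficient code\n{code}\n\n"
            f"# fixed, {aspect}-efficient code\n"
            f"def fibonacci_{aspect}_efficient(n):\n")

response = openai.Completion.create(
    model="code-davinci-002",       # default sampling parameters, as in the text
    prompt=build_prompt(SLOW_FIB, "time"),
    max_tokens=256,                 # an assumption; the paper does not state this
)
mutant = response["choices"][0]["text"]
```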
The results are presented in Figure 3: each version is more efficient than the original in the way intended. In Fig. 3(a), note that the code with exponential time complexity was replaced with an \(O(n)\) time complexity algorithm that is fast. Meanwhile, Fig. 3(b) shows a memory-efficient implementation that only uses two variables to calculate Fibonacci numbers.
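Since the figures are not reproduced in this text, the following reconstructions convey the kind of before/after code involved (ours; the paper's exact listings may differ):

```python
def fib_time_inefficient(n):       # Fig. 1(a) style: naive recursion, O(phi^n) time
    if n < 2:
        return n
    return fib_time_inefficient(n - 1) + fib_time_inefficient(n - 2)

def fib_memory_inefficient(n):     # Fig. 1(b) style: stores the sequence, O(n) memory
    seq = [0, 1]
    for _ in range(2, n + 1):
        seq.append(seq[-1] + seq[-2])
    return seq[n]

def fib_efficient(n):              # Fig. 3 style: O(n) time, two variables, O(1) memory
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```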
In our examples, all implementations generated by the LLM are correct, as the Fibonacci calculation problem is well-known and doubtless part of its training data. Nonetheless, LLMs are prone to generating incorrect outputs as Sarkar et al. [11] note, especially when the code to optimize becomes complex. Further, the verification step adds value to developers: Winter et al. [12] find that developers are likely to be persuaded by objective metrics. As a result, we believe that LLM output can become significantly more valuable when combined with the GI process.
## III Conclusion
In this work, we argue that there is significant synergy between large language models and genetic improvement. To this end, we provide an example demonstrating that LLMs can act as effective mutators of source code given a natural language description, while also suggesting that LLM output would benefit from the rigorous evaluation of GI as well. Such results demonstrate the possibility of using LLMs to significantly reduce developer effort when optimizing software, coming closer to the overall goal of automatic software improvement; as a result, we believe there is a bright future for combined techniques utilizing LLMs and GI.
## Acknowledgment
This work was supported by the National Research Foundation of Korea (NRF) Grant (NRF-2020R1A2C1013629).
|
2310.15156
|
Optimal unilocal virtual quantum broadcasting
|
Quantum broadcasting is central to quantum information processing and
characterizes the correlations within quantum states. Nonetheless, traditional
quantum broadcasting encounters inherent limitations dictated by the principles
of quantum mechanics. In a previous study, Parzygnat et al. [Phys. Rev. Lett.
132, 110203 (2024)] introduced a canonical broadcasting quantum map that goes
beyond the quantum no-broadcasting theorem through a virtual process. In this
work, we generalize the concept of virtual broadcasting to unilocal
broadcasting by incorporating a reference system and introduce protocols that
can be approximated using physical operations with minimal cost. First, we
propose a universal unilocal protocol enabling multiple parties to share the
correlations of a target bipartite state, which is encoded in the expectation
value for any observable. Second, we formalize the simulation cost of a virtual
quantum broadcasting protocol into a semidefinite programming problem. Notably,
we propose a specific protocol with optimal simulation cost for the
2-broadcasting scenario, revealing an explicit relationship between simulation
cost and the quantum system's dimension. Moreover, we establish upper and lower
bounds on the simulation cost of the virtual $n$-broadcasting protocol and
demonstrate the convergence of the lower bound to the upper bound as the
quantum system's dimension increases.
|
Hongshun Yao, Xia Liu, Chengkai Zhu, Xin Wang
|
2023-10-23T17:56:02Z
|
http://arxiv.org/abs/2310.15156v3
|
# Optimal unilocal virtual quantum broadcasting
###### Abstract
Quantum broadcasting is a cornerstone in the realm of quantum information processing and characterizes the correlations within quantum states. Nonetheless, traditional quantum broadcasting encounters inherent limitations dictated by the principles of quantum mechanics. In this work, we introduce a novel protocol known as _virtual quantum broadcasting_ which focuses on broadcasting measurement statistics of a target state rather than the state itself. First, we propose a universal unilocal protocol enabling multiple parties to share the expectation value for any observable in any target bipartite state. Second, we formalize the simulation cost of a virtual quantum broadcasting protocol into a semidefinite programming problem. Notably, we propose a specific protocol with optimal simulation cost for the 2-broadcasting scenario, revealing an explicit relationship between simulation cost and the quantum system's dimension. Moreover, we establish upper and lower bounds on the simulation cost of the virtual \(n\)-broadcasting protocol and demonstrate the convergence of the lower bound to the upper bound as the quantum system's dimension increases. Our work paves the way for new approaches to distributing quantum information, potentially advancing quantum communication and computing technologies.
## 1 Introduction
In classical information processing, creating duplicates is a straightforward task. However, the quantum realm presents a challenge due to the no-cloning theorem [1, 2], rendering direct copies impossible. Quantum broadcasting [3, 4], a concept milder than quantum cloning, offers a distinct perspective on the classical-quantum interface. Unfortunately, there are also fundamental restrictions on quantum broadcasting [5, 6]. The no-broadcasting theorem states that it is only possible to broadcast a set of quantum states if they commute with each other. In other words, if the quantum states have properties that can be simultaneously measured without disturbing each other, it is possible to broadcast them.
These no-go theorems can be further extended to the setting of local broadcasting for composite quantum systems [6, 7, 8]. Given a bipartite quantum state \(\rho_{AB}\) shared by Alice and Bob, local broadcasting aims to perform local operations \(\Lambda_{A\to A_{1}A_{2}}\) and \(\Gamma_{B\to B_{1}B_{2}}\) to produce a state \(\widehat{\rho}_{A_{1}A_{2}B_{1}B_{2}}=\left(\Lambda_{A\to A_{1}A_{2}}\otimes\Gamma_{B\to B_{1}B_{2}}\right)\rho_{AB}\) such that \(\mathrm{Tr}_{A_{1}B_{1}}[\widehat{\rho}_{A_{1}A_{2}B_{1}B_{2}}]=\mathrm{Tr}_{A_{2}B_{2}}[\widehat{\rho}_{A_{1}A_{2}B_{1}B_{2}}]=\rho_{AB}\). Furthermore, unilocal broadcasting is considered when the local operations are only allowed for one party, e.g., Bob. It has been shown that unilocal broadcasting can be done if and only if \(\rho_{AB}\) is classical on \(B\) [6, 8, 9, 7]. More generally, a unilocal \(n\)-broadcasting performs the local operation \(\Gamma_{B\to B_{1}\cdots B_{n}}\) to produce the state \(\widehat{\rho}_{AB_{1}\cdots B_{n}}=\Gamma_{B\to B_{1}\cdots B_{n}}(\rho_{AB})\) such that \(\mathrm{Tr}_{\setminus AB_{1}}[\widehat{\rho}_{AB_{1}\cdots B_{n}}]=\cdots=\mathrm{Tr}_{\setminus AB_{n}}[\widehat{\rho}_{AB_{1}\cdots B_{n}}]=\rho_{AB}\) [4], as shown in Fig. 1.
To overcome the limitations of quantum broadcasting led by these no-go theorems, we propose a framework called _virtual quantum broadcasting_, which aims to broadcast classical information of a target state rather than the state itself. The motivation for a _virtual_ broadcasting is that our predominant concern is frequently directed towards the classical information discerned after measurement, denoted as _shadow information_[10, 11] within the majority of quantum information and quantum computing tasks,
rather than the whole information of a state. Therefore, we focus on a broadcasting task that the local operations employed by Bob enable \(n\) parties \(A\) and \(B_{1},\cdots,B_{n}\) to share the same shadow information \(\mathrm{Tr}[O\rho_{AB}]\) with respect to any observable \(O\).
Technically, we extend traditional quantum broadcasting by employing Hermitian-preserving and trace-preserving (HPTP) maps, which can be physically implemented by quasiprobability decomposition (QPD) [12, 13, 14] and measurement-controlled post-processing [15]. Such physical simulation of unphysical maps plays a crucial role in applications such as entanglement detection [16, 17, 18, 19], error mitigation [12, 14, 13, 20, 21], and two-point correlators [22]. Specifically, we may construct an HPTP map \(\Gamma_{B\to B^{n}}\) and decompose it into a linear combination of local channels \(\mathcal{N}_{j}\) for Bob, i.e., \(\Gamma_{B\to B^{n}}=\sum_{j}c_{j}\mathcal{N}_{j}\), where \(c_{j}\) are certain real numbers. One can estimate the shadow information by sampling quantum channels \(\mathcal{N}_{j}\) and post-processing the measurement outcomes [20] for an observable \(O\) and quantum state \(\rho_{AB}\) (see Proposition 2 for a precise statement). It is then essential to understand the power and limitations of such virtual quantum broadcasting, as two questions naturally arise:
1. _Is there a universal virtual quantum broadcasting protocol?_
2. _What is the optimal protocol that has the minimum sampling cost?_
In this paper, we fully address these two questions. In Sec. 2, we demonstrate the existence of a _universal_ unilocal virtual \(n\)-broadcasting protocol, for any bipartite quantum state \(\rho_{AB}\) and observable \(O\). In Sec. 3, we formalize the simulation cost of a unilocal virtual \(n\)-broadcasting into a semidefinite programming (SDP) [23]. Notably, we provide an analytical universal unilocal virtual \(2\)-broadcasting protocol to elucidate the optimal simulation cost. In addition, we investigate the upper and lower bounds on the simulation cost of the unilocal virtual \(n\)-broadcasting protocol.
## 2 Universal virtual broadcasting protocol
We consider a finite-dimensional Hilbert space \(\mathcal{H}\) and denote \(A\) and \(B\) as two parties, each possessing their respective Hilbert spaces \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\). We denote the dimensions of \(\mathcal{H}_{A}\) and \(\mathcal{H}_{B}\) as \(d_{A},d_{B}\), respectively. Throughout the paper, we consider \(\mathcal{H}_{A}\cong\mathcal{H}_{B}\) and \(d_{A}=d_{B}=d\). Let \(\{|j\rangle\}_{j=0,\cdots,d-1}\) be the standard computational basis. Denote \(\mathcal{L}(\mathcal{H}_{A})\) as the set of linear operators that map from \(\mathcal{H}_{A}\) to itself. A linear operator in \(\mathcal{L}(\mathcal{H}_{A})\) is called a density operator if it is positive semidefinite with trace one, and denote \(\mathcal{D}(\mathcal{H}_{A})\) as the set of all density operators on \(\mathcal{H}_{A}\). We denote \(F_{B_{1}B_{2}}:=\sum_{i,j=0}^{d-1}|ij\rangle\!\langle ji|\) as the swap operator between subsystems \(B_{1}\) and \(B_{2}\), and denote \(\Phi_{BB_{1}}:=\sum_{i,j=0}^{d-1}|ii\rangle\!\langle jj|_{BB_{1}}\) as the unnormalized \(d\otimes d\) maximally entangled state. In the absence of ambiguity, subsystems may be omitted, e.g., \(\Phi_{d}\). A quantum channel \(\mathcal{N}_{A\to B}\) is a linear map from \(\mathcal{L}(\mathcal{H}_{A})\) to \(\mathcal{L}(\mathcal{H}_{B})\) that is completely positive and trace-preserving (CPTP). Its associated Choi-Jamiolkowski operator is expressed as \(J^{\mathcal{N}}_{AB}:=\sum_{i,j=0}^{d_{A}-1}|i\rangle\!\langle j|\otimes \mathcal{N}_{A\to B}(|i\rangle\!\langle j|)\).
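As a concrete illustration of this notation (a minimal NumPy sketch we add here, with the dimension \(d=3\) chosen arbitrarily), the following code constructs \(\Phi_{d}\) and \(F\) and verifies two basic facts: \(F\) is an involution with \(\mathrm{Tr}[\Phi_{d}]=d\), and the Choi operator of the identity channel equals \(\Phi_{d}\).

```python
import numpy as np

d = 3  # small dimension, chosen only for illustration

# Unnormalized maximally entangled operator Phi = sum_{i,j} |ii><jj|
v = np.eye(d).reshape(d * d)          # |Phi> = sum_i |ii> as a flat vector
Phi = np.outer(v, v)

# Swap operator F = sum_{i,j} |ij><ji|
F = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        F[i * d + j, j * d + i] = 1.0

# Choi operator J^N = sum_{i,j} |i><j| (x) N(|i><j|); for N = id this is Phi
def ket_bra(i, j):
    return np.outer(np.eye(d)[i], np.eye(d)[j])

J_id = sum(np.kron(ket_bra(i, j), ket_bra(i, j))
           for i in range(d) for j in range(d))

assert np.allclose(F @ F, np.eye(d * d))  # swapping twice is the identity
assert np.isclose(np.trace(Phi), d)       # Tr[Phi] = d
assert np.allclose(J_id, Phi)             # Choi of the identity channel is Phi
```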
Formally, a unilocal virtual \(n\)-broadcasting protocol for a bipartite quantum state \(\rho_{AB}\) is defined as follows.
Fig 1: Unilocal (left) and local (right) \(n\)-broadcasting for a bipartite state \(\rho_{AB}\). The goal is for the map \(\Gamma\) to minimize, in a certain measure, the dissimilarity between each reduced state \(\rho_{AB_{j}}\) and \(\rho_{AB}\). Conventionally, \(\Gamma\) is a CPTP map, i.e., a quantum channel. In this paper, we focus on the scenario where \(\Gamma\) is an HPTP map.
**Definition 1** (Unilocal virtual \(n\)-broadcasting protocol): _Let \(\rho_{AB}\) be a bipartite state. Then an HPTP map \(\Gamma_{B\to B^{n}}\) is called a unilocal virtual \(n\)-broadcasting protocol for \(\rho_{AB}\) if_
\[\rho_{AB}=\operatorname{Tr}_{\backslash AB_{j}}[\Gamma_{B\to B^{n}}(\rho_{AB}) ],\quad\forall j=1,2,\cdots,n, \tag{1}\]
_where identity map is omitted, \(\operatorname{Tr}_{\backslash AB_{j}}\) denotes taking partial trace on the subsystems while excluding \(AB_{j}\), and \(B^{n}\) is the abbreviation of the subsystems \(B_{1}B_{2}\cdots B_{n}\)._
We note that if there is a unilocal virtual \(n\)-broadcasting protocol \(\Gamma_{B\to B^{n}}\) for all quantum states \(\rho_{AB}\in\mathcal{D}(\mathcal{H}_{A}\otimes\mathcal{H}_{B})\), we call it _a universal unilocal virtual \(n\)-broadcasting protocol_. Equivalently, a universal unilocal virtual \(n\)-broadcasting protocol \(\Gamma_{B\to B_{1}\cdots B_{n}}\) can be characterized by its Choi operator \(J^{\Gamma}_{BB^{n}}\) as shown in the following Lemma.
**Lemma 1**: _Let \(J^{\Gamma}_{BB^{n}}\) be the Choi operator of the HPTP map \(\Gamma_{B\to B^{n}}\). Then \(\Gamma_{B\to B^{n}}\) is a universal unilocal virtual \(n\)-broadcasting protocol, i.e.,_
\[\rho_{AB}=\operatorname{Tr}_{\backslash AB_{j}}[\Gamma_{B\to B^{n}}(\rho_{AB}) ],\quad\forall j=1,2,\cdots,n,\quad\forall\rho_{AB}\in\mathcal{D}(\mathcal{H} _{A}\otimes\mathcal{H}_{B}) \tag{2}\]
_if and only if \(J^{\Gamma}_{BB_{1}}=J^{\Gamma}_{BB_{2}}=\cdots=J^{\Gamma}_{BB_{n}}=\Phi_{d}\)._
Lemma 1 states that a universal unilocal virtual \(n\)-broadcasting protocol can be described by its Choi operator, which means we can check constraints on Choi operators instead of constraints involving input and output states. The proof can be found in Appendix A. One of the remarkable and valuable findings in this paper is that there indeed exists a universal virtual \(n\)-broadcasting protocol. As a warm-up example, we present a universal unilocal virtual \(2\)-broadcasting protocol as follows:
\[\Gamma_{B\to B_{1}B_{2}}(\rho_{AB}):=\rho_{AB_{1}}\otimes\frac{I_{B_{2}}}{d}+ \mathcal{S}_{B_{1}B_{2}}(\rho_{AB_{1}}\otimes\frac{I_{B_{2}}}{d})-\mathcal{R} _{B\to B_{1}B_{2}}(\rho_{AB}), \tag{3}\]
where \(\mathcal{S}_{B_{1}B_{2}}(\cdot)\) denotes the swap operation between the subsystem \(B_{1}\) and \(B_{2}\), \(\mathcal{R}_{B\to B_{1}B_{2}}(\cdot)\) denotes the replacement channel yielding the normalized \(d\otimes d\) maximally entangled state between subsystem \(B_{1}\) and \(B_{2}\) for any input state. Its Choi operator can be written as
\[J^{\Gamma_{B\to B_{1}B_{2}}}_{BB_{1}B_{2}}:=\frac{1}{d}\Phi_{BB_{1}}\otimes I _{B_{2}}+\frac{1}{d}\Phi_{BB_{2}}\otimes I_{B_{1}}-\frac{1}{d}\Phi_{B_{1}B_{2 }}\otimes I_{B}. \tag{4}\]
It is straightforward to check that \(J^{\Gamma_{B\to B_{1}B_{2}}}_{BB_{1}}=J^{\Gamma_{B\to B_{1}B_{2}}}_{BB_{2}}=\Phi_{d}\); the numerical sketch below reproduces this check. Consequently, \(\Gamma_{B\to B_{1}B_{2}}\) is a universal unilocal virtual \(2\)-broadcasting protocol by Lemma 1. Furthermore, we extend our investigation to the realm of \(n\)-broadcasting, where we demonstrate the existence of a universal unilocal virtual \(n\)-broadcasting protocol in Proposition 2 below.
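As a quick sanity check (our own illustration; the dimension \(d=3\) is an arbitrary choice), the following NumPy sketch assembles the Choi operator of Eq. (4) on the ordering \(B\otimes B_{1}\otimes B_{2}\) and verifies the two marginals required by Lemma 1.

```python
import numpy as np

d = 3  # small dimension, chosen only for the check
I = np.eye(d)
v = np.eye(d).reshape(d * d)
Phi = np.outer(v, v)                 # unnormalized maximally entangled operator
F = np.zeros((d * d, d * d))         # swap on B1 B2
for i in range(d):
    for j in range(d):
        F[i * d + j, j * d + i] = 1.0
S = np.kron(I, F)                    # B1 <-> B2 swap embedded on B (x) B1 (x) B2

# Choi operator of Eq. (4), with subsystem ordering B (x) B1 (x) B2
A = np.kron(Phi, I)                            # Phi_{BB1} (x) I_{B2}
J = (A + S @ A @ S - np.kron(I, Phi)) / d      # S A S = Phi_{BB2} (x) I_{B1}

def ptr(X, dims, axis):
    """Trace out subsystem `axis` of an operator on the tensor product dims."""
    n = len(dims)
    T = X.reshape(dims + dims).trace(axis1=axis, axis2=axis + n)
    keep = int(np.prod(dims)) // dims[axis]
    return T.reshape(keep, keep)

assert np.allclose(ptr(J, [d, d, d], 2), Phi)  # J_{BB1} = Phi_d
assert np.allclose(ptr(J, [d, d, d], 1), Phi)  # J_{BB2} = Phi_d
```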
Fig 2: Illustration of using a universal virtual \(n\)-broadcasting protocol \(\Gamma^{\prime}_{B\to B^{n}}=p_{1}\mathcal{N}_{1}-p_{2}\mathcal{N}_{2}\) to share shadow information between different parties. For a given observable \(O\) and many copies of a bipartite state \(\rho_{AB}\), we sample local quantum channels \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) with probability \(p_{1}/(p_{1}+p_{2})\) and \(p_{2}/(p_{1}+p_{2})\), respectively. Iterating this procedure \(m\) times, we obtain \(\rho_{AB^{n}}^{(k)}\) for \(k=1,2,\cdots,m\). Afterwards, each party \(AB_{j}\), where \(j=1,2,\cdots,n\), obtains \(\operatorname{Tr}[O\rho_{AB}]\) since \(\operatorname{Tr}[O\rho_{AB}]=\operatorname{Tr}[O\operatorname{Tr}_{\setminus AB_{j}}[\Gamma^{\prime}_{B\to B^{n}}(\rho_{AB})]]=(p_{1}+p_{2})\big{(}\frac{p_{1}}{p_{1}+p_{2}}\operatorname{Tr}[O\operatorname{Tr}_{\setminus AB_{j}}[\mathcal{N}_{1}(\rho_{AB})]]-\frac{p_{2}}{p_{1}+p_{2}}\operatorname{Tr}[O\operatorname{Tr}_{\setminus AB_{j}}[\mathcal{N}_{2}(\rho_{AB})]]\big{)}\).
**Proposition 2** (**Universal unilocal virtual \(n\)-broadcasting**): _For a bipartite quantum system \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), there exists a universal unilocal virtual \(n\)-broadcasting protocol \(\Gamma^{\prime}_{B\to B^{n}}\) which can be written as_
\[\Gamma^{\prime}_{B\to B^{n}}(\rho_{AB}):=\sum_{j=1}^{n}\mathcal{S}_{B_{1}B_{j}}(\rho_{AB_{1}}\otimes\frac{I_{B_{2}\cdots B_{n}}}{d^{n-1}})-(n-1)\mathcal{R}_{B\to B^{n}}(\rho_{AB}), \tag{5}\]
_where \(\mathcal{S}_{B_{1}B_{j}}(\cdot)\) denotes the swap operation between the subsystems \(B_{1}\) and \(B_{j}\), and \(\mathcal{R}_{B\to B^{n}}(\cdot)\) denotes the replacement channel yielding the state \(\frac{1}{d^{n-1}}\Phi_{B_{1}B_{2}}\otimes I_{B_{3}\cdots B_{n}}\) for any input state._
We leave the detailed proof to Appendix A. Considering Proposition 2, we can implement a universal unilocal virtual \(n\)-broadcasting protocol through the following process, as shown in Fig. 2. Given an observable \(O\) and many copies of a bipartite state \(\rho_{AB}\) shared between Alice and Bob, we decompose the unilocal virtual \(n\)-broadcasting protocol as \(\Gamma_{B\to B^{n}}=p_{1}\mathcal{N}_{1}-p_{2}\mathcal{N}_{2}\), where \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) are quantum channels [13]. Building on the framework of quasiprobability decomposition [13, 20, 21], Bob samples \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) with probability \(p_{1}/(p_{1}+p_{2})\) and \(p_{2}/(p_{1}+p_{2})\), respectively. Repeating this process \(m\) times to achieve the desired estimation precision and applying these channels to \(m\) copies of \(\rho_{AB}\) results in \(\rho_{AB^{n}}^{(k)}\) for \(k=1,2,\cdots,m\). Subsequently, each bipartite system \(AB_{j}\), where \(j=1,2,\cdots,n\), acquires the information of \(\mathrm{Tr}[O\rho_{AB}]\) by measuring its subsystems in the eigenbasis of \(O\) and post-processing [20]. In essence, by delivering \(m\) copies of quantum states to each bipartite party \(AB_{j}\), this broadcasting protocol efficiently encodes and shares the classical information associated with \(\mathrm{Tr}[O\rho_{AB}]\). Using these samples, each bipartite party can further apply quantum information processing and extract the information related to \(\mathrm{Tr}[O\rho_{AB}]\) by post-processing.
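To make the sampling procedure concrete, here is a minimal Monte Carlo sketch of quasiprobability estimation (our own toy illustration, not the paper's explicit protocol). It assumes the channels are given as functions on density matrices and, for simplicity, records exact expectation values instead of single-shot measurement outcomes; the toy map \(\Gamma=2\,\mathrm{id}-\mathcal{D}\), with \(\mathcal{D}\) the completely depolarizing channel, is HPTP and admits the decomposition \(p_{1}=2\), \(p_{2}=1\).

```python
import numpy as np

rng = np.random.default_rng(7)

def qpd_estimate(rho, O, N1, N2, p1, p2, m=200_000):
    """Monte Carlo estimate of Tr[O Gamma(rho)] for Gamma = p1*N1 - p2*N2.
    Each round samples one channel with probability p_i/(p1+p2) and records
    the signed, rescaled expectation of O. (An experiment would replace the
    exact trace below with a measured eigenvalue of O.)"""
    kappa = p1 + p2
    total = 0.0
    for _ in range(m):
        if rng.random() < p1 / kappa:
            total += kappa * np.real(np.trace(O @ N1(rho)))
        else:
            total -= kappa * np.real(np.trace(O @ N2(rho)))
    return total / m

# Toy usage: Gamma = 2*id - D on a single qubit, estimating Tr[Z Gamma(rho)]
rho = np.array([[0.7, 0.2], [0.2, 0.3]])
Z = np.diag([1.0, -1.0])
ident = lambda r: r
depol = lambda r: np.trace(r) * np.eye(2) / 2  # completely depolarizing channel
est = qpd_estimate(rho, Z, ident, depol, 2.0, 1.0)
exact = np.real(np.trace(Z @ (2 * rho - np.eye(2) / 2)))
print(f"estimate {est:.3f} vs exact {exact:.3f}")  # both close to 0.8
```

The number of rounds needed to reach a fixed precision scales with \((p_{1}+p_{2})^{2}\), which is why the simulation cost introduced in Definition 2 below is the relevant figure of merit.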
Notice that this universal unilocal virtual \(n\)-broadcasting protocol is applicable to any state \(\rho_{AB}\) and any observable \(O\). However, this task is impossible for any single quantum channel. Consequently, we may extend the no-go theorem for local broadcasting by involving HPTP maps in the broadcasting procedure.
## 3 Optimal virtual broadcasting
In this section, we explore the optimal universal unilocal virtual \(n\)-broadcasting protocol, namely the one that can be simulated with minimum cost. Treating the unilocal virtual \(n\)-broadcasting protocol as a general HPTP map, its simulation or sampling cost can be characterized via the following physical implementability, which plays the key role in quantifying the number of rounds required to reach the desired estimation precision [13].
**Definition 2** (**Simulation cost of an HPTP map [13]**): _The simulation cost (or physical implementability) of an HPTP map \(\Gamma\) is defined as_
\[\nu(\Gamma):=\log\min\Big{\{}p_{1}+p_{2}\big{|}\Gamma=p_{1}\mathcal{N}_{1}-p_{ 2}\mathcal{N}_{2},\ p_{1},p_{2}\geq 0,\ \mathcal{N}_{1},\mathcal{N}_{2}\in\text{CPTP}\ \Big{\}}, \tag{6}\]
_where logarithms are in base \(2\) throughout this paper._
**Definition 3** (**Optimal simulation cost**): _The optimal simulation cost of a universal unilocal virtual \(n\)-broadcasting protocol is defined as_
\[\gamma_{n}^{*}:=\min\{\nu(\Gamma):\Gamma\in\mathcal{T}_{n}\}, \tag{7}\]
_where \(\mathcal{T}_{n}\) denotes the set of universal unilocal virtual \(n\)-broadcasting protocols. The corresponding protocol \(\Gamma^{*}:=\operatorname*{argmin}\{\nu(\Gamma):\Gamma\in\mathcal{T}_{n}\}\) is the optimal universal \(n\)-broadcasting protocol._
Combined with the properties that a universal virtual broadcasting should satisfy as stated in Lemma 1, the optimal simulation cost can be formalized as follows.
**Proposition 3**: _For a bipartite quantum system \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), the optimal simulation cost of a universal unilocal virtual \(n\)-broadcasting protocol can be characterized as the following SDP:_
**Primal Program**
\[\begin{aligned}2^{\gamma_{n}^{*}}=\min\quad&p_{1}+p_{2}\\ \operatorname{s.t.}\quad&\operatorname{Tr}_{\setminus BB_{j}}[J^{\mathcal{N}_{1}}_{BB^{n}}-J^{\mathcal{N}_{2}}_{BB^{n}}]=\Phi_{BB_{j}},\quad\forall j=1,\cdots,n,\\ &\operatorname{Tr}_{B^{n}}[J^{\mathcal{N}_{1}}_{BB^{n}}]=p_{1}I_{B},\quad\operatorname{Tr}_{B^{n}}[J^{\mathcal{N}_{2}}_{BB^{n}}]=p_{2}I_{B},\\ &J^{\mathcal{N}_{1}}_{BB^{n}}\geq 0,\quad J^{\mathcal{N}_{2}}_{BB^{n}}\geq 0,\end{aligned}\]
**Dual Program**
\[\begin{aligned}\max\quad&\sum_{j=1}^{n}\operatorname{Tr}[X_{BB_{j}}\Phi_{BB_{j}}]\\ \operatorname{s.t.}\quad&\sum_{j=1}^{n}X_{BB_{j}}\otimes I_{\setminus BB_{j}}\leq Z_{B}\otimes I_{B^{n}},\\ &-\sum_{j=1}^{n}X_{BB_{j}}\otimes I_{\setminus BB_{j}}\leq K_{B}\otimes I_{B^{n}},\\ &\operatorname{Tr}[Z_{B}]\leq 1,\quad\operatorname{Tr}[K_{B}]\leq 1,\end{aligned}\tag{8}\]
where the primal optimization runs over \(p_{1},p_{2}\geq 0\) and the rescaled Choi operators \(J^{\mathcal{N}_{1}}_{BB^{n}},J^{\mathcal{N}_{2}}_{BB^{n}}\), and the dual optimization runs over Hermitian operators \(X_{BB_{j}}\), \(Z_{B}\), and \(K_{B}\).
**Theorem 4**: _For a bipartite quantum system \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\) with \(d_{A}=d_{B}=d\), the optimal simulation cost of a universal unilocal virtual \(2\)-broadcasting protocol is \(\gamma_{2}^{*}=\log\left(3-\frac{4}{d+1}\right)\)._
**Proof** For the primal part, the explicit protocol given in Remark 1 below attains \(p_{1}+p_{2}=\frac{3d-1}{d+1}\), which yields \(2^{\gamma_{2}^{*}}\leq\frac{3d-1}{d+1}\). For the dual part, we construct a feasible solution \(\{X_{BB_{1}},Y_{BB_{2}},Z_{B},K_{B}\}\) of the dual SDP (8).
Furthermore, we have
\[\left[\begin{array}{cc}2I&\frac{M+NMN}{d+1}\\ 4I&2I\end{array}\right]=\left[\begin{array}{cc}I&\frac{M-NMN}{d+1}\\ \frac{M-NMN}{d-1}&I\end{array}\right]+\left[\begin{array}{cc}\frac{I}{2}&\frac{2NMN}{d+1}\\ 0&\frac{I}{2}\end{array}\right]+\left[\begin{array}{cc}\frac{I}{2}&0\\ 4I-\frac{M-NMN}{d-1}&\frac{I}{2}\end{array}\right]\geq 0,\]
where the inequality follows from the fact that the two terms on the right of the equation are positive semidefinite matrices, which can be seen using the Schur complement. This implies \(2(I-\frac{M+NMN}{d+1})\geq 0\). Therefore, \(\{X_{BB_{1}},Y_{BB_{2}},Z_{B},K_{B}\}\) is a feasible solution. Finally, we check the objective function:
\[\mathrm{Tr}[X_{BB_{1}}\Phi_{BB_{1}}]+\mathrm{Tr}[Y_{BB_{2}}\Phi_{BB_{2}}]=3- \frac{4}{d+1}, \tag{12}\]
which yields \(2^{\gamma_{2}^{*}}\geq\frac{3d-1}{d+1}\). Combining the primal part and the dual part, we conclude that
\[\gamma_{2}^{*}=\log\left(3-\frac{4}{d+1}\right), \tag{13}\]
which completes the proof.
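As a numerical cross-check of Theorem 4 (our own addition, assuming the CVXPY package, version \(\geq 1.2\) for `partial_trace`, with the SCS solver), the primal SDP (8) can be solved directly for a small dimension and compared against the closed form \((3d-1)/(d+1)\):

```python
import cvxpy as cp
import numpy as np

d = 2  # small dimension so the SDP stays tiny
dims = [d, d, d]                      # subsystem ordering B, B1, B2
v = np.eye(d).reshape(d * d)
Phi = np.outer(v, v)                  # unnormalized maximally entangled operator

J1 = cp.Variable((d**3, d**3), hermitian=True)   # p1 * Choi(N1)
J2 = cp.Variable((d**3, d**3), hermitian=True)   # p2 * Choi(N2)
p1 = cp.Variable(nonneg=True)
p2 = cp.Variable(nonneg=True)

diff = J1 - J2
tp1 = cp.partial_trace(cp.partial_trace(J1, dims, axis=2), [d, d], axis=1)
tp2 = cp.partial_trace(cp.partial_trace(J2, dims, axis=2), [d, d], axis=1)
constraints = [
    J1 >> 0, J2 >> 0,
    cp.partial_trace(diff, dims, axis=2) == Phi,  # marginal on B B1
    cp.partial_trace(diff, dims, axis=1) == Phi,  # marginal on B B2
    tp1 == p1 * np.eye(d),                        # scaled trace preservation
    tp2 == p2 * np.eye(d),
]
prob = cp.Problem(cp.Minimize(p1 + p2), constraints)
prob.solve(solver=cp.SCS)
print(prob.value, (3 * d - 1) / (d + 1))          # both ≈ 5/3 for d = 2
```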
**Remark 1** By definition of the optimal simulation cost, the corresponding optimal \(2\)-broadcasting protocol \(\Gamma_{B\to B_{1}B_{2}}^{*}\) can be written as the linear combination of quantum channels, i.e., \(\Gamma_{B\to B_{1}B_{2}}^{*}=p_{1}\mathcal{N}_{1}-p_{2}\mathcal{N}_{2}\), where \(p_{1}=\frac{2d}{d+1}\), \(p_{2}=\frac{d-1}{d+1}\), and
\[\mathcal{N}_{1}(\rho_{AB}):= \frac{d}{d+1}\mathcal{P}(\rho_{AB})+\frac{1}{d+1}\mathcal{Q}( \rho_{AB}), \tag{14}\] \[\mathcal{N}_{2}(\rho_{AB}):= \frac{d^{2}}{d^{2}-2}\mathcal{I}(\rho_{AB})-\frac{2d^{2}}{(d^{2} -2)(d^{2}-1)}\mathcal{P}(\rho_{AB})+\frac{2}{(d^{2}-2)(d^{2}-1)}\mathcal{Q}( \rho_{AB}), \tag{15}\]
where \(\mathcal{Q}(\rho_{AB}):=\frac{1}{2}(F_{B_{1}B_{2}}\,\rho_{AB_{1}}\otimes I_{B_{2}}+\rho_{AB_{1}}\otimes I_{B_{2}}\,F_{B_{1}B_{2}})\), \(\mathcal{P}(\rho_{AB}):=\frac{1}{2d}(\rho_{AB_{1}}\otimes I_{B_{2}}+\mathcal{S}_{B_{1}B_{2}}(\rho_{AB_{1}}\otimes I_{B_{2}}))\), and \(\mathcal{I}(\cdot)\) denotes the replacement channel yielding \(\frac{1}{d^{2}}I_{B_{1}B_{2}}\).
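The channel decomposition in Remark 1 can be verified numerically. The sketch below is our own illustration (the dimension \(d=3\) is arbitrary, and \(\mathcal{Q}\) is implemented as the anticommutator with the swap operator, matching the definition above): it builds the Choi operators of \(\mathcal{P}\), \(\mathcal{Q}\), and \(\mathcal{I}\), checks that \(\mathcal{N}_{1}\) and \(\mathcal{N}_{2}\) are CPTP, and confirms that \(\Gamma^{*}_{B\to B_{1}B_{2}}=p_{1}\mathcal{N}_{1}-p_{2}\mathcal{N}_{2}\) satisfies the marginal condition of Lemma 1 with cost \(p_{1}+p_{2}=\frac{3d-1}{d+1}\).

```python
import numpy as np

d = 3  # small dimension, chosen only for the check
I = np.eye(d)
v = np.eye(d).reshape(d * d)
Phi = np.outer(v, v)
F = np.zeros((d * d, d * d))          # swap on B1 B2
for i in range(d):
    for j in range(d):
        F[i * d + j, j * d + i] = 1.0
S = np.kron(I, F)                     # F_{B1B2} embedded on B (x) B1 (x) B2

A = np.kron(Phi, I)                   # Phi_{BB1} (x) I_{B2}
J_P = (A + S @ A @ S) / (2 * d)       # Choi of P
J_Q = (S @ A + A @ S) / 2             # Choi of Q (anticommutator with the swap)
J_R = np.eye(d**3) / d**2             # Choi of the replacement channel I(.)

p1, p2 = 2 * d / (d + 1), (d - 1) / (d + 1)
J_N1 = (d / (d + 1)) * J_P + (1 / (d + 1)) * J_Q
J_N2 = ((d**2 / (d**2 - 2)) * J_R
        - (2 * d**2 / ((d**2 - 2) * (d**2 - 1))) * J_P
        + (2 / ((d**2 - 2) * (d**2 - 1))) * J_Q)
J_opt = p1 * J_N1 - p2 * J_N2

def ptr(X, dims, axis):
    """Trace out subsystem `axis` of an operator on the tensor product dims."""
    n = len(dims)
    T = X.reshape(dims + dims).trace(axis1=axis, axis2=axis + n)
    keep = int(np.prod(dims)) // dims[axis]
    return T.reshape(keep, keep)

for J in (J_N1, J_N2):                # N1, N2 are CPTP
    assert np.linalg.eigvalsh(J).min() > -1e-9
    assert np.allclose(ptr(ptr(J, [d, d, d], 2), [d, d], 1), I)
assert np.allclose(ptr(J_opt, [d, d, d], 2), Phi)   # Lemma 1 marginals
assert np.allclose(ptr(J_opt, [d, d, d], 1), Phi)
print("cost:", p1 + p2, "=", (3 * d - 1) / (d + 1))
```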
Theorem 4 proposes the optimal universal virtual 2-broadcasting protocol, taking into account the sampling cost required to broadcast the classical information \(\mathrm{Tr}[O\rho_{AB}]\) with a desired estimation precision. Note that what we obtain here is the minimum-cost protocol among all possible universal unilocal virtual \(2\)-broadcasting protocols: we first exhibit an HPTP protocol attaining the desired simulation cost and then utilize the dual SDP (8) to establish its optimality. Moreover, Theorem 4 reveals an intriguing relationship between the sampling cost and the system's dimension. As the dimension of the quantum system grows, the simulation cost for universal virtual 2-broadcasting converges to a constant value of \(\log 3\), which means that even in high-dimensional quantum systems, the simulation cost remains within a controllable range.
We extend our investigation to the context of unilocal virtual \(n\)-broadcasting to analyze how the simulation cost changes with the number of parties involved, i.e., from system \(B\) to \(B_{1},\cdots,B_{n}\). In particular, we derive an upper bound and a lower bound on the simulation cost of universal virtual \(n\)-broadcasting.
**Theorem 5**: _For a bipartite quantum system \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), the optimal simulation cost \(\gamma_{n}^{*}\) of a universal unilocal virtual \(n\)-broadcasting protocol \(\Gamma_{B\to B^{n}}^{*}\) satisfies_
\[\log\left(\frac{2nd}{n+d-1}-1\right)\leq\gamma_{n}^{*}\leq\log(2n-1), \tag{16}\]
_where \(d\) denotes the dimension of system \(B\)._
**Proof** We first show the upper bound on the minimum simulation cost. According to Proposition 2, the simulation cost of the universal protocol \(\Gamma^{\prime}_{B\to B^{n}}\) serves as an upper bound on \(\gamma_{n}^{*}\), i.e., \(\gamma_{n}^{*}\leq\nu(\Gamma^{\prime}_{B\to B^{n}})\). Then, rewrite the universal virtual \(n\)-broadcasting protocol \(\Gamma^{\prime}_{B\to B^{n}}\) as a linear combination of two quantum channels \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\):
\[\Gamma^{\prime}_{B\to B^{n}}=n\mathcal{M}_{1}-(n-1)\mathcal{M}_{2}, \tag{17}\]
where the Choi operators of \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) can be written as \(J^{\mathcal{M}_{1}}:=\frac{1}{nd^{n-1}}\sum_{j=1}^{n}\mathcal{S}_{B_{1}B_{j}}(\Phi_{BB_{1}}\otimes I_{B_{2}\cdots B_{n}})\) and \(J^{\mathcal{M}_{2}}:=\frac{1}{d^{n-1}}\Phi_{B_{1}B_{2}}\otimes I_{BB_{3}\cdots B_{n}}\), respectively. Then, by definition, we have \(\nu(\Gamma^{\prime}_{B\to B^{n}})\leq\log(2n-1)\), which directly gives \(\gamma^{*}_{n}\leq\log(2n-1)\).
Second, we derive the lower bound by showing that \(\{X_{BB_{1}},\cdots,X_{BB_{n}},Z_{B},K_{B}\}\) is a feasible solution of the dual SDP (8), where \(Z_{B}=K_{B}=\frac{I_{B}}{d}\), and \(X_{BB_{1}}=\cdots=X_{BB_{n}}=\frac{2}{d(n+d-1)}\Phi_{d}-\frac{1}{nd}I\). It is straightforward to check that \(\{X_{BB_{1}},\cdots,X_{BB_{n}},Z_{B},K_{B}\}\) satisfies the constraints of SDP (8). We further check the objective function
\[\sum_{j=1}^{n}\mathrm{Tr}[X_{BB_{j}}\Phi_{BB_{j}}]=\frac{2nd}{n+d-1}-1. \tag{18}\]
According to the fact that the optimal solution of dual SDP (8) is a lower bound of the optimal solution of primal SDP (8), we have the following inequality
\[\log\left(\frac{2nd}{n+d-1}-1\right)\leq\gamma^{*}_{n}, \tag{19}\]
which completes the proof.
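The decomposition used in this proof can also be checked numerically. The sketch below (our own illustration, on the small instance \(n=3\), \(d=2\)) builds the Choi operators of \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), confirms they are CPTP, and verifies the marginals of \(\Gamma^{\prime}=n\mathcal{M}_{1}-(n-1)\mathcal{M}_{2}\) required by Lemma 1.

```python
import numpy as np
from itertools import product

n, d = 3, 2                       # small instance; subsystem ordering B, B1, ..., Bn
dims = [d] * (n + 1)
D = d ** (n + 1)
v = np.eye(d).reshape(d * d)
Phi = np.outer(v, v)

def swap(i, j):
    """Permutation matrix swapping subsystems i and j of tensor(dims)."""
    P = np.zeros((D, D))
    for idx in product(range(d), repeat=n + 1):
        jdx = list(idx)
        jdx[i], jdx[j] = jdx[j], jdx[i]
        P[np.ravel_multi_index(tuple(jdx), dims),
          np.ravel_multi_index(idx, dims)] = 1.0
    return P

def ptr(X, dims, axis):
    """Trace out subsystem `axis` of an operator on the tensor product dims."""
    m = len(dims)
    T = X.reshape(dims + dims).trace(axis1=axis, axis2=axis + m)
    keep = int(np.prod(dims)) // dims[axis]
    return T.reshape(keep, keep)

A = np.kron(Phi, np.eye(d ** (n - 1)))      # Phi_{BB1} (x) I_{B2...Bn}
J_M1 = sum(swap(1, j) @ A @ swap(1, j)
           for j in range(1, n + 1)) / (n * d ** (n - 1))
J_M2 = np.kron(np.eye(d),                   # I_B (x) Phi_{B1B2} (x) I_{B3...Bn}
               np.kron(Phi, np.eye(d ** (n - 2)))) / d ** (n - 1)
J_G = n * J_M1 - (n - 1) * J_M2             # Choi of Gamma'

for J in (J_M1, J_M2):                      # M1, M2 are positive semidefinite
    assert np.linalg.eigvalsh(J).min() > -1e-9
for j in range(1, n + 1):                   # marginal on B Bj equals Phi
    M, cur = J_G, dims[:]
    for k in range(n, 0, -1):               # trace out all Bk with k != j
        if k != j:
            M = ptr(M, cur, k)
            cur = cur[:k] + cur[k + 1:]
    assert np.allclose(M, Phi)
print("cost of this decomposition:", 2 * n - 1)
```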
Remarkably, the upper bound is independent of the quantum system's dimension, and the lower bound converges to the upper bound as the dimension of the quantum system grows. This implies our ability to effectively tackle the unilocal virtual \(n\)-broadcasting task within a high-dimensional Hilbert space. In summary, Theorem 4 and Theorem 5 reveal that engaging in virtual \(n\)-broadcasting not only enables the acquisition of shadow information but also grants control over the associated costs.
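The convergence behavior is easy to see numerically; the short script below (an illustration we add, using base-2 logarithms as in Definition 2) tabulates the two bounds of Theorem 5 for a few values of \(n\) and \(d\).

```python
import numpy as np

for n in (2, 3, 5):
    upper = np.log2(2 * n - 1)                       # dimension-independent
    for d in (2, 4, 16, 256):
        lower = np.log2(2 * n * d / (n + d - 1) - 1)
        print(f"n={n}  d={d:3d}  lower={lower:.4f}  upper={upper:.4f}")
# as d grows, the lower bound approaches the dimension-independent upper bound
```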
## 4 Concluding remarks
In this work, we propose a novel protocol known as unilocal virtual quantum broadcasting, employing Hermitian-preserving and trace-preserving (HPTP) maps. We demonstrate the existence of a universal unilocal virtual \(n\)-broadcasting protocol capable of distributing information from any bipartite quantum state to multiple parties via local operations. Furthermore, we formalize the simulation cost of this broadcasting protocol as a semidefinite programming problem. Notably, we provide an analytical universal unilocal virtual 2-broadcasting protocol to clarify the optimal simulation cost. By accurately characterizing the simulation cost, we find that virtual 2-broadcasting remains applicable in high-dimensional quantum systems, as the corresponding simulation cost converges to the constant \(\log 3\) with increasing dimension. Furthermore, we give upper and lower bounds on the simulation cost of the virtual \(n\)-broadcasting protocol and demonstrate that the lower bound converges to the upper bound \(\log(2n-1)\), which is independent of the system dimension. These findings demonstrate the practical potential of our virtual broadcasting protocol, as the simulation costs are always controllable.
Our results open new avenues for understanding and harnessing the unique properties of quantum mechanics. The exploration of virtual broadcasting not only broadens our comprehension of quantum information distribution [25, 26, 27] but also provides a valuable tool for advancing quantum communication and computing technologies [28]. Future work will focus on further practical applications of our proposed virtual broadcasting method.
**Note added.** While finishing this manuscript, we became aware of a closely related work [29], which independently proposed the idea of _virtual broadcasting maps_ and gave a universal virtual broadcasting map. They referred to the universal virtual broadcasting map as the _canonical broadcasting map_ and focused primarily on the conditions that a virtual broadcasting map should satisfy. Ref. [29] further studied the relationship between virtual broadcasting maps and the universal quantum cloner as well as quantum states over time. In contrast, this work mainly concerns the optimal universal virtual map with respect to the simulation cost: we propose the optimal universal virtual 2-broadcasting map, which consumes the least number of sample copies, and we further analyze the scenario of \(n\)-broadcasting.
## 5 Acknowledgements
We would like to thank Ranyiliu Chen and Xuanqiang Zhao for their helpful comments. This work has been supported by the Start-up Fund from The Hong Kong University of Science and Technology (Guangzhou).
|